
How Machine Learning Can Help Us Understand Childhood Development

January 25, 2018


When we talk about machine learning in medicine, chances are we're talking about computer vision. And when we're talking about computer vision in medicine, chances are we're talking about medical imagery. After all, cellular imaging, microscopy, radiology, brain scans, and similar imaging modalities form a contained, knowable space. One hand X-ray is quite similar to the next hand X-ray, especially when you compare that similarity to, say, two different pictures of a city street. On top of that, there are simply tons of similar images that have been scored and annotated by professionals, providing machine learning algorithms with high-quality training data.

But while medical imagery is indeed the most well-known use case for machine learning in medicine, it certainly isn't the only one. We've seen researchers like Andrew Su build large databases of rare diseases on CrowdFlower. One of our AI for Everyone Challenge winners, in fact, is doing something quite similar. But a member of our machine learning team here at CrowdFlower introduced us to research we hadn't seen before. Her name is Qazaleh Mirsharif, and her research at the University of Houston has the potential to change the way we evaluate early childhood development.

The research centers on a technology called egocentric cameras. In layman's terms, that just means first-person cameras. What makes this research really interesting is that it leverages egocentric cameras to study how infants make sense of the world.

Essentially, what Qazaleh's team did was observe parents and children interacting and playing with toy objects naturally in a controlled environment. They used data collected by Dr. Hanako Yoshida at the University of Houston, in which both the parent and the child wore a head camera during the play session. Qazaleh's team then used those camera inputs and computer vision tools to analyze the footage. Specifically, they focused on the baby's visual attention to objects. But what exactly can we learn from egocentric video analysis using computer vision and machine learning? It turns out, quite a lot.

For starters, by analyzing where a child is looking and how much time a child spends visually attending to objects, the team was able to understand which parental actions capture and keep a child's attention at different ages, and to use that to gauge development and growth. An example? Shaking an object or moving it side-to-side doesn't hold an infant's attention nearly as well as "zooming" an object. By that we mean taking something like a stuffed bunny and moving it closer to the child's face until it takes up most of her field of vision. Since visual focus of attention may strongly correlate with what a child actually understands, knowing which motions capture and keep a child's attention is a lot like knowing the best way to introduce an infant or toddler to objects and concepts. Which is pretty cool in its own right.
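To make that concrete, here's a minimal sketch of how you might quantify something like "zooming" from head-camera footage. It is not the team's actual pipeline: it assumes you already have per-frame bounding boxes for the toy from some object detector, and the frame size, frame rate, and the 25% "dominance" threshold are illustrative values we made up for the example.

```python
# Hypothetical sketch: given per-frame bounding boxes for a toy object in the
# child's head-camera footage, estimate how much of the field of view the toy
# occupies and how long it stays visually dominant. All constants below are
# illustrative assumptions, not values from the study.

from dataclasses import dataclass

FRAME_W, FRAME_H = 640, 480       # assumed camera resolution
FPS = 30                          # assumed frame rate
DOMINANCE_THRESHOLD = 0.25        # object "fills" the view past this fraction


@dataclass
class Box:
    x: float   # top-left corner, in pixels
    y: float
    w: float   # width and height, in pixels
    h: float


def field_of_view_fraction(box: Box) -> float:
    """Fraction of the camera frame covered by the object's bounding box."""
    return (box.w * box.h) / (FRAME_W * FRAME_H)


def dominant_attention_seconds(boxes_per_frame: list[Box]) -> float:
    """Seconds during which the object dominates the child's visual field."""
    dominant_frames = sum(
        1 for box in boxes_per_frame
        if field_of_view_fraction(box) >= DOMINANCE_THRESHOLD
    )
    return dominant_frames / FPS
```

A "zoomed" bunny would produce a long run of frames above the threshold, while side-to-side shaking would mostly stay below it, which is one simple way to turn the behavioral observation into a number.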

But what makes this research really special is that it establishes a baseline for "normal" visual development in children between 6 and 18 months old. After all, with children that age, all we can do is observe; they can't really tell us much at all. But we can glean a ton from the way they pay attention and the concepts that attention helps them understand, and then use that information to make sure other growing children are progressing apace.

Qazaleh's team used computer vision tools to extract the most visually dominant objects in the child's view from long video sessions, then used those detections to estimate the size and location of objects in the child's visual field. The same tools also helped analyze motions and gestures in the videos and extract the parts of the footage where important motions and gestures occurred.
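The motion side of that can be sketched too. Again, this is not the team's method, just one common way to flag high-motion segments in egocentric video using dense optical flow from OpenCV; the magnitude threshold and the video path are assumptions for illustration.

```python
# Hypothetical sketch: flag frames of an egocentric video where significant
# motion occurs, using Farneback dense optical flow. Threshold is illustrative.

import cv2
import numpy as np

MOTION_THRESHOLD = 2.0  # mean flow magnitude (pixels/frame) counted as "motion"


def motion_frames(video_path: str) -> list[int]:
    """Return indices of frames whose average optical-flow magnitude is high."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    flagged, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense flow between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0
        )
        magnitude = np.linalg.norm(flow, axis=2).mean()
        if magnitude >= MOTION_THRESHOLD:
            flagged.append(idx)
        prev_gray = gray
        idx += 1

    cap.release()
    return flagged
```

Runs of consecutive flagged frames would mark candidate segments where a gesture or object motion is happening, which a researcher (or a downstream model) can then look at more closely.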

Together, these techniques give scientists tools to analyze the visual behavior of infants, and its potential correlations with parents' gestures and motions, in a quantitative, objective way. That can reveal patterns that are simply hard to spot through human observation alone. Essentially, the algorithms in question can determine what a child is focusing on and compare that behavior to the baseline to gauge whether the child is following a normal developmental pattern.
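As a rough illustration of that comparison step (our own simplification, not the study's statistics), you could score one child's attention metric against the distribution measured from the baseline group:

```python
# Illustrative sketch: compare one child's attention-holding time against a
# baseline distribution from typically developing peers. The metric and the
# use of a simple z-score are assumptions for the example.

from statistics import mean, stdev


def attention_z_score(child_seconds: float, baseline_seconds: list[float]) -> float:
    """How far a child's attention-holding time sits from the baseline mean."""
    mu = mean(baseline_seconds)
    sigma = stdev(baseline_seconds)
    return (child_seconds - mu) / sigma


# Example with made-up baseline attention times (in seconds)
baseline = [12.5, 9.8, 14.2, 11.0, 13.3]
print(attention_z_score(7.1, baseline))  # a strongly negative score flags a deviation
```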

Research like this could allow doctors and clinicians to explore infant development and identify developmental disorders like autism, or even help train visual systems and object detection in robots.

It all comes down, of course, to the themes we've discussed time and time again: good data and smart machine learning experts. In this case, Qazaleh's team partnered with the developmental scientists at the University of Houston, keeping the data quality high and the research robust.

At the end of the day, she's the best person to explain how to build these algorithms and how they work. She'll be speaking about her research at Data Day Texas, so stop by! And if you'd like to learn a bit about how to create computer vision algorithms, from soup to nuts, both she and our colleague Humayun Irshad will be leading a four-hour seminar at Data Day Texas in a few days. Sign up here!

Justin Tenuto

Justin writes about data at CrowdFlower. He enjoys books about robots and aliens, baking bread, and is pretty sure he can beat you at pingpong.