Research & Insights

By Dole Oleson, November 4, 2011

Crowdsourcing Scientific Research: Leveraging the Crowd for Scientific Discovery

 

Header image: "Cell counts" (via Flickr)

Lab scientists spend countless hours manually reviewing and annotating cells. What if we could give these hours back, and replace the tedious parts of science with a hands-off, fast, cheap, and scalable solution?

That’s exactly what we did when we used the crowd to count neurons, a task that computer vision can’t yet solve. Building on the work we recently did with the Harvard Tuberculosis lab, we were able to take untrained people all over the world (people who might never have learned that DNA Helicase unzips genes…), turn them into image analysts with our task design and quality control, and get results comparable to those produced by trained lab workers.

Here’s how:

We took cortex slide images from mice provided by a neuroscience lab at Harvard University. We cut each image into smaller pieces, so they’d be easier for people to work on.
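For the curious, here’s a minimal sketch of that tiling step, assuming the slides are ordinary image files and using the Pillow library; the tile size and file names below are illustrative, not the exact values from our pipeline.

```python
# Hypothetical tiling sketch: cut one slide into tile_px x tile_px pieces
# so each piece is small enough for a contributor to count comfortably.
from pathlib import Path
from PIL import Image

def tile_image(path, tile_px=400, out_dir="tiles"):
    """Save square tiles of the slide image at `path` into `out_dir`."""
    img = Image.open(path)
    width, height = img.size
    Path(out_dir).mkdir(exist_ok=True)
    for top in range(0, height, tile_px):
        for left in range(0, width, tile_px):
            box = (left, top, min(left + tile_px, width), min(top + tile_px, height))
            img.crop(box).save(f"{out_dir}/{Path(path).stem}_{top}_{left}.png")

tile_image("Sox6ko2a-1.png")  # illustrative file name
```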

After a brief set of instructions, contributors were asked to count the neurons in each slide by clicking on every individual neuron.

 

Fig 1: Example slide

Contributors were given examples of edge cases (e.g., Is this a cell or not? Is this two cells or three?). Clicking on a cell placed a green marker on top of it, and an automated counter kept track of the number of clicks.

We controlled quality using Gold Standard (“Gold”) units, which were images with known cell counts that we added to the task. The benefits here are threefold. First, Gold questions provide training and feedback to our contributors, so that they can get better at the task over time. Second, contributors don’t know which questions are Gold, forcing them to honestly answer all questions. Finally, if a contributor fails to answer enough Gold correctly, we remove them from the job.
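As a rough illustration of how a Gold check like this can work (the tolerance and accuracy threshold below are hypothetical, not our production settings):

```python
# Sketch of the Gold check: a contributor's count on a Gold image is
# "correct" if it falls within a small tolerance of the known count,
# and contributors who miss too many Golds are removed from the job.
GOLD_COUNTS = {"gold_01": 42, "gold_02": 117}   # image id -> known cell count (illustrative)
TOLERANCE = 0.10                                # allow +/- 10% on a Gold image (assumed)
MIN_ACCURACY = 0.70                             # minimum share of Golds answered correctly (assumed)

def is_trusted(judgments):
    """judgments: list of (image_id, count) pairs from one contributor."""
    gold = [(img, n) for img, n in judgments if img in GOLD_COUNTS]
    if not gold:
        return False  # no Gold seen yet, so no evidence of trust
    correct = sum(
        1 for img, n in gold
        if abs(n - GOLD_COUNTS[img]) <= TOLERANCE * GOLD_COUNTS[img]
    )
    return correct / len(gold) >= MIN_ACCURACY
```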

 

Fig 2: Map of all contributors on this task. Step it up Iceland!

Visit the batchgeo.com navigable map.

After removing all of our “untrusted” contributors, we were left with our “trusted” contributors. Each image had its neurons counted by four trusted contributors, and we took the average of their counts, less any outliers, to get the most accurate result (see the sketch below).
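Here is a minimal sketch of that per-image aggregation; the outlier rule (drop any count more than 20% away from the median) is illustrative, not our exact setting.

```python
# Aggregate one image's trusted counts: drop counts far from the median,
# then average whatever remains.
from statistics import median, mean

def aggregate(counts, max_deviation=0.20):
    """counts: the trusted contributors' counts for one image."""
    mid = median(counts)
    kept = [c for c in counts if abs(c - mid) <= max_deviation * mid]
    return mean(kept)

print(aggregate([228, 231, 230, 310]))  # 310 is dropped as an outlier; result ~229.7
```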

How did this experiment actually turn out? How did our results compare to those of trained lab workers? In short, extremely well. As you can see in our results below, the average difference between the crowd counts and the professional lab counts is 2.0%, which can largely be chalked up to ambiguity in certain clusters of cells.

Image         Lab Count   Avg. Crowd Count   Difference   % Difference
Sox6ko2a-1    239         229                -10          4.0%
Sox6ko2a-3    157         153                -4           2.4%
Sox6ko2a-4    161         160                -1           0.6%
Sox6ko2b-1    250         240                -11          4.2%
Sox6ko2b-2    179         173                -6           3.2%
Sox6ko2b-3    134         130                -4           3.0%
Sox6ko2b-4    153         152                -2           1.0%
Sox6ko3a-1    209         209                0            0.1%
Sox6ko3a-2    147         149                +2           1.2%
Sox6ko3a-3    134         129                -5           3.9%
Sox6ko3a-4    138         136                -2           1.6%
Sox6ko3b-1    213         212                -1           0.5%
Sox6ko3b-3    78          75                 -4           4.5%
sox6ko3b-4    54          54                 0            0.0%
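For reference, each row’s comparison is simple arithmetic on the lab count and the crowd average; note that the table rounds the crowd averages, so recomputing from the rounded values can shift the last digit in some rows.

```python
# Recompute a few rows' differences from the (rounded) table values above.
rows = [
    ("Sox6ko2a-4", 161, 160),
    ("Sox6ko2b-3", 134, 130),
    ("sox6ko3b-4", 54, 54),
]

for image, lab, crowd in rows:
    diff = crowd - lab
    pct = abs(diff) / lab * 100
    print(f"{image}: {diff:+d} ({pct:.1f}%)")
# Sox6ko2a-4: -1 (0.6%)
# Sox6ko2b-3: -4 (3.0%)
# sox6ko3b-4: +0 (0.0%)
```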

Future iterations of this work will include breaking the images into even smaller pieces, as well as using automated solutions to tag the high-confidence units, leaving only the edge-case cells that require human eyes.

We hope that results like these will encourage more scientists, labs, and biotech firms to crowdsource pieces of their research. We believe this could free up their time for more complicated work, decrease the latency of experimental results, and quicken the pace of scientific discovery.

Yesterday it was TB cells, today it’s neurons; tomorrow, cancer cells? We’re issuing an open call to any university lab, biotech firm, or pharmaceutical company with large image data sets: contact us to use the crowd to shorten the life cycle from idea to major scientific advancement.