I’m going to be delving into the topic of quality in the crowd at Crowdsortium on Thursday, September 13, and I figured I’d give a preview of some of the things we’re thinking about at CrowdFlower.
Chris and I started CrowdFlower on the premise that we could use smarter statistical analysis to collect high quality data from the crowd. What we’ve observed over and over again is that while math goes a long way, it isn’t enough. Quality in crowdsourcing comes from clearly communicating the problem we are asking someone to solve, and motivating the worker to do a great job.
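As a rough illustration of the "smarter statistical analysis" idea (a hypothetical sketch, not CrowdFlower's actual algorithm), one simple step beyond plain majority voting is to weight each judgment by the trust placed in the contributor who gave it:

```python
from collections import defaultdict

def weighted_vote(judgments):
    """Pick an answer for one task from a list of (answer, trust) pairs,
    counting each judgment in proportion to its contributor's trust score."""
    totals = defaultdict(float)
    for answer, trust in judgments:
        totals[answer] += trust
    # Return the answer with the highest total trust-weighted support.
    return max(totals, key=totals.get)

# Three low-trust contributors say "spam"; one high-trust contributor disagrees.
# Trust weighting lets the single reliable judgment win (0.9 vs. 1.0):
weighted_vote([("spam", 0.3), ("spam", 0.3), ("spam", 0.3),
               ("not spam", 1.0)])  # → "not spam"
```

The trust scores themselves are assumptions here; in practice they would come from a quality signal such as performance on test questions.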
Over the past few years, we’ve built several critical tools to communicate with our contributors. We use training data, called “Gold”, that immediately gives contributors feedback on whether they got an answer right or wrong. We show contributors more Gold when they start a new task, and we tailor Gold around edge cases where we think someone might make a mistake. We also have a technology called CrowdFlower Markup Language that helps us and our customers build beautiful job interfaces for the individuals doing our tasks.
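In spirit, the Gold mechanism amounts to grading a contributor against questions whose answers are already known and reacting when their accuracy drops. Here is a minimal sketch of that idea (the data, function names, and threshold are all hypothetical, not CrowdFlower's implementation):

```python
# Illustrative gold set: task IDs mapped to known correct answers.
GOLD_ANSWERS = {
    "task_1": "cat",
    "task_2": "dog",
    "task_3": "cat",
}

def gold_accuracy(contributor_answers):
    """Fraction of gold questions this contributor answered correctly,
    or None if they haven't seen any gold yet."""
    graded = {
        task: answer == GOLD_ANSWERS[task]
        for task, answer in contributor_answers.items()
        if task in GOLD_ANSWERS
    }
    if not graded:
        return None
    return sum(graded.values()) / len(graded)

def feedback(contributor_answers, threshold=0.7):
    """Immediate feedback based on gold performance: contributors below
    the (assumed) trust threshold are flagged for review."""
    score = gold_accuracy(contributor_answers)
    if score is None:
        return "no gold seen yet"
    return "trusted" if score >= threshold else "needs review"
```

For example, a contributor who answers `task_1` correctly but misses `task_2` scores 0.5 and would be flagged, which is the moment to show them the correct answer and an explanation of the edge case they missed.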
Our most recent developments focus on motivating contributors. We’ve created a dashboard for each contributor that tracks their progress over time on CrowdFlower jobs.

The dashboard includes badges that reward strong work, and we’re using these badges to give our best contributors access to higher-paying work. We’ve also built a leaderboard for our most prolific contributors, which we’ll be adding to our Teams page as a constant reminder of the individual contributors who are at the core of what we do. So far, contributors are writing in to say they love the changes, and we’re incorporating their feedback into the product.
I’m looking forward to learning about other cool ways of approaching quality in the crowd at Crowdsortium on Thursday.