Research & Insights

By Ram Rampalli, July 26, 2011

Crowdsourcing Thought Leadership: Building a successful portfolio of crowdsourcing projects (Part 3)

 

This is part of a series of guest posts by Ram Rampalli, our crowdsourcing partner at eBay.
Part I – Assessment Stage
Part II – Pilot Stage
Part III – Analysis Stage
Part IV – Production Stage

About the author: Ram Rampalli created and leads the crowdsourcing program within the Selling & Catalogs team at eBay Inc. You can follow him on Twitter (@ramrampalli).

Building a successful portfolio of crowdsourcing projects – Part 3

In the first two parts of this series, we discussed the Assessment and Pilot stages. Now that the pilot task is finished and you have a copy of the results file, it’s time to analyze these results and plan for the next steps.

What can you expect to get from CrowdFlower?

CrowdFlower can provide you with both the aggregated judgment file (a single “consensus” judgment per unit) and the full judgment file (every judgment collected for each unit). Each judgment is annotated with additional data, including the date and time it was collected, the labor channel, and its geographic origin.
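To give a sense of how the full judgment file can be worked with, here is a minimal sketch that loads it and derives a simple majority-vote consensus per unit, similar in spirit to the aggregated file. The file name and column names (unit_id, judgment, country, channel, created_at) are assumptions for illustration; your actual export may label these fields differently.

```python
import pandas as pd

# Load the full judgment file (file and column names are assumed for
# illustration; check your actual export for the real field names).
judgments = pd.read_csv("full_judgments.csv")  # unit_id, judgment, country, channel, created_at

# Derive a simple majority-vote "consensus" judgment per unit.
consensus = (
    judgments.groupby("unit_id")["judgment"]
    .agg(lambda answers: answers.mode().iloc[0])
    .rename("consensus_judgment")
    .reset_index()
)

print(consensus.head())
```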

Analysis

After the pilot, we analyze the results. Many of the new projects we pilot with CrowdFlower are already running through one of our outsourcing partners, so we compare CrowdFlower’s performance against the outsourcer’s on three metrics:

  • Performance (Speed)
  • Cost
  • Quality (Accuracy)

The project sponsor rates each metric with a traffic-light icon: green (met or exceeded the standard), yellow (did not meet the standard but is acceptable for now), or red (did not meet the standard). Once this analysis is complete, we re-engage the CrowdFlower team to review the results and determine next steps for any areas that need improvement.
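As a rough illustration of how such a scorecard might be recorded, the sketch below compares pilot metrics against internal standards and assigns a traffic-light rating. The metric values, standards, and tolerance threshold are hypothetical; each project sponsor would plug in their own.

```python
# Hypothetical scorecard: compare pilot metrics against internal standards
# and assign a traffic-light rating. All numbers are illustrative only.
standards = {
    "speed_units_per_hour": 500,   # higher is better
    "cost_per_unit_usd": 0.05,     # lower is better
    "accuracy": 0.95,              # higher is better
}

pilot_results = {
    "speed_units_per_hour": 620,
    "cost_per_unit_usd": 0.06,
    "accuracy": 0.91,
}

def rating(value, standard, higher_is_better, tolerance=0.10):
    """Green: met/exceeded the standard; yellow: within tolerance; red: below."""
    ratio = value / standard if higher_is_better else standard / value
    if ratio >= 1.0:
        return "green"
    if ratio >= 1.0 - tolerance:
        return "yellow"
    return "red"

for metric, value in pilot_results.items():
    higher_is_better = metric != "cost_per_unit_usd"
    print(metric, rating(value, standards[metric], higher_is_better))
```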

Optimization

The CrowdFlower platform is set up so that it can adapt to changing business requirements. This adaptability is especially important as you gain insight into the data you are collecting. The platform offers optimizations such as geographic segmentation, runtime quality checks, and increased throughput.

Here are some examples from projects that we ran:

Geographic Segmentation
  • In one test, we realized that the quality of work completed by US-based contributors was significantly higher than that of other contributors because of their contextual knowledge. Therefore, we restricted the task to a US-based workforce only (see the sketch after this list).
Runtime Quality Checks
  • In another test, we identified a couple of runtime quality checks that could be performed on the task. Implementing them helped us improve the quality metric – the only lagging metric for this test.
Increased Throughput
  • In a third test, the work was completed more slowly than anticipated. Therefore, we opened the task to additional contributor channels and the speed increased significantly.
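To show how a geographic segmentation decision like the one in the first example might be reached, here is a hedged sketch that compares contributor accuracy by country using the full judgment file against a set of known-answer (gold) units. The file names, column names, and the existence of a separate gold file are assumptions for illustration.

```python
import pandas as pd

# Compare contributor accuracy by country on known-answer (gold) units.
# File and column names are assumed for illustration.
judgments = pd.read_csv("full_judgments.csv")   # unit_id, judgment, country
gold = pd.read_csv("gold_answers.csv")          # unit_id, correct_answer

scored = judgments.merge(gold, on="unit_id")
scored["correct"] = scored["judgment"] == scored["correct_answer"]

accuracy_by_country = (
    scored.groupby("country")["correct"]
    .mean()
    .sort_values(ascending=False)
)
print(accuracy_by_country)

# If one geography clearly outperforms the rest, restricting the task
# to that geography in the job settings may be warranted.
```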

Whenever we implemented one or more of these optimization techniques, we ran a few additional tests to confirm that the changes produced the desired results. Once they did, the next step was to move the project to the final stage – Production.