
By Emma Ferneyhough, January 2, 2014

Generate Geo-fences with CrowdFlower’s Platform

What is a geo-fence? A geo-fence is a virtual perimeter around a real-world location that makes the location “visible” to any GPS-capable device. Why would one need geo-fences? As smartphone usage continues to grow, it often makes sense to use a phone’s location-aware capabilities to trigger certain events. Let’s say you want to offer real-time coupons via text message based on a customer’s proximity to your store. Or you want to track how frequently, or for how long, a GPS-enabled device was in an area, perhaps for shipping, monitoring workforces, or even keeping tabs on patients and pets.

CrowdFlower has geo-fenced more than 60,000 locations across the United States. How do we do it? Using custom JavaScript and the Google Maps API, we design microtasks in which workers place a resizable circle around a location on a map. Upon submission of the task, we receive the latitude, longitude and radius of the geo-fenced location.

Example location

Target, 6601 Grand Ave, Gurnee, IL 60031

Latitude: 42.382865 Longitude: -87.966399 Radius: 75.583541
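The task interface itself isn’t reproduced in this post, but a minimal sketch of how such a microtask could capture a worker’s circle might look like the following, assuming the Google Maps JavaScript API is loaded on the page (the element IDs and the submitJudgment helper are illustrative):

```typescript
// Sketch of a geo-fencing microtask: the worker drags and resizes a circle
// over the target building, and on submit we read back the latitude,
// longitude and radius (in meters). Assumes the Google Maps JavaScript API
// is already loaded; "map-canvas", "submit" and submitJudgment are
// illustrative placeholders.
declare const google: any;

function initGeofenceTask(): void {
  const map = new google.maps.Map(document.getElementById("map-canvas"), {
    center: { lat: 42.382865, lng: -87.966399 }, // the example Target location
    zoom: 18,
  });

  // A resizable, draggable circle the worker positions over the location.
  const circle = new google.maps.Circle({
    map,
    center: map.getCenter(),
    radius: 75, // initial radius in meters
    editable: true,
    draggable: true,
  });

  document.getElementById("submit")!.addEventListener("click", () => {
    submitJudgment({
      latitude: circle.getCenter().lat(),
      longitude: circle.getCenter().lng(),
      radius: circle.getRadius(), // meters
    });
  });
}

// Illustrative stand-in for posting the judgment back with the task form.
function submitJudgment(j: { latitude: number; longitude: number; radius: number }): void {
  console.log(JSON.stringify(j));
}
```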


Now, let’s get into the nitty-gritty of how we aggregate judgments, or responses, from our on-demand workforce in this task.

How CrowdFlower Aggregates Results for a Typical Microtasking Job

Most tasks have a set number of possible answers. We use test questions to ensure quality results from workers. As such, each person in CrowdFlower’s workforce builds up a “trust score” as he or she completes test questions within tasks. The score represents a worker’s performance over time. In a typical CrowdFlower microtask, several people submit judgments on each question, and the trust scores of the workers who participated determine an answer’s “confidence.” The answer with the highest confidence is returned.
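The exact confidence formula isn’t spelled out in this post, but as a rough sketch, confidence can be thought of as the trust-weighted share of judgments behind each answer:

```typescript
// Rough sketch of trust-weighted aggregation for tasks with a fixed set of
// possible answers. The "confidence" here is the share of total worker
// trust behind each answer; the exact CrowdFlower formula may differ.
interface Judgment {
  answer: string;
  trust: number; // worker's trust score, earned on test questions
}

function aggregate(judgments: Judgment[]): { answer: string; confidence: number } {
  const trustByAnswer = new Map<string, number>();
  let totalTrust = 0;
  for (const j of judgments) {
    trustByAnswer.set(j.answer, (trustByAnswer.get(j.answer) ?? 0) + j.trust);
    totalTrust += j.trust;
  }
  // Return the answer backed by the largest share of worker trust.
  let best = { answer: "", confidence: 0 };
  for (const [answer, trust] of trustByAnswer) {
    if (trust / totalTrust > best.confidence) {
      best = { answer, confidence: trust / totalTrust };
    }
  }
  return best;
}
```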

How CrowdFlower Aggregates Geo-fencing Results

Geo-fencing tasks ask workers to select an area on a map, which gives us three numbers: latitude, longitude and radius. All of these values are continuous, meaning there are infinitely many possible responses. The only way to aggregate responses in a geo-fencing task is to take an average. But which responses do we average? How do we know which responses are reasonable and which are outliers? There are three possible scenarios:

Scenario 1: Everything is peachy.

In this case, taking the simple average yields an acceptable centroid and radius. (A centroid in this case is a point on the map defined by latitude and longitude coordinates.)

Scenario 2: One of these things is not like the other.

In this case, the average centroid would be off the building or location and the radius would be too small. Our solution needs to remove outlier judgments.

Scenario 3: Far and wide.

In this case, the average might be close to what we want. However, we shouldn’t trust the average if all the judgments are far from the average. Our solution needs to throw out all responses if there is low “agreement” between judgments, even if none are technically outliers.

To return the most accurate data possible, we created a highly specialized aggregation process to account for scenarios in which our dataset contains (1) outliers and (2) coordinates spread far and wide with no statistical outliers.


Step 1.

For every location in the task, we first take the judgments submitted by each trusted worker. We make sure there are at least 3 centroids (latitude-longitude pairs).
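In code, that first gate might look like the sketch below (the field names and trust threshold are illustrative):

```typescript
// Step 1 sketch: keep only judgments from trusted workers and require at
// least 3 centroids before attempting aggregation. The 0.8 trust threshold
// and field names are illustrative.
interface GeoJudgment {
  latitude: number;
  longitude: number;
  radius: number; // meters
  trust: number;  // worker trust score
}

const MIN_JUDGMENTS = 3;

function trustedJudgments(judgments: GeoJudgment[], minTrust = 0.8): GeoJudgment[] | null {
  const trusted = judgments.filter((j) => j.trust >= minTrust);
  // Not enough centroids to aggregate reliably; skip this location.
  return trusted.length >= MIN_JUDGMENTS ? trusted : null;
}
```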


Step 2.

We then calculate the haversine distance between each pair of points, which is simply the distance between two points on the globe, accounting for the Earth’s curvature. In the example below, we’ve made up some numbers to illustrate the process.
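The haversine formula itself is standard; a sketch of it, plus the pairwise distances between judgments, might look like this:

```typescript
// Haversine distance between two latitude/longitude points, in meters.
const EARTH_RADIUS_M = 6371000; // mean Earth radius

function haversine(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) * Math.sin(dLat / 2) +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Distance for every pair of judgments -- these are the "segments"
// (A to B, E to F, ...) referenced in the table in Step 3.
function pairwiseDistances(points: { latitude: number; longitude: number }[]): number[] {
  const distances: number[] = [];
  for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
      distances.push(
        haversine(points[i].latitude, points[i].longitude, points[j].latitude, points[j].longitude)
      );
    }
  }
  return distances;
}
```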


Step 3.

After taking the average haversine distance (mean1), we then go through a process of dropping the maximum distance and recalculating the mean (mean2).

If mean1 and mean2 are similar enough, we keep mean1. If they are too different, we discard the next largest distance and repeat the process until the difference between means is minimized.

Iteration   Mean Distance   Difference   Segment Dropped (Distance)
Mean1       31.4            N/A          None
Mean2       28.1            3.35         E to F (75)
Mean3       24.6            3.49         B to F (70)
Mean4       21.4            3.22         C to F (60)
Mean5       17.5            3.86         A to F (60)
Mean6       16.1            1.39 *       D to E (30)
Mean7       14.4            1.74         B to E (30)
Mean8       12.1            2.23         A to E (30)
Mean9       10.8            1.31         C to E (20)
Mean10      10              0.83         B to C (15)
Mean11      10              0 **         C to D (10)

* Local minimum   ** Global minimum

(Chart: difference between successive mean distances across iterations)


Eventually we arrive at a set of coordinates whose haversine distances are all within a specified range. In this example, we discard points E and F because those points produced large mean differences.

We are left with points A, B, C and D.
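One way to sketch that pruning loop in code, with a fixed tolerance standing in for the “similar enough” test described above (both the tolerance and the stopping rule are simplifications of the minimization shown in the table):

```typescript
// Step 3 sketch: repeatedly drop the largest pairwise distance and
// recompute the mean until consecutive means are "similar enough".
// The tolerance is illustrative; the post describes iterating until the
// difference between means is minimized.
function pruneDistances(distances: number[], tolerance: number): { kept: number[]; mean: number } {
  const mean = (xs: number[]) => xs.reduce((sum, x) => sum + x, 0) / xs.length;

  let kept = [...distances].sort((a, b) => a - b); // ascending: largest is last
  let currentMean = mean(kept);

  while (kept.length > 1) {
    const candidate = kept.slice(0, -1); // drop the largest remaining distance
    const candidateMean = mean(candidate);
    // If the two means are similar enough, keep the current set and stop.
    if (Math.abs(currentMean - candidateMean) < tolerance) break;
    kept = candidate;
    currentMean = candidateMean;
  }
  return { kept, mean: currentMean };
}
```

In the full process, the points responsible for the large mean differences (E and F in the example above) are the ones discarded before moving on.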


Step 4.

The mean latitude and longitude are then calculated for all remaining coordinates, along with the standard deviation. Any coordinate more than 2 standard deviations from the mean is dropped, and a new mean latitude and longitude is calculated.

(In another scenario, we might initially have kept point E because of the local minimum difference at iteration 6, but it would likely have been dropped here by the 2-standard-deviation cutoff.)

Mean centroid shown in magenta
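A sketch of that cutoff, applying the 2-standard-deviation rule to latitude and longitude separately (the post doesn’t say whether the cutoff is applied per axis or on distance from the mean, so this is one reasonable reading):

```typescript
// Step 4 sketch: compute the mean and standard deviation of latitude and
// longitude, drop any coordinate more than 2 standard deviations from the
// mean, then recompute the mean centroid from what remains.
interface Point { latitude: number; longitude: number; }

const mean = (xs: number[]) => xs.reduce((sum, x) => sum + x, 0) / xs.length;
const stdDev = (xs: number[]) => {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) * (x - m))));
};

function robustCentroid(points: Point[]): Point {
  const latMean = mean(points.map((p) => p.latitude));
  const latSd = stdDev(points.map((p) => p.latitude));
  const lngMean = mean(points.map((p) => p.longitude));
  const lngSd = stdDev(points.map((p) => p.longitude));

  // Keep points within 2 standard deviations of the mean on both axes.
  const kept = points.filter(
    (p) =>
      Math.abs(p.latitude - latMean) <= 2 * latSd &&
      Math.abs(p.longitude - lngMean) <= 2 * lngSd
  );

  return {
    latitude: mean(kept.map((p) => p.latitude)),
    longitude: mean(kept.map((p) => p.longitude)),
  };
}
```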


Step 5.

The very last step is to make sure that the remaining coordinates are all within a small distance of one another. As long as most of the coordinates are fairly close together (within half a building width, for example), we return the resulting mean latitude, longitude and radius. If most of them are far apart, we don’t return anything for that particular retail location, because it means there is little agreement among workers. Better to return nothing than misleading information!
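A sketch of that final check; the 50% agreement cutoff and the maxSpreadMeters parameter (roughly half a building width) are illustrative, and haversine refers to the function sketched in Step 2:

```typescript
// Step 5 sketch: only return a geo-fence when most surviving judgments lie
// within a small distance of the final centroid. The 50% cutoff and
// maxSpreadMeters value are illustrative.
declare function haversine(lat1: number, lon1: number, lat2: number, lon2: number): number; // from Step 2

interface GeoJudgment { latitude: number; longitude: number; radius: number; }
interface GeoFence { latitude: number; longitude: number; radius: number; }

function finalizeGeofence(
  judgments: GeoJudgment[],
  centroid: { latitude: number; longitude: number },
  maxSpreadMeters: number // e.g. roughly half a building width
): GeoFence | null {
  const close = judgments.filter(
    (j) => haversine(j.latitude, j.longitude, centroid.latitude, centroid.longitude) <= maxSpreadMeters
  );

  // If most judgments are far from the centroid, agreement is low:
  // better to return nothing than misleading information.
  if (close.length / judgments.length < 0.5) return null;

  const meanRadius = close.reduce((sum, j) => sum + j.radius, 0) / close.length;
  return { latitude: centroid.latitude, longitude: centroid.longitude, radius: meanRadius };
}
```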

Good agreement vs. bad agreement among judgments

Conclusions

This simple strategy handles most situations pretty well, although it can be somewhat conservative. We err on the side of returning fewer, more accurate locations rather than more, slightly less accurate ones. This method also assumes that if the majority of judgments are clustered together, then the store must be located near them. It is possible that in a particularly tricky situation, most people will think location A is correct while one very careful person finds the actual correct location B. This would result in the wrong location being returned. To guard against this possibility, we increased the minimum number of judgments per unit from three to seven, to allow the wisdom of the crowd to emerge.

In the end, using our platform and the methodology mentioned above, we can help our customers map locations at a much faster pace and larger scale and for less money than hiring a temporary workforce.

Read up on geo-fences and the haversine formula.