App developers must respect the quirks of smartphone culture when crowdsourcing
Want to do something really big and comprehensive, but don’t have the time or the resources to do it yourself? The solution has long been crowdsourcing—thousands of volunteers contributed to the Oxford English Dictionary in the 19th century, for example. Now smartphones are making such projects easier than ever. Instant updates from users can power services we could once only dream of, such as Waze, Google’s real-time traffic monitoring app.
But would-be mobile crowdsourcers must be aware of smartphone users’ particular concerns or they are likely to fail. First, choose the right type of task. Christian Rozsenich, managing director at crowdsourcing provider Clickworker, based in Essen, Germany, says short tasks suit mobile users best. During a lull in the day, users can easily handle image processing and pattern recognition tasks that take less than 10 seconds, for instance. Assignments such as snapping a photo at a place the user is likely to be anyway, such as a supermarket, also tend to be more popular than those that require special plans.
Next, remember that crowdsourcing apps consume bandwidth and drain battery life. So, adjusting an app to record GPS coordinates every 10 minutes instead of every 3 seconds increases the odds that users will stick around.
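That kind of throttling can be sketched in a few lines. The class and interval below are illustrative, not taken from any particular app or platform API:

```python
class LocationSampler:
    """Records a GPS fix only if a minimum interval has elapsed,
    trading location freshness for battery and bandwidth savings."""

    def __init__(self, min_interval_s=600):  # 10 minutes, not 3 seconds
        self.min_interval_s = min_interval_s
        self.last_sample_time = None
        self.samples = []

    def maybe_record(self, now_s, coords):
        """Store coords and return True only if enough time has passed."""
        if (self.last_sample_time is None
                or now_s - self.last_sample_time >= self.min_interval_s):
            self.samples.append((now_s, coords))
            self.last_sample_time = now_s
            return True
        return False

sampler = LocationSampler(min_interval_s=600)
sampler.maybe_record(0, (1.290, 103.850))    # recorded
sampler.maybe_record(3, (1.290, 103.851))    # skipped: too soon
sampler.maybe_record(601, (1.301, 103.860))  # recorded
```

In practice the same effect usually comes from the platform’s own location-request settings rather than hand-rolled code, but the trade-off is the same: a longer interval means fewer wake-ups for the radio and GPS chip.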
Retaining users is a problem for virtually every crowdsourcing project. On average, participants at Clickworker assist with a project for three months before moving on. Incentives can keep people around longer. For example, price aggregator GasBuddy.com rewards users who upload snapshots of prices at the pump with gas coupons.
But there’s a catch with that approach. Incentives tend to attract participants who try to game the system to earn rewards. Mobile crowdsourcing projects that rely on data such as daily activity logs that can easily be fabricated are particularly vulnerable.
There are a few ways to protect projects from such attacks. For tasks based on numerical scores, developers can assign a lower weight to outliers to discount their contributions. For others, Tony T. Luo, of the Institute for Infocomm Research, in Singapore, has developed an algorithm to track reputation, so users build credibility over time but quickly lose it if they begin to act erratically.
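For numerical tasks, the outlier-weighting idea can be illustrated with a toy consensus function. This is a simplified sketch of the general approach, not Luo’s reputation algorithm:

```python
import statistics

def weighted_consensus(scores):
    """Combine numerical scores, down-weighting outliers.

    Each score's weight falls off with its distance from the median,
    so contributions far from the consensus count for less.
    """
    med = statistics.median(scores)
    # Median absolute deviation; floor it to avoid division by zero.
    mad = statistics.median(abs(s - med) for s in scores) or 1.0
    weights = [1.0 / (1.0 + abs(s - med) / mad) for s in scores]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# An honest cluster around 5 plus one gamed score of 50:
result = weighted_consensus([4.8, 5.0, 5.1, 5.2, 50.0])
# result stays close to 5, largely discounting the gamed entry
```

A plain average of those five scores would land near 14; the weighted version stays near the honest cluster, which is the point of discounting outliers.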
Another feature that developers should incorporate is a privacy portal, says Salil Kanhere, who researches mobile networking at the University of New South Wales, in Sydney. Mobile projects that collect location or other sensitive data can spook potential participants. Privacy portals that give users control over exactly how their data is shared can help put them at ease.
Anonymizing data is another important feature, but Kanhere says this protection is often trickier to implement than developers assume. The most popular method, called spatial cloaking, links a user to a general region but mixes his or her data with that of other users in the area. However, participants in areas with few users can still be identified. Kanhere is working on a technique called collaborative path hiding, which allows even a handful of users to swap data so that it is impossible to tell who generated it.
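The spatial-cloaking idea can be sketched as a simple k-anonymity check over a location grid. The cell size, threshold, and data layout here are illustrative assumptions, not details of any deployed system:

```python
from collections import defaultdict

def cloak(reports, cell_size=0.01, k=5):
    """Simplified spatial cloaking.

    Replaces each exact (lat, lon) with a coarse grid cell and releases
    a cell only when at least k users reported from it, so no single
    report can be tied to one person. `reports` maps user_id -> (lat, lon).
    """
    cells = defaultdict(list)
    for user, (lat, lon) in reports.items():
        cell = (int(lat // cell_size), int(lon // cell_size))
        cells[cell].append(user)
    # Keep only cells that satisfy k-anonymity; sparse cells are suppressed.
    return {cell: users for cell, users in cells.items() if len(users) >= k}

reports = {f"u{i}": (1.2905 + i * 0.0001, 103.85) for i in range(5)}
reports["lonely"] = (50.0, 8.0)  # a lone user in a sparse area
released = cloak(reports)  # the lone user's cell is suppressed
```

The suppression step is exactly where the technique breaks down in sparse areas: a lone user’s data is either withheld or identifiable, which is the gap Kanhere’s collaborative path hiding aims to close.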
After a project’s launch, says Kurt Luther, director of the Crowd Lab at Virginia Tech, in Blacksburg, collaborators must maintain open lines of communication and remember that contributors are real human beings. Developers, he says, tend to forget.
“Many project owners are software developers who think of the crowdsourced human intelligence in their systems as just another resource, like disk space or bandwidth,” Luther says. But if users are dehumanized and not treated well, word spreads fast through online forums.
For that reason, he adds, a good way to start any mobile crowdsourcing project is to actually participate in one. Adopting the crowd’s perspective also helps developers avoid what both Luther and Rozsenich say is one of the most common mistakes—designers failing to clearly explain how to complete tasks. “Project owners love to blame low-quality results on lazy or incompetent workers, but the problem is usually poor task design,” says Luther.