
Making crowdsourcing more reliable

Published: 12 October 2012

Researchers from the University of Southampton are designing incentives for collection and verification of information to make crowdsourcing more reliable.

Crowdsourcing is the process of outsourcing tasks to the public rather than to employees or contractors. In recent years, crowdsourcing has provided an unprecedented ability to accomplish tasks that require the involvement of a large number of people, often spread across wide geographies, areas of expertise, and interests.

The world's largest encyclopaedia, Wikipedia, is an example of a task that can only be achieved through crowd participation. Crowdsourcing is not limited to volunteer efforts, however. For example, Amazon Mechanical Turk (AMT) and CrowdFlower are ‘labour on demand’ markets that allow people to get paid for micro-tasks as simple as labelling an image or translating a piece of text.

Recently, crowdsourcing has demonstrated its effectiveness in large-scale information-gathering tasks across very wide geographies. For example, the Ushahidi platform allowed volunteers to perform rapid crisis mapping in real time in the aftermath of disasters such as the Haiti earthquake.

One of the main obstacles in crowdsourced information gathering is the reliability of collected reports. Now Dr Victor Naroditskiy and Professor Nick Jennings from the University of Southampton, together with Masdar Institute’s Professor Iyad Rahwan and Dr Manuel Cebrian, Research Scientist at the University of California, San Diego (UCSD), have developed novel methods for solving this problem through crowdsourcing. The work, which is published in the academic journal PLOS ONE, shows how to crowdsource not just the gathering, but also the verification of information.


Dr Victor Naroditskiy of the Agents, Interaction and Complexity group at the University of Southampton, and lead author of the paper, says: “The success of an information gathering task relies on the ability to identify trustworthy information reports, while false reports are bound to appear either due to honest mistakes or sabotage attempts. This information verification problem is a difficult task, which, just like the information-gathering task, requires the involvement of a large number of people.”

Sites like Wikipedia have existing mechanisms for quality assurance and information verification. However, those mechanisms rely partly on reputation, as more experienced editors can check whether an article conforms to the Wikipedia objectivity criteria, has sufficient citations, etc. In addition, Wikipedia has policies for resolving conflicts between editors in cases of disagreement.

However, in time-critical tasks there is no established hierarchy of participants, and little basis for judging the credibility of volunteers who are recruited on the fly. In this kind of scenario, special incentives are needed to carry out verification. The research presented in the PLOS ONE paper provides such incentives.

Professor Iyad Rahwan of Masdar Institute in Abu Dhabi and a co-author of the paper, explains: “We showed how to combine incentives to recruit participants to verify information. When a participant submits a report, the participant's recruiter becomes responsible for verifying its correctness. Compensations to the recruiter and to the reporting participant for submitting the correct report, as well as penalties for incorrect reports, ensure that the recruiter will perform verification.”
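A minimal sketch of the payoff structure this describes is below, assuming illustrative reward, penalty and verification-cost values; the function name and all amounts are assumptions for illustration, not figures from the paper.

```python
# Illustrative sketch: a recruiter's payoff for one report submitted by a person they
# recruited. Monetary values are hypothetical placeholders, not taken from the paper.

def recruiter_payoff(verified: bool, report_correct: bool,
                     reward: float = 50.0,            # paid to the recruiter for a correct report
                     penalty: float = 75.0,           # charged if a false report is passed on unchecked
                     verification_cost: float = 20.0  # effort cost of checking a report
                     ) -> float:
    """Payoff to a recruiter for a single report from someone they recruited."""
    if verified:
        # A verifying recruiter pays the checking cost but filters out false reports,
        # so they are only ever paid for correct ones and never penalised.
        return -verification_cost + (reward if report_correct else 0.0)
    # A non-verifying recruiter saves the checking cost but bears the penalty
    # whenever the report turns out to be false.
    return reward if report_correct else -penalty
```

With a large enough penalty relative to the verification cost and the expected rate of false reports, verifying becomes the recruiter's better option, which is the effect the incentive scheme is designed to achieve.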

Incentives to recruit participants had previously been proposed by Dr Manuel Cebrian of UCSD, a co-author of the paper, and used to win the DARPA Red Balloon Challenge, in which teams had to locate 10 weather balloons positioned at random locations throughout the United States. In that scheme, the person who found a balloon received a pre-determined compensation of, for example, $1,000; his or her recruiter received $500, and the recruiter of the recruiter got $250. Dr Manuel Cebrian says: “The results on incentives to encourage verification provide theoretical justification for the incentives used to win the Red Balloon Challenge.”
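The recursive reward split described above can be sketched in a few lines of code; the $1,000 finder prize and the halving at each step are from the example in the text, while the participant names are hypothetical.

```python
# Sketch of the Red Balloon-style recursive reward split: the finder receives the full
# prize, and each recruiter up the chain receives half of the amount paid to the person
# they recruited.

def balloon_rewards(recruitment_chain, finder_prize=1000.0):
    """Map each participant to a reward; the chain runs from the finder up to the first recruiter."""
    rewards = {}
    amount = finder_prize
    for person in recruitment_chain:
        rewards[person] = amount
        amount /= 2  # halve the payment at each step up the recruitment chain
    return rewards

# Hypothetical chain: Alice found a balloon, Bob recruited Alice, Carol recruited Bob.
print(balloon_rewards(["Alice", "Bob", "Carol"]))
# {'Alice': 1000.0, 'Bob': 500.0, 'Carol': 250.0}
```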

Notes for editors

In the DARPA Red Balloon Challenge, a team from MIT was able to locate 10 weather balloons positioned at random locations throughout the United States in under nine hours. In the follow-up Tag Challenge, sponsored by the US State Department, a team from the University of Southampton, Masdar Institute in Abu Dhabi and the University of California, San Diego (UCSD), managed to locate three individuals in New York, Washington D.C. and Bratislava within 12 hours using only a single photograph.

  • Theoretical findings in the paper can be adapted into practical methods for encouraging verification in scenarios like the Red Balloon Challenge. For instance, the following implementation would incentivise referrers to check reports submitted by the people they recruited. As soon as a report is submitted, the referrer of the person submitting it is asked to verify it. If the referrer does not perform verification, the compensation he or she receives for correct reports and other referrals is decreased by an amount proportional to the cost of verification (e.g., £20 for a verification that takes one hour), regardless of the correctness of the report. If it further turns out that a report is false, and on average two-thirds of reports are known to be false (as was the case in the Red Balloon Challenge), the results suggest that a penalty of (1 + (1/3)/(2/3)) × £20 = £30 should be imposed; a small sketch of this calculation follows below.
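The penalty rule in the note above can be written out directly; the £20 cost and the two-thirds false-report rate are the figures given in the note, and the function name is a hypothetical label for illustration.

```python
# Sketch of the penalty rule from the note above: a false report is penalised by
# (1 + p_true / p_false) times the verification cost, where p_false is the expected
# fraction of reports that are false.

def verification_penalty(verification_cost, false_report_rate):
    """Penalty to impose for a false report, given the verification cost and
    the expected fraction of false reports."""
    p_false = false_report_rate
    p_true = 1.0 - p_false
    return (1.0 + p_true / p_false) * verification_cost

# Figures from the note: a £20 verification cost and two-thirds of reports false.
print(verification_penalty(20.0, 2.0 / 3.0))  # -> 30.0
```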