
From pastoralists to Mechanical Turks: Using the crowd to validate crowdsourced data

By Nathan Jensen and Ronan Le Bras | June 1, 2015


Nathan Jensen is a Postdoctoral Associate at Cornell’s Dyson School who is working with the International Livestock Research Institute (ILRI). Ronan Le Bras is a doctoral candidate at Cornell’s Department of Computer Science.

How do you collect expert information on forage conditions in remote arid and semi-arid regions? A group of researchers from Cornell University and the International Livestock Research Institute has developed a mobile application for describing forage conditions and is recruiting Kenyan pastoralists to submit surveys of local vegetation as they move through their daily routine of livestock herding. The survey data will supplement remotely sensed data in near real-time maps of forage availability, a key resource for the local transhumant population, which depends mostly on cattle for income.

Photo by Nathan Jensen in Wajir, Kenya.

Although pastoralists are likely to understand forage conditions well after a lifetime of judging them, their submissions may be of poor quality for a variety of (unobserved) reasons. The Crowd Sourcing Rangeland Conditions project is running a set of experiments to test whether participant effort or understanding (of the technology and the survey application) is a key factor limiting data quality. The findings from these experiments will be important to other, similar citizen science projects and speak more broadly to issues associated with classic principal-agent scenarios.

To do so, we have developed two treatments and randomly assigned participants to two treatment groups and one control group. Each treatment involves repeated (every five days) and ongoing phone calls from a field supervisor; the content of that call varies by treatment. One treatment signals that we are closely monitoring submissions: participants in this group receive a report of summary statistics on their submissions from the previous day. We are testing whether this simple and extremely affordable reminder that we are monitoring their submissions increases data quality. Arguably, the signal should work by highlighting the importance of individual submissions and making those of us on the backend of the project (e.g., data users, supervisors, field technicians) more salient, which may increase participants' effort to provide high-quality submissions.
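For readers curious about the mechanics, the sketch below (in Python) shows how such an assignment and daily report could be generated. It is a minimal illustration only: the arm names, the equal allocation across arms, and the summary fields are assumptions for exposition, not our exact protocol.

    import random
    from collections import defaultdict

    # Hypothetical sketch: arm names, equal allocation, and summary fields
    # are assumptions for illustration, not the project's actual protocol.
    ARMS = ("control", "monitoring", "feedback")

    def assign_arms(participant_ids, seed=0):
        """Shuffle participants and deal them evenly across the three arms."""
        rng = random.Random(seed)  # fixed seed makes the assignment reproducible
        ids = list(participant_ids)
        rng.shuffle(ids)
        arms = defaultdict(list)
        for i, pid in enumerate(ids):
            arms[ARMS[i % len(ARMS)]].append(pid)
        return dict(arms)

    def daily_summary(submissions):
        """Summary statistics on one participant's previous-day submissions,
        suitable for reading out during the monitoring arm's phone call."""
        n_total = len(submissions)
        n_with_photo = sum(1 for s in submissions if s.get("photo"))
        return {"submissions": n_total, "with_photo": n_with_photo}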

The second treatment involves more intensive and costly feedback on the quality of the submissions, plus a short discussion and training focused on specific low-quality submissions; a sketch of how those submissions might be queued up follows below. This treatment will identify whether knowledge-type issues (e.g., inadequate training, trouble with the technology, or poor understanding of the survey) are a key factor behind participants' errors. To increase the likelihood that a participant can recall and discuss the details of specific submissions, this feedback and discussion also concerns submissions from the previous day.
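Concretely, queueing submissions for the feedback call could be as simple as filtering the previous day's records for disagreements between the participant and the crowd. The record fields below are assumed for illustration; the crowd labels come from the validation step described next.

    # Hypothetical sketch: assumes each submission record carries the
    # participant's own label and a crowd-assigned label, e.g.
    # {"id": 17, "pastoralist_label": "good", "crowd_label": "fair"}
    def feedback_queue(yesterday_submissions):
        """Select the previous day's submissions whose crowd label disagrees
        with the participant's, so the field supervisor can walk through them
        while the participant still remembers the context."""
        return [
            s for s in yesterday_submissions
            if s["crowd_label"] != s["pastoralist_label"]
        ]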

For that to be possible, every daily survey submission from the feedback group must be validated, within a short window, against the rangeland photo submitted as part of the survey. To overcome this challenge, we rely on the crowd to validate the pastoralists' submissions as soon as we receive them.1 We have 10 hours to check about 400 submissions each day. We are also validating an additional 50,000 submissions by the same process, so the pot is large. We are using Amazon's Mechanical Turk for some of the effort, but that approach may suffer from the very principal-agent issues that we are trying to study, and we prefer a diversity of participants.
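At its simplest, crowd validation can be a majority vote over independent classifications of each photo. The sketch below illustrates that idea; the label values, the minimum-vote threshold, and the output format are illustrative assumptions rather than our production rules.

    from collections import Counter

    # Illustrative sketch only: label names, vote threshold, and output
    # format are assumptions; the project's actual rules may differ.
    def validate_submission(crowd_labels, pastoralist_label, min_votes=3):
        """Compare the crowd's majority classification of a rangeland photo
        with the pastoralist's own survey response."""
        if len(crowd_labels) < min_votes:
            return None  # wait for more crowd responses
        majority, count = Counter(crowd_labels).most_common(1)[0]
        return {
            "crowd_label": majority,
            "agreement": count / len(crowd_labels),
            "matches_pastoralist": majority == pastoralist_label,
        }

    # Example: four crowd raters, three of whom agree with the pastoralist.
    print(validate_submission(["good", "good", "fair", "good"], "good"))
    # {'crowd_label': 'good', 'agreement': 0.75, 'matches_pastoralist': True}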

We encourage you to consider a venture into the field of citizen science. If you would like to contribute your efforts to our project by classifying images, please visit our site anytime between now and June 17th. If citizen science is appealing but categorizing rangeland conditions is not your cup of tea, consider visiting Zooniverse, where you can do anything from describing the shape of galaxies to monitoring nematodes for egg laying.2


This project is funded by the Academic Venture Fund from the Atkinson Center for a Sustainable Future at Cornell University, by the International Livestock Research Institute, and by the National Science Foundation.

1 While computer algorithms are getting better and better at image recognition, this is a task at which humans are naturally adept and still outperform computers.
2 The CSRC project is not affiliated with Zooniverse.

Categories: Environment, Fieldwork, Kenya


"Most of the people in the world are poor, so if we knew the economics of being poor, we would know much of the economics that really matters."
Theodore Schultz
Nobel Lecture, 1979
Receive email notifications when new posts are added to the blog.
Loading

Contact Us: econthatmatters@gmail.com or fdf25@cornell.edu or hz399@cornell.edu

Know more about our Authors!

Proudly powered by WordPress
Theme: Rebalance by WordPress.com.