Supervisors Transmit Tacit Production Knowledge

Ritwika Sen is a Job Market Candidate in Managerial Economics & Strategy at Northwestern University. Her research focuses on development and organizational economics.

Click here for Ritwika’s job market paper Elevator Pitch.

Organizations routinely deploy frontline supervisors to oversee worker performance. While supervisors do many different things, economists tend to focus on their role in gathering information on worker performance in order to provide incentives (Alchian and Demsetz, 1972; Mookherjee, 2013). However, supervisors also process information on how workers perform, diagnose weaknesses, and provide targeted feedback. In other words, supervisors coach workers on the job and transmit tacit production knowledge to improve worker performance. Understanding how supervisors add value matters, because the two roles have different implications for how organizations should select, nurture, and plan career paths for supervisors.

I design and run a field experiment to detect and quantify the role of supervisors as coaches. I partner with a leading research organization in Uganda to study how frontline supervision affects the speed and quality of output produced by its workers (i.e., survey enumerators), whose job is to conduct household interviews. I find that supervision leads to persistent changes in worker performance, even on days, production tasks, and performance metrics where workers are not directly supervised. Moreover, the effects are systematically targeted to characteristics that predict workers' initial weaknesses. These findings are consistent with the notion that supervisors coach workers.

Supervising the Speed and Quality of Data Collection

The field experiment was embedded within a 30-day household agricultural survey run by my partner organization for the purposes of a separate study. The organization's primary aim was to gather high-quality data on household crop production at high speed, so as to minimize survey costs. I study the effects of field supervision on the performance of 68 workers, each of whom conducted around 50 household interviews.

Figure 1 portrays a typical household farm plot from the study region, intercropped with cassava, maize, and beans. The picture underscores the key skills that workers needed to administer household crop surveys. On the one hand, they had to interview farmers who had difficulty recalling what, and how much, they harvested from each plot in the preceding season. To obtain reliable information, workers needed to be attentive and to ask probing questions when necessary. On the other hand, they needed to administer a lengthy questionnaire swiftly, as farmers typically had competing demands on their time. The data production task was thus multi-dimensional, involving a trade-off between production speed and output quality.

Figure 1

Measuring Survey Enumerator Performance

Measuring the speed at which workers conduct interviews is relatively simple. Measuring data quality is much harder. With some knowledge of the questionnaire design, however, it is possible to predict where workers are most likely to cut corners. An important data-quality measure I study is the number of plot-and-crop observations ("plot-crops") that workers reported in each interview. This measure requires a little explanation:

To ensure data accuracy, the questionnaire required workers to record crop harvests at the plot level using survey rosters. Each plot-crop reported (such as 'beans were grown on plot A') therefore triggered approximately four minutes' worth of interview questions. Underreporting the plot-crops grown by a household was a well-understood trick for expediting interviews. For instance, a worker interviewing the household that cultivates the plot in Figure 1 could have saved time by not reporting the beans, and hence avoiding all questions about their harvests, sales, and so on.
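To make the incentive concrete, here is a minimal sketch of the trade-off built into the questionnaire. The roughly four minutes of roster questions per reported plot-crop comes from the text above; the length of the non-roster portion of the interview is an illustrative placeholder, not a figure from the paper.

```python
# Minimal sketch of the speed-quality trade-off built into the questionnaire.
# The ~4 minutes of roster questions per reported plot-crop comes from the text;
# the 45-minute figure for the rest of the interview is purely illustrative.

MINUTES_PER_PLOT_CROP = 4   # roster questions triggered by each reported plot-crop
OTHER_MODULES_MINUTES = 45  # hypothetical length of the non-roster modules

def interview_minutes(plot_crops_reported: int) -> int:
    """Approximate interview length as a function of plot-crops reported."""
    return OTHER_MODULES_MINUTES + MINUTES_PER_PLOT_CROP * plot_crops_reported

# The Figure 1 household grows cassava, maize, and beans on one plot.
full_report = interview_minutes(3)   # all three plot-crops recorded
underreport = interview_minutes(2)   # beans omitted

print(f"Full report:  {full_report} minutes")
print(f"Underreport:  {underreport} minutes "
      f"(saves {full_report - underreport} minutes, but loses one plot-crop observation)")
```

Every omitted plot-crop shortens the interview but also lowers the quality metric by one observation, which is what makes the measure informative about corner-cutting.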

The fieldwork protocol ensured that households were randomly assigned to workers. Consistently high (or low) interview times or plot-crop reports are therefore indicative of workers’ performance, rather than the characteristics of the households they interviewed.

The Role of Field Supervision

An important feature of fieldwork is that the quality of the data workers collect is not directly observed by the survey firm, whereas the quantity (or speed) of interviews is easier to monitor remotely. Supervisors fill this informational gap by conducting unannounced "spot checks" in the field, during which they silently observe a worker conducting an interview, evaluate their performance, and often advise them on areas for improvement afterwards. This is a useful feature for my study, as workers receive individualized attention from their direct (and more experienced) supervisors during checks. [1] Supervisors' assessments of worker performance inform dismissal, rehiring, and promotion decisions by senior management, which are the primary tools for incentivizing workers (who are paid fixed wages).

I designed an experiment with three key components to estimate the effects of supervision on worker performance. In this post, I focus on the first component, which experimentally varied the assignment of workers to spot checks each day (sampling with replacement). This procedure generated random variation in which workers were assigned to supervision on a given day and, independently, in which workers had been assigned to high-intensity supervision in the past. I leverage this feature to estimate the effects of cumulative supervision exposure in the first week of the project on worker performance in weeks two to five. By design, whether a worker is supervised on a particular day in weeks two to five is independent of their supervision exposure in the first week, which enables me to distinguish the persistent from the contemporaneous effects of supervision.
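The assignment procedure can be summarized in a short simulation. This is a minimal sketch under stated assumptions: the 68 workers and 30 survey days come from the text, while the number of spot-check slots per day is a hypothetical placeholder.

```python
# Minimal sketch of the spot-check randomization. The 68 workers and 30 survey
# days come from the text; the number of spot-check slots per day is an
# assumed placeholder, not a figure from the paper.

import numpy as np

rng = np.random.default_rng(seed=0)

N_WORKERS = 68
N_DAYS = 30
SLOTS_PER_DAY = 10  # hypothetical number of spot checks conducted each day

# Each day, draw the workers to be spot-checked, sampling *with replacement*,
# so today's assignment is independent of any worker's past assignments.
checks = np.zeros((N_DAYS, N_WORKERS), dtype=int)
for day in range(N_DAYS):
    for worker in rng.choice(N_WORKERS, size=SLOTS_PER_DAY, replace=True):
        checks[day, worker] += 1

# Treatment of interest: cumulative supervision exposure in week 1 (days 1-7)...
week1_exposure = checks[:7].sum(axis=0)

# ...which is independent, by design, of whether a worker happens to be
# spot-checked on any given day in weeks 2-5 (days 8-30).
weeks2to5_checks = checks[7:]

print("Week-1 exposure of the first 10 workers:", week1_exposure[:10])
```

Because the daily draws are independent, week-1 exposure is uncorrelated with later spot-check assignments, which is what allows persistent effects to be separated from contemporaneous ones.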

Supervisors Provide Targeted On-the-Job Coaching

If supervisors provide targeted on-the-job coaching, we should expect heterogeneous effects of supervision, targeted to individual performance weaknesses. Given my focus on tacit knowledge, I consider heterogeneity by workers' prior experience conducting a similar survey. Those without prior "know-how" were predictably slower in administering interviews during the first week, even after controlling for variation in output quality (with a 7-minute difference in time per plot-crop reported). Consequently, I refer to workers with no task experience as "low-speed" workers, and to experienced workers as "high-speed" workers, to signify their relative emphasis on speed at the outset of the survey.

I find that, following a one-standard-deviation increase in week 1 supervision intensity (or one additional spot check):

  • Low-speed workers tend to improve on speed. They decrease their interview completion times by 6.8% (7 minutes, p-value: 0.00) through weeks 2-5. The time-savings are more pronounced in weeks 2-3, although they persist through weeks 4-5 of the survey (Figure 2a).
  • Low-speed workers do not record lower quality data. There are no statistically significant changes in the number of plot-crops they recover (Figure 2b).
  • High-speed workers tend to improve on quality. The number of plot-crops they record increases by 5.3% per interview (p-value: 0.013) through weeks 2-5. The average high-speed worker recovers 31 additional plot-crops, and 4 minutes’ worth of survey questions corresponding to each, over the 40 interviews they conduct in weeks 2-5.
  • High-speed workers tend to slow down. They increase their interview completion times by 3.9% (3.6 minutes, p-value: 0.03) through weeks 2-5. This effect is largely driven by a slowdown in weeks 4-5 (Figure 2a).
  • On average, we observe a significant decrease in interview times in weeks 2-3 of the survey, but not in weeks 4-5 (when high-speed workers slow down). The number of plot-crops recorded by the average worker increases by 3% through weeks 2-5 (p-value: 0.08).
Figure 2.a

Figure 2.b

Notably, the above effects are present even on days when workers are not directly supervised. This is consistent with the notion that supervisors add value by transmitting production knowledge, which leads to persistent changes in worker performance. (The paper uses two additional components of the experimental design to distinguish between alternative mechanisms, such as incentives.) To construct a bound on just how much value supervisors add, I conduct a simple back-of-the-envelope calculation. I estimate that for every supervisor-hour invested in coaching workers during the first week, the firm obtains an increase in value worth 2.3 worker-hours, whereas each supervisor-hour costs only 15% more than a worker-hour. The gains could be even larger if supervisor time were allocated optimally across workers.
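The comparison behind that bound can be checked with two lines of arithmetic, using only the two figures quoted above (2.3 worker-hours of value per supervisor-hour, and a 15% wage premium for supervisors); a sketch:

```python
# Back-of-the-envelope check of the return to week-1 coaching, using only the
# two figures quoted above; no other numbers from the paper are assumed.

VALUE_PER_SUPERVISOR_HOUR = 2.3   # worker-hours of value generated per hour of coaching
COST_PER_SUPERVISOR_HOUR = 1.15   # a supervisor-hour costs 15% more than a worker-hour

benefit_cost_ratio = VALUE_PER_SUPERVISOR_HOUR / COST_PER_SUPERVISOR_HOUR
net_gain = VALUE_PER_SUPERVISOR_HOUR - COST_PER_SUPERVISOR_HOUR

print(f"Benefit/cost ratio: {benefit_cost_ratio:.1f}")                      # about 2.0
print(f"Net gain: {net_gain:.2f} worker-hour equivalents per supervisor-hour")
```

In other words, each supervisor-hour of coaching returns roughly twice what it costs, before any gains from targeting supervisor time more efficiently.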

Key Takeaways

I uncover three important features of the causal effects of supervision on worker performance: the effects are persistent, targeted to individual worker attributes, and economically meaningful. I conclude with some suggestions for how organizations should select and nurture supervisors, given that they coach workers and transmit tacit knowledge.

  1. Effective supervisors require a deep understanding of the industry, the specific firm, and the tasks they oversee. Tacit knowledge is a cornerstone for delivering tailored coaching.
  2. Organizations should design career paths that nurture supervisors’ coaching skills. It may be a mistake to reward supervisors who foster talent with promotions to strategy roles that do not harness these skills.
  3. Finally, my results also suggest that early on-the-job coaching for new employees may yield substantial performance gains.

[1] It is worth noting that the supervisors are “home-grown” – the average supervisor has over three years (40 months) of prior experience working as a survey enumerator at the parent organization.

Works Cited:

Alchian, A. A., & Demsetz, H. (1972). Production, Information Costs, and Economic Organization. The American Economic Review, 62(5), 777–795. http://www.jstor.org/stable/1815199
Mookherjee, D. (2013). Incentives in Hierarchies. In R. Gibbons & J. Roberts (Eds.), The Handbook of Organizational Economics (pp. 764–798). Princeton, NJ: Princeton University Press. https://doi.org/10.1515/9781400845354-021