- Title
Eliciting Human Judgment for Prediction Algorithms.
- Authors
Ibrahim, Rouba; Kim, Song-Hee; Tong, Jordan
- Abstract
Even when human point forecasts are less accurate than data-based algorithm predictions, they can still help boost performance by being used as algorithm inputs. Assuming one uses human judgment indirectly in this manner, we propose changing the elicitation question from the traditional direct forecast (DF) to what we call the private information adjustment (PIA): how much the human thinks the algorithm should adjust its forecast to account for information the human has that is unused by the algorithm. Using stylized models with and without random error, we theoretically prove that human random error makes eliciting the PIA lead to more accurate predictions than eliciting the DF. However, this DF-PIA gap does not exist for perfectly consistent forecasters. The DF-PIA gap is increasing in the random error that people make while incorporating public information (data that the algorithm uses) but is decreasing in the random error that people make while incorporating private information (data that only the human can use). In controlled experiments with students and Amazon Mechanical Turk workers, we find support for these hypotheses. This paper was accepted by Charles Corbett, operations management.
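The abstract's stylized model can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's actual model: assume the true outcome is the sum of a public signal (which the algorithm observes perfectly) and a private signal (which only the human observes), and assume the human incurs independent random error when processing each signal. All variable names and error scales (`sigma_pub`, `sigma_priv`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma_pub, sigma_priv = 0.8, 0.3  # hypothetical human error scales

x_pub = rng.normal(size=n)   # public info: data the algorithm uses
x_priv = rng.normal(size=n)  # private info: data only the human can use
y = x_pub + x_priv           # true outcome under this stylized model

algo = x_pub                 # algorithm forecast from public data alone

# DF: the human re-processes BOTH signals, adding random error to each
df = (x_pub + sigma_pub * rng.normal(size=n)) \
   + (x_priv + sigma_priv * rng.normal(size=n))

# PIA: the human reports only the private-information adjustment,
# which is added to the (error-free) algorithm forecast
pia = algo + (x_priv + sigma_priv * rng.normal(size=n))

mse_df = np.mean((df - y) ** 2)    # ~ sigma_pub**2 + sigma_priv**2
mse_pia = np.mean((pia - y) ** 2)  # ~ sigma_priv**2
```

Under these assumptions, the DF-PIA gap (`mse_df - mse_pia`) is roughly `sigma_pub**2`: it grows with the random error in processing public information, and it vanishes for a perfectly consistent forecaster (`sigma_pub = sigma_priv = 0`), matching the hypotheses stated in the abstract.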
- Subjects
HUMAN error; FORECASTING; ALGORITHMS; OPERATIONS management; HUMAN beings
- Publication
Management Science, 2021, Vol. 67, Issue 4, p. 2314
- ISSN
1526-5501
- Publication type
Academic Journal
- DOI
10.1287/mnsc.2020.3856