- Title
Resolving power: a general approach to compare the distinguishing ability of threshold-free evaluation metrics.
- Authors
Beam, Colin
- Abstract
Selecting an evaluation metric is fundamental to model development, but uncertainty remains about when certain metrics are preferable and why. This paper introduces the concept of resolving power to describe the ability of an evaluation metric to distinguish between binary classifiers of similar quality. This ability depends on two attributes: (1) the metric's response to improvements in classifier quality (its signal), and (2) the metric's sampling variability (its noise). The paper defines resolving power generically as a metric's sampling uncertainty scaled by its signal. A simulation study compares the area under the receiver operating characteristic curve (AUROC) and the area under the precision–recall curve (AUPRC) in a variety of contexts. It finds that the AUROC generally has greater resolving power, but that the AUPRC is better when searching among high-quality classifiers applied to low-prevalence outcomes. The paper also proposes an empirical method to estimate resolving power that can be applied to any dataset and any initial classification model. The AUROC is useful for developing the resolving power concept, but it has been criticized for being misleading. Newer metrics developed to address its interpretative issues can be easily incorporated into the resolving power framework. The best metrics for model search will be both interpretable and high in resolving power. Sometimes these objectives will conflict, and how to address this tension remains an open question.
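The paper's precise definition and simulation design are given in the article itself; the following is only a minimal Python sketch of the signal-to-noise idea behind resolving power, under assumed choices (a binormal score model, plain Monte Carlo resampling in place of the paper's procedure, and illustrative names such as simulate_scores and resolving_power_like that do not come from the paper):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

def simulate_scores(n, prevalence, separation):
    """Binormal toy model: labels at a given prevalence; scores are N(0, 1)
    for negatives and N(separation, 1) for positives."""
    y = rng.binomial(1, prevalence, size=n)
    s = rng.normal(loc=separation * y, scale=1.0)
    return y, s

def sampling_dist(metric, n, prevalence, separation, reps=500):
    """Sampling distribution of a metric over repeated simulated test sets."""
    vals = []
    for _ in range(reps):
        y, s = simulate_scores(n, prevalence, separation)
        if 0 < y.sum() < n:  # skip degenerate one-class draws
            vals.append(metric(y, s))
    return np.asarray(vals)

def resolving_power_like(metric, n, prevalence, sep_lo=1.0, sep_hi=1.2):
    """Signal-to-noise score: mean metric improvement between a weaker and a
    slightly stronger classifier, divided by the metric's sampling standard
    deviation at the weaker operating point."""
    weak = sampling_dist(metric, n, prevalence, sep_lo)
    strong = sampling_dist(metric, n, prevalence, sep_hi)
    return (strong.mean() - weak.mean()) / weak.std(ddof=1)

# Compare the two threshold-free metrics at a low-prevalence setting.
for name, metric in [("AUROC", roc_auc_score),
                     ("AUPRC", average_precision_score)]:
    print(f"{name}: {resolving_power_like(metric, n=2000, prevalence=0.05):.2f}")
```

In this kind of sketch, a larger value indicates a metric that more reliably separates classifiers of similar quality, which is the comparison the paper carries out across prevalence levels and classifier strengths.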
- Subjects
Open-ended questions; Empirical research; Classification
- Publication
Machine Learning, 2025, Vol 114, Issue 1, p1
- ISSN
0885-6125
- Publication type
Academic Journal
- DOI
10.1007/s10994-024-06723-8