- Title
Hellinger distance decision trees for PU learning in imbalanced data sets.
- Authors
Ortega Vázquez, Carlos; vanden Broucke, Seppe; De Weerdt, Jochen
- Abstract
Learning from positive and unlabeled data, or PU learning, is the setting in which a binary classifier can only train on positive and unlabeled instances, the latter containing both positive and negative instances. Many PU applications, e.g., fraud detection, are also characterized by class imbalance, which creates a challenging setting. Not only are there fewer minority-class examples than in the case where all labels are known, but also only a small fraction of the unlabeled observations would actually be positive. Despite the relevance of the topic, only a few studies have considered a class-imbalance setting in PU learning. In this paper, we propose a novel technique that can directly handle imbalanced PU data, named the PU Hellinger Decision Tree (PU-HDT). Our technique exploits the class prior to estimate the counts of positives and negatives in every node of the tree. Moreover, the Hellinger distance is used instead of more conventional splitting criteria because it has been shown to be insensitive to class imbalance. This simple yet effective adaptation allows PU-HDT to perform well on highly imbalanced PU data sets. We also introduce the PU Stratified Hellinger Random Forest (PU-SHRF), which uses PU-HDT as its base learner and integrates stratified bootstrap sampling. Our empirical analysis shows that PU-SHRF substantially outperforms state-of-the-art PU learning methods for imbalanced data sets in most experimental settings.
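The two ingredients named in the abstract can be sketched in a few lines. The snippet below is an illustrative approximation, not the paper's exact estimator: `pu_node_counts` assumes labeled instances are positive and that a fraction `pi_u` (a hypothetical class prior within the unlabeled set) of the unlabeled instances in a node is positive, and `hellinger_split_value` computes the standard two-branch Hellinger distance between the per-class distributions induced by a binary split (as in Hellinger distance decision trees).

```python
import math

def pu_node_counts(n_labeled_pos, n_unlabeled, pi_u):
    """Estimate positive/negative counts in a tree node from PU data.

    Hypothetical sketch: labeled instances are positive by definition;
    among the unlabeled instances, a fraction pi_u (assumed class prior
    within the unlabeled set) is treated as positive.
    """
    est_pos = n_labeled_pos + pi_u * n_unlabeled
    est_neg = (1.0 - pi_u) * n_unlabeled
    return est_pos, est_neg

def hellinger_split_value(left, right):
    """Hellinger distance between the positive- and negative-class
    distributions over the two branches of a binary split.

    left, right: (pos_count, neg_count) tuples, e.g. the (estimated)
    counts returned by pu_node_counts for each child node.
    """
    tot_pos = left[0] + right[0]
    tot_neg = left[1] + right[1]
    return math.sqrt(
        (math.sqrt(left[0] / tot_pos) - math.sqrt(left[1] / tot_neg)) ** 2
        + (math.sqrt(right[0] / tot_pos) - math.sqrt(right[1] / tot_neg)) ** 2
    )
```

A perfectly separating split, e.g. `hellinger_split_value((10, 0), (0, 10))`, attains the maximum value of sqrt(2), while an uninformative split such as `((5, 5), (5, 5))` scores 0; because the distance compares class-conditional proportions rather than raw counts, its value is unchanged if one class is scaled up, which is the imbalance-insensitivity property the abstract refers to.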
- Subjects
DECISION trees; RANDOM forest algorithms; FRAUD investigation; SUPERVISED learning
- Publication
Machine Learning, 2024, Vol. 113, Issue 7, p. 4547
- ISSN
0885-6125
- Publication type
Article
- DOI
10.1007/s10994-023-06323-y