- Title
Exploiting patterns to explain individual predictions.
- Authors
Jia, Yunzhe; Bailey, James; Ramamohanarao, Kotagiri; Leckie, Christopher; Ma, Xingjun
- Abstract
Users need to understand the predictions of a classifier, especially when decisions based on those predictions can have severe consequences. The explanation of a prediction reveals why a classifier makes a certain prediction, and it helps users accept or reject the prediction with greater confidence. This paper proposes an explanation method called Pattern Aided Local Explanation (PALEX) to provide instance-level explanations for any classifier. PALEX takes as input a classifier, a test instance, and a frequent pattern set summarizing the classifier's training data, and outputs the supporting evidence that the classifier considers important for the prediction on that instance. To study the local behavior of a classifier in the vicinity of the test instance, PALEX uses the frequent pattern set from the training data as an extra input to guide the generation of new synthetic samples in the vicinity of the test instance. Contrast patterns are also used in PALEX to identify locally discriminative features in the vicinity of a test instance. PALEX is particularly effective in scenarios where multiple explanations exist. In our experiments, we compare PALEX to several state-of-the-art explanation methods over a range of benchmark datasets and find that it can identify explanations with both high precision and high recall.
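The abstract's idea of pattern-guided local explanation can be illustrated with a minimal sketch. This is not the PALEX algorithm from the paper, only an assumed simplification for binary features: frequent patterns that the test instance satisfies are kept fixed while other features are perturbed to build a synthetic neighborhood, and each feature is then scored by how strongly its value co-occurs with the predicted label in that neighborhood (a rough stand-in for contrast-pattern mining). The classifier `black_box`, the pattern encoding, and all function names are hypothetical.

```python
import random

def black_box(x):
    # Hypothetical stand-in classifier: predicts 1 iff features 0 and 2 are both 1.
    return 1 if x[0] == 1 and x[2] == 1 else 0

def pattern_guided_samples(instance, patterns, n=200, seed=0):
    """Generate synthetic neighbors of `instance`. Any frequent pattern the
    instance satisfies is left intact, so samples stay close to the data
    distribution summarized by the patterns."""
    rng = random.Random(seed)
    locked = set()
    for pat in patterns:                       # pat: dict {feature_index: value}
        if all(instance[i] == v for i, v in pat.items()):
            locked.update(pat)                 # freeze features of matched patterns
    samples = []
    for _ in range(n):
        x = list(instance)
        for i in range(len(x)):
            if i not in locked and rng.random() < 0.5:
                x[i] = 1 - x[i]                # flip unlocked binary features
        samples.append(x)
    return samples

def local_feature_scores(instance, patterns, predict):
    """Score each feature by the difference in the rate of the predicted label
    between neighbors that keep the instance's value and neighbors that flip it
    (a contrast-style discriminativeness score)."""
    samples = pattern_guided_samples(instance, patterns)
    target = predict(instance)
    scores = []
    for i, v in enumerate(instance):
        keep = [s for s in samples if s[i] == v]
        flip = [s for s in samples if s[i] != v]
        p_keep = sum(predict(s) == target for s in keep) / max(len(keep), 1)
        p_flip = sum(predict(s) == target for s in flip) / max(len(flip), 1)
        scores.append(p_keep - p_flip)
    return scores
```

For the instance `[1, 0, 1, 0]` with the single frequent pattern `{0: 1}`, feature 2 receives the highest score, matching the intuition that it is the locally discriminative feature for this prediction.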
- Subjects
FORECASTING; EXPLANATION; CONFIDENCE
- Publication
Knowledge & Information Systems, 2020, Vol. 62, Issue 3, p. 927
- ISSN
0219-1377
- Publication type
Article
- DOI
10.1007/s10115-019-01368-9