- Title
Counterfactual explanations as interventions in latent space.
- Authors
Crupi, Riccardo; Castelnovo, Alessandro; Regoli, Daniele; San Miguel Gonzalez, Beatriz
- Abstract
Explainable Artificial Intelligence (XAI) is a set of techniques that enables understanding of both technical and non-technical aspects of Artificial Intelligence (AI) systems. XAI is crucial to help satisfy the growing demand for trustworthy Artificial Intelligence, characterized by fundamental aspects such as respect for human autonomy, prevention of harm, transparency, and accountability. Within XAI techniques, counterfactual explanations aim to provide end users with a set of features (and their corresponding values) that need to be changed in order to achieve a desired outcome. Current approaches rarely take into account the feasibility of the actions needed to achieve the proposed explanations and, in particular, fall short of considering the causal impact of such actions. In this paper, we present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology that generates counterfactual explanations capturing by design the underlying causal relations in the data, while at the same time providing feasible recommendations to reach the proposed profile. Moreover, our methodology has the advantage that it can be set on top of existing counterfactual generator algorithms, thus minimising the complexity of imposing additional causal constraints. We demonstrate the effectiveness of our approach with a set of experiments on synthetic and real datasets (including a proprietary dataset from the financial domain).
- Subjects
ARTIFICIAL intelligence; MACHINE learning; COUNTERFACTUALS (Logic); CAUSATION (Philosophy); TRUST
- Publication
Data Mining & Knowledge Discovery, 2024, Vol. 38, Issue 5, p. 2733
- ISSN
1384-5810
- Publication type
Article
- DOI
10.1007/s10618-022-00889-2