An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information
Metadata
Title: An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information
Author: Stepin, Ilia; Alonso Moral, Jose María; Catalá Bolos, Alejandro; Pereira Fariña, Martín
Affiliation: Universidade de Santiago de Compostela. Centro de Investigación en Tecnoloxías da Información; Universidade de Santiago de Compostela. Departamento de Electrónica e Computación; Universidade de Santiago de Compostela. Departamento de Filosofía e Antropoloxía
Subject: Explainable artificial intelligence; Interpretable fuzzy modeling; Fuzzy rule-based classification; Counterfactual explanation; Human evaluation
Date of Issue: 2022
Publisher: Elsevier
Citation: Information Sciences 618 (2022). https://doi.org/10.1016/j.ins.2022.10.098
Abstract: The explanatory capacity of interpretable fuzzy rule-based classifiers is usually limited to offering explanations for the predicted class only. A lack of potentially useful explanations for non-predicted alternatives can be overcome by designing methods for so-called counterfactual reasoning. Nevertheless, state-of-the-art methods for counterfactual explanation generation require special attention to human evaluation aspects, as the final decision upon the classification under consideration is left to the end user. In this paper, we first introduce novel methods for qualitative and quantitative counterfactual explanation generation. We then carry out a comparative analysis of qualitative explanation generation methods operating on (combinations of) linguistic terms as well as a quantitative method suggesting precise changes in feature values. Next, we propose a new metric for assessing the perceived complexity of the generated explanations. Further, we design and carry out two human evaluation experiments to assess the explanatory power of the aforementioned methods. As a major result, we show that the estimated explanation complexity correlates well with the informativeness, relevance, and readability of explanations perceived by the targeted study participants. This fact opens the door to using the new automatic complexity metric for guiding multi-objective evolutionary explainable fuzzy modeling in the near future.
Publisher version: https://doi.org/10.1016/j.ins.2022.10.098
URI: http://hdl.handle.net/10347/29483
DOI: 10.1016/j.ins.2022.10.098
ISSN: 0020-0255
Rights: ©2022 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/) Attribution-NonCommercial-NoDerivatives 4.0 International
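Note: the abstract contrasts qualitative counterfactual explanations, built from (combinations of) linguistic terms, with a quantitative method that suggests precise changes in feature values. The sketch below illustrates the quantitative idea only: it brute-forces the smallest feature perturbation that flips the predicted class of a toy crisp rule base. The rule base, feature names, step sizes, and cost measure are invented for illustration and do not reproduce the authors' fuzzy method or their complexity metric.

# Hypothetical sketch of a quantitative counterfactual search on a toy crisp
# rule base. Not the method from the paper: rules, features, and the cost
# measure are invented for illustration only.
from itertools import product

def classify(instance):
    """Toy crisp rule base over two features (the paper uses fuzzy rules and
    linguistic terms; crisp thresholds keep the sketch self-contained)."""
    if instance["glucose"] > 140 and instance["bmi"] > 30:
        return "high risk"
    return "low risk"

def counterfactual(instance, target_class, step=5.0, max_steps=20):
    """Brute-force the feature perturbation with the fewest grid steps that
    makes classify() output target_class."""
    best = None
    for d_glucose, d_bmi in product(range(-max_steps, max_steps + 1), repeat=2):
        candidate = {
            "glucose": instance["glucose"] + d_glucose * step,
            "bmi": instance["bmi"] + d_bmi * step,
        }
        if classify(candidate) == target_class:
            cost = abs(d_glucose) + abs(d_bmi)  # crude proxy for explanation complexity
            if best is None or cost < best[0]:
                best = (cost, candidate)
    return best

if __name__ == "__main__":
    factual = {"glucose": 150.0, "bmi": 33.0}
    print("Predicted class:", classify(factual))  # -> high risk
    cost, cf = counterfactual(factual, "low risk")
    print("Counterfactual:", cf, "(cost:", cost, ")")
    # Reads as: "had BMI been 28 instead of 33, the predicted class
    # would have been 'low risk'".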
Collections
- CiTIUS-Artigos [177]
- EC-Artigos [146]
- FAS-Artigos [192]