

Ensemble approaches for Graph Counterfactual Explanations

Stilo G. (Supervision); 2022

Abstract

In recent years, Graph Neural Networks have achieved outstanding performance in tasks such as community detection, molecule classification, and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the model's decisions is essential. Explainable AI, or Explainable Machine Learning, refers to artificial intelligence whose decisions or predictions can be understood by humans. A special case is counterfactual examples, which suggest the steps the input must take for the system to change its decision. Historically, ensemble learning and explainability have been jointly exploited to explain the decisions of ensemble models. Conversely, in this work we focus on ensemble mechanisms among the explainers themselves to improve the quality of explanations. We thus explore the possible ensemble mechanisms that can be adopted in several explainability scenarios. Furthermore, we introduce and discuss a new explainability problem in which a single coherent counterfactual explanation must be provided for a set of input instances and their explanations.
Counterfactual Explanations; Ensemble; Explainable AI; Machine Learning
Prado-Romero, M. A.; Prenkaj, B.; Stilo, Giovanni; Celi, A.; Estevanell-Valladares, E.; Valdes-Perez, D. A.. (2022). Ensemble approaches for Graph Counterfactual Explanations. In CEUR Workshop Proceedings (pp. 88- 97). https://ceur-ws.org/Vol-3277/paper6.pdf.
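The abstract describes ensembling several counterfactual explainers and aggregating their outputs into a single higher-quality explanation. A minimal sketch of one such aggregation strategy is shown below, assuming a simple setting where graphs are edge lists, the classifier (oracle), the explainer callables, and the edit-distance proxy are all illustrative names, not the paper's actual API: each explainer proposes a candidate counterfactual, invalid candidates (those that do not flip the oracle's prediction) are discarded, and the valid candidate closest to the input is returned.

```python
def edit_distance(g1, g2):
    # Size of the symmetric difference of the edge sets:
    # a simple proxy for graph edit distance on edge lists.
    return len(set(g1) ^ set(g2))


def ensemble_counterfactual(instance_edges, explainers, oracle):
    """Select a single counterfactual from an ensemble of explainers.

    Keeps only candidates that actually flip the oracle's prediction
    (validity) and returns the one with the smallest edit distance to
    the input (minimality); returns None if no candidate is valid.
    """
    original = oracle(instance_edges)
    candidates = [explain(instance_edges) for explain in explainers]
    valid = [c for c in candidates if oracle(c) != original]
    if not valid:
        return None
    return min(valid, key=lambda c: edit_distance(instance_edges, c))
```

This "filter by validity, then pick the minimal edit" rule is only one possible ensemble mechanism; the paper surveys several such mechanisms across different explainability scenarios.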
Files in this record:
paper6.pdf — Open Access; Type: Publisher's version; License: Creative Commons; Format: Adobe PDF; Size: 1.12 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11385/252639