Exploring Ensemble Strategies for Graph Counterfactual Explanations

Stilo G.
Supervision
2025

Abstract

Recent advancements in graph neural networks (GNNs) have significantly enhanced the performance of AI systems in tasks such as community detection, user friendship prediction, and drug discovery. However, the opaque nature of these models undermines user trust, especially in sensitive domains like health and finance. Graph Counterfactual Explanation (GCE) methods aim to mitigate this issue by providing insights into model predictions and suggesting user actions for alternative outcomes. Yet, GCEs produced by different methods often vary in quality, diversity, and alignment with the original model’s predictions. This work introduces an ensemble-based approach designed to address these inconsistencies by leveraging multiple GCE methods. Our approach comprises two main strategies: Selection, employing multi-criteria optimization to choose the optimal base explanation for each case, and Aggregation, combining multiple explanations to form a more robust overall explanation. We propose three selection strategies and six aggregation strategies. Our experimental evaluation demonstrates that these ensemble methods, particularly Ideal-Point Multi-Criteria Selection, consistently outperform individual GCE methods across diverse datasets in terms of quality, thereby significantly improving the interpretability of GNNs.
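The Ideal-Point selection strategy highlighted in the abstract can be illustrated with a minimal sketch: each base explainer's counterfactual is scored on several quality criteria, the scores are normalized across candidates, and the explanation closest to the ideal point (the best achievable value on every criterion) is chosen. The function, criterion names, and the lower-is-better convention below are illustrative assumptions, not taken from the paper.

```python
import math

def ideal_point_selection(candidates):
    """Select the candidate whose normalized criteria vector is closest
    (Euclidean distance) to the ideal point.

    candidates: {name: {criterion: score}}; lower is assumed better for
    every criterion (hypothetical convention)."""
    criteria = sorted(next(iter(candidates.values())))
    # Per-criterion min/max across all candidates, for normalization.
    lo = {c: min(v[c] for v in candidates.values()) for c in criteria}
    hi = {c: max(v[c] for v in candidates.values()) for c in criteria}

    def norm(scores, c):
        span = hi[c] - lo[c]
        return 0.0 if span == 0 else (scores[c] - lo[c]) / span

    def dist_to_ideal(scores):
        # With minimized, normalized criteria, the ideal point is the
        # all-zeros vector.
        return math.sqrt(sum(norm(scores, c) ** 2 for c in criteria))

    return min(candidates, key=lambda name: dist_to_ideal(candidates[name]))

# Hypothetical scores from three base explainers on two criteria:
scores = {
    "explainer_a": {"edit_distance": 0.2, "runtime": 0.9},
    "explainer_b": {"edit_distance": 0.5, "runtime": 0.1},
    "explainer_c": {"edit_distance": 0.9, "runtime": 0.8},
}
best = ideal_point_selection(scores)  # balanced trade-off wins
```

Min-max normalization keeps no single criterion from dominating the distance; other scalarizations (e.g., weighted sums) would express different trade-off preferences.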
Year: 2025
ISBN: 9783032083296; 9783032083302
Keywords: Counterfactual; Ensemble Learning; Explainable AI; Graph Neural Networks; Machine Learning
Prado-Romero, M. A.; Prenkaj, B.; Stilo, G. (2025). Exploring Ensemble Strategies for Graph Counterfactual Explanations. In Communications in Computer and Information Science (pp. 177-201). ISBN: 9783032083296; 9783032083302. DOI: 10.1007/978-3-032-08330-2_9.
Files for this product:
File: unpaywall-bitstream-1593194669.pdf (Open Access)
Type: Publisher's version
License: Creative Commons
Size: 596.02 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11385/255138