Robust Explainable AI: the Case of Counterfactual Explanations
Abstract
Counterfactual explanations (CXs) are routinely used to shed light on the decisions of machine learning models; however, CX generation strategies often lack robustness, which may jeopardise their explanatory function.
This tutorial aims to introduce Robust Explainable AI, a rapidly growing field that offers novel solutions to alleviate this problem and improve the trustworthiness of CXs.
About the Speaker
Francesco Leofante is a Research Fellow within the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI.