Persuasive Contrastive Explanations for Bayesian Networks
Presented at the 10th Annual BayesiaLab Conference on Thursday, October 27, 2022.
Abstract
Explanation in Artificial Intelligence is often focused on providing reasons why a model under consideration and its outcome are correct. Recently, research in explainable machine learning has initiated a shift in focus toward so-called counterfactual explanations. In this presentation, we discuss our recent proposal to combine both types of explanation in the context of explaining Bayesian networks. To this end, we introduce “persuasive contrastive explanations” that aim to answer the question “Why outcome X instead of Y?” posed by a user. In addition, we discuss an algorithm for computing persuasive contrastive explanations and suggest how these explanations could be used in an interactive session with the user.
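The abstract does not detail the proposed algorithm, so the following is only an illustrative sketch of the contrastive question “Why outcome X instead of Y?”: on a small hypothetical Bayesian network (Disease with two noisy tests; all variable names and probability tables are assumptions, not taken from the presentation), it searches for a smallest subset of the entered evidence whose removal flips the most probable outcome to the foil Y.

```python
from itertools import combinations

# Hypothetical toy network: Disease -> TestA, Disease -> TestB.
# CPTs are invented for illustration only.
P_D = {"yes": 0.1, "no": 0.9}
P_A = {("yes", "pos"): 0.9, ("yes", "neg"): 0.1,
       ("no", "pos"): 0.2, ("no", "neg"): 0.8}
P_B = {("yes", "pos"): 0.8, ("yes", "neg"): 0.2,
       ("no", "pos"): 0.1, ("no", "neg"): 0.9}

def posterior(evidence):
    """P(Disease | evidence) by brute-force enumeration over Disease."""
    scores = {}
    for d in P_D:
        p = P_D[d]
        for var, val in evidence.items():
            cpt = P_A if var == "TestA" else P_B
            p *= cpt[(d, val)]
        scores[d] = p
    z = sum(scores.values())
    return {d: p / z for d, p in scores.items()}

def map_outcome(evidence):
    """Most probable Disease value given the evidence."""
    post = posterior(evidence)
    return max(post, key=post.get)

def contrastive_explanation(evidence, foil):
    """Return a smallest evidence subset whose removal changes the
    most probable outcome to the foil -- the 'instead of Y' part of
    the contrastive question. Returns None if no subset suffices."""
    for k in range(1, len(evidence) + 1):
        for subset in combinations(evidence, k):
            reduced = {v: e for v, e in evidence.items()
                       if v not in subset}
            if map_outcome(reduced) == foil:
                return subset
    return None

evidence = {"TestA": "pos", "TestB": "pos"}
print(map_outcome(evidence))                        # current outcome X
print(contrastive_explanation(evidence, "no"))      # evidence explaining X over Y
```

With both tests positive, the most probable outcome is “yes”; removing either single test result is enough to flip it back to the prior-favored “no”, so that test result is what answers “Why yes instead of no?”. This exhaustive search is exponential in the number of evidence variables and stands in for whatever computation the presented algorithm actually performs.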
Presentation Video
Presentation Slides
About the Presenter
Silja Renooij (Utrecht University) is a member of the Intelligent Systems group and is interested in probabilistic graphical models. Her research focuses on understanding how various precision-complexity tradeoffs in the specification of such models affect model output, with the aim of facilitating the construction and explanation of Bayesian networks.