Persuasive Contrastive Explanations for Bayesian Networks

Presented at the 10th Annual BayesiaLab Conference on Thursday, October 27, 2022.


Explanation in Artificial Intelligence often focuses on providing reasons why a model under consideration and its outcome are correct. Recently, research in explainable machine learning has shifted toward also including so-called counterfactual explanations. In this presentation, we present our recent proposal to combine both types of explanation in the context of explaining Bayesian networks. To this end, we introduce "persuasive contrastive explanations," which aim to answer the question "Why outcome X instead of Y?" posed by a user. In addition, we discuss an algorithm for computing persuasive contrastive explanations and suggest how these explanations could be used in an interactive session with the user.
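To make the contrastive question concrete, the sketch below shows one simple way such a "Why X instead of Y?" query could be answered in a tiny Bayesian network: search for a smallest set of evidence variables whose alternative values would flip the most probable outcome to the foil Y. This is only an illustrative toy, not the algorithm presented in the talk; the naive-Bayes structure and all CPT numbers are invented for the example.

```python
from itertools import combinations, product

# Toy Bayesian network: C -> E1, C -> E2 (naive Bayes), all variables binary.
# These CPTs are illustrative assumptions, not taken from the presentation.
P_C = {0: 0.5, 1: 0.5}                              # prior P(C=c)
P_E1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # P(E1=e | C=c)
P_E2 = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}  # P(E2=e | C=c)

def posterior(evidence):
    """P(C | evidence) by enumeration; evidence maps variable name -> value."""
    scores = {}
    for c in (0, 1):
        p = P_C[c]
        if 'E1' in evidence:
            p *= P_E1[c][evidence['E1']]
        if 'E2' in evidence:
            p *= P_E2[c][evidence['E2']]
        scores[c] = p
    z = sum(scores.values())
    return {c: p / z for c, p in scores.items()}

def map_outcome(evidence):
    """Most probable value of C given the evidence."""
    post = posterior(evidence)
    return max(post, key=post.get)

def contrastive_explanation(evidence, foil):
    """Return a smallest set of evidence changes that makes the foil Y
    the most probable outcome, answering "Why X instead of Y?"."""
    for k in range(1, len(evidence) + 1):
        for subset in combinations(evidence, k):
            for alt_values in product((0, 1), repeat=k):
                changed = dict(evidence)
                changed.update(zip(subset, alt_values))
                if map_outcome(changed) == foil:
                    return dict(zip(subset, alt_values))
    return None  # no change of observed evidence yields the foil

evidence = {'E1': 1, 'E2': 1}
print(map_outcome(evidence))                        # current outcome X
print(contrastive_explanation(evidence, foil=0))    # minimal flip toward Y
```

With these numbers, the observed evidence makes C=1 most probable, and changing E1 alone suffices to make the foil C=0 most probable, so the contrastive part of the explanation would cite E1. Real networks require smarter search than this exhaustive enumeration.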

Presentation Video

Presentation Slides

About the Presenter

Silja Renooij (Utrecht University) is a member of the Intelligent Systems group and is interested in Probabilistic Graphical Models. Her research focuses on understanding how precision-complexity tradeoffs in the specification of such models affect model output, with the aim of facilitating the construction and explanation of Bayesian networks.

Copyright ยฉ 2024 Bayesia S.A.S., Bayesia USA, LLC, and Bayesia Singapore Pte. Ltd. All Rights Reserved.