Everybody knows the meaning of "importance," right? "What is important?" is a common question in daily life, and it is presumably the most common question in research. At its core, it is about understanding what matters within a given domain.
Upon entering the world of statistics and analytics, we encounter a myriad of measures all related to importance, e.g., correlation, weight, significance, indirect/direct effect size, temporal/contemporaneous effects, unit effect, normalized effect, Bayes factor, mutual information, KL divergence, contribution, elasticity, etc. Additionally, some of these measures should not be used in isolation but instead need to be considered in conjunction with other quantities, such as joint probability, for decision-making purposes. This highlights that "importance" is not a narrowly-defined concept but rather covers a broad and diverse spectrum of notions.
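To make the contrast concrete, consider two measures from the list above, correlation and mutual information, applied to the same data. The following is a minimal sketch on hypothetical toy data (a noisy quadratic relationship); the histogram-based mutual information estimator is an illustrative choice, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x**2 + rng.normal(0, 0.05, x.size)  # strong, but nonlinear, dependence

# Pearson correlation: a linear notion of "importance"
corr = np.corrcoef(x, y)[0, 1]

# Mutual information (in nats), roughly estimated from a 2-D histogram
def mutual_information(a, b, bins=20):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_xy = joint / joint.sum()                   # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)        # marginal of a
    p_y = p_xy.sum(axis=0, keepdims=True)        # marginal of b
    nz = p_xy > 0                                # avoid log(0)
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

mi = mutual_information(x, y)
print(f"Pearson correlation: {corr:+.3f}")   # near zero: x and y are uncorrelated
print(f"Mutual information:  {mi:.3f} nats") # clearly positive: x is informative about y
```

Here x is plainly "important" for y, yet correlation reports almost nothing while mutual information does not, because each measure formalizes a different notion of importance.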
While none of these measures are tied to Bayesian networks, we employ this framework to explain the major and minor differences between these concepts. More specifically, we attempt to develop an intuition for all of the above concepts using machine-learned Bayesian network models. Our objective is to understand which measures of importance are most appropriate in which contexts.