Expected Log-Loss
Context
The Log-Loss $LL_{\mathcal{B}}(E)$ reflects the number of bits required to encode an $n$-dimensional piece of evidence (or observation) $E$ given the current Bayesian network $\mathcal{B}$. As a shorthand for "the number of bits required to encode," we use the term "cost" in the sense that "more bits required" means computationally "more expensive."

$$LL_{\mathcal{B}}(E) = -\log_2\left(P_{\mathcal{B}}(E)\right)$$

where $P_{\mathcal{B}}(E)$ is the joint probability of the evidence computed by the network $\mathcal{B}$:

$$P_{\mathcal{B}}(E) = P_{\mathcal{B}}(e_1, \ldots, e_n)$$
Furthermore, one of the key metrics in Information Theory is Entropy:

$$H_{\mathcal{B}}(X) = -\sum_{x \in X} P_{\mathcal{B}}(x)\log_2\left(P_{\mathcal{B}}(x)\right)$$
As a result, Entropy can be considered the sum of the Expected Log-Loss values of each state $x$ of variable $X$ given network $\mathcal{B}$:

$$H_{\mathcal{B}}(X) = \sum_{x \in X} ELL_{\mathcal{B}}(x)$$

where

$$ELL_{\mathcal{B}}(x) = P_{\mathcal{B}}(x) \times LL_{\mathcal{B}}(x) = -P_{\mathcal{B}}(x)\log_2\left(P_{\mathcal{B}}(x)\right)$$
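To make these definitions concrete, the following Python snippet (a minimal sketch of our own, not part of BayesiaLab) computes the Log-Loss, Expected Log-Loss, and Entropy of a discrete distribution:

```python
from math import log2

def log_loss(p: float) -> float:
    """Log-Loss of an outcome with probability p: bits needed to encode it."""
    return -log2(p)

def expected_log_loss(p: float) -> float:
    """Expected Log-Loss of one state: its probability times its Log-Loss."""
    return p * log_loss(p) if p > 0 else 0.0  # convention: 0 * log(0) = 0

def entropy(distribution: list[float]) -> float:
    """Entropy of a variable: the sum of the Expected Log-Loss of its states."""
    return sum(expected_log_loss(p) for p in distribution)

# A binary variable with P(X = x) = 0.25:
dist = [0.25, 0.75]
print([round(expected_log_loss(p), 4) for p in dist])  # [0.5, 0.3113]
print(round(entropy(dist), 4))                         # 0.8113
```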
Usage
To illustrate these concepts, we use the familiar Visit Asia network:
VisitAsia.xbl
In BayesiaLab, the Expected Log-Loss values can be shown in the context of the Monitors.
Monitors
We consider the nodes *Bronchitis* and *Dyspnea* in the VisitAsia.xbl network.
- On the left, the Monitors of the two nodes show their marginal distributions.
- On the right, we set the evidence *Bronchitis = True*, which updates the probability of *Dyspnea*, i.e., P(Dyspnea = True) = 80.54%.
- On the Monitor of *Dyspnea*, we now select
Monitor Context Menu > Show Expected Log-Loss
so that the Expected Log-Loss values for the states of *Dyspnea* are shown instead of their probabilities.
- This is an interesting example because setting *Bronchitis = True* does reduce the Entropy of *Dyspnea* but does not seem to change the Expected Log-Loss of *Dyspnea = False*.
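The numbers behind this observation are easy to verify. The following sketch (our own, using only the two probabilities shown in the Monitors) confirms that the Entropy of *Dyspnea* drops while the Expected Log-Loss of *Dyspnea = False* barely moves:

```python
from math import log2

def ell(p: float) -> float:
    # Expected Log-Loss of a state with probability p
    return -p * log2(p)

def entropy(p_true: float) -> float:
    # Entropy of a binary variable with P(True) = p_true
    return ell(p_true) + ell(1 - p_true)

# Marginal vs. posterior P(Dyspnea = True) from the Monitors above
for p_true in (0.4252, 0.8054):
    print(f"P(True)={p_true:.2%}  H={entropy(p_true):.4f}  "
          f"ELL(False)={ell(1 - p_true):.4f}")
# P(True)=42.52%  H=0.9838  ELL(False)=0.4592
# P(True)=80.54%  H=0.7110  ELL(False)=0.4595
```

The near-coincidence is specific to this pair of probabilities: $-0.5748\log_2 0.5748 \approx -0.1946\log_2 0.1946 \approx 0.459$ bits.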
Visual Illustration
The following plot illustrates $H(X)$, $ELL(x)$, and $ELL(\bar{x})$. For a compact representation in the plot, we substituted $p$ for $P(X=x)$.
In this binary case, the curves show how the Entropy $H(X)$ can be decomposed into $ELL(x)$ and $ELL(\bar{x})$.
The blue curve, $ELL(\bar{x})$, also confirms that the Expected Log-Loss values are identical for the two probabilities of *Dyspnea = True*, i.e., 80.54% and 42.52%.
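A plot of this kind can be recreated with a few lines of Python. This sketch is our own; we assign blue to $ELL(\bar{x})$, the curve that takes identical values at those two probabilities:

```python
import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(0.001, 0.999, 500)        # p = P(X = x)
ell_x = -p * np.log2(p)                   # Expected Log-Loss of state x
ell_xbar = -(1 - p) * np.log2(1 - p)      # Expected Log-Loss of state x-bar
h = ell_x + ell_xbar                      # Entropy H(X) = ELL(x) + ELL(x-bar)

plt.plot(p, h, label=r"$H(X)$")
plt.plot(p, ell_x, label=r"$ELL(x)$")
plt.plot(p, ell_xbar, color="blue", label=r"$ELL(\bar{x})$")
# The two Monitor probabilities from above: ELL(x-bar) is the same at both.
for q in (0.4252, 0.8054):
    plt.scatter(q, -(1 - q) * np.log2(1 - q), color="blue")
plt.xlabel(r"$p = P(X = x)$")
plt.ylabel("bits")
plt.legend()
plt.show()
```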
Monitor Tooltip
Instead of replacing the states' probabilities with the Expected Log-Loss values in a Monitor, you can bring up the Expected Log-Loss values ad hoc as a Tooltip.
- Click on the Information Mode icon in the Toolbar.
- Then, when you hover over any Monitor with your cursor, a Tooltip shows the Expected Log-Loss values.