Expected Log-Loss

Context

The Log-Loss $LL_B(E)$ reflects the number of bits required to encode an n-dimensional piece of evidence (or observation) $E$ given the current Bayesian network $B$. As a shorthand for "the number of bits required to encode," we use the term "cost" in the sense that "more bits required" means computationally "more expensive."

$$LL_B(E) = -\log_2\left(P_B(E)\right),$$

where $P_B(E)$ is the joint probability of the evidence $E$ computed by the network $B$.
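As a quick numerical illustration (plain Python, not BayesiaLab functionality), the cost grows as the joint probability of the evidence decreases; the probability values below are hypothetical:

```python
import math

def log_loss(p_evidence: float) -> float:
    """Log-Loss in bits: LL_B(E) = -log2(P_B(E))."""
    return -math.log2(p_evidence)

# Hypothetical joint probabilities P_B(E) for three observations
print(log_loss(0.5))   # 1.00 bit  - evidence as likely as a fair coin flip
print(log_loss(0.25))  # 2.00 bits - less probable evidence costs more bits
print(log_loss(0.01))  # 6.64 bits - rare evidence is expensive to encode
```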

Furthermore, one of the key metrics in Information Theory is Entropy:

$$H(X) = -\sum_{x \in X} p(x)\log_2\left(p(x)\right)$$

As a result, Entropy can be considered the sum of the Expected Log-Loss values of each state $x$ of variable $X$ given network $B$.

$$H(X) = \sum_{x \in X} LL_x,$$

where

$$LL_x = -p_B(x)\log_2\left(p_B(x)\right)$$
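For instance, the following minimal sketch (plain Python with an arbitrary binary distribution, not the BayesiaLab API) shows that summing the states' Expected Log-Loss values yields the Entropy:

```python
import math

def expected_log_loss(p: float) -> float:
    """Expected Log-Loss of one state: LL_x = -p(x) * log2(p(x))."""
    return 0.0 if p == 0.0 else -p * math.log2(p)

def entropy(distribution):
    """H(X) as the sum of the states' Expected Log-Loss values."""
    return sum(expected_log_loss(p) for p in distribution)

# Arbitrary binary distribution used only for illustration
p_true, p_false = 0.8, 0.2
print(expected_log_loss(p_true))   # ~0.258
print(expected_log_loss(p_false))  # ~0.464
print(entropy([p_true, p_false]))  # ~0.722 = 0.258 + 0.464
```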

Usage

To illustrate these concepts, we use the familiar Visit Asia network:

VisitAsia.xbl

In BayesiaLab, the Expected Log-Loss values can be shown in the context of the Monitors.

Monitors

We consider the nodes Dyspnea and Bronchitis in the VisitAsia.xbl network.

  • On the left, the Monitors of the two nodes show their marginal distributions.
  • On the right, we set p(Bronchitis = True) = 100%, which updates the probability of Dyspnea, i.e., p(Dyspnea = True) = 80.54%.
  • On the Monitor of Dyspnea, we now select Monitor Context Menu > Show Expected Log-Loss so that the Expected Log-Loss values for the states of Dyspnea are shown instead of their probabilities.
  • This is an interesting example because setting Bronchitis=True does reduce the Entropy of Dyspnea, but does not seem to change the Expected Log-Loss of Dyspnea=False, as the sketch below confirms.
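The following sketch reproduces these Monitor values in plain Python (not via BayesiaLab), assuming the rounded probabilities discussed in this example: p(Dyspnea = True) = 42.52% marginally and 80.54% after setting Bronchitis=True:

```python
import math

def expected_log_loss(p: float) -> float:
    """LL_x = -p(x) * log2(p(x))"""
    return 0.0 if p == 0.0 else -p * math.log2(p)

# p(Dyspnea) without evidence and with Bronchitis=True set,
# using the rounded percentages shown in the Monitors
distributions = {
    "marginal":        {"True": 0.4252, "False": 0.5748},
    "Bronchitis=True": {"True": 0.8054, "False": 0.1946},
}

for name, dist in distributions.items():
    lls = {s: round(expected_log_loss(p), 2) for s, p in dist.items()}
    h = round(sum(expected_log_loss(p) for p in dist.values()), 2)
    print(f"{name:16s} LL={lls}  H(Dyspnea)={h}")

# marginal         LL={'True': 0.52, 'False': 0.46}  H(Dyspnea)=0.98
# Bronchitis=True  LL={'True': 0.25, 'False': 0.46}  H(Dyspnea)=0.71
# The Entropy drops, yet LL(Dyspnea=False) stays at ~0.459 in both cases.
```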

Visual Illustration

The following plot illustrates H(Dyspnea), LL(Dyspnea=True), and LL(Dyspnea=False). For a compact representation in the plot, we substituted X for Dyspnea.

In this binary case, the curves show how the Entropy H(X) can be decomposed into LL(x=True) and LL(x=False).

The blue curve also confirms that the Expected Log-Loss of Dyspnea=False is identical for the two probabilities of Dyspnea=True, i.e., 80.54% and 42.52%:

  • LL(p(Dyspnea=False) = 1 − 0.8054 = 0.1946) = 0.459
  • LL(p(Dyspnea=False) = 1 − 0.4252 = 0.5748) = 0.459
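Substituting these two probabilities into the definition $LL_x = -p_B(x)\log_2(p_B(x))$ (using the rounded Monitor values, so the results agree to about three decimal places) confirms the equality:

$$-0.1946\,\log_2(0.1946) \approx 0.459, \qquad -0.5748\,\log_2(0.5748) \approx 0.459$$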

Monitor Tooltip

Instead of replacing the states' probabilities with the Expected Log-Loss values in a Monitor, you can bring up the Expected Log-Loss values ad hoc as a Tooltip.

  • Click on the Information Mode icon in the Toolbar.
  • Then, when you hover over any Monitor with your cursor, a Tooltip shows the Expected Log-Loss values.

Workflow Animation

