Log-Loss

Definition

The Log-Loss $LL_B(E)$ reflects the number of bits required to encode an $n$-dimensional piece of evidence (or observation) $E$ given the current Bayesian network $B$. As a shorthand for "the number of bits required to encode," we use the term "cost" in the sense that "more bits required" means computationally "more expensive."

$$LL_B(E) = -\log_2\left(P_B(E)\right),$$

where $P_B(E)$ is the joint probability of the evidence $E$ computed by the network $B$:

$$P_B(E) = P_B(e_1, \ldots, e_n).$$

In other words, the lower the probability of $E$ under the network $B$, the higher the Log-Loss $LL_B(E)$.

Note that $E$ refers to a single piece of $n$-dimensional evidence, not an entire dataset.
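To make the definition concrete, here is a minimal Python sketch. It is not BayesiaLab code; the toy network, its probability tables, and the `log_loss` function name are all hypothetical. It factorizes the joint probability of a two-dimensional piece of evidence along the network structure and converts it into a Log-Loss in bits:

```python
import math

def log_loss(p_e: float) -> float:
    """Log-Loss in bits for one piece of evidence E,
    given its joint probability p_e = P_B(E) under network B."""
    return -math.log2(p_e)

# Hypothetical two-node network B: A -> S, with illustrative
# probability tables P(A) and P(S | A).
p_a = {True: 0.3, False: 0.7}
p_s_given_a = {True:  {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

# Joint probability of the 2-dimensional evidence E = (A=True, S=False),
# factorized along the network structure: P_B(E) = P(A) * P(S | A).
p_e = p_a[True] * p_s_given_a[True][False]  # 0.3 * 0.1 = 0.03

print(f"P_B(E)  = {p_e:.2f}")                 # 0.03
print(f"LL_B(E) = {log_loss(p_e):.2f} bits")  # ~5.06 bits
```

Under these assumed tables, an evidence probability of 0.03 costs about 5.06 bits, illustrating that less likely observations are more expensive to encode.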
