
Log-Loss

Definition

The Log-Loss $LL_B(E)$ reflects the number of bits required to encode an n-dimensional piece of evidence (or observation) $E$ given the current Bayesian network $B$. As a shorthand for "the number of bits required to encode," we use the term "cost" in the sense that "more bits required" means computationally "more expensive."

$$LL_B(E) = -\log_2\left(P_B(E)\right),$$

where $P_B(E)$ is the joint probability of the evidence $E$ computed by the network $B$:

$$P_B(E) = P_B(e_1, \ldots, e_n).$$

In other words, the lower the probability of $E$ given the network $B$, the higher the Log-Loss $LL_B(E)$.

Note that $E$ refers to a single n-dimensional piece of evidence, not an entire dataset.
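To make the definition concrete, here is a minimal sketch in Python (illustrative, not BayesiaLab code): a hypothetical two-node network $B$ over binary variables X1 → X2 is specified by hand, the joint probability $P_B(E)$ is obtained via the chain rule, and the Log-Loss is $-\log_2$ of that probability. All variable names and probability values are assumptions made for this example.

```python
import math

# Hypothetical network B: X1 -> X2, both binary (illustrative numbers).
p_x1 = {"true": 0.3, "false": 0.7}                 # P(X1)
p_x2_given_x1 = {                                  # P(X2 | X1)
    ("true", "true"): 0.9, ("true", "false"): 0.1,
    ("false", "true"): 0.2, ("false", "false"): 0.8,
}

def joint_probability(e1: str, e2: str) -> float:
    """P_B(E) = P(X1 = e1) * P(X2 = e2 | X1 = e1), by the chain rule."""
    return p_x1[e1] * p_x2_given_x1[(e1, e2)]

def log_loss(e1: str, e2: str) -> float:
    """LL_B(E) = -log2(P_B(E)): bits needed to encode evidence E under B."""
    return -math.log2(joint_probability(e1, e2))

# A likely observation is cheap to encode...
print(log_loss("false", "false"))  # -log2(0.7 * 0.8) ~ 0.836 bits
# ...while an unlikely one is expensive.
print(log_loss("true", "false"))   # -log2(0.3 * 0.1) ~ 5.059 bits
```

The two calls illustrate the relationship stated above: the less probable the evidence under the network, the more bits its encoding costs.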
