Information Gain
The Information Gain regarding evidence $E$ is the difference between the:
- Log-Loss given an unconnected network $B_u$, i.e., a so-called straw model, in which all nodes are marginally independent;
- Log-Loss given a reference network $B$.
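With $LL_B(E)$ denoting the Log-Loss of evidence $E$ given network $B$ (notation reconstructed from the definitions below), this can be written as:

$$IG_B(E) = LL_{B_u}(E) - LL_B(E)$$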
In earlier versions of BayesiaLab, Information Gain was named Consistency.
The Log-Loss reflects the "cost" in bits of applying the network $B$ to evidence $E$, i.e., the number of bits that are needed to encode evidence $E$. The lower the probability of evidence $E$, the higher the Log-Loss.
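In formula form, with base-2 logarithms so that the cost is measured in bits:

$$LL_B(E) = -\log_2 P_B(E)$$

where $P_B(E)$ is the joint probability of evidence $E$ under network $B$.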
As a result, a positive value of Information Gain would reflect a "cost saving" for encoding evidence $E$ by virtue of having network $B$. In other words, encoding $E$ with network $B$ is less "costly" than encoding it with the straw model $B_u$. Therefore, evidence $E$ would be consistent with network $B$.
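For example, with purely illustrative probabilities: if the reference network assigns $P_B(E) = 0.2$ while the straw model assigns $P_{B_u}(E) = 0.05$, then

$$IG_B(E) = -\log_2(0.05) + \log_2(0.2) \approx 4.32 - 2.32 = 2 \text{ bits}$$

i.e., network $B$ saves 2 bits when encoding $E$, so $E$ is consistent with $B$.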
Conversely, a negative Information Gain indicates a so-called conflict, i.e., the Log-Loss of evidence $E$ is higher with the reference network $B$ than with the straw model $B_u$. Note that conflicting evidence does not necessarily mean that the reference network is wrong. Rather, it probably indicates that such a set of evidence belongs to the tail of the distribution that is represented by the reference network $B$.
However, if evidence $E$ is drawn from the original data on which the reference network $B$ was learned, the probability of observing conflicting evidence should be smaller than the probability of observing consistent evidence.
So, for a network model to be useful, there should generally be more sets of evidence with a positive Information Gain, i.e., consistent observations, than sets of evidence with a negative Information Gain, i.e., conflicting observations.
Therefore, the mean value of the Information Gain of a reference network $B$ compared to a straw model $B_u$ is a useful performance indicator of the reference network $B$.
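As a minimal sketch of this computation, assuming the joint probability of each evidence set is already available under both models (the function names and example probabilities below are hypothetical, not part of the BayesiaLab API):

```python
import math

def log_loss(prob: float) -> float:
    """Log-Loss in bits: the encoding cost of evidence with probability `prob`."""
    return -math.log2(prob)

def information_gain(p_ref: float, p_straw: float) -> float:
    """Information Gain of one evidence set: straw-model Log-Loss minus
    reference-network Log-Loss. Positive => consistent, negative => conflict."""
    return log_loss(p_straw) - log_loss(p_ref)

# Hypothetical joint probabilities of each evidence set under the
# reference network B and the straw model B_u.
evidence_probs = [
    (0.20, 0.05),  # consistent: IG = +2 bits
    (0.01, 0.04),  # conflicting: IG = -2 bits
    (0.30, 0.10),  # consistent: IG ~ +1.58 bits
]

gains = [information_gain(p_ref, p_straw) for p_ref, p_straw in evidence_probs]
print(f"Mean Information Gain: {sum(gains) / len(gains):.3f} bits")
```

A positive mean over a representative collection of evidence sets suggests that the reference network encodes the data more efficiently than the independence baseline.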