# Bayesian Networks

## Evidential Reasoning

From the product rule (or chain rule), one can express the probability of any desired proposition in terms of the conditional probabilities specified in the network. For example, the probability that the *Sprinkler* is on given that the *Pavement* is slippery is:

*P(X_{3}=on | X_{5}=true) = Σ_{x_{1},x_{2},x_{4}} P(x_{1}, x_{2}, X_{3}=on, x_{4}, X_{5}=true) / Σ_{x_{1},x_{2},x_{3},x_{4}} P(x_{1}, x_{2}, x_{3}, x_{4}, X_{5}=true)*

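Concretely, such a conditional probability can be computed by brute-force enumeration of the joint distribution. The sketch below is illustrative only: the CPT numbers are made up, and `joint` and `query` are hypothetical helper names, but the factorization follows the Season → {Sprinkler, Rain} → Wet → Slippery structure of the example network:

```python
import itertools

# Illustrative, made-up CPTs for the sprinkler network:
# Season -> {Sprinkler, Rain} -> Wet -> Slippery.
p_season = {'dry': 0.6, 'rainy': 0.4}
p_sprinkler = {'dry': 0.5, 'rainy': 0.1}   # P(Sprinkler=on | Season)
p_rain = {'dry': 0.1, 'rainy': 0.8}        # P(Rain=yes  | Season)
p_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.0}   # P(Wet=yes | Sprinkler, Rain)
p_slippery = {True: 0.95, False: 0.05}     # P(Slippery=yes | Wet)

def joint(season, sprinkler, rain, wet, slippery):
    """Chain-rule product P(season)P(sprinkler|season)P(rain|season)P(wet|...)P(slippery|wet)."""
    p = p_season[season]
    p *= p_sprinkler[season] if sprinkler else 1 - p_sprinkler[season]
    p *= p_rain[season] if rain else 1 - p_rain[season]
    p *= p_wet[sprinkler, rain] if wet else 1 - p_wet[sprinkler, rain]
    p *= p_slippery[wet] if slippery else 1 - p_slippery[wet]
    return p

def query(sprinkler_on):
    """P(Sprinkler=sprinkler_on | Slippery=true) by summing the full joint."""
    num = den = 0.0
    for season, sp, rain, wet in itertools.product(
            ('dry', 'rainy'), (True, False), (True, False), (True, False)):
        p = joint(season, sp, rain, wet, slippery=True)
        den += p
        if sp == sprinkler_on:
            num += p
    return num / den

print(query(True))   # P(Sprinkler=on | Slippery=true), roughly 0.5 with these numbers
```

This enumeration is exponential in the number of variables; the structure-exploiting algorithms discussed next avoid that cost.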
These expressions can often be simplified in ways that reflect the structure of the network itself. The first algorithms proposed for probabilistic calculations in Bayesian networks used a local distributed message-passing architecture, typical of many cognitive activities. Initially, this approach was limited to tree-structured networks but was later extended to general networks in Lauritzen and Spiegelhalter’s (1988) method of junction tree propagation. A number of other exact methods have been developed and can be found in recent textbooks.

It is easy to show that reasoning in Bayesian networks subsumes the satisfiability problem in propositional logic; therefore, exact inference is NP-hard. Monte Carlo simulation methods can be used for approximate inference (Pearl, 1988), giving gradually improving estimates as sampling proceeds. Unlike junction-tree methods, these methods use local message propagation on the original network structure. Alternatively, variational methods provide bounds on the true probability.
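The Monte Carlo idea can be sketched with likelihood weighting, one of the simplest sampling schemes: sample the non-evidence variables from their CPTs in topological order, and weight each sample by the likelihood of the evidence. The CPT numbers below are made up and the function name is hypothetical; the estimate tightens as the sample count grows:

```python
import random

random.seed(0)

# Same made-up sprinkler CPTs as before.
p_season = {'dry': 0.6, 'rainy': 0.4}
p_sprinkler = {'dry': 0.5, 'rainy': 0.1}   # P(Sprinkler=on | Season)
p_rain = {'dry': 0.1, 'rainy': 0.8}        # P(Rain=yes  | Season)
p_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.0}
p_slippery = {True: 0.95, False: 0.05}     # P(Slippery=yes | Wet)

def estimate_p_sprinkler_given_slippery(n=100_000):
    """Approximate P(Sprinkler=on | Slippery=true) by likelihood weighting."""
    w_on = w_total = 0.0
    for _ in range(n):
        # Sample non-evidence variables from their CPTs, parents first.
        season = 'dry' if random.random() < p_season['dry'] else 'rainy'
        sprinkler = random.random() < p_sprinkler[season]
        rain = random.random() < p_rain[season]
        wet = random.random() < p_wet[sprinkler, rain]
        # Weight the sample by the likelihood of the evidence Slippery=true.
        w = p_slippery[wet]
        w_total += w
        if sprinkler:
            w_on += w
    return w_on / w_total

print(estimate_p_sprinkler_given_slippery())   # converges to the exact answer as n grows
```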

## Causal Reasoning

Most probabilistic models, including general Bayesian networks, describe a joint probability distribution (JPD) over possible observed events but say nothing about what will happen if a certain intervention occurs. For example, what if I turn the *Sprinkler* on instead of just observing that it is turned on? What effect does that have on the *Season*, or on the connection between *Wet* and *Slippery*? A causal network, intuitively speaking, is a Bayesian network with the added property that the parents of each node are its direct causes, as in Figure 2.4. In such a network, the result of an intervention is obvious: the *Sprinkler* node is set to *X_{3}=on* and the causal link between the *Season* (*X_{1}*) and the *Sprinkler* (*X_{3}*) is removed (Figure 2.5). All other causal links and conditional probabilities remain intact, so the new model is:

*P(x_{1}, x_{2}, x_{4}, x_{5} | do(X_{3}=on)) = P(x_{1}) P(x_{2}|x_{1}) P(x_{4}|x_{2}, X_{3}=on) P(x_{5}|x_{4})*

Notice that this differs from observing that *X_{3}=on*, which would result in a new model that included the term *P(X_{3}=on|x_{1})*. This mirrors the difference between seeing and doing: after observing that the *Sprinkler* is on, we wish to infer that the *Season* is dry, that it probably did not rain, and so on. An arbitrary decision to turn on the *Sprinkler* should not result in any such beliefs.
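The seeing/doing distinction fits in a few lines of code. In this sketch (made-up numbers, hypothetical function names), observing the sprinkler on raises the posterior probability of a dry season via Bayes' rule, while intervening cuts the *Season → Sprinkler* link and leaves the *Season* distribution at its prior:

```python
# Made-up priors and CPT for the Season -> Sprinkler fragment.
p_season = {'dry': 0.6, 'rainy': 0.4}
p_sprinkler = {'dry': 0.5, 'rainy': 0.1}   # P(Sprinkler=on | Season)

def p_season_given_observed_on(season):
    """Seeing: P(Season | Sprinkler=on) by Bayes' rule."""
    num = p_season[season] * p_sprinkler[season]
    den = sum(p_season[s] * p_sprinkler[s] for s in p_season)
    return num / den

def p_season_given_do_on(season):
    """Doing: P(Season | do(Sprinkler=on)). The link into Sprinkler is cut,
    so the P(Sprinkler|Season) factor disappears and the prior is unchanged."""
    return p_season[season]

print(p_season_given_observed_on('dry'))  # > 0.6: seeing it on suggests a dry season
print(p_season_given_do_on('dry'))        # exactly 0.6: turning it on reveals nothing
```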

Causal networks are more properly defined, then, as Bayesian networks in which the correct probability model—after intervening to fix any node’s value—is given simply by deleting links from the node’s parents. For example, *Fire → Smoke* is a causal network, whereas *Smoke → Fire* is not, even though both networks are equally capable of representing any joint probability distribution of the two variables. Causal networks model the environment as a collection of stable component mechanisms. These mechanisms may be reconfigured locally by interventions, with corresponding local changes in the model. This, in turn, allows causal networks to be used very naturally for prediction by an agent that is considering various courses of action.
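The *Fire → Smoke* versus *Smoke → Fire* point can be checked numerically. The two factorizations below (made-up numbers, hypothetical names) encode exactly the same joint distribution, yet they disagree about the intervention *do(Smoke=true)*: only the causal direction correctly predicts that making smoke does not change the chance of fire.

```python
# Causal factorization Fire -> Smoke, with made-up numbers.
p_fire = 0.01
p_smoke_given_fire = {True: 0.9, False: 0.05}   # P(Smoke=yes | Fire)

def joint(fire, smoke):
    """P(fire, smoke): the joint both factorizations must agree on."""
    pf = p_fire if fire else 1 - p_fire
    ps = p_smoke_given_fire[fire]
    return pf * (ps if smoke else 1 - ps)

# Reverse factorization Smoke -> Fire, derived from the very same joint.
p_fire_given_smoke = {s: joint(True, s) / sum(joint(f, s) for f in (True, False))
                      for s in (True, False)}

# do(Smoke=true) in Fire -> Smoke: the link into Smoke is cut; Fire keeps its prior.
p_fire_after_do_causal = p_fire
# do(Smoke=true) in Smoke -> Fire: Smoke is a parent of Fire, so this (wrong)
# model predicts that manufacturing smoke makes fire more likely.
p_fire_after_do_reversed = p_fire_given_smoke[True]

print(p_fire_after_do_causal)    # 0.01
print(p_fire_after_do_reversed)  # much larger (about 0.15 with these numbers)
```

Both models assign identical probabilities to every observation; only an intervention distinguishes them, which is precisely why causal direction carries information beyond the JPD.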