Decision Nodes
A Decision Node represents possible actions, decisions, or policies.
By including one or more Decision Nodes in a network, BayesiaLab can formally search for the decisions that maximize the overall utility, which is represented by Utility Nodes.
The parents of a Decision Node define its context, which means that a decision can be subject to the states of its parent nodes.
Example
The following simple example, based on Raiffa (1968) and Shenoy (1996), describes a typical problem in oil exploration. It deals with maximizing the expected utility of exploring an oilfield despite the unknown size of the potential oil reservoir. In the course of the exploration process, decisions must be made, which we formally encode as Decision Nodes in a Bayesian network.
Decision Nodes
In this oilfield exploration problem domain, there are two decisions that have to be made:
- Whether to perform a seismic sounding to examine the geological structure at the site. The corresponding Decision Node is Seismic Test (True, False).
- The test outcome is represented by Test Result (N/A, No Structure, Open Structure, Closed Structure).
- Depending on Test Result, we must decide on whether to move forward with drilling a well. So, the second Decision Node is Drill (True, False).
- Drilling a well will ultimately establish the ground truth regarding the amount of oil present in the field: Oil (Dry, Wet, Soaking).
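The variables above can be written out as plain data, which makes the structure of the problem explicit: two decision variables and two chance variables, each with the states named in the text. This is only an illustrative sketch in Python, not BayesiaLab's internal representation.

```python
# Sketch: the variables of the oil-exploration model and their state spaces,
# using the node and state names from the text above.
variables = {
    "Seismic Test": ["True", "False"],  # decision: perform the sounding or not
    "Test Result": ["N/A", "No Structure", "Open Structure", "Closed Structure"],
    "Drill": ["True", "False"],         # decision: drill a well or not
    "Oil": ["Dry", "Wet", "Soaking"],   # chance: ground truth, revealed by drilling
}
decisions = ["Seismic Test", "Drill"]
chance = [name for name in variables if name not in decisions]
```

Note that Test Result includes the state N/A, which is needed to keep the problem well-defined when no seismic test is performed.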
Utility Nodes
- Of course, finding a reservoir of oil would produce an economic gain, while drilling a dry hole would not. We encode this potential gain in the Utility Node Gain, which is a child node of Drill and Oil.
- The Utility Node Gain also accounts for the cost associated with drilling a well.
- Finally, the seismic test also has a known cost, which we capture in the Utility Node Test Cost.
Static Policy Learning & Decision Evaluation
- With the domain now encoded as a Bayesian network, we switch into Validation Mode and, for illustration, bring up all Monitors in the Monitor Panel.
- Given that we have Decision Nodes and Utility Nodes in this network, we can now employ the Static Policy Learning function:
Learning > Learn Static Policy
- This function computes all possible permutations of decisions and allows us to evaluate the expected utilities at each step.
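BayesiaLab's implementation is not shown here, but conceptually this exhaustive search can be mimicked in a few lines: enumerate every combination of decisions, compute each policy's expected utility, and keep the best. The sketch below is self-contained; all probabilities and payoffs are assumed placeholder values (the classic wildcatter figures), not values from this document.

```python
from itertools import product

# Conceptual sketch of a brute-force static policy search.
# Priors, likelihoods, and payoffs are illustrative placeholders.
P_OIL = {"Dry": 0.5, "Wet": 0.3, "Soaking": 0.2}
P_RESULT = {  # P(Test Result | Oil) when the seismic test is performed
    "Dry":     {"No Structure": 0.6, "Open Structure": 0.3, "Closed Structure": 0.1},
    "Wet":     {"No Structure": 0.3, "Open Structure": 0.4, "Closed Structure": 0.3},
    "Soaking": {"No Structure": 0.1, "Open Structure": 0.4, "Closed Structure": 0.5},
}
RESULTS = ["No Structure", "Open Structure", "Closed Structure"]
GAIN = {"Dry": -70_000, "Wet": 50_000, "Soaking": 200_000}  # net of drilling cost
TEST_COST = 10_000

def expected_utility(test: bool, drill_rule: dict) -> float:
    """EU of one policy; drill_rule maps each Test Result state to a Drill decision."""
    eu = -TEST_COST if test else 0.0
    for oil, p_oil in P_OIL.items():
        if test:
            for r in RESULTS:
                if drill_rule[r]:
                    eu += p_oil * P_RESULT[oil][r] * GAIN[oil]
        elif drill_rule["N/A"]:
            eu += p_oil * GAIN[oil]
    return eu

def best_policy():
    """Enumerate all decision permutations and return the one with the highest EU."""
    candidates = []
    for d in (True, False):            # no test: Drill only ever sees "N/A"
        candidates.append((False, {"N/A": d}))
    for ds in product((True, False), repeat=3):  # test: one Drill choice per result
        candidates.append((True, dict(zip(RESULTS, ds))))
    return max(candidates, key=lambda pol: expected_utility(*pol))
```

With these placeholder numbers, the search recommends performing the seismic test and drilling only when the test finds an open or closed structure, which is the well-known solution of the classic problem.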
- To see the recommended decisions, we return to Modeling Mode and open the Node Editor of each Decision Node.
- Under the Expected Values tab, we find a so-called Quality Table, which shows the expected utility for each combination of parent node states and decision.
- The recommended decisions, i.e., the ones with the highest expected utilities, are highlighted with a turquoise background.
- Note that the displayed utilities assume that all subsequent decision recommendations are followed.
- The Monitors of the Decision Nodes also highlight the set of optimal decisions; the states of the recommended decisions are marked in turquoise.
References
Raiffa, H. (1968). Decision Analysis: Introductory Lectures on Choices Under Uncertainty. Addison-Wesley.
Shenoy, P. P. (1996). Representing and Solving Asymmetric Decision Problems Using Valuation Networks. In D. Fisher & H.-J. Lenz (Eds.), Learning from Data (Lecture Notes in Statistics, Vol. 112). Springer, New York, NY.