
of the network structure. (Some BN software, such as Netica and GeNIe, will add such precedence links automatically.)
Given the decision network model for the real-estate investment problem, let’s see
how it can be evaluated to compute the expected utilities and hence to make decisions.
4.4.3 Evaluation using a decision tree model
In order to show the evaluation of the decision network, we will use a decision tree
representation. The non-leaf nodes in a decision tree are either decision nodes or
chance nodes and the leaves are utility nodes. The nodes are represented using the
same shapes as in decision networks. From each decision node, there is a labeled
link for each alternative decision, and from each chance node, there is a labeled link
for each possible value of that node. A decision tree for the real-estate investment
problem is shown in Figure 4.9.
To understand a decision tree, we start with the root node, which in this case is the
first decision node, whether or not to inspect the house. As we take a directed path
down the tree, the link labels have the following meanings:
• From a decision node, a label indicates which decision is made.
• From a chance node, a label indicates which value has been observed.
At any point along a path, the same “no-forgetting” assumption applies: the decision
maker knows all the link labels from the root to the current
position. Each link from a chance node has a probability attached to it, which is the
probability of the variable having that value given the values of all the link labels to
date. That is, it is a conditional probability. Each leaf node has a utility attached to
it, which is the utility given the values of all the link labels on its path from the root.
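To make this structure concrete, here is a minimal sketch in Python of one possible representation of such a tree; the class and attribute names are illustrative choices, not taken from any particular software package.

class UtilityNode:
    """Leaf: the utility obtained if this particular scenario unfolds."""
    def __init__(self, utility):
        self.utility = utility

class ChanceNode:
    """Non-leaf: one labelled link per possible value, each carrying the
    conditional probability of that value given the link labels so far."""
    def __init__(self, name, branches):
        self.name = name
        self.branches = branches   # dict: value label -> (probability, child node)

class DecisionNode:
    """Non-leaf: one labelled link per alternative decision."""
    def __init__(self, name, branches):
        self.name = name
        self.branches = branches   # dict: decision label -> child node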
In our real-estate problem, the initial decision is whether to inspect (decision node
I); the result of the inspection, if undertaken, is represented by chance node R,
the buying decision by BH, and the house condition by C. The utilities in the leaves
are combinations of the utilities in the U and V nodes in our decision network. Note
that in order to capture the decision network exactly, we should probably include
the report node in the “Don’t Inspect” branch, but since only the “unknown” branch
would have a non-zero probability, we omit it. Note that there is a lot of redundancy
in this decision tree; the decision network is a much more compact representation.
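As an illustration of this mapping, the fragment below builds just the “Don’t Inspect” subtree using the classes sketched above; the probabilities and utilities are placeholder numbers chosen only for readability, not the values shown in Figure 4.9.

# "Don't Inspect" subtree only, with placeholder numbers (not Figure 4.9's).
buy = ChanceNode("C", {
    "good": (0.7, UtilityNode(5000)),    # P(C = good) and utility of buying a good house
    "bad":  (0.3, UtilityNode(-3000)),   # P(C = bad) and utility of buying a bad house
})
dont_inspect = DecisionNode("BH", {
    "Buy":       buy,
    "Don't buy": UtilityNode(0),         # no purchase: no gain, no loss
})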
A decision tree is evaluated as in Algorithm 4.4. Each possible alternative scenario
(of decision and observation combinations) is represented by a path from the root to a
leaf. The utility at that leaf node is the utility that would be obtained if that particular
scenario unfolded. Using the conditional probabilities, expected utilities associated
with the chance nodes can be computed as a sum of products, while the expected
utility for a decision assumes that the action returning the highest expected utility
will be chosen (shown in bold, with thicker arcs, in Figure 4.9). These expected
utilities are stored at each non-leaf node in the tree (shown in Figure 4.9 in underlined
italics) as the algorithm works its way recursively back up to the root node.
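A minimal sketch of this recursive evaluation, written against the node classes above, might look as follows (Algorithm 4.4 is the authoritative version; the function names here are illustrative only).

def expected_utility(node):
    """Return the expected utility of a (sub)tree, computed bottom-up."""
    if isinstance(node, UtilityNode):
        return node.utility                      # leaf: utility of that scenario
    if isinstance(node, ChanceNode):             # chance node: a sum of products
        return sum(p * expected_utility(child)
                   for p, child in node.branches.values())
    if isinstance(node, DecisionNode):           # decision node: best alternative
        return max(expected_utility(child)
                   for child in node.branches.values())
    raise TypeError("unknown node type")

def best_decision(node):
    """Label of the alternative with the highest expected utility."""
    return max(node.branches,
               key=lambda label: expected_utility(node.branches[label]))

For the placeholder subtree above, EU(Buy) = 0.7 × 5000 + 0.3 × (−3000) = 2600 and EU(Don’t buy) = 0, so expected_utility(dont_inspect) returns 2600 and best_decision(dont_inspect) returns “Buy”.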