Publisher: Prentice Hall, 2003, 704 pp.
Bayesian networks are graphical structures for representing the probabilistic relationships among a large number of variables and doing probabilistic inference with those variables. During the 1980s, a good deal of related research was done on developing Bayesian networks (belief networks, causal networks, influence diagrams), algorithms for performing inference with them, and applications that used them. However, the work was scattered throughout research articles. My purpose in writing the 1990 text Probabilistic Reasoning in Expert Systems was to unify this research and establish a textbook and reference for the field which has come to be known as ‘Bayesian networks.’ The 1990s saw the emergence of excellent algorithms for learning Bayesian networks from data. However, by 2000 there still seemed to be no accessible source for ‘learning Bayesian networks.’ Similar to my purpose a decade ago, the goal of this text is to provide such a source.
In order to make this text a complete introduction to Bayesian networks, I discuss methods for doing inference in Bayesian networks and influence diagrams. However, there is no effort to be exhaustive in this discussion. For example, I give the details of only two algorithms for exact inference with discrete variables, namely Pearl’s message passing algorithm and D’Ambrosio and Li’s symbolic probabilistic inference algorithm. It may seem odd that I present Pearl’s algorithm, since it is one of the oldest. I have two reasons for doing this: 1) Pearl’s algorithm corresponds to a model of human causal reasoning, which is discussed in this text; and 2) Pearl’s algorithm extends readily to an algorithm for doing inference with continuous variables, which is also discussed in this text.
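As a brief illustration (not an excerpt from the book, and using plain enumeration rather than either of the algorithms named above), the following Python sketch shows what a discrete Bayesian network encodes and what exact inference means: a DAG, a conditional probability table per node, and posteriors computed from the factored joint distribution. The network, variable names, and numbers are invented for illustration.

    # Illustrative three-node chain: Cloudy -> Rain -> WetGrass (all binary).
    # The CPT values below are made up for this example.
    from itertools import product

    p_cloudy = {True: 0.5, False: 0.5}                       # P(Cloudy)
    p_rain   = {True: {True: 0.8, False: 0.2},               # P(Rain | Cloudy)
                False: {True: 0.1, False: 0.9}}
    p_wet    = {True: {True: 0.9, False: 0.1},               # P(WetGrass | Rain)
                False: {True: 0.2, False: 0.8}}

    def joint(c, r, w):
        # Chain-rule factorization encoded by the DAG: P(C) P(R|C) P(W|R).
        return p_cloudy[c] * p_rain[c][r] * p_wet[r][w]

    # Exact inference by enumeration: P(Rain = True | WetGrass = True).
    num = sum(joint(c, True, True) for c in (True, False))
    den = sum(joint(c, r, True) for c, r in product((True, False), repeat=2))
    print(f"P(Rain = T | WetGrass = T) = {num / den:.3f}")

Enumeration is exponential in the number of variables; algorithms such as Pearl's message passing exist precisely to exploit the network structure and avoid this blowup.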
The content of the text is as follows. Chapters 1 and 2 cover basics. Specifically, Chapter 1 provides an introduction to Bayesian networks, and Chapter 2 discusses further relationships between DAGs and probability distributions, such as d-separation, the faithfulness condition, and the minimality condition. Chapters 3-5 concern inference. Chapter 3 covers Pearl’s message-passing algorithm, D’Ambrosio and Li’s symbolic probabilistic inference, and the relationship of Pearl’s algorithm to human causal reasoning. Chapter 4 presents an algorithm for doing inference with continuous variables, an approximate inference algorithm, and finally an algorithm for abductive inference (finding the most probable explanation). Chapter 5 discusses influence diagrams, which are Bayesian networks augmented with decision nodes and a value node, as well as dynamic Bayesian networks and influence diagrams. Chapters 6-10 address learning. Chapters 6 and 7 concern parameter learning. Since the notation for these learning algorithms is somewhat arduous, I introduce the algorithms by discussing binary variables in Chapter 6 and then generalize to multinomial variables in Chapter 7. Furthermore, in Chapter 7 I discuss learning parameters when the variables are continuous. Chapters 8, 9, and 10 concern structure learning. Chapter 8 shows the Bayesian method for learning structure in the cases of both discrete and continuous variables, while Chapter 9 discusses the constraint-based method for learning structure. Chapter 10 compares the Bayesian and constraint-based methods, and it presents several real-world examples of learning Bayesian networks. The text ends by referencing applications of Bayesian networks in Chapter 11.
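As a hedged aside (not taken from the book, whose notation and examples differ), the binary-variable parameter learning of the kind introduced in Chapter 6 can, under a standard Bayesian treatment, be pictured as updating a Beta prior over P(X = 1) with observed counts. The data and prior below are invented for illustration.

    def update_beta(a, b, data):
        # Posterior Beta parameters after observing a sequence of 0/1 values:
        # add the number of ones to a and the number of zeros to b.
        successes = sum(data)
        return a + successes, b + (len(data) - successes)

    # Uniform prior Beta(1, 1); hypothetical sample with 7 ones in 10 trials.
    a, b = update_beta(1, 1, [1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
    print(f"Posterior mean of P(X = 1): {a / (a + b):.3f}")   # 8/12 = 0.667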
This is a text on learning Bayesian networks; it is not a text on artificial intelligence, expert systems, or decision analysis. However, since these are fields in which Bayesian networks find application, they emerge frequently throughout the text. Indeed, I have used the manuscript for this text in my course on expert systems at Northeastern Illinois University. In one semester, I have found that I can cover the core of the following chapters: 1, 2, 3, 5, 6, 7, 8, and 9.
Basics.
Introduction to Bayesian Networks.
More DAG/Probability Relationships.
Inference.
Inference: Discrete Variables.
Influence Diagrams.
Learning.
Parameter Learning: Binary Variables.
More Parameter Learning.
Bayesian Structure Learning.
Approximate Bayesian Structure Learning.
Constraint-Based Learning.
More Structure Learning.
Applications.
Applications.