
Free Seminar:

Bayesian Networks for
Intelligence Analysis

Virginia Tech Applied Research Center — Arlington
West Falls Church Room
900 N Glebe Rd, Arlington, VA 22203
September 11, 2018, 2 p.m. - 5 p.m.


"Currently, Bayesian Networks have become one of the most complete, self-sustained and coherent formalisms used for knowledge acquisition, representation and application through computer systems."[1]

“Bayesian inference is important because it provides a normative and general-purpose procedure for reasoning under uncertainty.”[2]

A Blind Spot in Intelligence Analysis?

If these wide-ranging claims are justified, one has to wonder why references to Bayesian inference and Bayesian networks are so rare in the literature on intelligence analysis.

In this seminar, we address this potential blind spot in the intelligence community, explore the fundamental elements of reasoning, and propose a conceptual framework along three dimensions:

  • Probabilistic vs. Deterministic Inference
  • Observational vs. Causal Inference
  • Building a Knowledge Base from Data vs. Theory

We explain how many analytic methods used in the intelligence community can be subsumed under this schema. 

Cognitive Constraints and the Limits of Logic

To illustrate the need for improving on current intelligence analysis practice, we present several inference tasks that seem trivial yet have counterintuitive solutions. These examples reveal, for instance, that formal deductive logic is of limited use in real-world reasoning, and that human intuition is typically wrong when it comes to probabilistic diagnostic inference:

  • Reasoning under uncertainty: 
    • What is the taxi color? Diagnostic reasoning from effect to cause.
    • Where is my bag? Probabilistic inter-causal reasoning. 

We conclude that probabilistic inference tasks can rarely be performed correctly through deliberate reasoning. Rather, the normative approach requires the application of Bayes' Rule for obtaining correct results. Using a Bayesian network and BayesiaLab's inference algorithms, this computation becomes as straightforward as performing basic arithmetic with a spreadsheet.
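The taxi-color task above is usually stated with the classic parameterization from the judgment-under-uncertainty literature (85% of the city's taxis are Green, 15% are Blue, and a witness identifies colors correctly 80% of the time); assuming those numbers, the diagnostic computation via Bayes' Rule is a few lines of arithmetic:

```python
# Diagnostic reasoning from effect (witness report) to cause (taxi color)
# with Bayes' Rule, using the classic taxi-problem parameterization:
# 85% Green taxis, 15% Blue taxis, witness accuracy 80%.

def posterior_blue(prior_blue=0.15, witness_accuracy=0.80):
    """P(taxi is Blue | witness says 'Blue') via Bayes' Rule."""
    prior_green = 1.0 - prior_blue
    # Likelihood of the witness saying "Blue" under each hypothesis.
    p_say_blue_given_blue = witness_accuracy          # correct identification
    p_say_blue_given_green = 1.0 - witness_accuracy   # misidentification
    evidence = (p_say_blue_given_blue * prior_blue
                + p_say_blue_given_green * prior_green)
    return p_say_blue_given_blue * prior_blue / evidence

print(round(posterior_blue(), 3))  # 0.414
```

Despite the witness's 80% accuracy, the posterior probability that the taxi is Blue is only about 41%, because the low base rate of Blue taxis dominates; this is exactly the kind of result that defeats unaided intuition.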


The Fallacy of Data-Driven Decisions

As a next step, we highlight how commonly used (and technically correct) statistical summaries of data can be utterly misleading. In our example, Simpson's Paradox rears its ugly head, leading to catastrophically false interpretations of the effects within a problem domain.

Unfortunately, neither lots of data nor clever statistical techniques can resolve the paradox. The only way forward is to employ causal assumptions from human experts. A Bayesian network facilitates this process: causal arcs encode the analyst's causal assumptions, and the available data allow estimating the network's "parameters." The combination of the qualitative and quantitative parts of the network serves as the basis for performing inference.

So, in this day and age of "Big Data," domain knowledge remains critical.
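A compact numeric illustration of the paradox uses the well-known kidney-stone treatment data: one treatment has the higher success rate within every subgroup, yet the lower success rate in the aggregate.

```python
# Simpson's Paradox: a treatment can look worse in the aggregate even though
# it is better within every subgroup. Figures are the well-known
# kidney-stone data, given as (successes, trials) per stone size.

data = {
    "Treatment A": {"small": (81, 87),   "large": (192, 263)},
    "Treatment B": {"small": (234, 270), "large": (55, 80)},
}

for treatment, groups in data.items():
    total_s = sum(s for s, n in groups.values())
    total_n = sum(n for s, n in groups.values())
    per_group = ", ".join(f"{g}: {s/n:.1%}" for g, (s, n) in groups.items())
    print(f"{treatment}: {per_group}, overall: {total_s/total_n:.1%}")

# Treatment A wins in both subgroups (93.1% vs. 86.7% for small stones,
# 73.0% vs. 68.8% for large stones), yet Treatment B wins overall
# (82.6% vs. 78.0%), because the treatments were assigned unevenly across
# stone sizes -- the confounder that only causal assumptions can untangle.
```

No amount of additional data of the same kind changes this picture; only the causal assumption that stone size influences both treatment choice and outcome tells the analyst which summary to trust.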


Human-Machine Teaming

Beyond encoding the knowledge of individual domain experts, as in the above examples, we present a knowledge elicitation workflow for a group of experts. Our proposed methodology is derived from the Delphi Method and utilizes the Bayesian network paradigm plus the BayesiaLab software platform. We illustrate this approach with a case study about developing universal policies under extreme uncertainty and without any data available from the underlying domain:

  • Reasoning and decision-making without data:
    • Encoding a qualitative Bayesian network structure.
    • Systematic knowledge elicitation from experts using the Bayesia Expert Knowledge Elicitation Environment (BEKEE).
    • Finding optimal policies using BayesiaLab's Policy Learning function with the "elicited and quantified" Bayesian network.
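To give a flavor of combining judgments from multiple experts, here is a minimal sketch of a linear opinion pool, i.e., averaging each expert's elicited probabilities for the same node. This is a generic aggregation technique, not BEKEE's actual elicitation protocol; the node and the numbers are hypothetical.

```python
# Linear opinion pool: average each outcome's elicited probability across
# experts. A generic aggregation sketch, not BEKEE's actual protocol;
# the node ("success"/"failure") and the estimates are hypothetical.

expert_estimates = [
    {"success": 0.60, "failure": 0.40},   # expert 1
    {"success": 0.75, "failure": 0.25},   # expert 2
    {"success": 0.50, "failure": 0.50},   # expert 3
]

def linear_pool(estimates):
    """Average each outcome's probability across all experts."""
    outcomes = estimates[0].keys()
    n = len(estimates)
    return {o: sum(e[o] for e in estimates) / n for o in outcomes}

pooled = linear_pool(expert_estimates)
print({o: round(p, 3) for o, p in pooled.items()})
# {'success': 0.617, 'failure': 0.383}
```

In practice, a structured elicitation process also manages anchoring and groupthink across elicitation rounds, which is precisely what a Delphi-derived workflow addresses.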


Knowledge Discovery Through Machine Learning

Finally, we employ BayesiaLab's innovative machine-learning algorithms to discover knowledge in high-dimensional data and represent it in the form of a Bayesian network. First, we obtain a directly interpretable structure, not a black box. Second, such a Bayesian network compactly represents the joint probability distribution of the underlying data, which facilitates anomaly detection in domains with hundreds or even thousands of variables. Our seminar's final example illustrates this use case:

  • Recognizing deceit in seemingly normal observations using Bayesian networks.
  • What makes an observation odd? BayesiaLab's optimization algorithms reveal what's anomalous about anomalies.


Who Should Attend?

This seminar is geared toward the intelligence and law enforcement communities, including intelligence analysts, military planners, policymakers, strategy analysts, knowledge managers, forensic analysts, and investigators, plus students and teachers in related fields.

[1] Bouhamed, Heni, Afif Masmoudi, Thierry Lecroq, and Ahmed Rebaï (2015). Structure space of Bayesian networks is dramatically reduced by subdividing it in sub-networks. J. Comput. Appl. Math. 287(C), 48-62.

[2] Feeney, Aidan & Heit, Evan (eds.) (2007). Inductive Reasoning: Experimental, Developmental, and Computational Approaches. Cambridge University Press.