
Seminar Recording

Intelligence Analysis with Artificial Intelligence and Bayesian Networks

Recorded on September 11, 2018, at the Virginia Tech Applied Research Center — Arlington.


Seminar Materials

Seminar Program


  • "Currently, Bayesian Networks have become one of the most complete, self-sustained and coherent formalisms used for knowledge acquisition, representation and application through computer systems."[1]
  • "Bayesian inference is important because it provides a normative and general-purpose procedure for reasoning under uncertainty."[2]

Even a casual observer would presumably agree that intelligence analysis is a quintessential example of reasoning under uncertainty. So, if the above propositions are justified, one has to wonder why references to Bayesian inference and Bayesian networks are so rare in the literature on intelligence analysis.[3] Given that Bayes' Rule is the only mathematically correct procedure for probabilistic inference, how can it be nearly absent from the current practice of the U.S. intelligence community? While we can only speculate about the reasons for this predicament, we endeavor to address the apparent deficit with this seminar and future workshops.

Objective: Explicit Reasoning

In this seminar, we recommend ways the intelligence community can enhance its intelligence products by using Bayesian concepts and "human-machine teaming" with Bayesian networks as a form of Artificial Intelligence (AI).

In this context, we propose using AI in perhaps unexpected ways, namely for the "nuts and bolts" of elementary reasoning. Our objective is to directly observe all the "gears turning" throughout the inference process, rather than deferring to a latent reasoning process inside the analyst's mind or a black-box algorithm.

N.B. In the spirit of utmost transparency in reasoning, all examples in this seminar are presented in the form of practical software demonstrations with the intention that participants can replicate the case studies independently after the event. All datasets, models, and presentation materials are available for that purpose.

Bayesian Network Workflow

A Conceptual Map of Analytic Modeling and Reasoning

To begin our exploration, we present a conceptual framework using three dimensions, i.e., a "3D map" of analytic modeling and reasoning:

  • X — Inference Type: Probabilistic vs. Deterministic
  • Y — Model Purpose: Observational vs. Causal Inference
  • Z — Model Source: Data vs. Theory

We explain how many existing analytic methods fit into this schema, including parametric modeling and machine learning. 

Critical Thinking, Cognitive Constraints, and the Limits of Logic

Furthermore, we need to understand — at least conceptually — the capabilities and limitations of intelligence analysts in terms of their critical thinking. Thus, we propose another framework to guide our discussion about the limitations of human reasoning:

  • X — Strength of Argument (Probabilistic vs. Deterministic)
  • Y — Number of Dimensions (negative X: diagnostic inference, positive X: causal inference)
  • Z — Observed Human Inference Error (Delta vs. Normative Inference)

To illustrate the general difficulty of probabilistic reasoning, we present several inference tasks that seem trivial at first glance but turn out to have counterintuitive solutions:

  • Friend or Foe? Diagnostic reasoning from effect to cause.
  • "Either night or the Prussians will come!" — Probabilistic inter-causal reasoning at Waterloo.
  • Monty Hall in the Military: Target selection in the presence of decoys.
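The seminar's worked solutions are not reproduced here, but the Monty Hall example is easy to verify with a short Monte Carlo simulation. In this sketch (plain Python, purely illustrative), the three "doors" stand in for candidate target sites, one of which is revealed to be a decoy after the initial selection:

```python
import random

def monty_hall_trial(switch, rng):
    """One round: a target hides behind one of three sites; the analyst
    picks a site, one empty site is revealed as a decoy, and the analyst
    either stays with or switches the initial pick."""
    sites = [0, 1, 2]
    target = rng.choice(sites)
    pick = rng.choice(sites)
    # Reveal a site that is neither the pick nor the target.
    revealed = rng.choice([s for s in sites if s != pick and s != target])
    if switch:
        pick = next(s for s in sites if s != pick and s != revealed)
    return pick == target

def win_rate(switch, trials=100_000, seed=42):
    rng = random.Random(seed)
    wins = sum(monty_hall_trial(switch, rng) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    print(f"stay:   {win_rate(False):.3f}")  # ≈ 1/3
    print(f"switch: {win_rate(True):.3f}")   # ≈ 2/3
```

Switching wins about two thirds of the time, which is exactly the kind of counterintuitive result the seminar's examples are designed to expose.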

Seminar participants have the opportunity to share their personal assessments as we go through these examples.

Bayesian Networks as a Universal Reasoning Framework

We must conclude that probabilistic inference tasks can rarely be performed reliably through deliberate human reasoning. Rather, a normative approach requires the application of Bayes' Rule for obtaining correct results. Thus, we introduce Bayesian networks as a modeling framework and BayesiaLab as the modeling environment and inference engine. As a result, we can encode the available domain knowledge in a human-friendly way, i.e., graphically with directed arcs, and use the "artificial intelligence" of BayesiaLab to perform the sometimes counterintuitive inference computations.
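For the simplest possible network, a single cause with one observed effect, the inference reduces to a direct application of Bayes' Rule. The following sketch, with hypothetical numbers in the spirit of the "Friend or Foe?" example, shows why the diagnostic posterior often surprises human reasoners:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Rule: P(H | E) = P(E | H) P(H) / P(E)."""
    joint_h = p_e_given_h * prior
    joint_not_h = p_e_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

# Hypothetical numbers: 10% of radar contacts are hostile; a hostile
# contact shows this signature 90% of the time, a friendly one 20%.
p = posterior(prior=0.10, p_e_given_h=0.90, p_e_given_not_h=0.20)
print(f"P(hostile | signature) = {p:.3f}")  # 0.333 — far below the 0.90 many expect
```

Despite the highly diagnostic signature, the low base rate keeps the posterior at one third; neglecting the prior in this way is precisely the kind of inference error the frameworks above describe.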

Baggage Claim

Quantifying Uncertainty, Evidential Strength, and Contradiction 

Representing a problem domain as a Bayesian network has many advantages. Among them is the ability to compute information-theoretic concepts easily, such as Entropy, Mutual Information, Bayes Factor, and Kullback-Leibler Divergence. Why is this important? Measuring uncertainty in terms of entropy, we can optimize our approach for reducing the uncertainty regarding a target variable most efficiently. Calculating the Bayes Factor for multiple pieces of evidence — real or hypothetical — allows us to determine how they support or contradict a given hypothesis.
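These quantities are straightforward to compute once a distribution is at hand. A plain-Python sketch of the definitions (with the Bayes factor shown in its simple likelihood-ratio form for a single piece of evidence):

```python
from math import log2

def entropy(p):
    """Shannon entropy of a distribution, in bits."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q), in bits."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def bayes_factor(p_e_given_h, p_e_given_not_h):
    """Likelihood ratio: how strongly evidence E favors H over not-H."""
    return p_e_given_h / p_e_given_not_h

print(f"{entropy([0.5, 0.5]):.3f}")                       # 1.000 — maximal binary uncertainty
print(f"{entropy([0.9, 0.1]):.3f}")                       # 0.469 — evidence has reduced uncertainty
print(f"{kl_divergence([0.9, 0.1], [0.5, 0.5]):.3f}")     # 0.531 — distance from the uniform prior
print(f"{bayes_factor(0.9, 0.2):.1f}")                    # 4.5 — evidence favors H 4.5-to-1
```

In a Bayesian network these same quantities are computed over the variables of interest, which is what makes it possible to rank hypothetical observations by how much uncertainty about the target variable they would remove.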

The Fallacy of Data-Driven Decisions — Bayesian Networks to the Rescue

The promise of Big Data has created the illusion that data, through elaborate visualization and machine learning, can ultimately speak for itself. As it turns out, this is a fallacy. We highlight how querying perfectly-recorded data from a completely-observed domain can sometimes lead to catastrophically false interpretations. In our example, Simpson's Paradox rears its ugly head.

Unfortunately, neither more data nor more advanced statistical techniques can resolve the paradox. The only way forward is to employ causal assumptions from human experts. A Bayesian network facilitates this process: causal arcs encode the analyst's causal assumptions, and the available data allow estimating the network's "parameters." The combination of the qualitative and quantitative parts of the network serves as the basis for performing inference. So, even in this day and age of Big Data, human domain knowledge remains critical.
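The paradox itself is easy to reproduce. The sketch below uses the classic illustrative numbers from the well-known kidney-stone example (not the seminar's dataset): treatment A wins within every stratum, yet appears to lose once the strata are pooled.

```python
# Classic Simpson's paradox numbers (kidney-stone style, purely
# illustrative): (successes, trials) per treatment and stratum.
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, trials):
    return successes / trials

def aggregate(treatment):
    """Success rate pooled over both strata."""
    wins = sum(data[(treatment, s)][0] for s in ("small", "large"))
    n = sum(data[(treatment, s)][1] for s in ("small", "large"))
    return wins / n

# Within each stratum, A outperforms B ...
for stratum in ("small", "large"):
    print(stratum,
          f"A={rate(*data[('A', stratum)]):.1%}",
          f"B={rate(*data[('B', stratum)]):.1%}")

# ... yet pooling reverses the ranking, because the stratum acts as
# a confounder that is unevenly distributed across treatments.
print("pooled", f"A={aggregate('A'):.1%}", f"B={aggregate('B'):.1%}")
```

No amount of additional rows changes this reversal; only the causal assumption that the stratum confounds treatment and outcome tells the analyst which of the two answers to trust.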

Simpson's Paradox

Human-Machine Teaming for Reasoning

Beyond encoding the knowledge of individual domain experts, as in the above examples, we present a knowledge elicitation workflow for a group of experts. Our proposed methodology is derived from the Delphi Method and utilizes the Bayesian network paradigm plus the BayesiaLab software platform. We illustrate this approach with a case study about developing universal policies under extreme uncertainty and without any data available from the underlying domain:

  • Encoding a qualitative Bayesian network structure.
  • Reinventing the Delphi Method: web-based knowledge elicitation using the Bayesia Expert Knowledge Elicitation Environment (BEKEE).
  • Finding optimal policies using BayesiaLab's Policy Learning function with the "elicited and quantified" Bayesian network.



Knowledge Discovery Through Artificial Intelligence

In the final part of the seminar, we employ BayesiaLab's innovative machine-learning algorithms to discover knowledge from high-dimensional data and represent it in the form of a Bayesian network. First, we obtain a directly interpretable structure—not a black box. Any subject matter expert can intuitively review and validate (or critique) the network structure.

Not a Black Box

Recognizing Anomalies, Detecting & Planning Deceit

Second, such a Bayesian network compactly represents the joint probability distribution of the underlying data, which facilitates anomaly detection in domains with hundreds or even thousands of variables. Our seminar's final example illustrates this attractive property of Bayesian networks in the context of adversarial reasoning:

  • Recognizing deceit in the seemingly normal behavior of an adversary.
  • What makes an observation unusual? BayesiaLab's optimization algorithms reveal what is anomalous about anomalies, which facilitates the interpretation by domain experts.
  • Deceiving an adversary about true intentions by optimizing the high-dimensional distribution of misleading evidence.
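BayesiaLab's own algorithms are not reproduced here, but the core idea, scoring each observation by its log joint probability and flagging the least probable ones, can be sketched with a crudely smoothed frequency table standing in for a learned network (all data hypothetical):

```python
from collections import Counter
from math import log

# Hypothetical behavioral log of an adversary's movements:
# (route, time_of_day, cargo) tuples.
observations = (
    [("north", "day", "fuel")] * 40
    + [("north", "night", "fuel")] * 25
    + [("south", "day", "food")] * 30
    + [("south", "night", "food")] * 4
    + [("north", "day", "food")] * 1
)

counts = Counter(observations)
total = len(observations)

def log_probability(record, smoothing=0.5):
    """Smoothed log joint probability of a record under the learned
    table; a real network would factor this over its arcs instead."""
    return log((counts[record] + smoothing)
               / (total + smoothing * len(counts)))

# Frequent patterns score high; a rare pattern scores low; a
# never-observed combination scores lowest of all — an anomaly.
for record in [("north", "day", "fuel"),
               ("south", "night", "food"),
               ("south", "night", "fuel")]:
    print(record, round(log_probability(record), 2))
```

A Bayesian network makes the same computation tractable in high dimensions by factoring the joint distribution over its arcs, and its structure also reveals *which* variables make a flagged observation anomalous.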

Knowledge Discovery

BayesiaLab Courses

  • May 8–10, 2019, Singapore: Introductory Course (3 Days)
  • May 13–15, 2019, Sydney, Australia: Introductory Course (3 Days)
  • May 21–23, 2019, Paris, France: Advanced Course (3 Days, in French)
  • June 5, 2019, Washington, D.C.: BayesiaLab 101 Short Course (1 Day)
  • June 12–14, 2019, Seattle, WA: Introductory Course (3 Days)
  • June 17–19, 2019, Seattle, WA: Advanced Course (3 Days)

Upcoming Seminars, Webinars, and Conferences

  • Live Webinar, May 16, 2019, 11:00–12:00 (CDT, UTC-5): Human-Machine Teaming
  • Live Webinar, May 30, 2019, 11:00–12:00 (CDT, UTC-5): Causal Counterfactuals for Contribution Analysis — Explaining a Misunderstood Concept with Bayesian Networks
  • Live Webinar, June 13, 2019, 11:00–12:00 (CDT, UTC-5): Black Swans & Bayesian Networks — Jointly Representing Common and Rare Events
Please check out our archive of recordings of previous events.

7th Annual BayesiaLab Conference

  • October 7–9, 2019, Durham, NC: 3-Day Introductory Course
  • October 10–11, 2019, Durham, NC: 7th Annual BayesiaLab Conference
  • October 14–16, 2019, Durham, NC: 3-Day Advanced Course