Inference: Diagnosis, Prediction, and Simulation
Part of the BayesiaLab exploration path. Start with the BayesiaLab Overview.
Bayesian networks model uncertainty explicitly. In BayesiaLab, diagnosis, prediction, and simulation are all forms of evidence-conditioned inference.
- Diagnosis (abduction): infer likely causes from observed effects.
- Prediction/simulation: infer likely effects from observed causes.
- Only the interpretation depends on the research perspective; the underlying computation is the same.
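The symmetry above can be sketched with a minimal two-node example. This is not BayesiaLab code; it is a hypothetical Disease → Symptom network with illustrative numbers, showing that prediction and diagnosis are both just conditioning on the same joint distribution:

```python
# Hypothetical two-node network: Disease -> Symptom (names and numbers illustrative).
p_disease = 0.2                          # prior P(Disease = present)
p_sym_given_d = {True: 0.9, False: 0.1}  # P(Symptom = present | Disease)

# Build the joint distribution P(Disease, Symptom).
joint = {}
for d in (True, False):
    p_d = p_disease if d else 1 - p_disease
    for s in (True, False):
        p_s = p_sym_given_d[d] if s else 1 - p_sym_given_d[d]
        joint[(d, s)] = p_d * p_s

# Prediction: condition on the cause, read off the effect.
pred = joint[(True, True)] / sum(joint[(True, s)] for s in (True, False))

# Diagnosis (abduction): condition on the effect, read off the cause.
diag = joint[(True, True)] / sum(joint[(d, True)] for d in (True, False))

print(round(pred, 4))  # P(Symptom | Disease)
print(round(diag, 4))  # P(Disease | Symptom)
```

Both queries divide a slice of the same joint by a marginal; nothing about the computation distinguishes "causes" from "effects".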
Observational Inference
- Bayesian networks represent a Joint Probability Distribution and support omnidirectional inference.
- Given evidence on any subset of nodes, BayesiaLab computes posterior probabilities for all other nodes.
- Both exact and approximate observational inference algorithms are available.
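Omnidirectional inference can be illustrated by brute-force enumeration over a small joint distribution. The chain A → B → C and its probabilities below are assumptions for illustration only; real networks use exact (junction-tree) or approximate algorithms rather than full enumeration:

```python
from itertools import product

# Hypothetical binary chain A -> B -> C (all numbers illustrative).
p_a = {1: 0.3, 0: 0.7}
p_b_a = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.25, (0, 0): 0.75}  # P(B=b | A=a), key (a, b)
p_c_b = {(1, 1): 0.9, (1, 0): 0.1, (0, 1): 0.3, (0, 0): 0.7}    # P(C=c | B=b), key (b, c)

# The network factorizes the joint: P(A, B, C) = P(A) P(B|A) P(C|B).
joint = {(a, b, c): p_a[a] * p_b_a[(a, b)] * p_c_b[(b, c)]
         for a, b, c in product((0, 1), repeat=3)}

def posterior(node_index, evidence):
    """P(node = 1 | evidence) by enumeration; evidence maps index (0=A, 1=B, 2=C) to a state."""
    num = den = 0.0
    for assignment, p in joint.items():
        if any(assignment[i] != v for i, v in evidence.items()):
            continue
        den += p
        if assignment[node_index] == 1:
            num += p
    return num / den

# Evidence on C updates every other node, including "upstream" of the arcs.
print(posterior(0, {2: 1}))  # P(A=1 | C=1)
print(posterior(1, {2: 1}))  # P(B=1 | C=1)
```

Evidence can be set on any subset of nodes, and every remaining node gets a posterior from the same routine.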
Evidence Types
- Hard Evidence: one state observed with certainty.
- Likelihood/Virtual Evidence: state-specific likelihoods.
- Probabilistic/Soft Evidence: evidence as marginal distributions.
- Numerical Evidence: numeric evidence for numeric or value-encoded symbolic variables.
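The first three evidence types can be contrasted numerically. The prior and likelihood weights below are made up for illustration; the point is that hard evidence is the degenerate case of likelihood evidence, while soft evidence fixes the marginal directly:

```python
# Illustrative prior over a three-state node.
prior = {"low": 0.5, "medium": 0.3, "high": 0.2}

def apply_likelihood(prior, likelihoods):
    """Likelihood (virtual) evidence: weight each state's prior, then renormalize."""
    weighted = {s: prior[s] * likelihoods[s] for s in prior}
    z = sum(weighted.values())
    return {s: w / z for s, w in weighted.items()}

# Hard evidence: likelihood 1 for the observed state, 0 for all others.
hard = apply_likelihood(prior, {"low": 0, "medium": 0, "high": 1})

# Likelihood evidence, e.g. a noisy sensor twice as likely to fire on "high".
virtual = apply_likelihood(prior, {"low": 0.5, "medium": 0.5, "high": 1.0})

# Soft (probabilistic) evidence: the stated marginal replaces the posterior outright.
soft = {"low": 0.2, "medium": 0.3, "high": 0.5}

print(hard)
print(virtual)
```

Note the practical difference: likelihood evidence is combined with the prior, so the resulting posterior still reflects it, whereas soft evidence overrides the marginal regardless of the prior.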
Causal Inference
- Beyond observation, BayesiaLab can compute intervention effects.
- BayesiaLab includes Pearl’s Graph Surgery and Jouffe’s Likelihood Matching for causal estimation.
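The gap between observing and intervening can be made concrete with a confounded structure U → X, U → Y, X → Y. All numbers are illustrative; the sketch mimics Pearl's Graph Surgery by cutting the arc into X so the confounder keeps its prior:

```python
# Hypothetical confounded structure: U -> X, U -> Y, X -> Y.
p_u = {1: 0.5, 0: 0.5}                 # prior on confounder U
p_x_u = {1: 0.8, 0: 0.2}               # P(X=1 | U=u)
p_y_xu = {(1, 1): 0.9, (1, 0): 0.6,    # P(Y=1 | X=x, U=u), key (x, u)
          (0, 1): 0.5, (0, 0): 0.2}

# Observational: conditioning on X=1 also shifts beliefs about U.
num = sum(p_u[u] * p_x_u[u] * p_y_xu[(1, u)] for u in (0, 1))
den = sum(p_u[u] * p_x_u[u] for u in (0, 1))
p_y_given_x1 = num / den

# Interventional (graph surgery): the arc U -> X is removed, so U keeps its prior.
p_y_do_x1 = sum(p_u[u] * p_y_xu[(1, u)] for u in (0, 1))

print(round(p_y_given_x1, 4))  # confounding inflates the observed association
print(round(p_y_do_x1, 4))     # the causal effect of setting X=1
```

With these numbers the observational query overstates the effect of X because high-U cases are overrepresented among X=1; surgery removes exactly that back-door contribution.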
Effects Analysis
- In this nonparametric framework, effects are estimated through simulation rather than fixed coefficients.
- Because relationships are encoded in Conditional Probability Tables (CPTs), effect sizes depend on the simulated conditions rather than on a single global coefficient.
- Functions such as Total Effects Analysis and Target Mean Analysis support nonlinear effect and interaction analysis.
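A toy version of this idea, in the spirit of Target Mean Analysis (the numbers and names are hypothetical, not BayesiaLab output): sweep the states of a driver and read off the simulated target mean. Because the response lives in a table rather than a coefficient, the "effect" of a one-step change differs by where you start:

```python
# Illustrative table: simulated target mean for each state of a driver X.
target_mean_given_x = {"low": 10.0, "medium": 11.0, "high": 18.0}
baseline = {"low": 1 / 3, "medium": 1 / 3, "high": 1 / 3}  # assumed baseline distribution

def target_mean(dist_x):
    """Expected target value under a distribution over the driver's states."""
    return sum(p * target_mean_given_x[x] for x, p in dist_x.items())

base = target_mean(baseline)
# The same one-step move has very different effect sizes (a nonlinear response):
effect_low_to_med = target_mean_given_x["medium"] - target_mean_given_x["low"]
effect_med_to_high = target_mean_given_x["high"] - target_mean_given_x["medium"]

print(base, effect_low_to_med, effect_med_to_high)
```

A linear model would force both step effects to be equal; simulation against the table recovers the nonlinearity as-is.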
Optimization
- Target Optimization searches for value combinations that optimize a target criterion.
- Combined with Direct Effects, this supports evaluating nonlinear trade-offs among interdependent drivers.
- Genetic Target Optimization can identify high-performing scenarios within constraints.
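The search problem can be sketched with a deliberately simple stand-in: an exhaustive search over driver settings under a constraint (a genetic algorithm would explore the same feasible space heuristically when it is too large to enumerate). The scoring function and constraint below are invented for illustration:

```python
from itertools import product

def target(x1, x2):
    """Hypothetical simulated target mean for two driver settings.
    The interaction term makes the trade-off nonlinear."""
    return 10 + 2 * x1 + 3 * x2 - 1.5 * x1 * x2

settings = [0, 1, 2]
# Assumed constraint: total "spend" x1 + x2 must not exceed 3.
feasible = [(a, b) for a, b in product(settings, repeat=2) if a + b <= 3]

# Exhaustive search for the best feasible scenario.
best = max(feasible, key=lambda s: target(*s))
print(best, target(*best))
```

Because of the interaction term, the optimum is not simply "maximize both drivers": pushing both at once erodes the gains, which is exactly the kind of trade-off a per-variable analysis would miss.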