Maximum Likelihood Estimation with Priors

  • BayesiaLab can also take into account Priors when estimating parameters using Maximum Likelihood Estimation.
  • Priors reflect any a priori knowledge of an analyst regarding the domain, in other words, expert knowledge. See also Prior Knowledge for Structural Learning.
  • These priors are expressed with an analyst-specified, initial Bayesian network (structure and parameters) plus analyst-specified Prior Samples.
  • Prior Samples represent the analyst's subjective degree of confidence in the Priors.

$$\hat{P}(X = x_i \mid Pa = pa_i) = \frac{N(X = x_i, Pa = pa_i) + M_0 \times P_0(X = x_i, Pa = pa_i)}{\sum_j \left( N(X = x_j, Pa = pa_i) + M_0 \times P_0(X = x_j, Pa = pa_i) \right)}$$


  • $M_0$ is the degree of confidence in the Prior.
  • $P_0$ is the joint probability returned by the prior Bayesian network.
  • BayesiaLab uses these two terms to generate virtual samples that are subsequently combined with the observed samples from the dataset.
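The estimator above can be sketched in a few lines of Python. This is an illustrative implementation of the formula, not BayesiaLab internals; the function name and the sample counts are hypothetical.

```python
import numpy as np

def mle_with_prior(counts, prior_probs, m0):
    """Estimate P(X | Pa = pa_i) by combining observed counts N with
    M0 virtual samples distributed according to the prior P0, then
    normalizing over the states of X (the sum over j in the formula)."""
    counts = np.asarray(counts, dtype=float)        # N(X = x_j, Pa = pa_i)
    prior_probs = np.asarray(prior_probs, dtype=float)  # P0(X = x_j, Pa = pa_i)
    smoothed = counts + m0 * prior_probs
    return smoothed / smoothed.sum()

# Example: for one parent configuration, X = true was observed 8 times and
# X = false 2 times; a uniform prior with M0 = 10 virtual samples pulls the
# estimate toward 0.5.
estimate = mle_with_prior([8, 2], [0.5, 0.5], m0=10)
print(estimate)  # [0.65 0.35]
```

With `m0=0` the function reduces to plain Maximum Likelihood Estimation; larger `m0` values weight the prior more heavily relative to the data.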

Virtual Database Generator

  • With your current Bayesian network, you can generate Prior Samples.
  • Select Menu > Data > Prior Samples > Generate.
  • You can specify $M_0$ by setting the number of Prior Samples.

  • BayesiaLab uses the current Bayesian network to compute $P_0$.

  • The existence of a new Virtual Database is indicated by an icon in the lower right corner of the graph window, next to the "real dataset" icon.

  • Right-clicking on the Virtual Database icon displays the structure of the prior knowledge that was used for generating the Virtual Samples.

  • These Virtual Samples will be combined with the observed "real" samples during the learning process.

Number of Uniform Prior Samples

  • Edit Number of Uniform Prior Samples allows you to define prior knowledge such that all variables are marginally independent (a fully unconnected network) and each node's marginal probability distribution is uniform.

  • For instance, if the number of Prior Samples is set to 1, one observation ("occurrence") would be "spread across" all states of each node, essentially assigning a "fraction of an observation" to each node's states.

  • To apply Smoothed Probability Estimation, select Menu > Edit > Edit Smoothed Probability Estimation.

  • Specify the number of Prior Samples.
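The uniform-prior case above amounts to adding a fractional count of $M_0/k$ to each of a node's $k$ states before normalizing. A minimal sketch, with an assumed function name and illustrative counts:

```python
import numpy as np

def uniform_prior_estimate(counts, m0=1):
    """Smoothed estimate with M0 uniform prior samples: each of the
    node's k states receives M0/k fractional virtual counts."""
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    smoothed = counts + m0 / k
    return smoothed / smoothed.sum()

# A 3-state node observed as [4, 0, 0]: plain MLE would assign probability
# zero to the unobserved states, while one uniform Prior Sample keeps every
# state's probability strictly positive.
est = uniform_prior_estimate([4, 0, 0], m0=1)
print(est)
```

This is the same smoothing idea as Laplace-style additive smoothing, with the Prior Sample count $M_0$ controlling how much weight the uniform prior receives.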

Copyright © 2024 Bayesia S.A.S., Bayesia USA, LLC, and Bayesia Singapore Pte. Ltd. All Rights Reserved.