If A represents the prior belief and B represents the new evidence, Bayes' theorem states:

P(A|B) = P(B|A) * P(A) / P(B)

P(A) is known as the prior probability. A prior can be informative, meaning we have a strong prior belief, or uninformative, meaning our prior understanding of the parameter's true value is much more uncertain.

P(B|A) is known as the likelihood function: the probability of the new data/evidence given that A is true. It quantifies the degree to which the evidence agrees with our prior beliefs.

P(A|B) is known as the posterior probability: the probability of A after taking the evidence into account. The denominator P(B) is the marginal probability of the evidence; it normalizes the posterior.
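To make this concrete, here is a minimal worked example in Python. The numbers are hypothetical: a rare condition plays the role of the prior belief A, and a positive result from an imperfect test is the evidence B.

```python
# Hypothetical numbers: A = "patient has the condition", B = "test is positive"
p_a = 0.01              # P(A): prior probability of the condition
p_b_given_a = 0.95      # P(B|A): likelihood (test sensitivity)
p_b_given_not_a = 0.05  # P(B|~A): false positive rate

# P(B) by the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(f"P(A|B) = {p_a_given_b:.3f}")  # ~0.161: the evidence updates the 1% prior
```

Note how a fairly accurate test still yields a modest posterior because the prior is so small; this is exactly the prior-versus-evidence trade-off the formula encodes.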
Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes that are not connected (there is no path between them) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives, as output, the probability (or probability distribution, if applicable) of the variable represented by the node.
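As an illustration, here is a toy two-node network in Python. The probabilities are made up, and the posterior query is answered by brute-force enumeration rather than a proper inference library; it is a sketch of the factorization idea, not a general implementation.

```python
# Toy two-node Bayesian network: Rain -> GrassWet (all numbers are made up)
p_rain = {True: 0.2, False: 0.8}            # root node: no parents
p_wet_given_rain = {True: 0.9, False: 0.1}  # P(GrassWet=True | Rain)

# The DAG factorizes the joint: P(Rain, Wet) = P(Rain) * P(Wet | Rain)
def joint(rain, wet):
    p_wet = p_wet_given_rain[rain]
    return p_rain[rain] * (p_wet if wet else 1.0 - p_wet)

# Query P(Rain=True | GrassWet=True) by enumeration over the DAG
evidence = sum(joint(r, True) for r in (True, False))  # P(Wet=True)
print(joint(True, True) / evidence)                    # 0.18 / 0.26 ≈ 0.692
```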
Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least, easier) to sample from.
The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. Gibbs sampling is particularly well-adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of conditional distributions.
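Below is a minimal sketch of the algorithm, using a bivariate normal as the target distribution. This target is chosen purely because its conditionals are known in closed form; a Bayesian-network posterior would be sampled the same way, updating one variable at a time from its conditional given the current values of the others.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_samples = 0.8, 5000  # correlation of the target, chain length
x = y = 0.0                 # arbitrary starting point
samples = np.empty((n_samples, 2))

for i in range(n_samples):
    # Conditionals of a standard bivariate normal:
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[i] = (x, y)

print(np.corrcoef(samples.T)[0, 1])  # should be close to rho (~0.8)
```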
Given an input vector v, we use p(h|v) to predict the hidden values h. Knowing the hidden values, we use p(v|h) to predict new input values v. This process is repeated k times. After k iterations we obtain another input vector v_k, which was recreated from the original input values v_0.
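This alternating chain is the Gibbs step used in restricted Boltzmann machines. Below is a sketch under the usual assumptions of binary units with sigmoid activations; the weight matrix W and bias vectors b and c are hypothetical placeholders, initialized randomly for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_chain(v0, W, b, c, k):
    """Alternate h ~ p(h|v) and v ~ p(v|h) for k steps, returning v_k."""
    v = v0
    for _ in range(k):
        p_h = sigmoid(v @ W + c)              # p(h|v): predict hidden values
        h = (rng.random(p_h.shape) < p_h) * 1.0
        p_v = sigmoid(h @ W.T + b)            # p(v|h): reconstruct the input
        v = (rng.random(p_v.shape) < p_v) * 1.0
    return v

# Toy usage with random parameters (4 visible units, 3 hidden units)
W = rng.normal(size=(4, 3)); b = np.zeros(4); c = np.zeros(3)
v0 = np.array([1.0, 0.0, 1.0, 1.0])
v_k = gibbs_chain(v0, W, b, c, k=3)  # v_k: recreated from v_0 after k steps
```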
Measurements of any kind, in any experiment, are always subject to uncertainties, or errors, as they are more often called. The measurement process is, in fact, a random process described by an abstract probability distribution whose parameters contain the desired information. The results of a measurement are then samples from this distribution, which allow an estimate of the theoretical parameters. In this view, measurement errors can be seen as sampling errors.
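As a small illustration of this view, the sketch below simulates repeated measurements as draws from an assumed normal distribution and estimates the underlying parameter along with its sampling error; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value, sigma, n = 9.81, 0.05, 100            # hypothetical experiment
measurements = rng.normal(true_value, sigma, n)   # samples from the distribution

estimate = measurements.mean()                      # estimate of the parameter
std_error = measurements.std(ddof=1) / np.sqrt(n)   # sampling error of the estimate
print(f"{estimate:.3f} +/- {std_error:.3f}")
```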