Bayesian data analysis

* the handling of error propagation is more sophisticated within IDA, allowing non-Gaussian error distributions and absolutely general parameter interdependencies; and  
* additionally, it provides a systematic way to include prior knowledge into the analysis.
== Bayes' Theorem ==
The method is based on Bayes' Theorem, expressed as follows:
:<math>
P(\vec{\alpha} \,|\, \vec{d}, \vec{\sigma}, I)
= \frac{L(\vec{d} \,|\, \vec{\alpha}, \vec{\sigma}, I)\, \pi(\vec{\alpha} \,|\, I)}{\int L(\vec{d} \,|\, \vec{\alpha}, \vec{\sigma}, I)\, \pi(\vec{\alpha} \,|\, I)\, d\vec{\alpha}}
</math>
Here, ''&alpha;'' is the set of model parameters, and ''P'' is the probability distribution of these parameters, ''given'' the experimental data ''d'', their errors ''&sigma;'', and any additional information ''I''.
Bayes' Theorem expresses this probability distribution as the product of the ''likelihood'' ''L'' of obtaining the cited experimental data ''given'' specific values of the model parameters ''&alpha;'' (as well as ''&sigma;'' and ''I''), and the ''prior distribution'' ''&pi;'', which expresses the knowledge about the model parameters available before any measurement.
The likelihood ''L'' is computed using a ''forward model'' of the experiment, which returns the simulated measurement values ''assuming'' a given physical state of the experimental system. This forward mapping (from system parameters to measurements) is usually much easier to compute than the reverse mapping (from measurements to system parameters), since the latter typically amounts to inverting a projection and is therefore ill-determined.
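
As a purely illustrative sketch (not part of IDA), the following Python fragment assumes a simple straight-line forward model and independent Gaussian measurement errors; the names <code>forward_model</code>, <code>log_likelihood</code>, <code>x</code>, <code>d</code> and <code>sigma</code> are hypothetical placeholders for the experiment-specific quantities described above.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical forward model: predicts the measurements that would be
# observed for a given parameter vector alpha.  A simple straight line
# stands in here for the (usually far more involved) physics model of
# the experiment.
def forward_model(alpha, x):
    return alpha[0] + alpha[1] * x

# Log-likelihood of observing the data d with errors sigma, assuming
# independent Gaussian measurement errors; a non-Gaussian error
# distribution would simply replace this expression.
def log_likelihood(alpha, x, d, sigma):
    residual = d - forward_model(alpha, x)
    return -0.5 * np.sum((residual / sigma) ** 2
                         + np.log(2.0 * np.pi * sigma ** 2))
</syntaxhighlight>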
The denominator of the equation normalizes the posterior so that it remains a proper probability distribution; it is not needed, however, for determining the best values of the parameters and their errors.
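
The following toy example (an illustrative assumption, not taken from the text above) evaluates an unnormalized posterior on a one-dimensional parameter grid and normalizes it numerically; the normalization rescales the curve but leaves the location of its maximum unchanged.
<syntaxhighlight lang="python">
import numpy as np

# Toy 1-D illustration of the role of the normalization (the integral in
# the denominator of Bayes' theorem).  Dividing by it turns the product
# likelihood * prior into a proper probability density, but it does not
# move the location of the maximum.
alpha = np.linspace(-5.0, 5.0, 1001)                     # grid of parameter values
unnormalized = np.exp(-0.5 * (alpha - 1.0) ** 2)         # likelihood * prior (toy shape)
evidence = np.sum(unnormalized) * (alpha[1] - alpha[0])  # numerical integral over alpha
posterior = unnormalized / evidence                      # normalized posterior density

assert np.isclose(np.sum(posterior) * (alpha[1] - alpha[0]), 1.0)
assert np.argmax(posterior) == np.argmax(unnormalized)
</syntaxhighlight>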
The optimum reconstruction is obtained by ''maximizing'' the posterior ''P'' with respect to the parameters ''&alpha;''.
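
A minimal end-to-end sketch of such a maximum-a-posteriori (MAP) reconstruction is given below; the linear forward model, the Gaussian likelihood and prior, the synthetic data, and the use of <code>scipy.optimize.minimize</code> are all illustrative assumptions rather than part of the method description above.
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Hypothetical linear forward model and Gaussian log-likelihood,
# as in the sketch above (illustrative assumptions only).
def forward_model(alpha, x):
    return alpha[0] + alpha[1] * x

def log_likelihood(alpha, x, d, sigma):
    r = d - forward_model(alpha, x)
    return -0.5 * np.sum((r / sigma) ** 2 + np.log(2.0 * np.pi * sigma ** 2))

# Broad Gaussian log-prior on alpha, assumed purely for illustration;
# this is where prior knowledge about the parameters enters.
def log_prior(alpha):
    return -0.5 * np.sum((alpha / 10.0) ** 2)

# Negative log of the *unnormalized* posterior: the evidence integral in
# the denominator of Bayes' theorem does not depend on alpha and can be
# dropped when only the maximizing parameter values are sought.
def neg_log_posterior(alpha, x, d, sigma):
    return -(log_likelihood(alpha, x, d, sigma) + log_prior(alpha))

# Synthetic data generated from a "true" parameter vector (demo only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1 * np.ones_like(x)
d = forward_model(np.array([1.0, 2.0]), x) + sigma * rng.standard_normal(x.size)

# Optimum reconstruction: maximize the posterior over alpha, i.e.
# minimize its negative logarithm.
result = minimize(neg_log_posterior, x0=np.zeros(2), args=(x, d, sigma))
alpha_map = result.x  # maximum a posteriori (MAP) parameter estimate
print("MAP estimate:", alpha_map)
</syntaxhighlight>
Working with the logarithm of the posterior turns the product of likelihood and prior into a sum and avoids numerical underflow, which is why the optimization is phrased as minimizing the negative log-posterior.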


== See also ==
