Bayesian data analysis

Bayesian data analysis is based on Bayesian inference. [1] [2] Briefly, this approach is based on the following straightforward property of probability distributions. Let p(x,y) be the joint probability of observing x and y simultaneously. Let p(x|y) be the conditional probability of observing x, given y. Then, by definition

  p(x|y)p(y) = p(x,y) = p(y|x)p(x)

from which follows Bayes' theorem:

  p(x|y) = p(y|x)p(x) / p(y)

Interpretation of Bayes' Theorem

In the framework of data analysis, this expression is interpreted as follows. Given the initial knowledge of a system, quantified by the prior probability p(x) of system states x, one makes an observation y with probability (or degree of confidence) p(y), adding to the prior knowledge. The posterior probability p(x|y) quantifies this enhanced knowledge of the system, given the observation y. Thus, the Bayesian inference rule allows one to (a) gradually improve the knowledge of a system by adding observations, and (b) easily combine information from diverse sources, by formulating the degree of knowledge of the system and the observations in terms of probability distributions. The quantity p(y|x), called the likelihood, links the prior and posterior distributions and expresses the probability of observing y, given the prior knowledge x.
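
As a minimal numerical sketch of this update rule (not part of the original article), the following Python snippet applies Bayes' theorem to a hypothetical two-state system; the prior and likelihood values are purely illustrative:

  # Single Bayesian update for a discrete state variable x with two possible states.
  # All numbers are illustrative assumptions, not taken from any diagnostic.
  import numpy as np

  prior = np.array([0.5, 0.5])        # prior p(x) over the two hypothetical states
  likelihood = np.array([0.8, 0.3])   # likelihood p(y|x) of the observed outcome y

  evidence = np.sum(likelihood * prior)        # p(y) = sum_x p(y|x) p(x)
  posterior = likelihood * prior / evidence    # Bayes' theorem: p(x|y)

  print(posterior)   # [0.727..., 0.272...]: the observation favours the first state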

Forward modelling

The likelihood is evaluated using a forward model of the experiment, which returns the value of simulated measurements, assuming a given physical state x of the experimental system. Mathematically, this forward model (mapping system parameters to measurements) is often much easier to evaluate than the reverse mapping (from measurements to system parameters), since the latter typically involves inverting a projection and is therefore ill-determined. On the other hand, evaluating the forward model requires detailed knowledge of the physical system and of the complete measurement process.
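
As an illustration (not from the original article), a forward model for a hypothetical interferometer-like diagnostic might map a density profile to line-integrated measurements along a few chords; the profile shape and chord geometry below are illustrative assumptions, not a real diagnostic:

  # Sketch of a forward model: simulated line-integrated density measurements
  # for a hypothetical parabolic profile n(r) = n0 * (1 - (r/a)^2).
  import numpy as np

  def forward_model(n0, a=1.0, chord_offsets=(0.0, 0.3, 0.6), npts=200):
      """Return simulated line integrals of the density along horizontal chords."""
      measurements = []
      for d in chord_offsets:                      # distance of each chord from the axis
          half = np.sqrt(max(a**2 - d**2, 0.0))
          s = np.linspace(-half, half, npts)       # coordinate along the chord
          r = np.sqrt(s**2 + d**2)
          n = n0 * (1.0 - (r / a)**2)
          measurements.append(np.sum(0.5 * (n[:-1] + n[1:]) * np.diff(s)))  # trapezoidal rule
      return np.array(measurements)

  print(forward_model(n0=1.0e19))   # predicted measurements for a given system state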

The Likelihood

The forward model is used to predict the measurements y, based on the physical state x of the system. The likelihood p(y|x) specifies the most probable measurement outcome, corresponding to the maximum of the distribution p(y|x), as well as its uncertainty, given by the width of the distribution.

In a typical case, assume that the model is such that the measurement outcomes are distributed as a Gaussian around a most probable value y0, with an error Δy. Then the likelihood will be

  p(y|x) ∝ exp(-(y - y0)² / (2Δy²))

Note that the negative logarithm of the likelihood is proportional to the χ² of the fit of the data y to the model prediction y0, so that maximizing the likelihood minimizes χ²; this establishes the link between the Bayesian approach and the common least squares fit. However, the Bayesian approach is more general than the standard least squares fit, as it can handle any type of probability distribution.
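
The following sketch checks this equivalence numerically for a few made-up data points with Gaussian errors; all values are illustrative:

  # For Gaussian errors, -log(likelihood) = chi^2/2 + constant, so maximizing the
  # likelihood is equivalent to minimizing chi^2 (least squares).
  import numpy as np

  y  = np.array([1.1, 1.9, 3.2])   # hypothetical measured values
  y0 = np.array([1.0, 2.0, 3.0])   # corresponding model predictions
  dy = np.array([0.2, 0.2, 0.3])   # measurement errors

  chi2 = np.sum(((y - y0) / dy)**2)
  log_likelihood = np.sum(-0.5 * ((y - y0) / dy)**2 - np.log(dy * np.sqrt(2 * np.pi)))
  constant = np.sum(np.log(dy * np.sqrt(2 * np.pi)))

  print(np.isclose(-log_likelihood, 0.5 * chi2 + constant))   # True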

Parametric formulation

The description of the system state is usually done by defining a parametric representation, e.g., by defining density or temperature profiles as a function of space via a vector of N parameters, α: n(r,α) or T(r,α). The parameters obey a prior distribution p(α), expressing physical or other constraints. Applying Bayes' theorem, one obtains

  p(α|D) = p(D|α)p(α) / p(D)

where D represents the available data. The likelihood p(D|α) specifies the probability of a specific measurement outcome D for a given choice of parameters α. The advantage of the parametric representation is that the abstract 'system state' is reduced to a finite set of parameters, greatly facilitating numerical analysis. This parametrization may involve, e.g., smooth (orthogonal) expansion functions such as Fourier-Bessel functions, or discretely defined functionals on a grid. [3]
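
As a sketch of such a parametric representation (the basis functions and coefficient values below are illustrative assumptions, not those of the cited work), a smooth profile can be expanded in a small number of functions of the normalized radius:

  # Parametric profile n(r, alpha): a short expansion in smooth basis functions
  # that vanish at the edge r = a. The basis choice is an illustrative assumption.
  import numpy as np

  def density_profile(r, alpha, a=1.0):
      """n(r, alpha) = sum_k alpha_k * (1 - (r/a)^2)^k."""
      x = (r / a)**2
      return sum(alpha_k * (1.0 - x)**k for k, alpha_k in enumerate(alpha, start=1))

  r = np.linspace(0.0, 1.0, 5)
  alpha = np.array([0.8e19, 0.2e19])   # N = 2 expansion coefficients
  print(density_profile(r, alpha))     # the whole profile is determined by alpha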

Maximization

The maximization of the posterior probability as a function of the parameters α yields the most likely value of the parameters, given the data D, which is the basic answer to the data interpretation problem.
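
A minimal sketch of this step, assuming a simple two-parameter forward model, synthetic data with Gaussian errors and a broad Gaussian prior (all illustrative assumptions), minimizes the negative log posterior with a standard optimizer:

  # Maximum a posteriori (MAP) estimate: maximize p(alpha|D) by minimizing
  # -log p(D|alpha) - log p(alpha). Model, data and prior are illustrative.
  import numpy as np
  from scipy.optimize import minimize

  rng = np.random.default_rng(0)
  r = np.linspace(0.0, 1.0, 20)

  def forward(alpha):              # hypothetical forward model: parabolic profile + offset
      return alpha[0] * (1.0 - r**2) + alpha[1]

  alpha_true = np.array([2.0, 0.5])
  dy = 0.1
  D = forward(alpha_true) + rng.normal(0.0, dy, r.size)   # synthetic measurements

  def neg_log_posterior(alpha):
      chi2 = np.sum(((D - forward(alpha)) / dy)**2)        # Gaussian likelihood
      prior = np.sum((alpha / 10.0)**2)                    # broad Gaussian prior around 0
      return 0.5 * (chi2 + prior)

  result = minimize(neg_log_posterior, x0=np.array([1.0, 0.0]))
  print(result.x)   # most likely parameters, close to alpha_true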

Marginalization

The width of the posterior distribution yields the error in the parameters. To obtain the error in a given parameter αi, the posterior distribution is marginalized by integrating over the remaining N-1 parameters:

  p(αi|D) = ∫ p(α|D) ∏j≠i dαj

The width of this one-dimensional distribution is found using standard procedures.
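
A minimal sketch of marginalization on a parameter grid, using an illustrative two-parameter Gaussian posterior rather than one derived from a real diagnostic:

  # Marginalize a two-parameter posterior p(alpha1, alpha2|D) over alpha2 to obtain
  # p(alpha1|D), then read off its width. The posterior here is purely illustrative.
  import numpy as np

  a1 = np.linspace(-5.0, 5.0, 501)
  a2 = np.linspace(-5.0, 5.0, 501)
  A1, A2 = np.meshgrid(a1, a2, indexing="ij")

  posterior = np.exp(-0.5 * (A1**2 - 1.2 * A1 * A2 + A2**2))   # unnormalized, correlated

  da1, da2 = a1[1] - a1[0], a2[1] - a2[0]
  marginal = posterior.sum(axis=1) * da2           # integrate over alpha2
  marginal /= marginal.sum() * da1                 # normalize p(alpha1|D)

  mean = np.sum(a1 * marginal) * da1
  width = np.sqrt(np.sum((a1 - mean)**2 * marginal) * da1)
  print(width)   # 1-sigma error on alpha1 (about 1.25 for this example)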

Comparison with Function Parametrization

Function parametrization (FP) is another statistical technique for recovering system parameters from diverse measurements. Both FP and Bayesian data analysis require a forward model to predict the measurement readings for any given state of the physical system, and in both methods the state of the physical system and the measurement process are parametrized. However:

  • instead of computing an estimate of the inverse of the forward model (as with FP), Bayesian analysis finds the best model state corresponding to a specific measurement by the maximization procedure described above;
  • the handling of error propagation is more sophisticated within Bayesian analysis, allowing non-Gaussian error distributions and fully general, complex parameter interdependencies; and
  • additionally, Bayesian analysis provides a systematic way to include prior knowledge in the analysis.

Typically, the maximization process is CPU-intensive, so that Bayesian analysis is usually not suited for real-time data analysis (unlike FP).

Integrated Data Analysis

The goal of Integrated Data Analysis (IDA) is to combine the information from a set of diagnostics providing complementary information, in order to recover the best possible reconstruction of the actual state of the system subjected to measurement. This goal overlaps with that of Bayesian data analysis, but IDA applies Bayesian inference in a relatively loose manner, to allow incorporating information obtained with traditional or non-Bayesian methods. [4] [5] [6] [7] [8] [9] [10]

See also

References

  1. D.S. Sivia, Data Analysis: A Bayesian Tutorial, Oxford University Press, USA (1996) ISBN 0198518897
  2. P. Gregory, Bayesian Logical Data Analysis for the Physical Sciences, Cambridge University Press, Cambridge (2005) ISBN 052184150X
  3. J. Svensson et al., Current tomography for axisymmetric plasmas, Plasma Phys. Control. Fusion 50 (2008) 085002, doi:10.1088/0741-3335/50/8/085002
  4. R. Fischer, C. Wendland, A. Dinklage, et al., Thomson scattering analysis with the Bayesian probability theory, Plasma Phys. Control. Fusion 44 (2002) 1501, doi:10.1088/0741-3335/44/8/306
  5. R. Fischer, A. Dinklage, and E. Pasch, Bayesian modelling of fusion diagnostics, Plasma Phys. Control. Fusion 45 (2003) 1095-1111, doi:10.1088/0741-3335/45/7/304
  6. R. Fischer, A. Dinklage, Integrated data analysis of fusion diagnostics by means of the Bayesian probability theory, Rev. Sci. Instrum. 75 (2004) 4237, doi:10.1063/1.1787607
  7. A. Dinklage, R. Fischer, and J. Svensson, Topics and Methods for Data Validation by Means of Bayesian Probability Theory, Fusion Sci. Technol. 46 (2004) 355
  8. J. Svensson, A. Werner, Large Scale Bayesian Data Analysis for Nuclear Fusion Experiments, IEEE International Symposium on Intelligent Signal Processing (2007) 1, doi:10.1109/WISP.2007.4447579
  9. R. Fischer, C.J. Fuchs, B. Kurzan, et al., Integrated Data Analysis of Profile Diagnostics at ASDEX Upgrade, Fusion Sci. Technol. 58 (2010) 675
  10. B.Ph. van Milligen, T. Estrada, E. Ascasíbar, et al., Integrated data analysis at TJ-II: the density profile, Rev. Sci. Instrum. 82 (2011) 073503, doi:10.1063/1.3608551