Function parametrization

Function Parametrization (also spelt Parameterization), or FP, is a technique for fast (real-time) reconstruction of system parameters from a set of diverse measurements. It consists of the numerical determination, by statistical regression on a database of simulated states, of simple functional representations of the parameters characterizing the state of a particular physical system, where the arguments of the functions are statistically independent combinations of the raw diagnostic measurements of the system. The technique was developed by H. Wind for momentum determination from spark chamber data [1] [2] and was introduced to plasma physics by B. Braams, who first applied it to the analysis of equilibrium magnetic measurements on the circular cross-section ASDEX tokamak. [3] It was later extended to the non-circular cross-section ASDEX Upgrade tokamak [4] and the Wendelstein 7-AS stellarator. [5]


Method

The application of the technique requires that a model exists to compute the response of the measurements (q) to variations of the system parameters (p), i.e. the mapping q = M(p) is known. In this model, all functional dependencies are parametrized (hence the name of the technique): e.g., spatially dependent functions f(r) are expressed in terms of a parametric expansion (such as a polynomial), and the corresponding expansion coefficients are included in the vector p.
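As an illustration of such a forward model, the following sketch (in Python, using NumPy) sets up a toy one-dimensional system: the state is a profile f(r) parametrized by polynomial coefficients p, and the "measurements" q are a few weighted integrals of f. The grid, the weight functions and the name forward_model are illustrative assumptions only, not taken from any published FP implementation.

  import numpy as np

  r = np.linspace(0.0, 1.0, 101)                # radial grid
  dr = r[1] - r[0]
  weights = np.stack([np.ones_like(r),          # three chord-like response functions
                      r,
                      np.cos(np.pi * r)])

  def forward_model(p):
      """Toy mapping q = M(p): polynomial profile f(r) -> weighted integrals."""
      f = sum(c * r**k for k, c in enumerate(p))   # f(r) = p0 + p1 r + p2 r^2 + ...
      return weights @ f * dr                      # crude quadrature of each weighted profile

  p_example = np.array([1.0, -0.5, 0.2])        # one point in parameter space
  q_example = forward_model(p_example)          # simulated diagnostic response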

The fast reconstruction of the system parameters is obtained by computing an (approximate) inverse of the mapping M. To do so, the parameters p are varied over a range corresponding to the expected variation in actual experiments, the corresponding q are computed, and the resulting set of (p,q) pairs is stored in a database. This database is then subjected to a statistical analysis in order to recover the inverse of M; typically, a Principal Component Analysis of the measurements is combined with a regression of the parameters on the principal components. The procedure is also amenable to a rather detailed error analysis, so that error estimates for the recovered parameters p can be obtained when interpreting actual data q. [6]
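A minimal sketch of the database and regression steps is given below, reusing the toy forward_model defined above. The parameters are sampled uniformly over an assumed range, the simulated measurements are decorrelated by a principal component analysis, and a simple linear regression of p on the principal components is fitted; both the sampling range and the linear regression form are illustrative assumptions, not prescribed by FP itself.

  rng = np.random.default_rng(0)
  n_samples = 2000

  # 1. Build the database of simulated (p, q) pairs.
  P = rng.uniform(-1.0, 1.0, size=(n_samples, 3))      # assumed parameter range
  Q = np.array([forward_model(p) for p in P])

  # 2. Principal component analysis of the simulated measurements.
  q_mean = Q.mean(axis=0)
  U, s, Vt = np.linalg.svd(Q - q_mean, full_matrices=False)
  Z = (Q - q_mean) @ Vt.T / s                          # decorrelated measurement combinations

  # 3. Regress the parameters on the principal components (linear terms only here).
  A = np.hstack([np.ones((n_samples, 1)), Z])
  coef, *_ = np.linalg.lstsq(A, P, rcond=None)

  def reconstruct(q):
      """Fast approximate inverse mapping: measurements q -> parameters p."""
      z = (q - q_mean) @ Vt.T / s
      return np.hstack([1.0, z]) @ coef

  # Example: recover the parameters of the simulated measurement above.
  p_recovered = reconstruct(q_example)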

Applications

Alternatives

  • Neural networks. As with FP, most of the computational effort is concentrated in an analysis phase (network training) that precedes the actual application to data, so the method is fast and suited for real-time applications. In FP, non-linear dependencies are limited by the degree of the polynomial expansions used, whereas neural networks can, in principle, represent more general non-linear dependencies (see the sketch after this list).
  • Bayesian data analysis, which allows non-Gaussian error distributions and complex data dependencies. A very powerful method, but the maximization it requires for each analysis makes it slow and therefore not suited for real-time applications.
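As a rough illustration of the neural-network alternative, the following sketch (assuming scikit-learn is available, and reusing the toy (P, Q) database and the measurement q_example from the sketches above) fits a small multi-layer perceptron as the inverse mapping; the network architecture and training settings are arbitrary illustrative choices.

  from sklearn.neural_network import MLPRegressor

  # Offline "training" phase on the simulated database, analogous to the FP regression step.
  net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
  net.fit(Q, P)

  # Fast online evaluation: map a measurement vector to estimated parameters.
  p_nn = net.predict(q_example.reshape(1, -1))[0]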

References