3 OpenTURNS' methods for Step B: quantification of the uncertainty sources

This section is organized in three parts. The first one gives the list of probabilistic uncertainty models proposed by OpenTURNS. The second part gives an overview of the content of the statistical toolbox that may be used to build these uncertainty models if data are available. The last part is dedicated to the mathematical description of each method.

3.1 Probabilistic models proposed in OpenTURNS

OpenTURNS proposes two different types of probabilistic models: non-parametric and parametric ones.

3.1.1 Non-parametric models

3.1.2 Parametric models

3.2 Classical statistical tools for uncertainty quantification

Building a dataset may require aggregating several data sources; OpenTURNS offers some techniques to check beforehand whether these data sources are indeed related to the same probability distribution.

Moreover, when a parametric model is used, OpenTURNS provides statistical tools to estimate the parameters, validate the resulting model and address the important issue of dependencies among uncertainty sources.

3.2.1 Aggregation of two samples

3.2.2 Estimation of a parametric model

3.2.3 Analysis of the goodness of fit of a parametric model

3.2.4 Detection and quantification of dependencies among uncertainty sources


3.3 Methods description

3.3.1 Step B  – Empirical cumulative distribution function

Mathematical description

Goal

The empirical cumulative distribution function provides a graphical representation of the probability distribution of a random vector without making any prior assumption concerning the form of this distribution. It is a non-parametric approach that enables the description of complex behaviour not necessarily captured by parametric approaches.

Therefore, using general notation, this means that we are looking for an estimator $\hat{F}_N$ of the cumulative distribution function $F_{\underline{X}}$ of the random vector $\underline{X} = (X^1, \dots, X^{n_X})$:

$$\hat{F}_N \longrightarrow F_{\underline{X}}$$

Principle of the method for $n_X = 1$

Let us first consider the uni-dimensional case, and denote $\underline{X} = X^1 = X$. The empirical probability distribution is the distribution created from a sample of observed values $x_1, x_2, \dots, x_N$. It corresponds to a discrete uniform distribution on $\{x_1, x_2, \dots, x_N\}$: if $X'$ follows this distribution, then

$$\forall i \in \{1, \dots, N\}, \quad \Pr(X' = x_i) = \frac{1}{N}$$

The empirical cumulative distribution function $\hat{F}_N$ associated with this distribution is constructed as follows:

$$\hat{F}_N(x) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{\{x_i \le x\}}$$

The empirical cumulative distribution function $\hat{F}_N(x)$ is defined as the proportion of observations that are less than (or equal to) $x$, and is thus an approximation of the cumulative distribution function $F_X(x)$, which is the probability that an observation is less than (or equal to) $x$:

$$F_X(x) = \Pr(X \le x)$$

The diagram below provides an illustration for the ordered sample $\{5, 6, 10, 22, 27\}$.
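As a minimal illustration (plain Python with NumPy, not the OpenTURNS API; the function name is ours), the one-dimensional estimator $\hat{F}_N$ can be evaluated directly from its definition, using the ordered sample above:

```python
import numpy as np

def empirical_cdf(sample, x):
    """Proportion of observations less than or equal to x (1D case)."""
    return float(np.mean(np.asarray(sample, dtype=float) <= x))

sample = [5, 6, 10, 22, 27]          # ordered sample used in the illustration
print(empirical_cdf(sample, 10.0))   # 3/5 = 0.6
print(empirical_cdf(sample, 4.0))    # 0.0, below the smallest observation
```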

Principle of the method for $n_X > 1$

The method is similar for the case $n_X > 1$. The empirical probability distribution is the distribution created from a sample $\underline{x}_1, \underline{x}_2, \dots, \underline{x}_N$. It corresponds to a discrete uniform distribution on $\{\underline{x}_1, \dots, \underline{x}_N\}$: if $\underline{X}'$ follows this distribution, then

$$\forall i \in \{1, \dots, N\}, \quad \Pr(\underline{X}' = \underline{x}_i) = \frac{1}{N}$$

Thus we have:

$$\hat{F}_N(\underline{x}) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{\{x_i^1 \le x^1, \dots, x_i^{n_X} \le x^{n_X}\}}$$

in comparison with the theoretical cumulative distribution function $F_{\underline{X}}$:

$$F_{\underline{X}}(\underline{x}) = \Pr\left( X^1 \le x^1, \dots, X^{n_X} \le x^{n_X} \right)$$
Other notations
This method is also referred to in the literature as the empirical distribution function.

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty". It enables us to obtain a representation of the distribution of the vector X ̲ of uncertain variables defined in step A "Specifying Criteria and the Case Study", without applying any a priori modelling hypotheses.
References and theoretical basics
This method has the advantage of depending only on the observed values, without any other modelling assumptions (as in the [kernel smoothing method] ). Nevertheless, in the case where little data is available, the estimation of the criteria defined in step A can be less precise with this non-parametric method than with a parametric approach (e.g. the models described in [standard parametric models] ).

The following bibliographical references provide main starting points for further study of this method:

  • Saporta G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon W.J. & Massey F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill


3.3.2 Step B  – Kernel Smoothing

Mathematical description

Kernel smoothing is a non-parametric method for estimating the probability density function of a distribution.

In dimension 1, the kernel smoothed probability density function $\hat{p}$ has the following expression, where $K$ is the univariate kernel, $n$ the sample size and $(X_1, \dots, X_n)$ the univariate random sample, with $X_i \in \mathbb{R}$ for all $i$:

$$\hat{p}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left( \frac{x - X_i}{h} \right) \qquad (1)$$

The kernel $K$ is a function satisfying $\int K(x)\,dx = 1$. Usually, $K$ is chosen to be a unimodal probability density function that is symmetric about 0.

The parameter h is called the bandwidth.

In dimension $d > 1$, the kernel may be defined as a product kernel $K_d$, as follows, where $\underline{x} = (x_1, \dots, x_d) \in \mathbb{R}^d$:

$$K_d(\underline{x}) = \prod_{j=1}^{d} K(x_j)$$

which leads to the kernel smoothed probability density function in dimension $d$, where $(\underline{X}_1, \dots, \underline{X}_n)$ is the $d$-variate random sample whose components are denoted $\underline{X}_i = (X_{i1}, \dots, X_{id})$:

$$\hat{p}(\underline{x}) = \frac{1}{n \prod_{j=1}^{d} h_j} \sum_{i=1}^{n} K_d\left( \frac{x_1 - X_{i1}}{h_1}, \dots, \frac{x_d - X_{id}}{h_d} \right)$$

Note that the bandwidth is then the vector $\underline{h} = (h_1, \dots, h_d)$.

The quality of the approximation may be controlled by the AMISE (Asymptotic Mean Integrated Squared Error) criterion, defined as:

$$\mathrm{AMISE}(\hat{p}) = \text{the two first terms in the series expansion of } \mathrm{MISE}(\hat{p}) \text{ with respect to } n, \quad \mathrm{MISE}(\hat{p}) = \mathbb{E}_{\underline{X}}\left[ \|\hat{p} - p\|^2_{L_2} \right] = \int \mathrm{MSE}(\hat{p}, \underline{x})\, d\underline{x}, \quad \mathrm{MSE}(\hat{p}, \underline{x}) = \left( \mathbb{E}_{\underline{X}}\left[ \hat{p}(\underline{x}) \right] - p(\underline{x}) \right)^2 + \mathrm{Var}_{\underline{X}}\left[ \hat{p}(\underline{x}) \right]$$

The quality of the estimation essentially depends on the value of the bandwidth $h$. The bandwidth that minimizes the AMISE criterion has the following expression (given in dimension 1):

$$h_{\mathrm{AMISE}}(K) = \left[ \frac{R(K)}{\mu_2(K)^2\, R(p^{(2)})} \right]^{\frac{1}{5}} n^{-\frac{1}{5}} \qquad (2)$$

where $R(K) = \int K(x)^2\, dx$ and $\mu_2(K) = \int x^2 K(x)\, dx = \sigma_K^2$.

If we note that $R(p^{(r)}) = (-1)^r \Phi_{2r}$ with $\Phi_r = \int p^{(r)}(x)\, p(x)\, dx = \mathbb{E}_{\underline{X}}\left[ p^{(r)} \right]$, then relation (2) writes:

$$h_{\mathrm{AMISE}}(K) = \left[ \frac{R(K)}{\mu_2(K)^2\, \Phi_4} \right]^{\frac{1}{5}} n^{-\frac{1}{5}} \qquad (3)$$

Several rules exist to evaluate the optimal bandwidth $h_{\mathrm{AMISE}}(K)$: all efforts are concentrated on the evaluation of the term $\Phi_4$. We give here the most usual rules:

  • the Silverman rule in dimension 1,

  • the plug-in bandwidth selection - Solve-the-equation method in dimension 1,

  • the Scott rule in dimension d.

Silverman rule (dimension 1)

In the case where the density $p$ is normal with standard deviation $\sigma$, the term $\Phi_4$ can be evaluated exactly. In that particular case, the optimal bandwidth of relation (3) with respect to the AMISE criterion writes as follows:

$$h^{p = \text{normal}}_{\mathrm{AMISE}}(K) = \left[ \frac{8\sqrt{\pi}\, R(K)}{3\, \mu_2(K)^2} \right]^{\frac{1}{5}} \sigma\, n^{-\frac{1}{5}} \qquad (4)$$

An estimator of $h^{p = \text{normal}}_{\mathrm{AMISE}}(K)$ is obtained by replacing $\sigma$ by its estimator $\hat{\sigma}^n$, evaluated from the numerical sample $(X_1, \dots, X_n)$:

$$\hat{h}^{p = \text{normal}}_{\mathrm{AMISE}}(K) = \left[ \frac{8\sqrt{\pi}\, R(K)}{3\, \mu_2(K)^2} \right]^{\frac{1}{5}} \hat{\sigma}^n\, n^{-\frac{1}{5}} \qquad (5)$$

The Silverman rule consists in using $\hat{h}^{p = \text{normal}}_{\mathrm{AMISE}}(K)$ of relation (5) even if the density $p$ is not normal:

$$h_{\mathrm{Silver}}(K) = \left[ \frac{8\sqrt{\pi}\, R(K)}{3\, \mu_2(K)^2} \right]^{\frac{1}{5}} \hat{\sigma}^n\, n^{-\frac{1}{5}} \qquad (6)$$

Relation (6) is empirical and gives good results when the density is not far from a normal one.
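As a minimal sketch (plain Python/NumPy, not the OpenTURNS implementation), relations (1) and (6) can be combined for the standard normal kernel, for which $R(K) = 1/(2\sqrt{\pi})$ and $\mu_2(K) = 1$, so that (6) reduces to $h = (4/3)^{1/5}\, \hat{\sigma}^n\, n^{-1/5}$:

```python
import numpy as np

def silverman_bandwidth(sample):
    """Silverman rule (6) specialized to the standard normal kernel:
    h = (4/3)**(1/5) * sigma_n * n**(-1/5)."""
    x = np.asarray(sample, dtype=float)
    return (4.0 / 3.0) ** 0.2 * x.std(ddof=1) * x.size ** (-0.2)

def kde_normal(sample, points, h):
    """Kernel smoothed pdf of relation (1) with the standard normal kernel."""
    x = np.asarray(sample, dtype=float)
    u = (np.asarray(points, dtype=float)[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(size=200)
h = silverman_bandwidth(data)
print(h, kde_normal(data, [0.0], h))
```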

Plug-in bandwidth selection - Solve-the-equation method (dimension 1)

Relation (3) requires the evaluation of the quantity $\Phi_4$. As a general rule, we use the estimator $\hat{\Phi}_r$ of $\Phi_r$ defined by:

$$\hat{\Phi}_r = \frac{1}{n} \sum_{i=1}^{n} \hat{p}^{(r)}(X_i) \qquad (7)$$

Differentiating relation (1) leads to:

$$\hat{p}^{(r)}(x) = \frac{1}{n h^{r+1}} \sum_{i=1}^{n} K^{(r)}\left( \frac{x - X_i}{h} \right) \qquad (8)$$

and then the estimator $\hat{\Phi}_r(h)$ is defined as:

$$\hat{\Phi}_r(h) = \frac{1}{n^2 h^{r+1}} \sum_{i=1}^{n} \sum_{j=1}^{n} K^{(r)}\left( \frac{X_i - X_j}{h} \right) \qquad (9)$$

We note that $\hat{\Phi}_r(h)$ depends on the parameter $h$, which can be chosen so as to minimize the AMSE (Asymptotic Mean Square Error) criterion between $\Phi_r$ and $\hat{\Phi}_r(h)$. The optimal parameter $h$ is:

$$h^{(r)}_{\mathrm{AMSE}} = \left[ \frac{-2 K^{(r)}(0)}{\mu_2(K)\, \Phi_{r+2}} \right]^{\frac{1}{r+3}} n^{-\frac{1}{r+3}} \qquad (10)$$

Given these preliminary results, the solve-the-equation plug-in method proceeds as follows:

  1. Relation (3) defines $h_{\mathrm{AMISE}}(K)$ as a function of $\Phi_4$, which we denote here as:

    $$h_{\mathrm{AMISE}}(K) = t(\Phi_4) \qquad (11)$$
  2. The term $\Phi_4$ is approximated by its estimator defined in (9), evaluated with its optimal parameter $h^{(4)}_{\mathrm{AMSE}}$ defined in (10):

    $$h^{(4)}_{\mathrm{AMSE}} = \left[ \frac{-2 K^{(4)}(0)}{\mu_2(K)\, \Phi_6} \right]^{\frac{1}{7}} n^{-\frac{1}{7}} \qquad (12)$$

    which leads to a relation of the type:

    $$\Phi_4 \simeq \hat{\Phi}_4\left( h^{(4)}_{\mathrm{AMSE}} \right) \qquad (13)$$
  3. Relations (3) and (12) lead to the new relation:

    $$h^{(4)}_{\mathrm{AMSE}} = \left[ \frac{-2 K^{(4)}(0)\, \mu_2(K)\, \Phi_4}{R(K)\, \Phi_6} \right]^{\frac{1}{7}} h_{\mathrm{AMISE}}(K)^{\frac{5}{7}} \qquad (14)$$

    which rewrites:

    $$h^{(4)}_{\mathrm{AMSE}} = l\left( h_{\mathrm{AMISE}}(K) \right) \qquad (15)$$
  4. Relation (14) depends on both terms $\Phi_4$ and $\Phi_6$, which are evaluated with their estimators defined in (9), respectively with their AMSE optimal parameters $g_1$ and $g_2$ (see relation (10)). This leads to the expressions:

    $$g_1 = \left[ \frac{-2 K^{(4)}(0)}{\mu_2(K)\, \Phi_6} \right]^{\frac{1}{7}} n^{-\frac{1}{7}}, \qquad g_2 = \left[ \frac{-2 K^{(6)}(0)}{\mu_2(K)\, \Phi_8} \right]^{\frac{1}{9}} n^{-\frac{1}{9}} \qquad (16)$$
  5. In order to evaluate $\Phi_6$ and $\Phi_8$, we suppose that the density $p$ is normal with a variance $\sigma^2$ approximated by the empirical variance of the numerical sample, which leads to:

    $$\hat{\Phi}_6 = \frac{-15}{16\sqrt{\pi}}\, \hat{\sigma}^{-7}, \qquad \hat{\Phi}_8 = \frac{105}{32\sqrt{\pi}}\, \hat{\sigma}^{-9} \qquad (17)$$

To summarize, thanks to relations (11), (13), (15), (16) and (17), the optimal bandwidth is the solution of the equation:

$$h_{\mathrm{AMISE}}(K) = t\left( \hat{\Phi}_4\left( l\left( h_{\mathrm{AMISE}}(K) \right) \right) \right) \qquad (18)$$

Scott rule (dimension d)

The Scott rule is a simplification of the Silverman rule generalized to dimension d, which is optimal when the density p is normal with independent components. In all other cases, it is an empirical rule that gives good results when the density p is not far from the normal one. For example, the Scott bandwidth may appear too large when p has several modes.

The Silverman rule given in dimension 1 in relation (6) can be generalized to dimension $d$ as follows: if we suppose that the density $p$ is normal with independent components in dimension $d$, and that we use the normal kernel $N(0,1)$ to estimate it, then the optimal bandwidth vector $\underline{h}$ with respect to the AMISE criterion writes as follows:

$$\left[ \underline{h}_{\mathrm{Silver}}(N) \right]_i = \left( \frac{4}{d+2} \right)^{1/(d+4)} \hat{\sigma}_i^n\, n^{-1/(d+4)} \qquad (19)$$

where $\hat{\sigma}_i^n$ is the standard deviation of the $i$-th component of the sample $(\underline{X}_1, \dots, \underline{X}_n)$, and $\sigma_K$ the standard deviation of the 1D kernel $K$.

The Scott proposition is a simplification of the Silverman rule, based on the fact that the coefficient $\left( \frac{4}{d+2} \right)^{1/(d+4)}$ remains in $[0.924, 1.059]$ when the dimension $d$ varies. Thus, Scott fixed it to 1:

$$\left( \frac{4}{d+2} \right)^{1/(d+4)} \simeq 1 \qquad (20)$$

which leads to the simplified expression:

$$\left[ \underline{h}_{\mathrm{Silver}}(N) \right]_i \simeq \hat{\sigma}_i^n\, n^{-1/(d+4)} \qquad (21)$$

Furthermore, in the general case, we have from relation (2):

$$\frac{h_{\mathrm{AMISE}}(K_1)}{h_{\mathrm{AMISE}}(K_2)} = \frac{\sigma_{K_2}}{\sigma_{K_1}} \left[ \frac{\sigma_{K_1} R(K_1)}{\sigma_{K_2} R(K_2)} \right]^{1/5} \qquad (22)$$

Considering that $\sigma_K R(K) \simeq 1$ whatever the kernel $K$, relation (22) simplifies into:

$$h_{\mathrm{AMISE}}(K_1) \simeq h_{\mathrm{AMISE}}(K_2)\, \frac{\sigma_{K_2}}{\sigma_{K_1}} \qquad (23)$$

If we consider the normal kernel $N(0,1)$ for $K_2$, then relation (23) writes in a more general notation:

$$h_{\mathrm{AMISE}}(K) \simeq h_{\mathrm{AMISE}}(N)\, \frac{1}{\sigma_K} \qquad (24)$$

If $h_{\mathrm{AMISE}}(N)$ is evaluated with the Silverman rule, (24) rewrites:

$$h_{\mathrm{Silver}}(K) \simeq h_{\mathrm{Silver}}(N)\, \frac{1}{\sigma_K} \qquad (25)$$

Finally, from relations (21) and (25) applied in each direction $i$, we deduce the Scott rule:

$$\left[ \underline{h}_{\mathrm{Scott}} \right]_i = \frac{\hat{\sigma}_i^n}{\sigma_K}\, n^{-1/(d+4)} \qquad (26)$$
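A minimal sketch of relation (26) in plain Python/NumPy (the function name is ours; $\sigma_K = 1$ for the standard normal kernel):

```python
import numpy as np

def scott_bandwidth(sample, sigma_K=1.0):
    """Scott rule (26): one bandwidth per component,
    h_i = sigma_i / sigma_K * n**(-1/(d+4)), for a sample of shape (n, d)."""
    x = np.atleast_2d(np.asarray(sample, dtype=float))
    n, d = x.shape
    return x.std(axis=0, ddof=1) / sigma_K * n ** (-1.0 / (d + 4))

rng = np.random.default_rng(0)
print(scott_bandwidth(rng.normal(size=(500, 2))))
```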

Boundary treatment

In dimension 1, the boundary effects may be taken into account in OpenTURNS: the boundaries are automatically detected from the numerical sample (with the min and max functions) and the kernel smoothed PDF is corrected in the boundary areas to remain within the boundaries, according to the mirroring technique:

  • the Scott bandwidth is evaluated from the numerical sample : h

  • two subsamples are extracted from the initial numerical sample, containing all the points within the ranges [min,min+h[ and ]max-h,max],

  • both subsamples are transformed into their symmetric samples with respect to their respective boundary: this results in two samples within the ranges ]min-h,min] and [max,max+h[,

  • a kernel smoothed PDF is built from the new numerical sample composed with the initial one and the two new ones, with the previous bandwidth h,

  • this last kernel smoothed PDF is truncated within the initial range [min,max] (conditional PDF), as sketched below.
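A minimal sketch of the mirroring step described above (plain Python/NumPy, not the OpenTURNS implementation; the truncation and renormalization of the final PDF over [min, max] are left to the caller):

```python
import numpy as np

def mirrored_sample(sample, h):
    """Augment a 1D sample with its reflections across min and max:
    points within one bandwidth of a boundary are mirrored outside it."""
    x = np.asarray(sample, dtype=float)
    lo, hi = x.min(), x.max()
    left = 2.0 * lo - x[x < lo + h]    # reflected into ]min-h, min]
    right = 2.0 * hi - x[x > hi - h]   # reflected into [max, max+h[
    return np.concatenate([x, left, right]), (lo, hi)
```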

Implementation in OpenTURNS

The choice of kernel is free in OpenTURNS: it is possible to select any 1D distribution and to define it as a kernel. However, in order to optimize the efficiency of the kernel smoothing fitting (i.e. to minimise the AMISE error), it is recommended to select a symmetric distribution for the kernel.

All the distribution default constructors of OpenTURNS create a symmetric default distribution when possible. It is also possible to work with the Epanechnikov kernel, which is a Beta(r=2,t=4,a=-1,b=1).

The default kernel is a product of standard Normal distributions. The dimension of the product is automatically evaluated from the random sample.

The bandwidth h ̲ may be fixed by the User. However, it is recommended to let OpenTURNS evaluate it automatically from the numerical sample according to the following rules :

In dimension d, OpenTURNS automatically applies the Scott rule.

In dimension 1, the automatic bandwidth selection method depends on the size n of the numerical sample. As a matter of fact, the computational bottleneck is the estimation of the quantities $\hat{\Phi}_r$, as it requires the evaluation of a double summation over the numerical sample, which has a cost of $\mathcal{O}(n^2)$.

  • if $n \le 250$, the Solve-the-equation plug-in method is used on the entire numerical sample. The optimal bandwidth $h_{\mathrm{AMISE}}(N)$ is first evaluated for a normal kernel, by solving equation (18) for the Normal kernel. Then relation (24) is applied in order to evaluate $h_{\mathrm{AMISE}}(K)$.

  • if $n > 250$, the Solve-the-equation plug-in method is too computationally expensive. Then, OpenTURNS proceeds as follows:

    1. OpenTURNS evaluates the bandwidth $h^{n_1,\mathrm{PI}}_{\mathrm{AMISE}}(N)$ with the plug-in method applied to the first $n_1 = 250$ points of the numerical sample, using the Normal kernel $N$ (by solving equation (18) with $K = N$);

    2. OpenTURNS evaluates the bandwidth $h^{n_1,\mathrm{Silver}}(N)$ with the Silverman rule applied to the first $n_1 = 250$ points of the numerical sample, using the Normal kernel $N$ (relation (6) with $K = N$);

    3. OpenTURNS evaluates the bandwidth $h^{n,\mathrm{Silver}}(N)$ with the Silverman rule applied to the entire numerical sample, using the Normal kernel $N$ (relation (6) with $K = N$);

    4. Considering from relation (3) that:

      $$\frac{h_{\mathrm{Silver}}(K)}{h_{\mathrm{AMISE}}(K)} = \left[ \frac{\Phi_4(p = \text{normal})}{\Phi_4(p)} \right]^{\frac{1}{5}} \qquad (27)$$

      which is independent of the size $n$, we have the final relation:

      $$h^{n,\mathrm{PI}}_{\mathrm{AMISE}}(N) = \frac{h^{n_1,\mathrm{PI}}_{\mathrm{AMISE}}(N)}{h^{n_1,\mathrm{Silver}}(N)}\, h^{n,\mathrm{Silver}}(N) \qquad (28)$$

      Then, if the User has chosen a kernel $K$ rather than the normal kernel $N$, relation (24) is used, which leads to:

      $$h^{n,\mathrm{PI}}_{\mathrm{AMISE}}(K) = \frac{1}{\sigma_K}\, \frac{h^{n_1,\mathrm{PI}}_{\mathrm{AMISE}}(N)}{h^{n_1,\mathrm{Silver}}(N)}\, h^{n,\mathrm{Silver}}(N) \qquad (29)$$

Other notations

-

Link with OpenTURNS methodology

This kernel smoothing method can be used to estimate the probability density function :
  • of the distribution of the input random variable (Step B of the global methodology),

  • of the distribution of the output variable of interest (Step C of the global methodology).

References and theoretical basics

The following references give details on the method:
  • Kernel Smoothing, M.P. Wand and M.C. Jones, Chapman & Hall/CRC, ISBN 0-412-55270-1.

  • Multivariate Density Estimation: Theory, Practice, and Visualization, David W. Scott, Wiley.

Examples

Choice of the bandwidth h

This example illustrates the effect of the choice of the bandwidth h on the estimation of the pdf compared to the optimal one. Depending on the choice of h, one can observe, for the same sample size N, over-smoothing or under-smoothing effects.

Oversmoothing effect

In this case, h is bigger than the optimal choice h_opt. The effect of the values is more widely spread than in the optimal case.

Undersmoothing effect

In this case, h is smaller than the optimal choice h opt . The effect of the values is more locally focused on the values obtained in the data set than in the optimal case.

Optimal smoothing

The bandwidth follows the previous Silverman rule, here for a Gaussian distribution.


3.3.3 Step B  – Standard parametric models

Mathematical description

Objective

Parametric models aim to describe probability distributions of a random variable with the aid of a limited number of parameters $\underline{\theta}$. Therefore, in the case of continuous variables (i.e. where all possible values are continuous), this means that the probability density of $\underline{X} = (X^1, \dots, X^{n_X})$ can be expressed as $f_X(\underline{x}; \underline{\theta})$. In the case of discrete variables (i.e. those which take only discrete values), their probabilities can be described in the form $\Pr(\underline{X} = \underline{x}; \underline{\theta})$.

The available distributions of OpenTURNS are listed in this section. We start with continuous distributions.

  • Arcsine distribution: θ̲ = (a, b), with the constraint a < b. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{1}{\pi \frac{b-a}{2} \sqrt{1 - \left( \frac{x - \frac{a+b}{2}}{\frac{b-a}{2}} \right)^2}} \qquad (30)$$

    The support is [a,b].

  • Beta distribution: Univariate distribution. θ̲ = (r, t, a, b), with the constraints r > 0, t > r, b > a. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{(x-a)^{r-1} (b-x)^{t-r-1}}{(b-a)^{t-1}\, B(r, t-r)}\, \mathbf{1}_{a \le x \le b} \qquad (31)$$

    where B denotes the Beta function. The support is [a,b].

    Note that the Epanechnikov distribution is a particular Beta distribution: Beta(a=-1, b=1, r=2, t=4). It is useful within the kernel smoothing theory (see [kernel smoothing] ).

    Figures 1 to 3: PDF of a Beta distribution.
    Figure 4: PDF of an Epanechnikov distribution.
  • Burr distribution: Univariate distribution. θ̲ = (c, k), with the constraints c > 0, k > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = c\, k\, \frac{x^{c-1}}{(1 + x^c)^{k+1}}\, \mathbf{1}_{x > 0} \qquad (32)$$

    Figure 5: PDF of a Burr distribution.
  • Chi: Univariate distribution. θ̲ = ν with the constraint ν > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = x^{\nu - 1} e^{-x^2/2}\, \frac{2^{1 - \nu/2}}{\Gamma(\nu/2)}\, \mathbf{1}_{[0, +\infty[}(x) \qquad (33)$$

    Figure 6: PDF of a Chi distribution.
  • ChiSquare: Univariate distribution. θ̲ = ν with the constraint ν > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{2^{-\nu/2}}{\Gamma(\nu/2)}\, x^{\nu/2 - 1} e^{-x/2}\, \mathbf{1}_{[0, +\infty[}(x) \qquad (34)$$

    The support is $[0, +\infty[$.

    Figures 7 and 8: PDF of a Chi Square distribution.
  • Dirichlet distribution: Multivariate d-dimensional distribution. θ̲ = (θ_1, ..., θ_{d+1}), with the constraints d ≥ 1 and θ_i > 0. The probability density function writes:

    $$f_X(\underline{x}; \underline{\theta}) = \frac{\Gamma\left( \sum_{j=1}^{d+1} \theta_j \right)}{\prod_{j=1}^{d+1} \Gamma(\theta_j)} \left( 1 - \sum_{j=1}^{d} x_j \right)^{\theta_{d+1} - 1} \prod_{j=1}^{d} x_j^{\theta_j - 1}\, \mathbf{1}_{\Delta}(\underline{x}) \qquad (35)$$

    with $\Delta = \{ \underline{x} \in \mathbb{R}^d \,/\, \forall i,\ x_i \ge 0,\ \sum_{i=1}^{d} x_i \le 1 \}$.

  • Epanechnikov distribution: Univariate distribution. The Epanechnikov distribution is a particular Beta distribution: Beta(a=-1, b=1, r=2, t=4). It is useful within the kernel smoothing theory (see [kernel smoothing] ). See Figure 4 for the graph of its pdf.

  • Exponential distribution: θ̲ = (λ, γ), with the constraint λ > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \lambda \exp\left( -\lambda (x - \gamma) \right) \mathbf{1}_{\gamma \le x} \qquad (36)$$

    The support is $[\gamma, +\infty[$, and the distribution is right skewed. The expected value of the distribution is $\gamma + 1/\lambda$. The coefficient of variation (standard deviation / mean) is equal to $\frac{1}{1 + \gamma\lambda}$ and does not depend on λ if γ = 0.

    Figure 9: PDF of an Exponential distribution.
  • Fisher-Snedecor distribution: θ̲ = (d_1, d_2), with the constraint d_i > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{1}{x\, B(d_1/2, d_2/2)} \left( \frac{d_1 x}{d_1 x + d_2} \right)^{d_1/2} \left( 1 - \frac{d_1 x}{d_1 x + d_2} \right)^{d_2/2} \mathbf{1}_{x \ge 0} \qquad (37)$$

    The support is $[0, +\infty[$, and the distribution is right skewed.

    Figure 10: PDF of a Fisher-Snedecor distribution.
  • Gamma distribution: Univariate distribution. θ̲ = (λ, k, γ), with the constraints λ > 0, k > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\lambda}{\Gamma(k)} \left( \lambda (x - \gamma) \right)^{k-1} \exp\left( -\lambda (x - \gamma) \right) \mathbf{1}_{\gamma \le x} \qquad (38)$$

    where Γ is the gamma function. The support is $[\gamma, +\infty[$, and the distribution is right skewed.

    Figures 11 and 12: PDF of a Gamma distribution.
  • Generalized Pareto distribution: Univariate distribution. θ̲ = (ξ, σ), with the constraint σ > 0. The cumulative distribution function writes:

    $$F_X(x; \underline{\theta}) = \begin{cases} 1 - \left( 1 + \xi \frac{x}{\sigma} \right)^{-1/\xi} & \text{if } \xi \ne 0 \\ 1 - \exp\left( -\frac{x}{\sigma} \right) & \text{if } \xi = 0 \end{cases} \qquad (39)$$

    The support is $\mathbb{R}^+$ if $\xi \ge 0$ and $[0, -\frac{\sigma}{\xi}]$ if $\xi < 0$.

    Figure 13: PDF of a Generalized Pareto distribution.
  • Gumbel distribution: Univariate distribution. θ̲ = (α, β), with the constraint α > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \alpha \exp\left( -\alpha (x - \beta) - e^{-\alpha (x - \beta)} \right) \qquad (40)$$

    The support is $\mathbb{R}$. β describes the most likely value, but this is less than the expected value of the distribution because the distribution is asymmetric (right skewed): the probability values in the distribution's right tail (i.e. values greater than β) decrease more gradually than those in the left tail (i.e. values less than β). α provides a measure of dispersion: the probability density function flattens as α decreases.

    Figure 14: PDF of a Gumbel distribution.
  • Histogram distribution: Univariate distribution. θ̲ = (l_i, h_i)_i, with the constraints h_i > 0 and l_i > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \sum_{i=1}^{n} H_i\, \mathbf{1}_{[x_i, x_{i+1}]}(x) \qquad (41)$$

    where

    • $H_i = h_i / S$ is the normalized height, with $S = \sum_{i=1}^{n} h_i l_i$ the initial surface of the histogram.

    • $l_i = x_{i+1} - x_i$, $1 \le i \le n$

    • n is the size of the HistogramPairCollection

    Figure 15: PDF of a Histogram distribution.
  • Inverse ChiSquare distribution: Univariate distribution. θ̲ = ν, with the constraint ν > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\exp\left( -\frac{1}{2x} \right)}{\Gamma\left( \frac{\nu}{2} \right) 2^{\nu/2}\, x^{\nu/2 + 1}}\, \mathbf{1}_{x > 0} \qquad (42)$$

    An Inverse ChiSquare distribution parameterized by ν is exactly the InverseGamma distribution parameterized by $(\frac{\nu}{2}, 2)$.

    Figure 16: PDF of an Inverse ChiSquare distribution.
  • Inverse Gamma distribution: Univariate distribution. θ̲ = (k, λ), with the constraints k > 0 and λ > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\exp\left( -\frac{1}{\lambda x} \right)}{\Gamma(k)\, \lambda^k\, x^{k+1}}\, \mathbf{1}_{x > 0} \qquad (43)$$

    Figure 17: PDF of an Inverse Gamma distribution.
  • Inverse Normal distribution: Univariate distribution. θ̲ = (λ, μ), with the constraints λ > 0 and μ > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \left( \frac{\lambda}{2\pi x^3} \right)^{1/2} e^{-\lambda (x - \mu)^2 / (2 \mu^2 x)}\, \mathbf{1}_{x > 0} \qquad (44)$$

    Figure 18: PDF of an Inverse Normal distribution.
  • Inverse Wishart distribution: Multivariate distribution. θ̲ = (V̲̲, ν) where V̲̲ is a symmetric positive definite matrix of dimension p and ν > p-1. The probability density function writes:

    $$f_{\underline{X}}(\underline{x}; \underline{\theta}) = \frac{|\underline{\underline{V}}|^{\nu/2}\, e^{-\mathrm{tr}\left( \underline{\underline{V}}\, m(\underline{x})^{-1} \right)/2}}{2^{\nu p/2}\, |m(\underline{x})|^{\frac{\nu + p + 1}{2}}\, \Gamma_p\left( \frac{\nu}{2} \right)}\, \mathbf{1}_{\mathcal{M}_p^+(\mathbb{R})}\left( m(\underline{x}) \right) \qquad (45)$$

    where $\underline{x} \in \mathbb{R}^{\frac{p(p+1)}{2}}$, $\mathcal{M}_p^+(\mathbb{R})$ is the set of symmetric positive matrices of dimension p and $m: \mathbb{R}^{\frac{p(p+1)}{2}} \to \mathcal{M}_p^+(\mathbb{R})$ is given by:

    $$m(\underline{x}) = \begin{pmatrix} x_1 & x_2 & \cdots & x_{1 + p(p-1)/2} \\ x_2 & x_3 & \cdots & \vdots \\ \vdots & & \ddots & \vdots \\ x_{1 + p(p-1)/2} & \cdots & \cdots & x_{p(p+1)/2} \end{pmatrix} \qquad (46)$$
  • Laplace distribution: Univariate distribution. θ̲ = (λ, μ), with the constraint λ > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\lambda}{2}\, e^{-\lambda |x - \mu|} \qquad (47)$$

    The Laplace distribution is the generalisation of the Exponential distribution to the whole range $\mathbb{R}$.

    Figure 19: PDF of a Laplace distribution.
  • Logistic distribution: Univariate distribution. θ̲ = (α, β), with the constraint β > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\exp\left( -\frac{x - \alpha}{\beta} \right)}{\beta \left( 1 + \exp\left( -\frac{x - \alpha}{\beta} \right) \right)^2} \qquad (48)$$

    The support is $\mathbb{R}$. α describes the most likely value. β provides a measure of dispersion: the probability density function flattens as β decreases.

    Figure 20: PDF of a Logistic distribution.
  • LogNormal distribution: Univariate distribution. θ̲ = (μ_ℓ, σ_ℓ, γ), with the constraint σ_ℓ > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{1}{\sigma_\ell (x - \gamma) \sqrt{2\pi}} \exp\left( -\frac{1}{2} \left( \frac{\ln(x - \gamma) - \mu_\ell}{\sigma_\ell} \right)^2 \right) \mathbf{1}_{\gamma \le x} \qquad (49)$$

    The support is $[\gamma, +\infty[$, and the distribution is right skewed.

    Figure 21: PDF of a LogNormal distribution.
  • LogUniform distribution: Univariate distribution. θ̲ = (a_ℓ, b_ℓ), with the constraint b_ℓ > a_ℓ. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{1}{x (b_\ell - a_\ell)}\, \mathbf{1}_{a_\ell \le \log(x) \le b_\ell} \qquad (50)$$

    The support is $[\exp(a_\ell), \exp(b_\ell)]$, and the distribution is right skewed.

    Figure 22: PDF of a LogUniform distribution.
  • Maximum entropy statistics distribution: Multivariate distribution, parameterized by d marginals, with bounds a, b verifying $a_i \le a_{i+1}$ and $b_i \le b_{i+1}$. The probability density function writes:

    $$f_X(\underline{x}) = f_1(x_1) \prod_{k=2}^{d} \varphi_k(x_k) \exp\left( -\int_{x_{k-1}}^{x_k} \varphi_k(s)\, ds \right) \mathbf{1}_{x_1 \le \dots \le x_d} \qquad (51)$$

    with

    $$\varphi_k(x_k) = \frac{f_k(x_k)}{F_{k-1}(x_k) - F_k(x_k)} \qquad (52)$$
  • MeixnerDistribution distribution: Univariate distribution. θ̲ = (α, β, δ, μ), with the constraints α > 0, β ∈ ]-π,π[ and δ > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\left( 2 \cos(\beta/2) \right)^{2\delta}}{2 \alpha \pi \Gamma(2\delta)}\, e^{\frac{\beta (x - \mu)}{\alpha}}\, \left| \Gamma\left( \delta + i \frac{x - \mu}{\alpha} \right) \right|^2 \qquad (53)$$

    where $i^2 = -1$. The support is $\mathbb{R}$.

    Figure 23: PDF of a MeixnerDistribution distribution.
  • Non Central Chi Square: Univariate distribution. θ̲ = (ν, λ), with the constraints ν > 0 and λ ≥ 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \sum_{j=0}^{\infty} \frac{e^{-\lambda} \lambda^j}{j!}\, p_{\chi^2(\nu + 2j)}(x) \qquad (54)$$

    where $p_{\chi^2(q)}$ is the probability density function of a $\chi^2(q)$ random variate.

    Figure 24: PDF of a Non Central Chi Square distribution.
  • Non Central Student: Univariate distribution. θ̲ = (ν, δ, γ). Let us note that a random variable X is said to have a standard non-central Student distribution $\mathcal{T}(\nu, \delta)$ if it can be written as:

    $$X = \frac{N}{\sqrt{C/\nu}} \qquad (55)$$

    where N has the normal distribution $\mathcal{N}(\delta, 1)$ and C has the $\chi^2(\nu)$ distribution, N and C being independent.

    The non-central Student distribution in OpenTURNS has an additional parameter γ such that the random variable X is said to have a non-central Student distribution $\mathcal{T}(\nu, \delta, \gamma)$ if X-γ has a standard $\mathcal{T}(\nu, \delta)$ distribution.

    The probability density function of the Non Central Student distribution is:

    $$p_{NCS}(x) = \frac{\exp(-\delta^2/2)}{\sqrt{\nu\pi}\, \Gamma(\nu/2)} \left( \frac{\nu}{\nu + (x - \gamma)^2} \right)^{\frac{\nu+1}{2}} \sum_{j=0}^{\infty} \frac{\Gamma\left( \frac{\nu + j + 1}{2} \right)}{\Gamma(j+1)} \left( \delta (x - \gamma) \sqrt{\frac{2}{\nu + (x - \gamma)^2}} \right)^j \qquad (56)$$

    Figure 25: PDF of a Non Central Student distribution.
  • Normal Gamma: Bivariate distribution. θ̲ = (μ, κ, α, β).

    The Normal Gamma distribution is the distribution of the random vector (X,Y) where Y follows the distribution $\Gamma(\alpha, \beta)$ with α > 0 and β > 0, and X|Y follows the distribution $\mathcal{N}\left( \mu, \frac{1}{\sqrt{\kappa Y}} \right)$.

    The probability density function of the Normal Gamma distribution is:

    $$p_{NG}(x, y) = \frac{\beta^\alpha}{\Gamma(\alpha)} \sqrt{\frac{\kappa}{2\pi}}\, y^{\alpha - 1/2} \exp\left( -\frac{y}{2} \left( \kappa (x - \mu)^2 + 2\beta \right) \right) \qquad (57)$$
  • Normal distribution (or Gaussian distribution): Multivariate n-dimensional distribution. In the case n=1, θ̲ = (μ, σ), with the constraint σ > 0. The probability density is given as:

    $$f_X(x; \underline{\theta}) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2 \right) \qquad (58)$$

    The support is $\mathbb{R}$. μ provides the most likely value (for which the probability density function is at its highest), and the density function is symmetric around this value (the values μ-a and μ+a are equally likely); μ is also the expected value (mean) of this distribution. σ provides a measure of dispersion: the larger it is, the flatter the probability density function is (i.e. values far away from μ are still likely, or in other words possible values are more spread out).

    Figure 26: PDF of a Normal distribution.

    In dimension n > 1, the Multi-Normal Distribution (or Multivariate Normal Distribution) writes:

    $$f_X(\underline{x}; \underline{\theta}) = \frac{1}{(2\pi)^{n/2}\, (\det \underline{\underline{\Sigma}})^{1/2}}\, e^{-\frac{1}{2} (\underline{x} - \underline{\mu})^t\, \underline{\underline{\Sigma}}^{-1}\, (\underline{x} - \underline{\mu})} \qquad (59)$$

    where $\underline{\underline{\Sigma}} = \underline{\underline{\Lambda}}_{\underline{\sigma}}\, \underline{\underline{R}}\, \underline{\underline{\Lambda}}_{\underline{\sigma}}$, $\underline{\underline{\Lambda}}_{\underline{\sigma}} = \mathrm{diag}(\underline{\sigma})$, $\underline{\underline{R}}$ symmetric positive definite, $\sigma_i > 0$. The distribution is parameterized by $(\underline{\mu}, \underline{\sigma}, \underline{\underline{R}})$ or $(\underline{\mu}, \underline{\underline{\Sigma}})$.

  • Random Mixture distribution: Univariate distribution. A Random Mixture Y is defined as an affine combination of random variables X_i as follows:

    $$Y = a_0 + \sum_{i=1}^{n} a_i X_i \qquad (60)$$

    where $(a_i)_{0 \le i \le n} \in \mathbb{R}^{n+1}$ and $(X_i)_{1 \le i \le n}$ are independent univariate random variables.

    For example,

    $$Y = 2 + 5 X_1 + X_2 \qquad (61)$$

    where:

    • X_1 follows an Exponential distribution $\mathcal{E}(\lambda = 1.5)$,

    • X_2 follows a Normal distribution $\mathcal{N}(\mu = 4, \text{variance} = 1)$.

    The pdf and cdf of this distribution are drawn in Figure 27.

    Figure 27: Probability density function and cumulative distribution function of a Random Mixture.
  • Rice distribution: Univariate distribution. θ̲ = (σ, ν), with the constraints ν ≥ 0 and σ > 0. The probability density is given as:

    $$f_X(x; \underline{\theta}) = \frac{2x}{\sigma^2}\, p_{\chi^2\left( 2, \frac{\nu^2}{\sigma^2} \right)}\left( \frac{x^2}{\sigma^2} \right) \qquad (62)$$

    where $p_{\chi^2(\nu, \lambda)}$ is the probability density function of a Non Central Chi Square distribution.

    Figure 28: Probability density function and cumulative distribution function of a Rice distribution.
  • Rayleigh distribution: Univariate distribution. θ̲ = (σ, γ), with the constraint σ > 0. The probability density is given as:

    $$f_X(x; \underline{\theta}) = \frac{x - \gamma}{\sigma^2}\, e^{-\frac{(x - \gamma)^2}{2\sigma^2}}\, \mathbf{1}_{[\gamma, +\infty[}(x) \qquad (63)$$

    Figure 29: PDF of a Rayleigh distribution.
  • Student distribution: Univariate distribution. θ̲ = (ν, μ̲, σ̲, R̲̲), with the constraint ν > 2. The Student distribution has the following probability density function, written in dimension d:

    $$p_T(\underline{x}) = \frac{\Gamma\left( \frac{\nu + d}{2} \right)}{(\pi\nu)^{\frac{d}{2}}\, \Gamma\left( \frac{\nu}{2} \right)}\, \frac{\det(\underline{\underline{R}})^{-1/2}}{\prod_{k=1}^{d} \sigma_k} \left( 1 + \frac{\underline{z}^t\, \underline{\underline{R}}^{-1}\, \underline{z}}{\nu} \right)^{-\frac{\nu + d}{2}} \qquad (64)$$

    where $\underline{z} = \underline{\underline{\Delta}}^{-1} (\underline{x} - \underline{\mu})$ with $\underline{\underline{\Delta}} = \mathrm{diag}(\underline{\sigma})$.

    In dimension d = 1, θ̲ = (ν, μ, σ) and the distribution writes:

    $$p_T(x) = \frac{\Gamma\left( \frac{\nu + 1}{2} \right)}{\sqrt{\nu\pi}\, \Gamma\left( \frac{\nu}{2} \right)}\, \frac{1}{\sigma} \left( 1 + \frac{\left( \frac{x - \mu}{\sigma} \right)^2}{\nu} \right)^{-\frac{\nu + 1}{2}} \qquad (65)$$

    The parameter μ describes the most likely value. ν is a measure of dispersion: the probability density function flattens as ν decreases.

    Figure 30: PDF of a Student distribution.
  • Trapezoidal distribution: θ̲ = (a, b, c, d), with the constraint a ≤ b < c ≤ d. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \begin{cases} h \frac{x - a}{b - a} & \text{if } a \le x \le b \\ h & \text{if } b \le x \le c \\ h \frac{d - x}{d - c} & \text{if } c \le x \le d \\ 0 & \text{otherwise} \end{cases} \qquad (66)$$

    with $h = \frac{2}{d + c - a - b}$.

    The support is [a,d].

  • Triangular distribution: Univariate distribution. θ̲ = (a, b, m), with the constraints a ≤ m, m ≤ b, b > a. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \begin{cases} 2 \frac{x - a}{(m - a)(b - a)} & \text{if } a \le x \le m \\ 2 \frac{b - x}{(b - m)(b - a)} & \text{if } m \le x \le b \\ 0 & \text{otherwise} \end{cases} \qquad (67)$$

    The support is [a,b]. m describes the most likely value.

    Figure 31: PDF of a Triangular distribution.
  • Truncated Normal Distribution: Univariate distribution. θ̲ = (μ_n, σ_n, a, b), with the constraints σ_n > 0, b > a. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\phi\left( \frac{x - \mu_n}{\sigma_n} \right) / \sigma_n}{\Phi\left( \frac{b - \mu_n}{\sigma_n} \right) - \Phi\left( \frac{a - \mu_n}{\sigma_n} \right)}\, \mathbf{1}_{a \le x \le b} \qquad (68)$$

    where φ and Φ represent respectively the probability density and the cumulative distribution function of the standard Normal distribution (i.e. with mean zero and standard deviation equal to 1). The support is [a,b]. μ_n describes the most likely value, while σ_n provides a measure of dispersion: the probability density function flattens as σ_n increases (the probability density is zero for values outside the interval [a,b]).

    Figure 32: PDF of a TruncatedNormal distribution.
  • Uniform distribution: Univariate distribution. θ̲ = (a, b), with the constraint a < b. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{1}{b - a}\, \mathbf{1}_{a \le x \le b} \qquad (69)$$

    The support is [a,b]. All values in this interval are equally likely.

    Figure 33: PDF of a Uniform distribution.
  • Weibull distribution: Univariate distribution. θ̲ = (α, β, γ), with the constraints α > 0, β > 0. The probability density function writes:

    $$f_X(x; \underline{\theta}) = \frac{\beta}{\alpha} \left( \frac{x - \gamma}{\alpha} \right)^{\beta - 1} \exp\left( -\left( \frac{x - \gamma}{\alpha} \right)^\beta \right) \mathbf{1}_{\gamma \le x} \qquad (70)$$

    The support is $[\gamma, +\infty[$, and the distribution is right skewed. Both α and β influence the dispersion. We note that the distribution becomes more skewed as β decreases. In the case where β = 1, this corresponds to the Exponential distribution.

    Figure 34: PDF of a Weibull distribution.
  • Wishart distribution: Multivariate distribution. θ̲ = (V̲̲, ν) where V̲̲ is a symmetric positive definite matrix of dimension p and ν > p-1. The probability density function writes:

    $$f_{\underline{X}}(\underline{x}; \underline{\theta}) = \frac{|m(\underline{x})|^{\frac{\nu - p - 1}{2}}\, e^{-\mathrm{tr}\left( \underline{\underline{V}}^{-1} m(\underline{x}) \right)/2}}{2^{\nu p/2}\, |\underline{\underline{V}}|^{\nu/2}\, \Gamma_p\left( \frac{\nu}{2} \right)}\, \mathbf{1}_{\mathcal{M}_p^+(\mathbb{R})}\left( m(\underline{x}) \right) \qquad (71)$$

    where $\underline{x} \in \mathbb{R}^{\frac{p(p+1)}{2}}$, $\mathcal{M}_p^+(\mathbb{R})$ is the set of symmetric positive matrices of dimension p and $m: \mathbb{R}^{\frac{p(p+1)}{2}} \to \mathcal{M}_p^+(\mathbb{R})$ is given by:

    $$m(\underline{x}) = \begin{pmatrix} x_1 & x_2 & \cdots & x_{1 + p(p-1)/2} \\ x_2 & x_3 & \cdots & \vdots \\ \vdots & & \ddots & \vdots \\ x_{1 + p(p-1)/2} & \cdots & \cdots & x_{p(p+1)/2} \end{pmatrix} \qquad (72)$$
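As an illustration of how a few of the continuous models above might be instantiated, the following sketch uses the OpenTURNS Python bindings. The parameter orders follow the formulas of this section but may differ between OpenTURNS versions, so the constructor signatures should be checked against the release actually used:

```python
import openturns as ot

# A few of the continuous parametric models listed above
exponential = ot.Exponential(1.5)   # lambda = 1.5 (gamma defaults to 0)
normal = ot.Normal(4.0, 1.0)        # mu = 4, sigma = 1
uniform = ot.Uniform(0.0, 10.0)     # a = 0, b = 10

# Every distribution object exposes the same generic services
sample = normal.getSample(1000)                  # 1000 realizations
print(normal.computePDF(4.0))                    # density at x = 4
print(normal.computeQuantile(0.95))              # 95% quantile
```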

OpenTURNS also proposes some Discrete Distributions.

  • Bernoulli distribution: Univariate distribution. θ̲ = p, with the constraint 0 < p < 1. The Bernoulli distribution takes only 2 values: 0 and 1.

    $$\Pr(X = 1) = p, \quad \Pr(X = 0) = 1 - p \qquad (73)$$

    Figure 35: Probability distribution and CDF of a Bernoulli distribution.
  • Binomial distribution: Univariate distribution. θ̲ = (n, p), with the constraints 0 < p < 1 and n ∈ ℕ. The Binomial distribution takes integer values between 0 and n.

    $$P(X = k) = C_n^k\, p^k (1 - p)^{n-k} \qquad (74)$$

    where

    $$k \in \{0, \dots, n\}, \quad n \in \mathbb{N}, \quad p \in [0, 1] \qquad (75)$$

    Figure 36: Probability distribution and CDF of a Binomial distribution.
  • Dirac distribution: Multivariate distribution. θ̲ = point. The Dirac distribution takes only one value: point ∈ $\mathbb{R}^n$.

    $$\Pr(\underline{X} = \text{point}) = 1 \qquad (76)$$

    Figure 37: Probability distribution and CDF of a Dirac distribution.
  • Geometric distribution: Univariate distribution. θ̲ = p, with the constraint 0 < p < 1. For all natural numbers $k \in \mathbb{N}^*$,

    $$\Pr(X = k; \underline{\theta}) = p (1 - p)^{k-1} \qquad (77)$$

    The support is $\mathbb{N}^*$.

    Figure 38: Probability distribution and CDF of a Geometric distribution.
  • KPermutationsDistribution distribution: Multivariate d-dimensional distribution. θ̲ = (k, n), with the constraints n ≥ 1 and k ≥ 1. The KPermutationsDistribution is the discrete uniform distribution on the set of injective functions $(i_0, \dots, i_{k-1})$ from $\{0, \dots, k-1\}$ into $\{0, \dots, n-1\}$:

    $$P\left( \underline{X} = (i_0, \dots, i_{k-1}) \right) = \frac{1}{d} \qquad (78)$$

    where $d = A_n^k = \frac{n!}{(n-k)!}$.

  • Multinomial distribution: Multivariate n-dimensional distribution. θ̲ = ((p_k)_{1≤k≤n}, N). For $x_i \in \mathbb{N}$,

    $$P(\underline{X} = \underline{x}) = \frac{N!}{x_1! \cdots x_n!\, (N - s)!}\, p_1^{x_1} \cdots p_n^{x_n}\, (1 - q)^{N - s} \qquad (79)$$

    where

    $$0 \le p_i \le 1, \quad x_i \in \mathbb{N}, \quad q = \sum_{k=1}^{n} p_k \le 1, \quad s = \sum_{k=1}^{n} x_k \le N \qquad (80)$$

    In dimension n = 1, this definition corresponds to the Binomial distribution.

  • Negative Binomial distribution: Univariate distribution. θ̲ = (r, p), with the constraints 0 < p < 1 and r > 0. The Negative Binomial distribution takes the non-negative integer values 0, 1, ...

    $$P(X = k) = \frac{\Gamma(k + r)}{\Gamma(r)\, \Gamma(k + 1)}\, p^k (1 - p)^r \qquad (81)$$

    where $k \in \mathbb{N}$.

    Figure 39: Probability distribution and CDF of a Negative Binomial distribution.
  • Poisson distribution: Univariate distribution. θ̲ = λ, with the constraint λ > 0. For all $k \in \mathbb{N}$,

    $$\Pr(X = k; \underline{\theta}) = \frac{\lambda^k}{k!} \exp(-\lambda) \qquad (82)$$

    The support is $\mathbb{N}$.

    Figure 40: Probability distribution and CDF of a Poisson distribution.
  • Skellam distribution: Univariate distribution. θ̲ = (λ_1, λ_2), with the constraint λ_i > 0. The Skellam distribution takes its values in $\mathbb{Z}$. It is the distribution of $(X_1 - X_2)$ for $(X_1, X_2)$ independent and respectively distributed according to Poisson(λ_i). The probability distribution function is:

    $$\forall k \in \mathbb{Z}, \quad \Pr(X = k) = 2\, p_Y(2\lambda_1) \qquad (83)$$

    where $p_Y$ is the probability density function of Y, distributed according to the non-central chi-square distribution $\chi^2_{\nu, \delta}$, with $\nu = 2(k+1)$ and $\delta = 2\lambda_2$.

    Figure 41: Probability distribution and CDF of a Skellam distribution.
  • UserDefined: Multivariate n-dimensional distribution. θ̲ = (x̲_k, p_k)_{1≤k≤N}, where $0 \le p_k \le 1$ and $\sum_{k=1}^{N} p_k = 1$.

    $$P(\underline{X} = \underline{x}_k) = p_k, \quad 1 \le k \le N \qquad (84)$$

    The support is the set of points $\{\underline{x}_1, \dots, \underline{x}_N\}$.

    Figure 42: Probability distribution and CDF of a UserDefined distribution.
  • Zipf-Mandelbrot distribution: Univariate distribution. θ̲ = (N, q, s), with the constraints N ≥ 1, q ≥ 0 and s > 0. For all integer $k \in [1, N]$,

    $$\forall k \in [1, N], \quad P(X = k) = \frac{1}{(k + q)^s}\, \frac{1}{H(N, q, s)} \qquad (85)$$

    where $H(N, q, s)$ is the Generalized Harmonic Number: $H(N, q, s) = \sum_{i=1}^{N} \frac{1}{(i + q)^s}$.

    Figure 43: Probability distribution and CDF of a Zipf-Mandelbrot distribution.

Standard representative of distributions

OpenTURNS associates to each distribution a standard representative, corresponding to a specific set of its parameters. The following tables detail these specific sets of parameters and give the expression of the non-centered moments of order n.

Other notations

-

Link with OpenTURNS methodology

These probability distributions can be used in step B "Quantifying Sources of Uncertainty". Choosing a probability distribution is equivalent to implicitly making a hypothesis on the type of uncertainty of one of the variables X ̲ defined in step A "Specifying Criteria and the Case Study".
References and theoretical basics
This parametric approach has the advantage of describing the uncertainty with a reduced number of parameters. This is particularly useful when there is little data available for the unknown variables (a situation in which a non-parametric approach would be limited – see [empirical distribution function] and [kernel smoothing] ) and even when there is no data at all (the analysis can then only rely on expert judgement, which is easier to interpret when there are few distribution parameters).

Moreover, a parametric approach is often preferable when the uncertainty study criterion defined in step A deals with a rare event: obtaining a precise evaluation of the criterion then generally requires the extrapolation of X values beyond the observed data. Beware, however: an unwise modelling assumption (a bad choice of distribution) can lead to an erroneous extrapolation, and thus the results of the study may be false.

The correct choice of probability distribution is thus crucial. Statistical tools are available to validate or invalidate the choice of distribution given a set of data (see for example [Graphical analysis] [Kolmogorov-Smirnov test] ). But consideration of the underlying context is also recommended. For example:

  • the Normal distribution is relevant in metrology to represent certain measures of uncertainty.

  • the Exponential distribution is useful for modelling uncertainty when considering the life duration of material that is not subject to ageing,

  • the Gumbel distribution is designed to describe extreme phenomena (e.g. the maximal annual flow of a river or the maximal wind speed).

Some distributions are often used to express expert judgement in simple terms:

  • the Uniform distribution expresses knowledge concerning the absolute limits of variables (i.e. the probability to exceed these limits is strictly zero) without any other prior assumption about the distribution (such as, for example the mean value or the most likely value),

  • the Triangular distribution expresses knowledge concerning the absolute limits of variables and the most likely value.

Finally, an important point concerning the multi-dimensional case where n X >1. Choosing the type of distribution implies an assumption about the uncertainty of each of the variables X i , but also on the potential inter-dependencies between variables. These inter-dependencies between unknown variables can consequently have an impact on the results of the uncertainty study.

Readers wishing to consider the dependencies in their study more deeply are referred to, for example, [copula method] , [linear correlation] , [rank correlation] .

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.


3.3.4 Step B  – Copula

Mathematical description

Goal

To define the joint probability density function of the random input vector X̲ by composition, one needs:

  • the specification of the copula of interest C with its parameters,

  • the specification of the n X marginal laws of interest F X i of the n X input variables X i .

The joint cumulative distribution function is therefore defined by:

$$\Pr\left( X^1 \le x^1, X^2 \le x^2, \dots, X^{n_X} \le x^{n_X} \right) = C\left( F_{X^1}(x^1), F_{X^2}(x^2), \dots, F_{X^{n_X}}(x^{n_X}) \right)$$

Within this part, we define the concept of copula and its use within OpenTURNS.

Principles

Copulas allow one to represent the part of the joint cumulative distribution function which is not described by the marginal laws. They represent the dependence structure of the input variables. A copula is a special cumulative distribution function defined on $[0,1]^{n_X}$ whose marginal distributions are uniform on $[0,1]$. The choice of the dependence structure is thus disconnected from the choice of the marginal distributions.

Basic properties of copulas

A copula, restricted to $[0,1]^{n_U}$, is an $n_U$-dimensional cumulative distribution function with uniform marginals.

  • $C(\underline{u}) \ge 0$, $\forall \underline{u} \in [0,1]^{n_U}$

  • $C(\underline{u}) = u_i$ when $\underline{u} = (1, \dots, 1, u_i, 1, \dots, 1)$

  • For every N-box $\mathcal{B} = [a_1, b_1] \times \cdots \times [a_{n_U}, b_{n_U}] \subset [0,1]^{n_U}$, we have $\mathcal{V}_C(\mathcal{B}) \ge 0$, where:

    • $\mathcal{V}_C(\mathcal{B}) = \sum_{i=1,\dots,2^{n_U}} \mathrm{sign}(\underline{v}_i) \times C(\underline{v}_i)$, the summation being made over the $2^{n_U}$ vertices $\underline{v}_i$ of $\mathcal{B}$.

    • $\mathrm{sign}(\underline{v}_i) = +1$ if $v_i^k = a_k$ for an even number of $k$'s, $\mathrm{sign}(\underline{v}_i) = -1$ otherwise.

Copulas available within OpenTURNS

Different copulas are available within OpenTURNS:

Ali-Mikhail-Haq Copula: The Ali-Mikhail-Haq copula is an Archimedean copula, parameterized by a scalar θ. It is defined by:

$$C(u_1, u_2) = \frac{u_1 u_2}{1 - \theta (1 - u_1)(1 - u_2)}$$

Figure 44: Iso-PDF of an Ali-Mikhail-Haq copula.

Clayton Copula: The Clayton copula is parameterized by a scalar θ ≥ 0. The Clayton copula is thus defined by:

$$C(u_1, u_2) = \left( u_1^{-\theta} + u_2^{-\theta} - 1 \right)^{-1/\theta}$$

Figure 45: Iso-PDF of a Clayton copula.

Composed Copula: A copula may be defined as the product of other copulas: if $C_1$ and $C_2$ are two copulas respectively of random vectors in $\mathbb{R}^{n_1}$ and $\mathbb{R}^{n_2}$, we can create the copula of a random vector of $\mathbb{R}^{n_1 + n_2}$, noted $C$, as follows:

$$C(u_1, \dots, u_n) = C_1(u_1, \dots, u_{n_1})\, C_2(u_{n_1+1}, \dots, u_{n_1+n_2})$$

It means that the two subvectors $(u_1, \dots, u_{n_1})$ and $(u_{n_1+1}, \dots, u_{n_1+n_2})$ of $\mathbb{R}^{n_1}$ and $\mathbb{R}^{n_2}$ are independent.

Farlie-Gumbel-Morgenstern Copula: The Farlie-Gumbel-Morgenstern copula is parameterized by a scalar θ ∈ [-1,1]. The Farlie-Gumbel-Morgenstern copula is thus defined by:

$$C(u_1, u_2) = u_1 u_2 \left( 1 + \theta (1 - u_1)(1 - u_2) \right)$$

Figure 46: Iso-PDF of a Farlie-Gumbel-Morgenstern copula.

Frank Copula: The Frank copula is parameterized by a scalar θ. The Frank copula is thus defined by:

$$C(u_1, u_2) = -\frac{1}{\theta} \log\left( 1 + \frac{(e^{-\theta u_1} - 1)(e^{-\theta u_2} - 1)}{e^{-\theta} - 1} \right)$$

Figure 47: Iso-PDF of a Frank copula.

Gumbel Copula: The Gumbel copula is parameterized by a scalar θ ≥ 1. The Gumbel copula is thus defined by:

$$C(u_1, u_2) = \exp\left( -\left[ (-\log(u_1))^\theta + (-\log(u_2))^\theta \right]^{1/\theta} \right)$$

Figure 48: Iso-PDF of a Gumbel copula.

Independent Copula: It means that all the input variables are independent of one another. The independent copula is defined by:

$$C(u_1, u_2, \dots, u_{n_U}) = \prod_{i=1}^{n_U} u_i$$

Figure 49: Iso-PDF of an Independent copula.

Maximum-entropy statistics copula: The density function is defined by:

$$f_U(\underline{u}) = \prod_{k=2}^{d} \frac{\exp\left( -\int_{\mathcal{F}_{k-1}^{-1}(u_{k-1})}^{\mathcal{F}_k^{-1}(u_k)} \varphi_k(s)\, ds \right)}{\mathcal{F}_{k-1}\left( \mathcal{F}_k^{-1}(u_k) \right) - u_k}\, \mathbf{1}_{F_1^{-1}(u_1) \le \dots \le F_d^{-1}(u_d)}$$

with $\mathcal{F}_k(t) = F_k(G^{-1}(t))$ and $G(t) = \frac{1}{d} \sum_{k=1}^{d} F_k(t)$

Min Copula: The Min copula is the upper Fréchet-Hoeffding bound defined by:

$$C(u_1, \dots, u_n) = \min(u_1, \dots, u_n)$$

Normal Copula: The Normal copula is parameterized by a correlation matrix $\mathbf{R}$. The Normal copula is thus defined by:

$$C(u_1, \dots, u_{n_U}) = \Phi_{\mathbf{R}}^{n_U}\left( \Phi^{-1}(u_1), \Phi^{-1}(u_2), \dots, \Phi^{-1}(u_{n_U}) \right)$$

where:

  • $\Phi_{\mathbf{R}}^{n_X}$ is the multinormal cumulative distribution function in dimension $n_X$:

    $$\Phi_{\mathbf{R}}^{n_X}(\underline{x}) = \int_{-\infty}^{x^1} \cdots \int_{-\infty}^{x^{n_X}} \frac{1}{(2\pi)^{n_X/2} \sqrt{\det \mathbf{R}}}\, e^{-\frac{{}^t\underline{u}\, \mathbf{R}^{-1}\, \underline{u}}{2}}\, du^1 \cdots du^{n_X}$$
  • Φ is the cumulative distribution function of the standard normal law in dimension 1:

    $$\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}}\, dt$$
  • $\mathbf{R}$ is the correlation matrix. This matrix is defined by its algebraic properties: it is symmetric positive definite.

The correlation matrix 𝐑 can be obtained by different means:

  • If one knows the Spearman correlation matrix, that is to say,

    $$\rho_{ij}^S = \rho^S(X_i, X_j) = \rho^P\left( F_{X_i}(X_i), F_{X_j}(X_j) \right)$$

    the correlation matrix $\mathbf{R}$ is deduced by the following formula:

    $$\mathbf{R}_{ij} = 2 \sin\left( \frac{\pi}{6} \rho_{ij}^S \right)$$
  • If one knows the Kendall measure of correlation, that is to say,

    $$\tau_{ij} = \tau(X_i, X_j) = \Pr\left[ (X_i^1 - X_i^2)(X_j^1 - X_j^2) > 0 \right] - \Pr\left[ (X_i^1 - X_i^2)(X_j^1 - X_j^2) < 0 \right]$$

    where $(X_i^1, X_j^1)$ and $(X_i^2, X_j^2)$ follow the law of $(X_i, X_j)$, the correlation matrix $\mathbf{R}$ is deduced by the following formula:

    $$\mathbf{R}_{ij} = \sin\left( \frac{\pi}{2} \tau_{ij} \right)$$
  • If one knows the Pearson correlation matrix $\mathbf{R}^P$, there are two possibilities:

    1. If and only if all the marginal laws are Normal,

      $$\mathbf{R} = \mathbf{R}^P$$
    2. In the other cases, one has to build the correlation matrix $\mathbf{R}$ by inversion of the following formula from the Pearson correlation matrix $\mathbf{R}^P$:

      $$\mathbf{R}^P_{ij} = \int_{\mathbb{R}^2} (x_i - \mathbb{E}[X_i])(x_j - \mathbb{E}[X_j])\, \Phi_{ij}(x_i, x_j, \mathbf{R}_{ij})\, dx_i\, dx_j$$
Iso-PDF of a Normal copula.
Figure 50
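A minimal sketch (plain Python/NumPy, not the OpenTURNS API; the function names are ours) of the Spearman and Kendall conversions given above for the Normal copula shape matrix:

```python
import numpy as np

def r_from_spearman(rho_s):
    """Normal copula correlation matrix from Spearman correlations:
    R_ij = 2 * sin(pi/6 * rho_S_ij)."""
    return 2.0 * np.sin(np.pi / 6.0 * np.asarray(rho_s, dtype=float))

def r_from_kendall(tau):
    """Normal copula correlation matrix from Kendall's tau:
    R_ij = sin(pi/2 * tau_ij)."""
    return np.sin(np.pi / 2.0 * np.asarray(tau, dtype=float))

rho_s = np.array([[1.0, 0.5], [0.5, 1.0]])
print(r_from_spearman(rho_s))
```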

Sklar Copula: The Sklar copula is obtained directly from the expression of the n-dimensional distribution whose cumulative distribution function is F, with F_i its marginals:

$$C(u_1, \dots, u_n) = F\left( F_1^{-1}(u_1), \dots, F_n^{-1}(u_n) \right)$$

Figure 51 shows the iso-PDF of a Sklar copula extracted from a bidimensional Student distribution.

Iso-PDF of a Sklar copula.
Figure 51

Other notations

-

Link with OpenTURNS methodology

This method of modelling the dependencies between the input variables is part of the step B of the global methodology ("quantify sources of uncertainty"). It enables to build an expression of the probability density function of the input variables X ̲ defined in step A ("specification of the model and criteria") by composition with the marginal distributions of each X i . This method requires the knowledge of the Spearman correlation matrix or the Kendall correlation measure. It can also be used if one knows the Pearson correlation matrix, but only with the assumption of Normal marginal laws for all the input variables.

References and theoretical basics

One has to be aware that the composition of the marginal distributions and the copulas available in OpenTURNS is not sufficient to represent all types of dependencies (see examples in the next section). Prior statistical analyses and/or physical justifications should support this way of modelling dependencies. Besides, as previously discussed, the use of copulas is totally decoupled from the knowledge of the marginal laws of the input variables.

The following references give a first entry point to the Copulas:

  • Nelsen, 'Introduction to Copulas'

  • Embrechts P., Lindskog F., Mc Neil A., 'Modelling dependence with copulas and application to Risk Management', ETZH 2001.

Examples

-

3.3.5 Step B  – Random Mixture : affine combination of independent univariate distributions

Mathematical description

Goal

A multivariate random variable $\underline{Y}$ may be defined as an affine transform of n independent univariate random variables, as follows:

$$\underline{Y} = \underline{y}_0 + \underline{\underline{M}}\, \underline{X} \qquad (86)$$

where $\underline{y}_0 \in \mathbb{R}^d$ is a deterministic vector with $d \in \{1, 2, 3\}$, $\underline{\underline{M}} \in \mathcal{M}_{d,n}(\mathbb{R})$ a deterministic matrix and $(X_k)_{1 \le k \le n}$ are independent univariate random variables.

In such a case, it is possible to evaluate directly the distribution of $\underline{Y}$ and then to address to $\underline{Y}$ any request compatible with a distribution: moments, probability density and cumulative distribution functions, quantiles (in dimension 1 only), etc.

Principle

Evaluation of the probability density function of the Random Mixture

As the univariate random variables $X_i$ are independent, the characteristic function of $\underline{Y}$, denoted $\varphi_Y$, is easily defined from the characteristic functions of the $X_k$, denoted $\varphi_{X_k}$, as follows:

$$\varphi_Y(u_1, \dots, u_d) = \prod_{j=1}^{d} e^{i u_j (y_0)_j} \prod_{k=1}^{n} \varphi_{X_k}\left( (M^t u)_k \right), \quad \text{for } \underline{u} \in \mathbb{R}^d \qquad (87)$$

Once $\varphi_Y$ is evaluated, it is possible to evaluate the probability density function of Y, denoted $p_Y$: several techniques are possible, such as the inversion of the Fourier transform, but this technique is not easy to implement.

OpenTURNS uses another technique, based on the Poisson summation formula, defined as follows:

$$\sum_{j_1 \in \mathbb{Z}} \dots \sum_{j_d \in \mathbb{Z}} p_Y\left( y_1 + \frac{2\pi j_1}{h_1}, \dots, y_d + \frac{2\pi j_d}{h_d} \right) = \prod_{j=1}^{d} \frac{h_j}{2\pi} \sum_{k_1 \in \mathbb{Z}} \dots \sum_{k_d \in \mathbb{Z}} \varphi\left( k_1 h_1, \dots, k_d h_d \right) e^{-i \left( \sum_{m=1}^{d} k_m h_m y_m \right)} \qquad (88)$$

By fixing $h_1, \dots, h_d$ small enough, $\frac{2k\pi}{h_j} \to +\infty$ and $p_Y(\dots, \frac{2k\pi}{h_j}, \dots) \to 0$ because of the decay of $p_Y$. Thus the nested sums of the left-hand side of (88) reduce to the central term $j_1 = \dots = j_d = 0$: the left-hand side is approximately equal to $p_Y(\underline{y})$.

Furthermore, the right-hand side of (88) is a series which converges very fast: only a few terms of the series are enough to reach machine-precision accuracy. Let us note that the factors $\varphi_Y(k_1 h_1, \dots, k_d h_d)$, which are expensive to evaluate, do not depend on $\underline{y}$ and are evaluated only once.

It is also possible to greatly improve the performance of the algorithm by noticing that equation (88) is linear in the pair $(p_Y, \varphi_Y)$. We denote by $q_Y$ and $\psi_Y$ respectively the density and the characteristic function of the multivariate normal distribution with the same mean $\underline{\mu}$ and same covariance matrix $\underline{\underline{C}}$ as the random mixture. By applying this multivariate normal distribution to equation (88), we obtain by subtraction:

$$p_Y(\underline{y}) = \sum_{\underline{j} \in \mathbb{Z}^d} q_Y\left( y_1 + \frac{2\pi j_1}{h_1}, \dots, y_d + \frac{2\pi j_d}{h_d} \right) + \frac{H}{2^d \pi^d} \sum_{|k_1| \le N} \dots \sum_{|k_d| \le N} \delta_Y\left( k_1 h_1, \dots, k_d h_d \right) e^{-i \left( \sum_{m=1}^{d} k_m h_m y_m \right)} \qquad (89)$$

where $H = h_1 \times \dots \times h_d$, $\underline{j} = (j_1, \dots, j_d)$ and $\delta_Y := \varphi_Y - \psi_Y$.

In the case where $n \gg 1$, by the central limit theorem, the law of $\underline{Y}$ tends to the normal distribution with density q, which drastically reduces N. The sum over q then becomes the most CPU-intensive part, because in the general case we have to keep more terms than the central one in this sum, since the parameters $h_1, \dots, h_d$ were calibrated with respect to p and not q.

The parameters $h_1, \dots, h_d$ are calibrated using the following formula:

$$h_\ell = \frac{2\pi}{(\beta + 4\alpha)\, \sigma_\ell} \qquad (90)$$

where $\sigma_\ell = \sqrt{\mathrm{Cov}[\underline{Y}]_{\ell\ell}}$, and α, β are respectively the number of standard deviations covered by the marginal distribution (α = 5 by default) and the number of marginal deviations beyond which the density is negligible (β = 8.5 by default).

The N parameter is dynamically calibrated: we start with N = 8, then we double the value of N until the total contribution of the additional terms becomes negligible.

Evaluation of the moments of the Random Mixture

Relation (86) enables one to evaluate all the moments of the random mixture, provided they are mathematically defined. For example, we have:

$$\mathbb{E}[\underline{Y}] = \underline{y}_0 + \underline{\underline{M}}\, \mathbb{E}[\underline{X}], \qquad \mathrm{Cov}[\underline{Y}] = \underline{\underline{M}}\, \mathrm{Cov}[\underline{X}]\, \underline{\underline{M}}^t$$
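A minimal numerical sketch of these two moment formulas (plain Python/NumPy; the function name is ours, and the diagonal covariance reflects the independence of the $X_k$):

```python
import numpy as np

def random_mixture_moments(y0, M, means, variances):
    """First two moments of Y = y0 + M X for independent univariate X_k:
    E[Y] = y0 + M E[X], Cov[Y] = M diag(Var X) M^T."""
    y0 = np.asarray(y0, dtype=float)
    M = np.atleast_2d(np.asarray(M, dtype=float))
    mean = y0 + M @ np.asarray(means, dtype=float)
    cov = M @ np.diag(np.asarray(variances, dtype=float)) @ M.T
    return mean, cov

# Y = 2 + 5 X1 + X2 with X1 ~ Exp(1.5) (mean 2/3, var 4/9) and X2 ~ N(4, 1)
print(random_mixture_moments([2.0], [[5.0, 1.0]], [1 / 1.5, 4.0], [1 / 1.5 ** 2, 1.0]))
```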

Computation on a regular grid

The interest here is to compute the density function on a regular grid, in order to obtain an approximation quickly. The regular grid is of the form:

$$\forall r \in \{1, \dots, d\},\ \forall m \in \{0, \dots, M-1\}, \quad y_{r,m} = \mu_r + b\, \frac{2m + 1 - M}{M - 1}\, \sigma_r \qquad (91)$$

By denoting $p_{m_1, \dots, m_d} = p_{\underline{Y}}(y_{1,m_1}, \dots, y_{d,m_d})$:

$$p_{m_1, \dots, m_d} = Q_{m_1, \dots, m_d} + S_{m_1, \dots, m_d} \qquad (92)$$

for which the term $S_{m_1, \dots, m_d}$ is the most CPU consuming. This term rewrites:

$$S_{m_1, \dots, m_d} = \frac{H}{2^d \pi^d} \sum_{k_1 = -N}^{N} \dots \sum_{k_d = -N}^{N} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) \qquad (93)$$

with:

$$\delta\left( k_1 h_1, \dots, k_d h_d \right) = (\varphi - \psi)\left( k_1 h_1, \dots, k_d h_d \right), \qquad E_{m_1, \dots, m_d}(k_1, \dots, k_d) = e^{-i \sum_{j=1}^{d} k_j h_j \left( \mu_j + b \frac{2 m_j + 1 - M}{M - 1} \sigma_j \right)} \qquad (94)$$

The aim is to rewrite the previous expression as a d-dimensional discrete Fourier transform, in order to use the Fast Fourier Transform (FFT) for its evaluation.

We set $M = N$ and, for $j \in \{1, \dots, d\}$, $h_j = \frac{\pi}{b \sigma_j}$ and $\tau_j = \frac{\mu_j}{b \sigma_j}$. For convenience, we introduce the functions:

$$f_j(k) = e^{-i\pi (k+1) \left( \tau_j - 1 + \frac{1}{N} \right)}$$

We use $k+1$ instead of $k$ in this function to simplify expressions below.

We obtain:

$$E_{m_1, \dots, m_d}(k_1, \dots, k_d) = e^{-i \sum_{j=1}^{d} k_j h_j b \sigma_j \left( \frac{\mu_j}{b \sigma_j} + \frac{2 m_j + 1 - N}{N - 1} \right)} = e^{-2i\pi \sum_{j=1}^{d} \frac{k_j m_j}{N}}\, e^{-i\pi \sum_{j=1}^{d} k_j \left( \tau_j - 1 + \frac{1}{N} \right)} = e^{-2i\pi \sum_{j=1}^{d} \frac{k_j m_j}{N}}\, f_1(k_1 - 1) \times \dots \times f_d(k_d - 1) \qquad (95)$$

For performance reasons, we want to use the discrete Fourier transform with the following convention in dimension 1:

$$A_m = \sum_{k=0}^{N-1} a_k\, e^{-\frac{2i\pi k m}{N}}$$

whose extensions to dimensions 2 and 3 are respectively:

$$A_{m,n} = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{k,l}\, e^{-\frac{2i\pi k m}{N}}\, e^{-\frac{2i\pi l n}{N}}$$

$$A_{m,n,p} = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \sum_{s=0}^{N-1} a_{k,l,s}\, e^{-\frac{2i\pi k m}{N}}\, e^{-\frac{2i\pi l n}{N}}\, e^{-\frac{2i\pi s p}{N}}$$

We decompose the sums of (93) over the interval $[-N, N]$ into three parts:

$$\sum_{k_j = -N}^{N} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) = \sum_{k_j = -N}^{-1} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) + \delta\left( k_1 h_1, \dots, 0, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, 0, \dots, k_d) + \sum_{k_j = 1}^{N} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) \qquad (96)$$

If we already computed E for dimension d-1, then the middle term in this sum is trivial.

To compute the last sum of (96), we apply the change of variable $k_j' = k_j - 1$:

$$\sum_{k_j = 1}^{N} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) = \sum_{k_j = 0}^{N-1} \delta\left( k_1 h_1, \dots, (k_j + 1) h_j, \dots, k_d h_d \right) \times E_{m_1, \dots, m_d}(k_1, \dots, k_j + 1, \dots, k_d) \qquad (97)$$

Equation (95) gives:

$$E_{m_1, \dots, m_d}(k_1, \dots, k_j + 1, \dots, k_d) = e^{-2i\pi \left( \sum_{l=1}^{d} \frac{k_l m_l}{N} + \frac{m_j}{N} \right)}\, f_1(k_1 - 1) \times \dots \times f_j(k_j) \times \dots \times f_d(k_d - 1) = e^{-\frac{2i\pi m_j}{N}}\, e^{-2i\pi \sum_{l=1}^{d} \frac{k_l m_l}{N}}\, f_1(k_1 - 1) \times \dots \times f_j(k_j) \times \dots \times f_d(k_d - 1) \qquad (98)$$

Thus

$$\sum_{k_j = 1}^{N} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) = e^{-\frac{2i\pi m_j}{N}} \sum_{k_j = 0}^{N-1} \delta\left( k_1 h_1, \dots, (k_j + 1) h_j, \dots, k_d h_d \right) \times e^{-2i\pi \sum_{l=1}^{d} \frac{k_l m_l}{N}}\, f_1(k_1 - 1) \times \dots \times f_j(k_j) \times \dots \times f_d(k_d - 1) \qquad (99)$$

To compute the first sum of equation (96), we apply the change of variable $k_j' = N + k_j$:

$$\sum_{k_j = -N}^{-1} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) = \sum_{k_j = 0}^{N-1} \delta\left( k_1 h_1, \dots, (k_j - N) h_j, \dots, k_d h_d \right) \times E_{m_1, \dots, m_d}(k_1, \dots, k_j - N, \dots, k_d) \qquad (100)$$

Equation (95) gives:

$$E_{m_1, \dots, m_d}(k_1, \dots, k_j - N, \dots, k_d) = e^{-2i\pi \left( \sum_{l=1}^{d} \frac{k_l m_l}{N} - m_j \right)}\, f_1(k_1 - 1) \times \dots \times f_j(k_j - 1 - N) \times \dots \times f_d(k_d - 1) = e^{-2i\pi \sum_{l=1}^{d} \frac{k_l m_l}{N}}\, f_1(k_1 - 1) \times \dots \times \bar{f}_j(N - 1 - k_j) \times \dots \times f_d(k_d - 1) \qquad (101)$$

Thus:

$$\sum_{k_j = -N}^{-1} \delta\left( k_1 h_1, \dots, k_d h_d \right) E_{m_1, \dots, m_d}(k_1, \dots, k_d) = \sum_{k_j = 0}^{N-1} \delta\left( k_1 h_1, \dots, (k_j - N) h_j, \dots, k_d h_d \right) \times e^{-2i\pi \sum_{l=1}^{d} \frac{k_l m_l}{N}}\, f_1(k_1 - 1) \times \dots \times \bar{f}_j(N - 1 - k_j) \times \dots \times f_d(k_d - 1) \qquad (102)$$

To summarize:

  1. In order to compute the sum from $k_1 = 1$ to $N$, we multiply by $e^{-\frac{2i\pi m_1}{N}}$ and consider $\delta\left( (k_1 + 1) h_1, \dots \right) f_1(k_1)$;

  2. In order to compute the sum from $k_1 = -N$ to $-1$, we consider $\delta\left( (k_1 - N) h_1, \dots \right) \bar{f}_1(N - 1 - k_1)$.

OpenTURNS

In the 0.13 version of OpenTURNS, the distributions which are able to evaluate their characteristic function are the following: χ², Exponential, Gamma, Laplace, Logistic, Mixture, univariate Normal, Rayleigh, Triangular, univariate TruncatedNormal, Uniform, KernelMixture (the distribution coming from a kernel smoothing method without boundary treatment), RandomMixture.

Thus, all the requests to Y that require the evaluation of the probability density function may be satisfied only if the univariate random variables X_i follow distributions whose characteristic function has been implemented.

Until version 1.5 of OpenTURNS, only univariate random mixtures were available. For all other requests, no restriction applies.

Other notations

Link with OpenTURNS methodology

Within the global methodology, random mixtures may be used in step B to define the output variable of interest from some independent univariate random variables.
References and theoretical basics
"Abate, J. and Whitt, W. (1992). The Fourier-series method for inverting transforms of probability distributions. Queueing Systems 10, 5–88., 1992", formula 5.5.

Examples

The example here is an output variable of interest defined as the following combination:

$$Y = 2 + 5 X_1 + X_2$$

where X_1 and X_2 are independent and:

  • X_1 follows an Exponential distribution $\mathcal{E}(1.5)$,

  • X_2 follows a Normal distribution $\mathcal{N}(4, 1)$.

The pdf and cdf graphs are the following ones.
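As a hedged sketch of how this example might be set up with the OpenTURNS Python bindings (the RandomMixture constructor signature shown here, collection of distributions plus weights plus constant, is an assumption and may differ between versions):

```python
import openturns as ot

# Y = 2 + 5*X1 + X2, X1 ~ Exponential(1.5), X2 ~ Normal(4, 1), independent
mixture = ot.RandomMixture([ot.Exponential(1.5), ot.Normal(4.0, 1.0)],
                           [5.0, 1.0],   # weights a_1, a_2
                           2.0)          # constant a_0
print(mixture.getMean(), mixture.getStandardDeviation())
print(mixture.computeQuantile(0.95))     # made possible by the characteristic functions
```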

 
 

3.3.6 Step B  – Using QQ-plot to compare two samples

Mathematical description

Goal

Let X be a scalar uncertain variable modelled as a random variable. This method deals with the construction of a dataset prior to the choice of a probability distribution for X. A QQ-plot (where "QQ" stands for "quantile-quantile") is a tool that may be used to compare two samples x 1 ,...,x N and x 1 ' ,...,x M ' ; the goal is to determine graphically whether these two samples come from the same probability distribution or not. If this is the case, the two samples should be aggregated in order to increase the robustness of further statistical analyses.

Principle of the method

A QQ-plot is based on the notion of quantile. The α-quantile $q_X(\alpha)$ of X, where $\alpha \in (0, 1)$, is defined as follows:

$$\Pr\left( X \le q_X(\alpha) \right) = \alpha$$

If a sample x 1 ,...,x N of X is available, the quantile can be estimated empirically:

  1. the sample $x_1, \dots, x_N$ is first placed in ascending order, which gives the sample $x_{(1)}, \dots, x_{(N)}$;

  2. then, an estimate of the α-quantile is:

    $$\hat{q}_X(\alpha) = x_{([N\alpha] + 1)}$$

    where $[N\alpha]$ denotes the integer part of $N\alpha$.

Thus, the $j$-th smallest value of the sample, $x_{(j)}$, is an estimate $\hat{q}_X(\alpha)$ of the α-quantile where $\alpha = (j-1)/N$ ($1 < j \le N$). Let us then consider our second sample $x'_1, \dots, x'_M$; it also provides an estimate $\hat{q}'_X(\alpha)$ of this same quantile:

$$\hat{q}'_X(\alpha) = x'_{([M \times (j-1)/N] + 1)}$$

If the two samples correspond to the same probability distribution, then $\hat{q}_X(\alpha)$ and $\hat{q}'_X(\alpha)$ should be close. Thus, graphically, the points $\left\{ \left( \hat{q}_X(\alpha), \hat{q}'_X(\alpha) \right),\ \alpha = (j-1)/N,\ 1 < j \le N \right\}$ should be close to the diagonal.
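A minimal sketch of the construction of these points (plain Python/NumPy, not the OpenTURNS API; the function name is ours), following the empirical quantile estimates given above:

```python
import numpy as np

def qq_points(sample1, sample2):
    """Pairs (q_hat(alpha), q_hat'(alpha)) for alpha = (j-1)/N, j = 2..N."""
    x = np.sort(np.asarray(sample1, dtype=float))
    xp = np.sort(np.asarray(sample2, dtype=float))
    N, M = len(x), len(xp)
    j = np.arange(2, N + 1)              # j = 2, ..., N
    q1 = x[j - 1]                        # x_(j): 0-based index j-1 = [N*alpha]
    q2 = xp[(M * (j - 1)) // N]          # x'_([M*(j-1)/N]+1), 0-based index
    return q1, q2

rng = np.random.default_rng(0)
q1, q2 = qq_points(rng.normal(size=50), rng.normal(size=50))
# plotting q1 against q2 and comparing with the diagonal gives the QQ-plot
```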

The following figure illustrates the principle of a QQ-plot with two samples of size M=50 and N=50. Note that the unit of the two axes is that of the variable X studied. In this example, the points remain close to the diagonal and the hypothesis "the two samples come from the same distribution" does not seem irrelevant, even if a more quantitative analysis (see [Smirnov test] ) should be carried out to confirm this.

In this second example, the two samples clearly arise from two different distributions.
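
For readers who wish to reproduce this construction outside of OpenTURNS, the following Python sketch (illustrative only; the samples and the random seed are hypothetical) builds the QQ-plot points from the empirical quantile estimator q ^ X (α)=x ([Nα]+1) described above.

```python
import numpy as np

def empirical_quantile(ordered_sample, alpha):
    # Empirical alpha-quantile: x_([N*alpha]+1), with 1-based ranks.
    n = len(ordered_sample)
    return ordered_sample[min(int(np.floor(n * alpha)), n - 1)]

def qq_points(sample_1, sample_2):
    # Points (q_hat_X(alpha), q_hat_X'(alpha)) for alpha = (j-1)/N, 1 < j <= N.
    x = np.sort(np.asarray(sample_1))
    y = np.sort(np.asarray(sample_2))
    n = len(x)
    alphas = [(j - 1) / n for j in range(2, n + 1)]
    return [(empirical_quantile(x, a), empirical_quantile(y, a)) for a in alphas]

rng = np.random.default_rng(0)
points = qq_points(rng.normal(0.0, 1.0, 50), rng.normal(0.0, 1.0, 50))
# If both samples come from the same distribution, these points should lie
# close to the diagonal.
```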

Other notations

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty". It is a tool for the construction of a dataset that can be used afterwards to choose a probability distribution for some uncertain variables defined in step A "Specifying Criteria and the Case Study".
References and theoretical basics
A QQ-plot is a graphical analysis, the conclusion of which remains obviously subjective. The reader is referred to [Smirnov test] for a more quantitative analysis. The following bibliographical references provide main starting points for further study of this method:
  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • D'Agostino, R.B. and Stephens, M.A. (1986). "Goodness-of-Fit Techniques", Marcel Dekker, Inc., New York.

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.7 Step B  – Comparison of two samples using Smirnov test

Mathematical description

Goal

Let X be a scalar uncertain variable modelled as a random variable. This method deals with the construction of a dataset prior to the choice of a probability distribution for X. Smirnov's test is a tool that may be used to compare two samples x 1 ,...,x N and x 1 ' ,...,x M ' ; the goal is to determine whether these two samples come from the same probability distribution or not. If this is the case, the two samples should be aggregated in order to increase the robustness of further statistical analyses.

Principle of the method

Smirnov's test is a statistical test based on the maximum distance between the empirical cumulative distribution functions F ^ N and F ^ M ' of the samples x 1 ,...,x N and x 1 ' ,...,x M ' (see [empirical cumulative distribution function] ). This distance is expressed as follows:

\hat{D}_{M,N} = \sup_{x} \left| \hat{F}_N(x) - \hat{F}'_M(x) \right|

The probability distribution of the distance D ^ M,N is asymptotically known (i.e. as the size of the samples tends to infinity). If M and N are sufficiently large, this means that for a probability α, one can calculate the threshold / critical value d α such that:

  • if D ^ M,N >d α , we conclude that the two samples are not identically distributed, with a risk of error α,

  • if D ^ M,N d α , it is reasonable to say that both samples arise from the same distribution.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim under which the "identically-distributed" hypothesis is rejected. Thus, the two samples will be supposed identically distributed if and only if α lim is greater than the value α desired by the user. Note that the higher α lim -α, the more robust the decision.
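
As an illustration (outside of OpenTURNS), the two-sample distance D ^ M,N and the corresponding p-value can be obtained with SciPy, whose ks_2samp function implements this test; the samples below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample_1 = rng.normal(0.0, 1.0, size=50)   # hypothetical sample of size N
sample_2 = rng.normal(0.0, 1.0, size=60)   # hypothetical sample of size M

# Maximum distance between the two empirical CDFs and p-value (alpha_lim).
statistic, p_value = stats.ks_2samp(sample_1, sample_2)

alpha = 0.05
aggregate = p_value > alpha   # aggregate the samples only if the test does not reject
```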

Other notations
This test is also referred to in the literature as the two-sample Kolmogorov-Smirnov test.

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty". It is a tool for the construction of a dataset that can be used afterwards to choose a probability distribution for some uncertain variables defined in step A "Specifying Criteria and the Case Study".
References and theoretical basics
The test deals with the maximum deviation between the two empirical distributions; it is by nature highly sensitive to the presence of local deviations (two samples may be rejected even if they seem similar over almost the whole domain of variation).

We remind the reader that the underlying theoretical results of the test are asymptotic. There is no rule to determine the minimum number of data values one needs to use this test; but it is often considered a reasonable approximation when N is of the order of a few dozen.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon W.J. & Massey F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill


3.3.8 Step B  – Maximum Likelihood Principle

Mathematical description

Goal

This method deals with the parametric modelling of a probability distribution for a random vector X ̲=X 1 ,...,X n X . The appropriate probability distribution is found by using a sample of data x ̲ 1 ,...,x ̲ N . Such an approach can be described in two steps as follows:

  • Choose a probability distribution (e.g. the Normal distribution, or any other distribution available in OpenTURNS see [standard parametric models] ),

  • Find the parameter values θ ̲ that characterize the probability distribution (e.g. the mean and standard deviation for the Normal distribution) which best describes the sample x ̲ 1 ,...,x ̲ N .

The maximum likelihood method is used for the second step.

Principle

In the current version of OpenTURNS this method is restricted to the case where n X =1 and continuous probability distributions. Please note therefore that X ̲=X 1 =X in the following text. The maximum likelihood estimate (MLE) of θ ̲ is defined as the value of θ ̲ which maximizes the likelihood function LX,θ ̲:

θ ̲ ^= argmax LX,θ ̲

Given that x 1 ,...,x N is a sample of independent identically distributed (i.i.d) observations, Lx 1 ,...,x N ,θ ̲ represents the probability of observing such a sample assuming that they are taken from a probability distribution with parameters θ ̲. In concrete terms, the likelihood Lx 1 ,...,x N ,θ ̲ is calculated as follows:

L(x_1, \dots, x_N, \underline{\theta}) = \prod_{j=1}^{N} f_X\left( x_j ; \underline{\theta} \right)

if the distribution is continuous, with density f X x;θ ̲.

For example, if we suppose that X is a Gaussian distribution with parameters θ ̲={μ,σ} (i.e. the mean and standard deviation),

L(x_1, \dots, x_N, \underline{\theta}) = \prod_{j=1}^{N} \frac{1}{\sigma \sqrt{2\pi}} \exp\left[ -\frac{1}{2} \left( \frac{x_j - \mu}{\sigma} \right)^2 \right] = \frac{1}{\sigma^N (2\pi)^{N/2}} \exp\left[ -\frac{1}{2\sigma^2} \sum_{j=1}^{N} \left( x_j - \mu \right)^2 \right]

The following figure graphically illustrates the maximum likelihood method, in the particular case of a Gaussian probability distribution.

In general, in order to maximize the likelihood function classical optimization algorithms (e.g. gradient type) can be used. The Gaussian distribution case is an exception to this, as the maximum likelihood estimators are obtained analytically:

\hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} x_i , \qquad \widehat{\sigma^2} = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \hat{\mu} \right)^2
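
As an illustration (independent of OpenTURNS), the following Python sketch maximizes the Gaussian log-likelihood numerically and compares the result with the closed-form estimators above; the sample and the starting point are hypothetical.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, size=200)            # hypothetical sample

# Closed-form Gaussian maximum likelihood estimators (formulas above).
mu_hat = x.mean()
sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))

# Generic approach for models without analytical estimators:
# minimize the opposite of the log-likelihood.
def neg_log_likelihood(theta):
    mu, sigma = theta
    if sigma <= 0.0:
        return np.inf
    return -np.sum(stats.norm.logpdf(x, loc=mu, scale=sigma))

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
# result.x should be close to (mu_hat, sigma_hat).
```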

Other notations

-

Link with OpenTURNS methodology

Having specified the variable of interest and having defined a criterion (step A "Specifying Criteria and the Case Study"), the uncertainty of the input variable X i must be then quantified in step B. The superscript i is omitted, as only a single component is used here, that is a single unknown variable (or source of uncertainty).

Input:

x 1 ,...,x N : sample data

Distribution: Distribution type chosen from the proposed continuous 1-dimensional distributions in [standard parametric models]

Output :

θ ̲ ^: maximum likelihood estimate of θ ̲

References and theoretical basics
The sample size used in the maximum likelihood method has an effect on the quality of results. In fact:
  • as N tends to infinity, the asymptotic theory results assure, under certain assumptions concerning the regularity of the model, that the MLE is the best possible estimator (its bias tends towards 0, i.e. there is no tendency towards under- or over-estimation, and the uncertainty of θ ̲ ^ is smaller than with any other unbiased estimation method); in practice, one often considers the asymptotic behaviour to be reached when N reaches a few dozen, even if no theoretical rule can guarantee this with certainty.

  • if N is smaller, the MLE is still useful but θ ̲ ^ is less robust (uncertainty greater and bias possible).

A more advanced study of the goodness-of-fit of the selected probability distribution with the given sample data is described in [Graphical analysis] [Kolmogorov-Smirnov test] , [Cramer-Von Mises test] , [Anderson-Darling test] and [BIC criterion] .

The following bibliographical references provide main starting points for further study of this method:

  • Saporta G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon W.J. & Massey F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill


3.3.9 Step B  – Bayesian Calibration

Mathematical description

Goal

We consider a computer model h (i.e. a deterministic function) to calibrate:

z ̲=h(x ̲,θ ̲ h ),

where

  • x ̲ d x is the input vector;

  • z ̲ d z is the output vector;

  • θ ̲ h d h are the unknown parameters of h to calibrate.

Our goal here is to estimate θ ̲ h , based on a certain set of n inputs (x ̲ 1 ,...,x ̲ n ) (an experimental design) and some associated observations (y ̲ 1 ,...,y ̲ n ) which are regarded as the realizations of some random vectors (Y ̲ 1 ,...,Y ̲ n ), such that, for all i, the distribution of Y ̲ i depends on z ̲ i =h(x ̲ i ,θ ̲ h ). Typically, Y ̲ i =z ̲ i +ε ̲ i where ε ̲ i is a random measurement error.

For the sake of clarity, lower case letters are used for both random variables and realizations in the following (the notation does not distinguish the two anymore), as usual in the bayesian literature.

In fact, the bayesian procedure which is implemented allows one to infer some unknown parameters θ ̲ d θ from some data y ̲ ̲=(y ̲ 1 ,...,y ̲ n ) as soon as the conditional distribution of each y ̲ i given θ ̲ is specified. Therefore θ ̲ can be made up of some computer model parameters θ ̲ h together with some others θ ̲ ε : θ ̲=(θ ̲ h t ,θ ̲ ε t ) t . For example, θ ̲ ε may represent the unknown standard deviation σ of an additive centered gaussian measurement error affecting the data (see the example hereafter). Besides, the procedure can be used to estimate the parameters of a distribution from direct observations (no computer model to calibrate: θ ̲=θ ̲ ε ).

More formally, the likelihood L(y ̲ ̲|θ ̲) is defined by, firstly, a family {𝒫 w ̲ ,w ̲ d w } of probability distributions parametrized by w ̲, which is specified in practice by a conditional distribution f(.|w ̲) given w ̲ (f is a PDF or a probability mass function), and, secondly, a function g: d θ nd w such that g(θ)=(g 1 (θ ̲) t ,...,g n (θ ̲) t ) t which expresses the parameter w ̲ i of the ith observation y ̲ i f(.|w ̲ i ) as a function of θ ̲: g i (θ ̲)=w ̲ i , thus y ̲ i f(.|g i (θ ̲)) and

L(y ̲ ̲|θ ̲)= i=1 n f(y ̲ i |g i (θ ̲)).

Considering the issue of the calibration of some computer model parameters θ ̲ h , the full statistical model can be seen as a two-level hierarchical model, with a single level of latent variables z ̲. A classical example is given by the nonlinear Gaussian regression model:

y_i = h(\underline{x}_i \,|\, \underline{\theta}_h) + \varepsilon_i , \quad \text{where } \varepsilon_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2), \quad i = 1, \dots, n.

It can be implemented with f(.|(μ,σ) t ) the PDF of the gaussian distribution 𝒩(μ,σ 2 ), with g i (θ ̲)=(h(x ̲ i ,θ ̲ h ),σ) t , and with θ ̲=θ ̲ h , respectively θ ̲=(θ ̲ h t ,σ) t , if σ is considered known, respectively unknown.

Given a distribution modelling the uncertainty on θ ̲ prior to the data, Bayesian inference is used to perform the inference of θ ̲, hence the name Bayesian calibration.

Principle

Contrary to the maximum likelihood approach described in [Maximum Likelihood Principle] , which provides a single `best estimate' value θ ̲ ^, together with confidence bounds accounting for the uncertainty remaining on the true value θ ̲, the Bayesian approach derives a full distribution of possible values for θ ̲ given the available data y ̲ ̲. Known as the posterior distribution of θ ̲ given the data y ̲ ̲, its density can be expressed according to Bayes' theorem:

\pi(\underline{\theta} \,|\, \underline{\underline{y}}) = \frac{L(\underline{\underline{y}} \,|\, \underline{\theta}) \times \pi(\underline{\theta})}{m(\underline{\underline{y}})} , \qquad (3.3)

where

  • L(y ̲ ̲|θ ̲) is the (data) likelihood;

  • π(θ ̲) is the so-called prior distribution of θ ̲ (with support Θ), which encodes all possible θ ̲ values weighted by their prior probabilities, before consideration of any experimental data (this allows for instance to incorporate expert information or known physical constraints on the calibration parameter)

  • m(y ̲ ̲) is the marginal likelihood:

    m(\underline{\underline{y}}) = \int_{\underline{\theta} \in \Theta} L(\underline{\underline{y}} \,|\, \underline{\theta})\, \pi(\underline{\theta})\, d\underline{\theta} ,

    which is the necessary normalizing constant ensuring that the posterior density integrates to 1.

Except in very simple cases, (3.3) has, in general, no closed form. Thus, it must be approximated, either using numerical integration when the parameter space dimension d θ is low, or more generally through stochastic sampling techniques known as Monte-Carlo Markov-Chain (MCMC) methods. See [The Metropolis-Hastings Algorithm] .

The following bibliographical references provide main starting points for further study of this method:

  • Berger, J.O. (1985). "Statistical Decision Theory and Bayesian Analysis", Springer.

  • Marin J.M. & Robert C.P. (2007) "Bayesian Core: A Practical Approach to Computational Bayesian Statistics", Springer.

Other notations

-

3.3.10 Step B  – The Metropolis-Hastings Algorithm

A rigorous and complete documentation about Markov Chain Monte-Carlo sampling is beyond the purpose of this section, which provides a short introduction to the Metropolis-Hastings algorithm. In particular, the Metropolis-Hastings algorithm is only introduced hereafter in the context of the simulation of a homogeneous Markov chain (no dynamical adaptation). The reader is invited to refer to the monographs suggested below for further explanations or details.

Mathematical description

Definitions and notation

Markov chain. Considering a σ-algebra 𝒜 on Ω, a Markov chain is a process (X k ) k such that

\forall (A, x_0, \dots, x_{k-1}) \in \mathcal{A} \times \Omega^k , \quad \Pr\left[ X_k \in A \,|\, X_0 = x_0, \dots, X_{k-1} = x_{k-1} \right] = \Pr\left[ X_k \in A \,|\, X_{k-1} = x_{k-1} \right].

An example is the random walk for which X k =X k-1 +ε k where the steps ε k are independent and identically distributed.

Transition kernel. A transition kernel on (Ω,𝒜) is a mapping K:(Ω,𝒜)[0,1] such that

  • A𝒜K(.,A) is measurable;

  • xΩK(x,.) is a probability distribution on (Ω,𝒜).

The kernel K has density k if (x,A)Ω×𝒜K(x,A)= A k(x,y)dy.

(X k ) k is a homogeneous Markov chain of transition kernel K if \forall (A, x) \in \mathcal{A} \times \Omega , \; \Pr\left[ X_k \in A \,|\, X_{k-1} = x \right] = K(x, A).

Some Notations. Let (X k ) k be a homogeneous Markov Chain of transition K on (Ω,𝒜) with initial distribution ν (that is X 0 ν):

  • K ν denotes the probability distribution of the Markov Chain (X k ) k ;

  • νK k denotes the probability distribution of X k (X k νK k );

  • K k denotes the mapping defined by K k (x,A)=X k A|X 0 =x for all (x,A)Ω×𝒜.

Total variation convergence. A Markov Chain of distribution K ν is said to converge in total variation distance towards the distribution t if

\lim_{k \to +\infty} \; \sup_{A \in \mathcal{A}} \left| \nu K^k(A) - t(A) \right| = 0.

Then the notation used here is νK k TV t.

Some interesting properties. Let t be a (target) distribution on (Ω,𝒜), then a transition kernel K is said to be:

  • t-invariant if tK=t;

  • t-irreducible if, for all (A,x) \in \mathcal{A} \times \Omega such that t(A) > 0, there exists k \in \mathbb{N}^* such that K^k(x, A) > 0.

Goal

Markov Chain Monte-Carlo techniques allow one to sample from and integrate with respect to a distribution t which is only known up to a multiplicative constant. This situation is common in Bayesian statistics where the "target" distribution, the posterior one t(θ ̲)=π(θ ̲|y ̲ ̲), is proportional to the product of prior and likelihood: see equation (3.3).

In particular, given a "target" distribution t and a t-irreducible kernel transition Q, the Metropolis-Hastings algorithm produces a Markov chain (X k ) k of distribution K ν with the following properties:

  • the transition kernel of the Markov chain is t-invariant;

  • νK k TV t;

  • the Markov chain satisfies the ergodic theorem: let φ be a real-valued function such that 𝔼 Xt |φ(X)|<+, then, whatever the initial distribution ν is:

    \frac{1}{n} \sum_{k=1}^{n} \varphi(X_k) \xrightarrow[n \to +\infty]{} \mathbb{E}_{X \sim t}\left[ \varphi(X) \right] \quad \text{almost surely.}

In that sense, simulating (X k ) k amounts to sampling according to t and can be used to integrate with respect to the probability measure t. Let us remark that the ergodic theorem implies in particular that \frac{1}{n} \sum_{k=1}^{n} 1_A(X_k) \to \Pr_{X \sim t}\left[ X \in A \right] almost surely.

Principle

By abusing the notation, t(x) represents, in the remainder of this section, a function of x which is proportional to the PDF of the target distribution t. Given a transition kernel Q of density q, the scheme of the Metropolis-Hastings algorithm is the following (lower case letters are used hereafter for both random variables and realizations as usual in the bayesian literature):

0)

Draw x 0 ν and set k=1.

1)

Draw a candidate for x k according to the given transition kernel Q: x ˜Q(x k-1 ,.).

2)

Compute the ratio \rho = \dfrac{t(\tilde{x}) / q(x_{k-1}, \tilde{x})}{t(x_{k-1}) / q(\tilde{x}, x_{k-1})}.

3)

Draw u \sim \mathcal{U}([0,1]); if u \leq \rho then set x_k = \tilde{x}, otherwise set x_k = x_{k-1}.

4)

Set k=k+1 and go back to 1).

Of course, if t is replaced by a different function of x which is proportional to it, the algorithm remains unchanged, since t only enters the algorithm through the ratio t(x ˜)/t(x k-1 ). Moreover, if Q proposes candidates in a uniform manner (constant density q), the candidate x ˜ is accepted according to a ratio ρ which reduces to the previous "natural" ratio t(x ˜)/t(x k-1 ) of PDFs. The introduction of q in the ratio ρ corrects for the bias that a non-uniform proposal of candidates would otherwise introduce by favoring some areas of Ω.

The t-invariance is ensured by the symmetry of the expression of ρ (t-reversibility).

In practice, Q is specified either as a random walk (q RW such that q(x,y)=q RW (x-y)), as an independent sampler (q IS such that q(x,y)=q IS (y)), or as a mixture of random walk and independent sampling.

The important property the practitioner has to keep in mind when choosing the transition kernel Q is t-irreducibility. Moreover, for efficient convergence, Q has to be chosen so as to explore the whole support of t quickly without leading to too small an acceptance ratio (the proportion of accepted candidates x ˜ ). It is usually recommended that this ratio be about 0.2, but such a ratio is neither a guarantee of efficiency nor a substitute for a convergence diagnosis.
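
As an illustration (not the OpenTURNS implementation), the random-walk variant of the algorithm can be sketched as follows; the target density, proposal standard deviation, chain length and burn-in are arbitrary choices.

```python
import numpy as np

def metropolis_hastings_rw(log_t, x0, n_steps, proposal_std, rng):
    # Random-walk Metropolis-Hastings: q(x, y) = q_RW(y - x) is symmetric,
    # so the acceptance ratio reduces to t(x_tilde) / t(x_{k-1}).
    chain = [x0]
    x = x0
    for _ in range(n_steps):
        x_tilde = x + rng.normal(0.0, proposal_std)   # step 1): propose a candidate
        log_rho = log_t(x_tilde) - log_t(x)           # step 2): ratio on the log scale
        if np.log(rng.uniform()) <= log_rho:          # step 3): accept or reject
            x = x_tilde
        chain.append(x)                               # step 4): iterate
    return np.array(chain)

# Target known up to a constant: unnormalized standard normal density.
log_t = lambda x: -0.5 * x ** 2

rng = np.random.default_rng(3)
chain = metropolis_hastings_rw(log_t, x0=0.0, n_steps=5000, proposal_std=1.0, rng=rng)
# Ergodic theorem: the chain average approximates the expectation under t.
estimate = chain[1000:].mean()   # burn-in of 1000 iterations (arbitrary)
```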

The following bibliographical references provide main starting points for further study of this method:

  • Robert, C.P. and Casella, G. (2004). "Monte Carlo Statistical Methods" (Second Edition), Springer.

  • Meyn, S. and Tweedie R.L. (2009). "Markov Chains and Stochastic Stability" (Second Edition), Cambridge University Press.

Other notations

-

3.3.11 Step B  – Parametric Estimation

Mathematical description

Goal

The objective is to estimate the value of the parameters based on a sample of an unknown distribution, supposed to be a member of a parametric family of distributions. We describe here the estimators implemented in OpenTURNS for the estimation of the various parametric models. They are all derived either from the Maximum Likelihood method or from the method of moments, except for the bound parameters, which are systematically modified so as to strictly include the extreme realizations of the underlying sample (x 1 ,,x n ).

We suppose that we have a realization (x ̲ 1 ,,x ̲ n ) of a sample (X ̲ 1 ,,X ̲ n ) of size n, with the X i being iid, with common distribution 𝒟(θ ̲). The objective is to build an estimator θ ^ n of θ ̲, based on the realization (x ̲ 1 ,,x ̲ n ). We adopt the following notations:

  • x ̲ ¯ n =1 n i=1 n x ̲ i the sample mean (x ¯ n in the 1D case);

  • \sigma_n = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} \left( x_i - \bar{x}_n \right)^2 } the sample standard deviation in the 1D case;

  • x (1,n) =min i=1,,n x i the minimum of the realization in the 1D case;

  • x (n,n) =max i=1,,n x i the maximum of the realization in the 1D case;

  • x 1/2 the median of the sample in the 1D case;

Continuous univariate distributions:

Arcsine   μ ^=μ ^ x σ ^=σ ^ x  
Beta   a ^ n =(1- sign (x (1,n) )/(2+n))x (1,n) b ^ n =(1+ sign (x (n,n) )/(2+n))x (n,n) t ^ n =(b ^ n -x ¯ n )(x ¯ n -a ^ n ) (σ n X ) 2 -1r ^ n =t(x ¯ n -a ^ n ) b ^ n -a ^ n  
Burr   c ^ n is the solution of the non linear equation : 1+c nSR-n i=1 n log(1+x i c )SSR=0 where SR= i=1 n log(x i ) 1+x i c and SSR= i=1 n x i c log(x i ) 1+x i c . Then k ^ n =n i=1 n log(1+x i c ).  

Chi   ν ^ n =x 2 ¯ n  
ChiSquare   ν ^ n =x ¯ n  
Dirichlet   Maximum likelihood estimators, according to the reference J. Huang  
Epanechnikov   no parameter to estimate  
Exponential   γ ^ n =(1- sign (x (1,n) )/(2+n))x (1,n) λ ^ n =1/x ¯ n -γ ^ n  
Fisher-Snedecor   No factory method implemented so far  
Gamma   γ ^ n =(1- sign (x (1,n) )/(2+n))x (1,n) λ ^ n =x ¯ n -γ ^ n (σ n X ) 2 k ^ n =(x ¯ n -γ ^ n ) 2 (σ n X ) 2  
Generalized Pareto   see text below  
Gumbel   \hat{\alpha}_n = \dfrac{\pi}{\sigma_n^X \sqrt{6}} , \quad \hat{\beta}_n = \bar{x}_n - \dfrac{\gamma \sqrt{6}}{\pi} \sigma_n^X , where γ ≈ 0.57721 is Euler's constant.  
Histogram   The bandwidth is the AMISE-optimal one: h = \left( \dfrac{24 \sqrt{\pi}}{n} \right)^{1/3} \sigma_n where σ n 2 is the unbiased variance of the data. The range is [min(data),max(data)].  
Inverse ChiSquare   No factory method implemented so far  
Inverse Gamma   No factory method implemented so far  
Inverse Normal   μ ^ n =x ¯ n λ ^ n =1 n i=1 n 1 x i -1 x ¯ n -1  
Laplace   μ ^ n =x 1/2 λ ^ n =1 n i=1 n |x i -μ ^ n |  
Logistic   α ^=x ¯ n β ^ n =σ n X  
LogNormal   see text below  
LogUniform   a ^ n =(1-1/(2+n))x (1,n) b ^ n =(1+1/(2+n))x (n,n)  
Meixner   Moments method. See details below.  
Non Central Chi Square   No factory method implemented so far  
Non Central Student   No factory method implemented so far  
Normal   Maximum likelihood estimators  
Normal Gamma   No factory method implemented so far  
Rayleigh   γ ^ n =(1- sign (x (1,n) )/(2+n))x (1,n) σ ^ n =2 n i=1 n (x i -γ ^ n ) 2  

Rice   Moments estimators, according to the reference C.G. Koay  
Student (1d)   Moments estimators  
Trapezoidal   Numerical resolution of maximum likelihood estimators  
Triangular   a ^ n =(1- sign (x (1,n) )/(2+n))x (1,n) b ^ n =(1+ sign (x (n,n) )/(2+n))x (n,n) m ^ n =3x ¯ n -a ^ n -b ^ n  
TruncatedNormal   Numerical maximum likelihood estimation.  
Uniform   a ^ n =(1- sign (x (1,n) )/(2+n))x (1,n) b ^ n =(1+ sign (x (n,n) )/(2+n))x (n,n)  
Weibull   \hat{\gamma}_n = \left( 1 - \mathrm{sign}(x_{(1,n)})/(2+n) \right) x_{(1,n)} , and (\hat{\alpha}_n, \hat{\beta}_n) solution of the system \bar{x}_n = \hat{\gamma}_n + \hat{\alpha}_n \, \Gamma\left( 1 + 1/\hat{\beta}_n \right) , \; (\sigma_n^X)^2 = \hat{\alpha}_n^2 \left[ \Gamma\left( 1 + 2/\hat{\beta}_n \right) - \Gamma\left( 1 + 1/\hat{\beta}_n \right)^2 \right]  

Details for the Generalized Pareto distribution :

OpenTURNS implements three parametric estimation methods: the classical method of moments, the exponential regression method and the probability weighted moments method, according to the reference G. Matthys & J. Beirlant. The default strategy is to use the probability weighted moments method when the sample size is smaller than the threshold defined in the ResourceMap object (GeneralizedParetoFactory-SmallSize); in case of failure, it falls back to the exponential regression method. If the sample size is too high, the exponential regression method is used directly. The classical method of moments is provided but not used by default.

Details for the LogNormal distribution :

We note :

S_0 = \sum_{i=1}^{n} \frac{1}{x_i - \gamma} , \quad S_1 = \sum_{i=1}^{n} \log(x_i - \gamma) , \quad S_2 = \sum_{i=1}^{n} \log^2(x_i - \gamma) , \quad S_3 = \sum_{i=1}^{n} \frac{\log(x_i - \gamma)}{x_i - \gamma} \qquad (3.3)

OpenTURNS tries to evaluate the parameters first using the Local Maximum Likelihood based estimators of (μ ,σ ,γ) defined by :

\hat{\mu}_{\ell,n} = \frac{S_1(\hat{\gamma})}{n} , \qquad \hat{\sigma}_{\ell,n}^2 = \frac{S_2(\hat{\gamma})}{n} - \hat{\mu}_{\ell,n}^2 \qquad (3.3)

Thus, γ ^ n verifies the relation :

S_0(\gamma) \left[ S_2(\gamma) - S_1(\gamma) \left( 1 + \frac{S_1(\gamma)}{n} \right) \right] + n\, S_3(\gamma) = 0 \qquad (96)

under the constraint γminx i .

OpenTURNS tries to solve (96) by the step doubling bracketing method followed by the bisection method. Once γ ^ n is evaluated, (\hat{\mu}_{\ell,n}, \hat{\sigma}_{\ell,n}) are evaluated as defined in (3.3).

If the resolution of (96) is not possible, OpenTURNS sends a message to the User and evaluates the parameters from the Modified Moments based estimators using x ¯ n , σ n 2 and the additional modified moment equation:

𝔼[log(X (1) -γ)]=log(x (1) -γ) (97)

The quantity EZ_1(n) = \dfrac{\mathbb{E}[\log(X_{(1)} - \gamma)] - \mu}{\sigma} is the mean of the first order statistic of a standard normal sample of size n. We have the following relation:

EZ_1(n) = \int_{-\infty}^{+\infty} n\, z\, \phi(z) \left( 1 - \Phi(z) \right)^{n-1} dz \qquad (98)

where ϕ and Φ are the pdf and cdf of the standard normal distribution.

The estimator ω ^ n of ω=e σ 2 is obtained as solution of :

\omega (\omega - 1) - \kappa_n \left( \sqrt{\omega} - e^{EZ_1(n) \sqrt{\log \omega}} \right)^2 = 0 \qquad (99)

where \kappa_n = \dfrac{s_n^2}{\left( \bar{x}_n - x_{(1)} \right)^2}.

Then (μ ^ n ,σ ^ n ,γ ^ n ) are evaluated from :

\hat{\mu}_n = \log \hat{\beta}_n , \qquad \hat{\sigma}_n = \sqrt{\log \hat{\omega}_n} , \qquad \hat{\gamma}_n = \bar{x}_n - \hat{\beta}_n \sqrt{\hat{\omega}_n} \qquad (3.3)

where \hat{\beta}_n = \dfrac{s_n}{\sqrt{\hat{\omega}_n (\hat{\omega}_n - 1)}}.

If the resolution of (99) is not possible, OpenTURNS sends a message to the User and evaluates the parameters from the Moments based estimators which are always defined.

The estimator ω ^ n of ω=e σ 2 is the positive root of :

ω 3 +3ω 2 -(4+a 3,n 2 )=0 (101)

which is always defined. Then we have (μ ^ n ,σ ^ n ,γ ^ n ) using the relations (3.3).

Details for the Meixner distribution :

We use the following estimators:

γ 1 ^ n =1 n i=1 n (x i -x ^ n ) 3 σ ^ n 3 γ 2 ^ n =1 n i=1 n (x i -x ^ n ) 4 σ ^ n 4 δ ^ n =1 γ 2 ^ n -γ 1 ^ n 2 -3β ^ n =sign(γ 1 ^ n )arcos(2-δ ^ n (γ 2 ^ n -3))α ^ n =(σ ^ n 2 (cosβ ^ n +1)) 1/3 (3.3)

where (3.3) is defined if γ 2 ^ n 2γ 1 ^ n +3.

Continuous multivariate distributions:

Dirichlet   Maximum likelihood estimators  
Normal   \hat{\underline{\mu}}_n = \bar{\underline{x}}_n , \quad \widehat{\mathrm{Cov}}_n = \dfrac{1}{n-1} \sum_{i=1}^{n} \left( \underline{X}_i - \hat{\underline{\mu}}_n \right) \left( \underline{X}_i - \hat{\underline{\mu}}_n \right)^t  
Student   not yet implemented  

Discrete univariate distributions :

Bernoulli   p ^ n =x ¯ n  
Binomial   See details below.  
Dirac   point ^ n =x 1  
Geometric   p ^ n =1 x ¯ n  
Multinomial   data:(x ̲ 1 ,,x ̲ n )N=max i,k x i k p i =1 nN k=1 n x i k  
Negative Binomial   data: (x ̲ 1 ,,x ̲ n ) , \hat{p}_n = \dfrac{\bar{x}_n}{\hat{r}_n + \bar{x}_n} , with \hat{r}_n solution of n \log \dfrac{\hat{r}_n}{\hat{r}_n + \bar{x}_n} - n\, \psi(\hat{r}_n) + \sum_{i=1}^{n} \psi(x_i + \hat{r}_n) = 0. The resolution is done using Brent's method.  
Poisson   λ ^ n =x ¯ n  
Skellam   Moments estimators: see details below.  
UserDefined   Uniform distribution over the sample.  

Details for the Binomial distribution :

We initialize the value of (n,p n ) to x ^ n 2 x ^ n -σ ^ n 2 ,x ^ n n where x ^ n is the empirical mean of the sample (x 1 ,,x n ), and σ ^ n 2 its unbiased empirical variance.

Then, we evaluate the likelihood of the sample with respect to the Binomial distribution parameterized with x ^ n 2 x ^ n -σ ^ n 2 ,x ^ n n. By testing successively n+1 and n-1 instead of n, we determine the direction in which the likelihood of the sample increases, using the Binomial distributions parameterized with (n+1,p n+1 ) and (n-1,p n-1 ). We then iterate in that direction until the likelihood stops increasing. The last couple is the one selected.

Details for the Skellam distribution :

The estimators of (λ 1 ,λ 2 ) write:

\hat{\lambda}_{1,n} = \frac{1}{2} \left( \hat{\sigma}_n^2 + \hat{x}_n \right) , \qquad \hat{\lambda}_{2,n} = \frac{1}{2} \left( \hat{\sigma}_n^2 - \hat{x}_n \right)

Discrete multivariate distributions:

Dirac   point ^ n =x ̲ 1  
Multinomial   Maximum likelihood estimators  
UserDefined   Uniform distribution over the sample.  

Copula distributions :

We note τ ^ n the Kendall-τ of the sample and ρ ^ n its Spearman correlation coefficient. AMH is the Ali-Mikhail-Haq copula and FGM the Farlie-Gumbel-Morgenstern one.

AMH   \hat{\theta}_n solution of \hat{\tau}_n = \dfrac{3\theta - 2}{3\theta} - \dfrac{2 (1-\theta)^2 \ln(1-\theta)}{3\theta^2}.  
Clayton   \hat{\theta}_n = \dfrac{2 \hat{\tau}_n}{1 - \hat{\tau}_n}.  
FGM   \hat{\theta}_n = \dfrac{9}{2} \hat{\tau}_n if |\hat{\theta}_n| < 1. Otherwise, \hat{\theta}_n = 3 \hat{\rho}_n if |\hat{\theta}_n| < 1. Otherwise, the estimation is not possible.  
Frank   \hat{\theta}_n solution of \hat{\tau}_n = 1 - 4 \dfrac{1 - D(\hat{\theta}_n, 1)}{\hat{\theta}_n} where D is the Debye function defined as D(x, n) = \dfrac{n}{x^n} \int_0^x \dfrac{t^n}{e^t - 1} \, dt.  
Gumbel   \hat{\theta}_n = \dfrac{1}{1 - \hat{\tau}_n}.  
Normal   The correlation matrix R ̲ ̲ is such that R_{ij} = \sin\left( \dfrac{\pi}{2} \hat{\tau}_{n,ij} \right).  
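
As an illustration (outside of OpenTURNS), the closed-form inversions of Kendall's τ given in the table can be applied directly; the bivariate sample below is hypothetical and SciPy is used to estimate τ ^ n .

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
u = rng.uniform(size=(200, 2))                    # hypothetical bivariate sample

tau_hat, _ = stats.kendalltau(u[:, 0], u[:, 1])   # sample Kendall's tau

theta_clayton = 2.0 * tau_hat / (1.0 - tau_hat)   # Clayton: theta = 2*tau / (1 - tau)
theta_gumbel = 1.0 / (1.0 - tau_hat)              # Gumbel: theta = 1 / (1 - tau)
r_12 = np.sin(0.5 * np.pi * tau_hat)              # Normal copula: R_12 = sin(pi/2 * tau)
```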

Other notations

Link with OpenTURNS methodology

When the amount of data is sufficient, parametric estimation may be used within step B "Quantifying Sources of Uncertainty" to model the uncertainty of some input random vectors or of the output random vector.
References and theoretical basics
The following bibliographical references provide main starting points for further study of this method:
  • Huang J., "Maximum Likelihood Estimation of Dirichlet Distribution Parameters".

  • Koay C.G., Basser P.J., "Analytically exact correction scheme for signal extraction from noisy magnitude MR signals", Journal of Magnetic Resonance 179, 317-322, 2006.

  • G. Matthys & J. Beirlant "Estimating the extreme value index abd high quantiles with exponential regression models", Statistica Sinica, 13, 850-880, 2003.

  • Saporta G. (1990). "Probabilités, Analyse de données et Statistique", Technip.

  • Dixon W.J. & Massey F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill.

Examples

-

3.3.12 Step B  – Graphical goodness-of-fit tests : QQ-plot, Kendall Plot and Henry line

Mathematical description

Goal

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to verify the compatibility between a sample of data x ̲ 1 ,x ̲ 2 ,...,x ̲ N and a candidate probability distribution previously chosen. OpenTURNS enables the use of graphical tools to answer this question in the one dimensional case n X =1, and with a continuous distribution.

Principle

The QQ-plot and Henry's line tests are defined only in the case where n X =1. Thus we denote X ̲=X 1 =X. The first graphical tool provided by OpenTURNS is a QQ-plot (where "QQ" stands for "quantile-quantile"). In the specific case of a Normal distribution (see [standard parametric models] ), Henry's line may also be used.

QQ-plot

A QQ-Plot is based on the notion of quantile. The α-quantile q X (α) of X, where α(0,1), is defined as follows:

\Pr\left[ X \leq q_X(\alpha) \right] = \alpha

If a sample x 1 ,...,x N of X is available, the quantile can be estimated empirically:

  1. the sample x 1 ,...,x N is first placed in ascending order, which gives the sample x (1) ,...,x (N) ;

  2. then, an estimate of the α-quantile is:

    q ^ X (α)=x ([Nα]+1)

    where [Nα] denotes the integral part of Nα.

Thus, the j th smallest value of the sample x (j) is an estimate q ^ X (α) of the α-quantile where α=(j-1)/N (1<jN).

Let us then consider the candidate probability distribution being tested, and let us denote by F its cumulative distribution function. An estimate of the α-quantile can be also computed from F:

q ^ X ' (α)=F -1 (j-1)/N

If F is really the cumulative distribution function of X, then q ^ X (α) and q ^ X ' (α) should be close. Thus, graphically, the points q ^ X (α),q ^ X ' (α),α=(j-1)/N,1<jN should be close to the diagonal.

The following figure illustrates the principle of a QQ-plot with a sample of size N=50. Note that the unit of the two axes is that of the variable X studied; the quantiles determined via F are called here "value of T". In this example, the points remain close to the diagonal and the hypothesis "F is the cumulative distribution function of X" does not seem irrelevant, even if a more quantitative analysis (see for instance [Kolmogorov-Smirnov goodness-of-fit test] ) should be carried out to confirm this.

In this second example, the candidate distribution function is clearly irrelevant.
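
As an illustration (outside of OpenTURNS), the following sketch builds the QQ-plot points against a candidate distribution, comparing the empirical quantiles with F -1 ((j-1)/N); the data and the candidate Exponential model are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = np.sort(rng.exponential(scale=14.0, size=50))    # hypothetical data, ordered
n = len(sample)

candidate = stats.expon(scale=14.0)                        # hypothetical candidate distribution F

# Pairs (q_hat_X(alpha), F^{-1}(alpha)) for alpha = (j-1)/N, 1 < j <= N.
alphas = np.array([(j - 1) / n for j in range(2, n + 1)])
empirical_quantiles = sample[np.floor(n * alphas).astype(int)]   # x_([N*alpha]+1)
candidate_quantiles = candidate.ppf(alphas)                      # F^{-1}((j-1)/N)
# If F is the true CDF, the pairs should lie close to the diagonal.
```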

Henry's line

This second graphical tool is only relevant if the candidate distribution function being tested is gaussian. It also uses the ordered sample x (1) ,...,x (N) introduced for the QQ-plot, and the empirical cumulative distribution function F ^ N presented in [empirical cumulative distribution function] .

By definition,

x (j) =F ^ N -1 j N

Then, let us denote by Φ the cumulative distribution function of a Normal distribution with mean 0 and standard deviation 1. The quantity t (j) is defined as follows:

t (j) =Φ -1 j N

If X is distributed according to a normal probability distribution with mean μ and standard-deviation σ, then the points x (j) ,t (j) ,1jN should be close to the line defined by t=(x-μ)/σ. This comes from a property of the normal distribution: if the distribution of X is really 𝒩(μ,σ), then the distribution of (X-μ)/σ is 𝒩(0,1).

The following figure illustrates the principle of Henry's graphical test with a sample of size N=50. Note that only the unit of the horizontal axis is that of the variable X studied. In this example, the points remain close to a line and the hypothesis "the distribution function of X is a gaussian one" does not seem irrelevant, even if a more quantitative analysis (see for instance [Kolmogorov-Smirnov goodness-of-fit test] ) should be carried out to confirm this.

In this second example, the hypothesis of a gaussian distribution seems far less relevant because of the behaviour for small values of X.
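
As an illustration (outside of OpenTURNS), Henry's line can be drawn from the points (x (j) ,t (j) ) with t (j) =Φ -1 (j/N); the sample is hypothetical and SciPy's norm.ppf plays the role of Φ -1 .

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sample = np.sort(rng.normal(loc=5.0, scale=2.0, size=50))   # hypothetical data, ordered
n = len(sample)

# t_(j) = Phi^{-1}(j / N); the point j = N is dropped since Phi^{-1}(1) is infinite.
j = np.arange(1, n)
t = stats.norm.ppf(j / n)
x = sample[:-1]

# Under the normal hypothesis the points align on t = (x - mu) / sigma;
# a least-squares fit of t against x recovers 1/sigma (slope) and -mu/sigma (intercept).
slope, intercept = np.polyfit(x, t, deg=1)
mu_est, sigma_est = -intercept / slope, 1.0 / slope
```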

Kendall plot

In the bivariate case, the Kendall Plot test enables one to validate the choice of a specific copula model or to verify that two samples share the same copula model.

Let X ̲ be a bivariate random vector whose copula is denoted C.

Let (X ̲ i ) 1iN be a sample of X ̲.

We note :

\forall i \in [1, N] , \quad H_i = \frac{1}{N-1} \, \mathrm{Card}\left\{ j \in [1, N], \; j \neq i \;\middle|\; x_1^j \leq x_1^i \text{ and } x_2^j \leq x_2^i \right\}

and (H (1) ,,H (N) ) the ordered statistics of (H 1 ,,H N ).

The statistic W i is defined by :

W_i = N \binom{N-1}{i-1} \int_0^1 t \, K_0(t)^{i-1} \left( 1 - K_0(t) \right)^{N-i} \, dK_0(t) \qquad (103)

where K 0 (t) is the cumulative distribution function of H i . One can show that this is the cumulative distribution function of the random variable C(U,V) where U and V are independent and follow the Uniform(0,1) distribution.

In OpenTURNS 0.15.0, Eq. (103) is evaluated with the Monte Carlo sampling method: OpenTURNS generates n samples of size N from the bivariate copula C, in order to obtain n realisations of the statistics H (i) ,1iN, and to estimate W i =E[H (i) ],iN.

When testing a specific copula with respect to a sample, the Kendall Plot test draws the points (W i ,H (i) ) 1iN . If the points lie on the first diagonal, the copula model is validated.

When testing whether two samples have the same copula, the Kendall Plot test draws the points (H (i) 1 ,H (i) 2 ) 1iN respectively associated to the first and second sample. Note that the two samples must have the same size.
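
As an illustration (not the OpenTURNS implementation), the ordered statistics H (i) can be computed directly from their definition; the reference values W i would then be estimated by repeating the computation on samples simulated from the candidate copula. The bivariate data below are hypothetical.

```python
import numpy as np

def kendall_statistics(sample):
    # H_i = Card{ j != i : x1_j <= x1_i and x2_j <= x2_i } / (N - 1), returned in increasing order.
    x = np.asarray(sample)
    n = len(x)
    h = np.empty(n)
    for i in range(n):
        below = (x[:, 0] <= x[i, 0]) & (x[:, 1] <= x[i, 1])
        h[i] = (below.sum() - 1) / (n - 1)     # remove the point itself (j != i)
    return np.sort(h)

rng = np.random.default_rng(7)
data = rng.uniform(size=(100, 2))              # hypothetical bivariate sample
h_ordered = kendall_statistics(data)
# Plotting h_ordered against the W_i estimated under the candidate copula gives the Kendall plot;
# points close to the first diagonal support the candidate copula.
```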

In Figures 52 and 53, data 1 and data 2 have been generated from a Frank(1.5) copula, and data 3 from a Gumbel(4.5) copula.

The two graphs of Figure 52 respectively validate and invalidate the Frank copula model for data 1 and data 2.

The two graphs of Figure 53 respectively validate that data 1 and data 2 share the same copula, and show that data 1 and data 3 do not share the same copula.

The Kendall Plot test validates the use of the Frank copula model for the data 1.

The Kendall Plot test invalidates the use of the Frank copula model for the data 1.

Figure 52

The Kendall Plot test validates that data 1 and data 2 have the same copula model.

The Kendall Plot test invalidates that data 1 and data 3 have the same copula model.

Figure 53

Remark: in the case where you want to test a sample with respect to a specific copula, if the size of the sample is greater than 500, we recommend using the second form of the Kendall plot test: generate a sample of the same size from the candidate copula and then test both samples. This approach is more efficient.

Other notations

-

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty", to verify if the probability distribution is appropriate to describe the uncertainty of a component X i of the vector of unknown variables defined in step A "Specifying Criteria and the Case Study". The Kendall Plot is used to validate a copula model.
References and theoretical basics
Since the QQ-plot and Henry's line are graphical analyses, their conclusions remain obviously subjective. The reader is referred to [Kolmogorov-Smirnov test] , [Cramer-Von-Mises test] , [Anderson-Darling test] for a more quantitative analysis.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon W.J. & Massey F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill


3.3.13 Step B  – Chi-squared goodness of fit test

Mathematical description

Goal

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to verify the compatibility between a sample of data x ̲ 1 ,x ̲ 2 ,...,x ̲ N and a candidate probability distribution previously chosen. OpenTURNS enables the use of the χ 2 Goodness-of-Fit test to answer this question in the one dimensional case n X =1, and with a discrete distribution.

Principle

Let us limit the case to n X =1. Thus we denote X ̲=X 1 =X. We also note that, as we are considering discrete distributions (i.e. those for which the possible values of X belong to a discrete set \mathcal{E}), the candidate distribution is characterised by the probabilities \left\{ p(x;\underline{\theta}) \right\}_{x \in \mathcal{E}}.

The chi-squared test is based on the fact that if the candidate distribution is appropriate, the number of values in the sample x 1 ,x 2 ,...,x N that are equal to x should be on average equal to Np(x;θ ̲). The idea is therefore to compare the "theoretical values" with the observed values. This comparison is performed with the aid of the following "distance".

\hat{D}_N^2 = \sum_{x \in \mathcal{E}_N} \frac{\left( N\, p(x) - n(x) \right)^2}{n(x)}

where \mathcal{E}_N denotes the set of values which have been observed at least once in the data sample and where n(x) denotes the number of data values in the sample that are equal to x.

The probability distribution of the distance D ^ N 2 is asymptotically known (i.e. as the size of the sample tends to infinity), and this asymptotic distribution does not depend on the candidate distribution being tested. If N is sufficiently large, this means that for a probability α, one can calculate the threshold / critical value d α such that:

  • if D ^ N >d α , we reject the candidate distribution with a risk of error α,

  • if D ^ N d α , the candidate distribution is considered acceptable.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim under which the candidate distribution is rejected. Thus, the candidate distribution will be accepted if and only if α lim is greater than the value α desired by the user. Note that the higher α lim -α, the more robust the decision.
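
As an illustration (outside of OpenTURNS), the distance above and an asymptotic p-value can be computed as follows; the discrete data and the candidate Poisson model are hypothetical, and the degrees of freedom would have to be reduced if the parameters were estimated from the same sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
sample = rng.poisson(lam=3.0, size=200)        # hypothetical discrete data
candidate = stats.poisson(mu=3.0)              # hypothetical candidate distribution
n_total = len(sample)

# Values observed at least once and their counts n(x).
values, counts = np.unique(sample, return_counts=True)

# Distance as written above: sum over observed values of (N p(x) - n(x))^2 / n(x).
expected = n_total * candidate.pmf(values)
d2 = np.sum((expected - counts) ** 2 / counts)

# Asymptotic p-value from the chi-squared distribution
# (degrees of freedom: number of observed values minus 1).
p_value = stats.chi2.sf(d2, df=len(values) - 1)
```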

Other notations

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty", to verify if the probability distribution is appropriate to describe the uncertainty of a component X i of the vector of unknown variables defined in step A "Specifying Criteria and the Case Study".

Input data:

x 1 ,...,x N : data sample

Distribution: probability distribution that we are testing for goodness-of-fit

Parameters:

α : Level of significance for the test

Outputs:

Result: Binary variable specifying whether the candidate distribution is rejected (0) or not (1)

α lim : p-value of the test

References and theoretical basics

The test is suitable for discrete distributions. It cannot be used for continuous distributions except by means of an arbitrary discretisation of possible values of X, an important source of potential error. Readers interested in Goodness of Fit tests for continuous variables are referred to [Kolmogorov-Smirnov test] , [Cramer-Von Mises test] , [Anderson-Darling test] in the reference documentation.

Even for discrete distributions, certain precautions must be taken when using this test. Firstly, the critical value d α is only valid for a sufficiently large sample size. No rule exists to determine the minimum number of data values necessary in order to use this test; it is often thought, however, that the approximation is reasonable when N is of the order of a few dozen. But whatever the value of N, the distance – and similarly the p-value – remains a useful tool for comparing different probability distributions to a sample. The distribution which minimizes D ^ N – or maximizes the p-value – will be of interest to the analyst.

On the other hand, the calculation of d α and of the p-value should in theory be modified if we are testing the goodness of fit of a parametric model and if the parameters of the candidate distribution have been estimated from the same sample. The current version of OpenTURNS, however, does not permit such a modification, and so the results must be used with care when the p-value α lim and the desired error risk α are very close.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • D'Agostino, R.B. and Stephens, M.A. (1986). "Goodness-of-Fit Techniques", Marcel Dekker, Inc., New York.

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.14 Step B  – Kolmogorov-Smirnov goodness-of-fit test

Mathematical description

Goal

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to verify the compatibility between a sample of data x ̲ 1 ,x ̲ 2 ,...,x ̲ N and a candidate probability distribution previously chosen. OpenTURNS enables the use of the Kolmogorov-Smirnov Goodness-of-Fit test to answer this question in the one dimensional case n X =1, and with a continuous distribution.

Principle

Let us limit the case to n X =1. Thus we denote X ̲=X 1 =X. This goodness-of-fit test is based on the maximum distance between the cumulative distribution function F ^ N of the sample x 1 ,x 2 ,...,x N (see [empirical cumulative distribution function] ) and that of the candidate distribution, denoted F. This distance may be expressed as follows:

D = \sup_x \left| \hat{F}_N(x) - F(x) \right|

With the ordered sample x (1) ,x (2) ,...,x (N) , the distance is estimated by:

\hat{D}_N = \max_{i=1,\dots,N} \; \max\left( F(x_{(i)}) - \frac{i-1}{N} , \; \frac{i}{N} - F(x_{(i)}) \right)

The probability distribution of the distance D ^ N is asymptotically known (i.e. as the size of the sample tends to infinity). If N is sufficiently large, this means that for a probability α and a candidate distribution type, one can calculate the threshold / critical value d α such that:

  • if D ^ N >d α , we reject the candidate distribution with a risk of error α,

  • if D ^ N d α , the candidate distribution is considered acceptable.

Note that d α does not depend on the candidate distribution F being tested, and the test is therefore relevant for any continuous distribution.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim under which the candidate distribution is rejected. Thus, the candidate distribution will be accepted if and only if α lim is greater than the value α desired by the user. Note that the higher α lim -α, the more robust the decision.

The diagram below illustrates the principle of comparison with the empirical cumulative distribution function for an ordered sample 5,6,10,22,27; the candidate distribution considered here is the Exponential distribution with parameters λ=0.07, γ=0 (see [standard parametric models] ).
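
As an illustration (outside of OpenTURNS), the sketch below computes D ^ N for the ordered sample above against the Exponential candidate, and obtains the same statistic together with the p-value from SciPy's kstest.

```python
import numpy as np
from scipy import stats

sample = np.array([5.0, 6.0, 10.0, 22.0, 27.0])       # ordered sample from the illustration
candidate_cdf = stats.expon(scale=1.0 / 0.07).cdf      # Exponential with lambda = 0.07, gamma = 0

n = len(sample)
f = candidate_cdf(np.sort(sample))
i = np.arange(1, n + 1)

# D_hat_N: maximum deviation between the empirical CDF and the candidate CDF.
d_n = np.max(np.maximum(f - (i - 1) / n, i / n - f))

# Same statistic and the p-value (alpha_lim) via SciPy.
statistic, p_value = stats.kstest(sample, candidate_cdf)
```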

Other notations
This method is also referred to in the literature as Kolmogorov's Test.

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty", to verify if the probability distribution is appropriate to describe the uncertainty of a component X i of the vector of unknown variables defined in step A "Specifying Criteria and the Case Study".

Input data:

x 1 ,...,x N : data sample

Distribution: probability distribution that we are testing for goodness-of-fit

Parameters:

α : Level of significance for the test

Outputs:

Result: Binary variable specifying whether the candidate distribution is rejected (0) or not (1)

α lim : p-value of the test

References and theoretical basics
The test deals with the maximum deviation between the empirical distribution and the candidate distribution; it is by nature highly sensitive to the presence of local deviations (a candidate distribution may be rejected even if it correctly describes the sample over almost the whole domain of variation).

We remind the reader that the underlying theoretical results of the test are asymptotic. There is no rule to determine the minimum number of data values one needs to use this test; but it is often considered a reasonable approximation when N is of the order of a few dozen. But whatever the value of N, the distance – and similarly the p-value – remains a useful tool for comparing different probability distributions to a sample. The distribution which minimizes D ^ N – or maximizes the p-value – will be of interest to the analyst.

We also point out that the calculation of d α should in theory be modified if one is testing the goodness-of-fit to a parametric model whose parameters have been estimated from the same sample. The current version of OpenTURNS does not allow this modification, and the results should therefore be used with caution when the p-value α lim and the desired error risk α are very close.

Readers interested in Goodness of Fit tests for continuous distributions are referred to [Cramer-Von Mises test] and [Anderson-Darling test] in the reference documentation.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/

  • D'Agostino, R.B. and Stephens, M.A. (1986). "Goodness-of-Fit Techniques", Marcel Dekker, Inc., New York.

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.15 Step B  – Cramer-Von Mises goodness-of-fit test

Mathematical description

Objective

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to verify the compatibility between a sample of data x ̲ 1 ,x ̲ 2 ,...,x ̲ N and a candidate probability distribution previously chosen. OpenTURNS enables the use of the Cramer-von-Mises Goodness-of-Fit test to answer this question in the one dimensional case n X =1, and with a continuous distribution. The current version is limited to the case of the Normal distribution.

Principle

Let us limit the case to n X =1. Thus we denote X ̲=X 1 =X. This goodness-of-fit test is based on the distance between the cumulative distribution function F ^ N of the sample x 1 ,x 2 ,...,x N (see [empirical cumulative distribution function] ) and that of the candidate distribution, denoted F. This distance is no longer the maximum deviation as in the [Kolmogorov-Smirnov test] but the distance squared and integrated over the entire variation domain of the distribution:

D = \int_{-\infty}^{+\infty} \left[ F(x) - \hat{F}_N(x) \right]^2 dF(x)

With the ordered sample x (1) ,x (2) ,...,x (N) , the distance is estimated by:

\hat{D}_N = \frac{1}{12 N} + \sum_{i=1}^{N} \left[ \frac{2i-1}{2N} - F(x_{(i)}) \right]^2

The probability distribution of the distance D ^ N is asymptotically known (i.e. as the size of the sample tends to infinity). If N is sufficiently large, this means that for a probability α and a candidate distribution type, one can calculate the threshold / critical value d α such that:

  • if D ^ N >d α , we reject the candidate distribution with a risk of error α,

  • if D ^ N d α , the candidate distribution is considered acceptable.

Note that d α depends on the candidate distribution F being tested; the current version of OpenTURNS is limited to the case of the Normal distribution.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim under which the candidate distribution is rejected. Thus, the candidate distribution will be accepted if and only if α lim is greater than the value α desired by the user. Note that the higher α lim -α, the more robust the decision.
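
As an illustration (outside of OpenTURNS), the Cramer-von Mises distance can be evaluated directly from the formula above; the sample and the Normal candidate are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
sample = rng.normal(loc=0.0, scale=1.0, size=40)   # hypothetical data
candidate = stats.norm(loc=0.0, scale=1.0)         # hypothetical Normal candidate

n = len(sample)
f = candidate.cdf(np.sort(sample))                 # F(x_(i)) on the ordered sample
i = np.arange(1, n + 1)

# Cramer-von Mises distance as written above.
d_n = 1.0 / (12.0 * n) + np.sum(((2.0 * i - 1.0) / (2.0 * n) - f) ** 2)
```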

Other notations

-

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty", to verify if the probability distribution is appropriate to describe the uncertainty of a component X i of the vector of unknown variables defined in step A "Specifying Criteria and the Case Study".

Input data:

x 1 ,...,x N : data sample

Distribution: normal probability distribution that we are testing for goodness-of-fit

Parameters:

α : Level of significance for the test

Outputs:

Result: Binary variable specifying whether the candidate distribution is rejected (0) or not (1)

α lim : p-value of the test

References and theoretical basics
The test concerns the deviation squared and integrated over the entire variation domain; it often appears to be more robust than the Kolmogorov-Smirnov test.

We remind the reader that the underlying theoretical results of the test are asymptotic. There is no rule to determine the minimum number of data values one needs to use this test; but it is often considered a reasonable approximation when N is of an order of a few dozen. But whatever the value of N, the distance – and similarly the p-value – remains a useful tool for comparing different probability distributions to a sample. The distribution which minimizes D ^ N – or maximizes the p-value – will be of interest to the analyst.

We also point out that the calculation of d α should in theory be modified if one is testing the goodness-of-fit to a parametric model whose parameters have been estimated from the same sample. The current version of OpenTURNS does not allow this modification, and the results should therefore be used with caution when the p-value α lim and the desired error risk α are very close.

Readers interested in Goodness of Fit tests for continuous distributions are referred to [Kolmogorov-Smirnov test] and [Anderson-Darling test] in the reference documentation.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • D'Agostino, R.B. and Stephens, M.A. (1986). "Goodness-of-Fit Techniques", Marcel Dekker, Inc., New York.

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.16 Step B  – Anderson-Darling goodness-of-fit test

Mathematical description

Objective

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to verify the compatibility between a sample of data x ̲ 1 ,x ̲ 2 ,...,x ̲ N and a candidate probability distribution previously chosen. OpenTURNS enables the use of the Anderson-Darling Goodness-of-Fit test to answer this question in the one dimensional case n X =1, and with a continuous distribution. The current version is limited to the case of the Normal distribution.

Principle

Let us limit the case to n X =1. Thus we denote X ̲=X 1 =X. This goodness-of-fit test is based on the distance between the cumulative distribution function F ^ N of the sample x 1 ,x 2 ,...,x N (see [empirical cumulative distribution function] ) and that of the candidate distribution, denoted F. This distance is of a quadratic type, as in the [Cramer-Von Mises test] , but gives more weight to deviations of extreme values:

D = \int_{-\infty}^{+\infty} \frac{\left[ F(x) - \hat{F}_N(x) \right]^2}{F(x) \left( 1 - F(x) \right)} \, dF(x)

With a sample x 1 ,x 2 ,...,x N , the distance is estimated by:

\hat{D}_N = -N - \sum_{i=1}^{N} \frac{2i-1}{N} \left[ \ln F(x_{(i)}) + \ln\left( 1 - F(x_{(N+1-i)}) \right) \right]

where x (1) ,...,x (N) describes the sample placed in increasing order.

The probability distribution of the distance D ^ N is asymptotically known (i.e. as the size of the sample tends to infinity). If N is sufficiently large, this means that for a probability α and a candidate distribution type, one can calculate the threshold / critical value d α such that:

  • if D ^ N >d α , we reject the candidate distribution with a risk of error α,

  • if D ^ N d α , the candidate distribution is considered acceptable.

Note that d α depends on the candidate distribution F being tested; the current version of OpenTURNS is limited to the case of the Normal distribution.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim under which the candidate distribution is rejected. Thus, the candidate distribution will be accepted if and only if α lim is greater than the value α desired by the user. Note that the higher α lim -α, the more robust the decision.
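
As an illustration (outside of OpenTURNS), the Anderson-Darling distance can be evaluated from the formula above; the sample and the Normal candidate are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
sample = np.sort(rng.normal(loc=0.0, scale=1.0, size=40))   # hypothetical data, ordered
candidate = stats.norm(loc=0.0, scale=1.0)                   # hypothetical Normal candidate

n = len(sample)
i = np.arange(1, n + 1)
f = candidate.cdf(sample)                                    # F(x_(i))

# Anderson-Darling distance; f[::-1] gives F(x_(N+1-i)).
d_n = -n - np.sum((2.0 * i - 1.0) / n * (np.log(f) + np.log(1.0 - f[::-1])))
```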

Other notations
-

Link with OpenTURNS methodology

This method is used in step B "Quantifying Sources of Uncertainty", to verify if the probability distribution is appropriate to describe the uncertainty of a component X i of the vector of unknown variables defined in step A "Specifying Criteria and the Case Study".

Input data:

x 1 ,...,x N : data sample

Distribution: normal probability distribution that we are testing for goodness-of-fit

Parameters:

α : Level of significance for the test

Outputs:

D ^ N : Distance between theoretical and empirical values

d α : Threshold / Critical value which if exceeded the tested probability is rejected

Result: Binary variable specifying whether the candidate distribution is rejected or not

References and theoretical basics
The Anderson-Darling test is theoretically designed to be more sensitive to the quality of fit in the tails of the distribution. A user interested in the extreme values of the source of uncertainty being studied will find this particularly interesting but we stress that both tails of the distribution, upper and lower, will influence the test results.

We remind the reader that the underlying theoretical results of the test are asymptotic. There is no rule to determine the minimum number of data values one needs to use this test; but it is often considered a reasonable approximation when N is of an order of a few dozen. But whatever the value of N, the distance – and similarly the p-value – remains a useful tool for comparing different probability distributions to a sample. The distribution which minimizes D ^ N – or maximizes the p-value – will be of interest to the analyst.

We also point out that the calculation of d α should in theory be modified if one is testing the goodness-of-fit to a parametric model whose parameters have been estimated from the same sample. The current version of OpenTURNS does not allow this modification, and the results should therefore be used with caution when the p-value α lim and the desired error risk α are very close.

Readers interested in Goodness of Fit tests for continuous distributions are referred to [Kolmogorov-Smirnov test] and [Cramer-von-Mises test] in the reference documentation.

The following bibliographical references provide main starting points for further study of this method:

  • NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/

  • D'Agostino, R.B. and Stephens, M.A. (1986). "Goodness-of-Fit Techniques", Marcel Dekker, Inc., New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.17 Step B  – Bayesian Information Criterion (BIC)

Mathematical description

Goal

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to rank candidate probability distributions by using a sample of data x ̲ 1 ,x ̲ 2 ,...,x ̲ N . OpenTURNS enables the use of the Bayesian Information Criterion (BIC) to answer this question in the one dimensional case n X =1.

Principle

Let us limit the case to n X =1. Thus we denote X ̲=X 1 =X. Moreover, let us denote by 1 ,..., K the parametric models envisaged by the user among the [standard parametric models] . We suppose here that the parameters of these models have been estimated previously by the [maximum likelihood method] on the basis of the sample x ̲ 1 ,x ̲ 2 ,...,x ̲ n . We denote by L i the maximized likelihood for the model i .

By definition of the likelihood, the higher L i , the better the model describes the sample. However, using the likelihood as a criterion to rank the candidate probability distributions would involve a risk: one would almost always favour complex models involving many parameters. While such models indeed provide a large number of degrees of freedom that can be used to fit the sample, one has to keep in mind that complex models may be less robust than simpler models with fewer parameters. Actually, the limited available information (N data points) does not allow too many parameters to be estimated robustly.

The BIC criterion can be used to avoid this problem. The principle is to rank ℳ 1 ,...,ℳ K according to the following quantity:

\mathrm{BIC}_i = \log L_i - \frac{p_i}{2} \log N

where p i denotes the number of parameters adjusted for the model ℳ i . The larger BIC i , the better the model. Note that the idea is to introduce a penalization term that increases with the number of parameters to be estimated: a complex model obtains a good score only if the gain in terms of likelihood is high enough to justify the number of parameters used.
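
As an illustration, the following sketch (plain Python with NumPy and SciPy rather than the OpenTURNS library; the sample, the candidate models and their parameter counts are hypothetical) fits each candidate by maximum likelihood and ranks the models by the quantity defined above, the largest value being the best:

  # Sketch: ranking candidate parametric models with BIC_i = log L_i - (p_i / 2) log N
  # (the convention used above, where larger is better). Data are simulated for the example.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  sample = rng.normal(loc=10.0, scale=2.0, size=200)     # hypothetical sample of size N
  N = len(sample)

  candidates = {                                          # model -> (scipy distribution, p_i)
      "Normal": (stats.norm, 2),
      "Gumbel": (stats.gumbel_r, 2),
      "Weibull": (stats.weibull_min, 3),
  }

  scores = {}
  for name, (dist, p) in candidates.items():
      params = dist.fit(sample)                           # maximum likelihood estimation
      log_likelihood = np.sum(dist.logpdf(sample, *params))
      scores[name] = log_likelihood - 0.5 * p * np.log(N)

  for name, bic in sorted(scores.items(), key=lambda kv: -kv[1]):
      print(f"{name}: BIC = {bic:.2f}")                   # highest BIC ranked first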

The term "Bayesian Information Criterion" comes the interpretation of the quantity BIC i . In a bayesian context, the unknow "true" model may be seen as a random variable. Suppose now that the user does not have any informative prior information on which model is more relevant among 1 ,..., K ; all the models are thus equally likely from the point of view of the user. Then, one can show that BIC i is an approximation of the posterior distribution's logarithm for the model i .

Other notations
Link with OpenTURNS methodology
This method is used in step B "Quantifying Sources of Uncertainty", to verify if the probability distribution is appropriate to describe the uncertainty of a component X i of the vector of unknown variables defined in step A "Specifying Criteria and the Case Study".
References and theoretical basics
Compared to other criteria proposed in the literature for model selection and based on the same idea of penalization (such as the AIC criterion), the BIC criterion tends to favour models with a small number of parameters. Moreover, note that the underlying hypothesis is that the user does not have any significant prior information on which model is more relevant; if such prior information is available (for instance via literature or expert judgement), the BIC criterion becomes less relevant.

Readers interested in other ways to rank candidate models are referred to [Kolmogorov-Smirnov test] , [Cramer-von Mises test] and [Anderson-Darling test] in the reference documentation.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • D'Agostino, R.B. and Stephens, M.A. (1986). "Goodness-of-Fit Techniques", Marcel Dekker, Inc., New York.

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Burnham, K.P., and Anderson, D.R (2002). "Model Selection and Multimodel Inference: A Practical Information Theoretic Approach", Springer



3.3.18 Step B  – Pearson Correlation Coefficient

Mathematical description

Goal

This method deals with the parametric modelling of a probability distribution for a random vector X ̲=X 1 ,...,X n X . It aims to measure a type of dependence (here a linear correlation) which may exist between two components X i and X j .

Principle

The Pearson's correlation coefficient ρ U,V aims to measure the strength of a linear relationship between two random variables U and V. It is defined as follows:

\rho_{U,V} = \frac{\mathrm{Cov}(U,V)}{\sigma_U \, \sigma_V}

where \mathrm{Cov}(U,V) = \mathbb{E}\left[ (U - m_U)(V - m_V) \right], m_U = \mathbb{E}[U], m_V = \mathbb{E}[V], \sigma_U = \sqrt{\mathrm{Var}(U)} and \sigma_V = \sqrt{\mathrm{Var}(V)}. If we have a sample made up of a set of N pairs (u_1, v_1), (u_2, v_2), \ldots, (u_N, v_N), Pearson's correlation coefficient can be estimated using the formula:

\hat{\rho}_{U,V} = \frac{\sum_{i=1}^{N} (u_i - \bar{u})(v_i - \bar{v})}{\sqrt{\sum_{i=1}^{N} (u_i - \bar{u})^2} \, \sqrt{\sum_{i=1}^{N} (v_i - \bar{v})^2}}

where u ¯ and v ¯ represent the empirical means of the samples (u 1 ,...,u N ) and (v 1 ,...,v N ).
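
As a minimal numerical sketch (plain Python/NumPy with hypothetical data, rather than the OpenTURNS library), the estimator above can be computed directly and cross-checked against NumPy's built-in estimate:

  # Sketch: estimating Pearson's correlation coefficient from N pairs (u_i, v_i)
  import numpy as np

  rng = np.random.default_rng(1)
  u = rng.normal(size=500)
  v = 0.6 * u + rng.normal(scale=0.8, size=500)           # linearly related by construction

  u_c, v_c = u - u.mean(), v - v.mean()                   # centred samples
  rho_hat = np.sum(u_c * v_c) / np.sqrt(np.sum(u_c**2) * np.sum(v_c**2))

  print(rho_hat)
  print(np.corrcoef(u, v)[0, 1])                          # same value, as a cross-check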

Pearson's correlation coefficient takes values between -1 and 1. The closer its absolute value is to 1, the stronger the indication that a linear relationship exists between variables U and V. The sign of Pearson's coefficient indicates whether the two variables increase or decrease in the same direction (positive coefficient) or in opposite directions (negative coefficient). We note that a correlation coefficient equal to 0 does not necessarily imply the independence of variables U and V: this property is theoretically guaranteed only if the pair (U,V) follows a bivariate Normal distribution. In all other cases, there are two possible situations in the event of a zero Pearson's correlation coefficient:

  • the variables U and V are in fact independent,

  • or a non-linear relationship exists between U and V.

Other notations

The estimate ρ ^ of Pearson's correlation coefficient is sometimes denoted by r.

Link with OpenTURNS methodology

Pearson's correlation coefficient can be used in step B "Quantifying Sources of Uncertainty". Having defined the vector X ̲ of input variables in step A "Specifying Criteria and the Case Study", [Pearson's Independence Test] shows how to test for the existence of a linear type of dependency between two components X i and X j . Such a relationship should in fact be taken into account so as not to falsify the results of step C "Propagation of Uncertainty".

Pearson's correlation coefficient is also used in step C' "Sensitivity Analysis and Ranking of Sources of Uncertainty". If a propagation of uncertainty with Monte-Carlo simulation (step C, [Mean and Variance Estimation using Standard Monte Carlo] ) has been carried out, [Pearson's Ranking] shows the user how to class the components of the input vector X ̲ according to their impact on the uncertainty of a final variable / output variable defined in step A.

References and theoretical basics

Regardless of the method used in step B or step C', we recall that Pearson's coefficient is only useful for measuring a linear relationship between two variables. Readers are referred to the following references:
  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.


3.3.19 Step B  – Pearson's correlation test

Mathematical description

Goal

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to find a type of dependency (here a linear correlation) which may exist between two components X i and X j .

Principle

The Pearson's correlation coefficient ρ U,V , defined in [Pearson's Coefficient] , measures the strength of a linear relationship between two random variables U and V. If we have a sample made up of N pairs (u 1 ,v 1 ),(u 2 ,v 2 ),...,(u N ,v N ), we denote by ρ ^ U,V the estimated coefficient.

Even in the case where two variables U and V have a Pearson's coefficient ρ U,V equal to zero, the estimate ρ ^ U,V obtained from the sample may be non-zero: the limited sample size does not provide a perfect image of the real correlation. Pearson's test nevertheless enables one to determine whether the value obtained for ρ ^ U,V is significantly different from zero. More precisely, the user first chooses a probability α. From this value the critical value d α is calculated such that:

  • if |ρ ^ U,V |>d α , one can conclude that the real Pearson's correlation coefficient ρ U,V is not zero; the risk of error in making this assertion is controlled and equal to α;

  • if |ρ ^ U,V |≤d α , there is insufficient evidence to reject the null hypothesis ρ U,V =0.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim above which the null-correlation hypothesis would be rejected. Thus, Pearson's coefficient is considered to be significantly non-zero if and only if α lim is smaller than the error risk α chosen by the user; conversely, the larger α lim is compared with α, the more robust the decision not to reject the hypothesis ρ U,V =0.
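
As an illustration of the decision rule (a sketch using SciPy rather than the OpenTURNS library; the data and the 5% error risk are hypothetical), the test can be carried out through the p-value, compared here with α in the usual way:

  # Sketch: testing H0: rho_{U,V} = 0 with Pearson's test (reject H0 when p-value < alpha)
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(2)
  x_i = rng.normal(size=80)
  x_j = 0.3 * x_i + rng.normal(size=80)                   # weak linear dependency

  alpha = 0.05
  rho_hat, p_value = stats.pearsonr(x_i, x_j)
  print(rho_hat, p_value)
  print("correlation significantly non-zero:", p_value < alpha)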

Other notations

-

Link with OpenTURNS methodology

The Pearson's test is used in step B "Quantifying Sources of Uncertainty". It enables us to verify if a linear type of dependency exists between the two components X i and X j of the input variable vector X ̲ defined in step A "Specifying Criteria and the Case Study". Such a relationship should in fact be taken into account to avoid distortion of results in step C "Propagation of Uncertainty".

Input data:

Two samples x 1 i ,...,x N i and x 1 j ,...,x N j of variables X i and X j , each pair x k i ,x k j corresponding to a simultaneous sampling of the two variables

Parameters:

a probability α taking values strictly between 0 and 1, defining the risk of permissible decision error (significance level)

Outputs:

Result: Binary variable specifying whether the hypothesis of a correlation coefficient equal to 0 is rejected (0) or not (1)

α lim : p-value of the test

References and theoretical basics
Certain precautions should be taken when interpreting the Pearson's test results.
  • The underlying theory of the Pearson test assumes that the variables X i and X j are both normally distributed. In all other cases, the decision produced by the test is only valid if the sample size N is sufficiently large (in practice, N of the order of a few dozen at least, even though no theoretical result enables us to prove that asymptotic behaviour has been reached).

  • Still considering the case of distributions other than the Normal distribution, whatever the value of N, we recall that ρ X i ,X j =0 does not enable us to conclude that X i and X j are independent (see [Pearson's Correlation Coefficient] ).

  • More generally, the numerical value of Pearson's correlation coefficient can only be interpreted when the two variables studied X i and X j are related in a linear way; the scatter plot of points (x 1 i ,x 1 j ),...,(x N i ,x N j ) provides some indication concerning the validity of this hypothesis.

The following pages describe methods which enable us to test the hypothesis of the Normal distribution using the available sample x 1 i ,...,x N i and x 1 j ,...,x N j : [Kolmogorov-Smirnov Goodness of Fit Test] , [Cramer-von Mises Goodness of Fit Test] , [Anderson-Darling Goodness of Fit Test] .

Out of Pearson's test validity domain (i.e. linear relationship, Normal distributions), [Spearman's test] provides some answers.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.


3.3.20 Step B  – Spearman correlation coefficient

Mathematical description

Goal

This method deals with the parametric modelling of a probability distribution for a random vector X ̲=X 1 ,...,X n X . It aims to measure a type of dependence (here a monotonic correlation) which may exist between two components X i and X j .

Principle

The Spearman's correlation coefficient ρ U,V S aims to measure the strength of a monotonic relationship between two random variables U and V. It is in fact equivalent to the Pearson's correlation coefficient after having transformed U and V to linearize any monotonic relationship (remember that Pearson's correlation coefficient may only be used to measure the strength of linear relationships, see [Pearson's Correlation Coefficient] ):

ρ U,V S =ρ F U (U),F V (V)

where F U and F V denote the cumulative distribution functions of U and V.

If we have a sample made up of N pairs (u 1 ,v 1 ),(u 2 ,v 2 ),...,(u N ,v N ), the estimation of Spearman's correlation coefficient first requires a ranking to produce the two rank samples (u [1] ,...,u [N] ) and (v [1] ,...,v [N] ). The rank u [i] of the observation u i is defined as the position of u i in the sample reordered in ascending order: if u i is the smallest value in the sample (u 1 ,...,u N ), its rank equals 1; if u i is the second smallest value in the sample, its rank equals 2, and so forth. The ranking transformation is thus a procedure that takes the sample (u 1 ,...,u N ) as input data and produces the sample (u [1] ,...,u [N] ) as output.

For example, let us consider the sample (u 1 ,u 2 ,u 3 ,u 4 )=(1.5,0.7,5.1,4.3). We then have (u [1] ,u [2] ,u [3] ,u [4] )=(2,1,4,3): u 1 =1.5 is indeed the second smallest value in the original sample, u 2 =0.7 the smallest, etc.

The estimation of Spearman's correlation coefficient is therefore equal to Pearson's coefficient estimated with the aid of the N pairs (u [1] ,v [1] ), (u [2] ,v [2] ), ..., (u [N] ,v [N] ):

\hat{\rho}^S_{U,V} = \frac{\sum_{i=1}^{N} (u_{[i]} - \bar{u}_{[\cdot]})(v_{[i]} - \bar{v}_{[\cdot]})}{\sqrt{\sum_{i=1}^{N} (u_{[i]} - \bar{u}_{[\cdot]})^2} \, \sqrt{\sum_{i=1}^{N} (v_{[i]} - \bar{v}_{[\cdot]})^2}}

where \bar{u}_{[\cdot]} and \bar{v}_{[\cdot]} represent the empirical means of the rank samples (u [1] ,...,u [N] ) and (v [1] ,...,v [N] ).
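
The following sketch (plain Python with NumPy/SciPy and hypothetical data, rather than the OpenTURNS library) applies the two steps described above – ranking, then Pearson's formula on the ranks – and checks the result against SciPy's Spearman estimator:

  # Sketch: Spearman's coefficient = Pearson's coefficient computed on the ranks
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(3)
  u = rng.uniform(size=200)
  v = np.exp(3.0 * u) + rng.normal(scale=0.5, size=200)   # monotonic but non-linear relation

  ru, rv = stats.rankdata(u), stats.rankdata(v)           # ranking transformation
  ru_c, rv_c = ru - ru.mean(), rv - rv.mean()
  rho_s = np.sum(ru_c * rv_c) / np.sqrt(np.sum(ru_c**2) * np.sum(rv_c**2))

  print(rho_s)
  print(stats.spearmanr(u, v)[0])                         # same value, as a cross-check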

The Spearman's correlation coefficient takes values between -1 and 1. The closer its absolute value is to 1, the stronger the indication is that a monotonic relationship exists between variables U and V. The sign of Spearman's coefficient indicates if the two variables increase or decrease in the same direction (positive coefficient) or in opposite directions (negative coefficient). We note that a correlation coefficient equal to 0 does not necessarily imply the independence of variables U and V. There are two possible situations in the event of a zero Spearman's correlation coefficient:

  • the variables U and V are in fact independent,

  • or a non-monotonic relationship exists between U and V.

Other notations

Spearman's coefficient is often referred to as the rank correlation coefficient.

Link with OpenTURNS methodology

Spearman's correlation coefficient can be used in step B "Quantifying Sources of Uncertainty". Having defined the vector X ̲ of input variables in step A "Specifying Criteria and the Case Study", [Spearman's Independence Test] shows how to test for the existence of a monotonic type of dependency between two components X i and X j . Such a relationship should in fact be taken into account so as not to falsify the results of step C "Propagation of Uncertainty".

Spearman's correlation coefficient is also used in step C' "Sensitivity Analysis and Ranking of Sources of Uncertainty". If a propagation of uncertainty with Monte-Carlo simulation (step C, [Mean and Variance Estimation using Standard Monte Carlo] ) has been carried out, [Spearman's Ranking] shows the user how to class the components of the input vector X ̲ according to their impact on the uncertainty of a final variable / output variable defined in step A.

References and theoretical basics
Regardless of the method used in step B or step C', we recall that Spearman's coefficient is only useful for measuring a monotonic relationship between two variables. Readers are referred to the following references:
  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.21 Step B  – Spearman correlation test

Mathematical description

Goal

This method deals with the modelling of a probability distribution of a random vector X ̲=X 1 ,...,X n X . It seeks to find a type of dependency (here a monotonic correlation) which may exist between two components X i and X j .

Principle

The Spearman's correlation coefficient ρ U,V S , defined in [Spearman's Coefficient] , measures the strength of a monotonic relationship between two random variables U and V. If we have a sample made up of N pairs (u 1 ,v 1 ),(u 2 ,v 2 ),...,(u N ,v N ), we denote by ρ ^ U,V S the estimated coefficient.

Even in the case where two variables U and V have a Spearman's coefficient ρ U,V S equal to zero, the estimate ρ ^ U,V S obtained from the sample may be non-zero: the limited sample size does not provide a perfect image of the real correlation. Spearman's test nevertheless enables one to determine whether the value obtained for ρ ^ U,V S is significantly different from zero. More precisely, the user first chooses a probability α. From this value the critical value d α is calculated automatically such that:

  • if |ρ ^ U,V S |>d α , one can conclude that the real Spearman's correlation coefficient ρ U,V S is not zero; the risk of error in making this assertion is controlled and equal to α;

  • if |ρ ^ U,V S |≤d α , there is insufficient evidence to reject the null hypothesis ρ U,V S =0.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim above which the null-correlation hypothesis would be rejected. Thus, Spearman's coefficient is considered to be significantly non-zero if and only if α lim is smaller than the error risk α chosen by the user; conversely, the larger α lim is compared with α, the more robust the decision not to reject the hypothesis ρ U,V S =0.
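
A sketch analogous to the one given for Pearson's test (SciPy rather than the OpenTURNS library; hypothetical data, p-value compared with α in the usual way) reads:

  # Sketch: testing H0: rho^S_{U,V} = 0 with Spearman's test (reject H0 when p-value < alpha)
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(4)
  x_i = rng.uniform(size=80)
  x_j = x_i**3 + rng.normal(scale=0.1, size=80)           # monotonic, non-linear dependency

  alpha = 0.05
  rho_s_hat, p_value = stats.spearmanr(x_i, x_j)
  print(rho_s_hat, p_value)
  print("correlation significantly non-zero:", p_value < alpha)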

Other notations

-

Link with OpenTURNS methodology

The Spearman's test is used in step B "Quantifying Sources of Uncertainty". It enables us to verify whether a monotonic type of dependency exists between the two components X i and X j of the input variable vector X ̲ defined in step A "Specifying Criteria and the Case Study". Such a relationship should in fact be taken into account to avoid distortion of results in step C "Propagation of Uncertainty".

Input data:

Two samples x 1 i ,...,x N i and x 1 j ,...,x N j of variables X i and X j , each pair x k i ,x k j corresponding to a simultaneous sampling of the two variables

Parameters:

a probability α taking values strictly between 0 and 1, defining the risk of permissible decision error (significance level)

Outputs:

Result: Binary variable specifying whether the hypothesis of a correlation coefficient equal to 0 is rejected (0) or not (1)

α lim : p-value of the test

References and theoretical basics
Certain precautions should be taken when interpreting the Spearman's test results.
  • Remember that ρ X i ,X j S =0 does not enable us to conclude that X i and X j are independent (see [Spearman's correlation coefficient] ).

  • More generally, the numerical value of Spearman's correlation coefficient can only be interpreted when the two variables studied X i and X j are related in a monotonic way; the scatter plot of points (x 1 i ,x 1 j ),...,(x N i ,x N j ) provides some indication concerning the validity of this hypothesis.

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.22 Step B  – Chi-squared test for independence

Mathematical description

Goal

This method deals with the parametric modelling of a probability distribution for a random vector X ̲=X 1 ,...,X n X . We seek here to detect possible dependencies that may exist between two components X i and X j . To this end, OpenTURNS offers the χ 2 test of independence for discrete probability distributions.

Principle

As we are considering discrete distributions, the possible values of X i and X j belong respectively to the discrete sets ℰ i and ℰ j . The χ 2 test of independence can be applied when we have a sample consisting of N pairs (x 1 i ,x 1 j ),(x 2 i ,x 2 j ),...,(x N i ,x N j ). We denote:

  • n u,v the number of pairs in the sample such that x k i =u and x k j =v,

  • n u i the number of pairs in the sample such that x k i =u,

  • n v j the number of pairs in the sample such that x k j =v.

The test thus uses the quantity denoted D ^ N 2 :

\hat{D}_N^2 = \sum_{u \in \mathcal{E}_i} \sum_{v \in \mathcal{E}_j} \frac{\left( p_{u,v} - p^i_u \, p^j_v \right)^2}{p^i_u \, p^j_v}

where:

p_{u,v} = \frac{n_{u,v}}{N}, \qquad p^i_u = \frac{n^i_u}{N}, \qquad p^j_v = \frac{n^j_v}{N}

The probability distribution of the distance D ^ N 2 is asymptotically known (i.e. as the size of the sample tends to infinity). If N is sufficiently large, this means that for a probability α, one can calculate the threshold (critical value) d α such that:

  • if D ^ N 2 >d α , we conclude, with the risk of error α, that a dependency exists between X i and X j ,

  • if D ^ N 2 ≤d α , the independence hypothesis is considered acceptable.

An important notion is the so-called "p-value" of the test. This quantity is equal to the limit error probability α lim above which the independence hypothesis would be rejected. Thus, independence is assumed if and only if α lim is greater than the value α desired by the user. Note that the higher α lim -α, the more robust the decision.
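
As a numerical sketch (plain Python with NumPy/SciPy rather than the OpenTURNS library; the two discrete samples are simulated for the example), the statistic defined above can be computed from the observed frequencies and compared with SciPy's chi-squared contingency test, whose statistic equals N times that quantity:

  # Sketch: chi-squared independence statistic for two discrete samples
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(5)
  x_i = rng.integers(0, 3, size=500)                      # values in the discrete set {0, 1, 2}
  x_j = (x_i + rng.integers(0, 2, size=500)) % 3          # dependent on x_i by construction
  N = len(x_i)

  counts = np.zeros((3, 3))                               # contingency counts n_{u,v}
  for u, v in zip(x_i, x_j):
      counts[u, v] += 1

  p_uv = counts / N
  p_u = p_uv.sum(axis=1, keepdims=True)                   # marginal frequencies p_u^i
  p_v = p_uv.sum(axis=0, keepdims=True)                   # marginal frequencies p_v^j
  D2 = np.sum((p_uv - p_u * p_v) ** 2 / (p_u * p_v))

  chi2, p_value, dof, _ = stats.chi2_contingency(counts, correction=False)
  print(D2, chi2 / N)                                     # identical up to rounding
  print("independence rejected:", p_value < 0.05)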

Other notations

This method is also referred to in the literature as the χ 2 test of contingency.

Link with OpenTURNS methodology

The χ 2 independence test is used in step B "Quantifying Sources of Uncertainty". It enables the existence of a dependency between two components X i and X j of the input vector X ̲, defined in step A "Specifying Criteria and the Case Study", to be verified.

Input data:

Two samples x 1 i ,...,x N i and x 1 j ,...,x N j of variables X i and X j , each pair x k i ,x k j corresponding to a simultaneous sampling of the two variables

Parameters:

a probability α taking values strictly between 0 and 1, defining the risk of permissible decision error (significance level)

Outputs:

Result: Binary variable specifying whether the hypothesis of independence is rejected (0) or not (1)

α lim : p-value of the test

References and theoretical basics
The χ 2 test of independence can be applied when the two variables of study are discrete. Its use for continuous distributions is only possible by means of an arbitrary discretisation of the possible values of X i and X j , which is a significant source of potential error.

On the other hand, no hypothesis is made on the form of the relationship between the two tested variables. Readers interested in the detection of dependencies between two continuous variables are referred to [Pearson's Test] and [Spearman's test] in the reference documentation.

The following bibliographical references provide main starting points to further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

  • Sprent, P., and Smeeton, N.C. (2001). "Applied Nonparametric Statistical Methods – Third edition", Chapman & Hall


3.3.23 Step B  – Linear regression

Mathematical description

Goal

This method deals with the parametric modelling of a probability distribution for a random vector X ̲=X 1 ,...,X n X . It aims to measure a type of dependence (here a linear relation) which may exist between a component X i and other uncertain variables X j .

Principle of the method

The principle of the multiple linear regression model is to find the function that links the variable X i to other variables X j 1 ,...,X j K by means of a linear model:

X^i = a_0 + \sum_{j \in \{j_1, \ldots, j_K\}} a_j X^j + \varepsilon

where ε describes a random variable with zero mean and standard deviation σ, independent of the explanatory variables X j . For given values of X j 1 ,...,X j K , the average forecast of X i is denoted by X ^ i and is defined as:

\hat{X}^i = a_0 + \sum_{j \in \{j_1, \ldots, j_K\}} a_j X^j

The estimators of the regression coefficients a ^ 0 ,a ^ 1 ,...,a ^ K and of the standard deviation σ are obtained from a sample of (X i ,X j 1 ,...,X j K ), that is a set of N simultaneous observations (x 1 i ,x 1 j 1 ,...,x 1 j K ),...,(x N i ,x N j 1 ,...,x N j K ). They are determined via the least-squares method:

(\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_K) = \underset{a_0, a_1, \ldots, a_K}{\operatorname{argmin}} \sum_{k=1}^{N} \left( x_k^i - a_0 - \sum_{j \in \{j_1, \ldots, j_K\}} a_j x_k^j \right)^2

In other words, the principle is to minimize the total quadratic distance between the observations x k i and the linear forecast x ^ k i .

Some estimated coefficients a ^ j may be close to zero, which may indicate that the corresponding variable X j does not bring valuable information to forecast X i . OpenTURNS includes a classical statistical test to identify such situations: Fisher's test. For each estimated coefficient, an important characteristic is the associated "p-value" α lim . The coefficient is said to be "significant" if and only if α lim is smaller than a value α chosen by the user (typically 5% or 10%): the lower the p-value, the more significant the coefficient.

Another important characteristic of the adjusted linear model is the coefficient of determination R 2 . This quantity indicates the part of the variance of X i that is explained by the linear model:

R^2 = \frac{\sum_{k=1}^{N} \left( x_k^i - \bar{x}^i \right)^2 - \sum_{k=1}^{N} \left( x_k^i - \hat{x}_k^i \right)^2}{\sum_{k=1}^{N} \left( x_k^i - \bar{x}^i \right)^2}

where \bar{x}^i denotes the empirical mean of the sample (x 1 i ,...,x N i ).

Thus, 0 ≤ R 2 ≤ 1. A value close to 1 indicates a good fit of the linear model, whereas a value close to 0 indicates that the linear model does not provide a relevant forecast. A statistical test makes it possible to detect significant values of R 2 ; again, a p-value is provided, and the lower it is, the more significant the coefficient of determination.
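
The following sketch (plain Python/NumPy rather than the OpenTURNS library; the linear model, its coefficients and the noise level are hypothetical) estimates the regression coefficients by least squares and computes R 2 with the formula above:

  # Sketch: least-squares estimation of (a_0, a_1, a_2) and coefficient of determination R^2
  import numpy as np

  rng = np.random.default_rng(6)
  N = 200
  x_j1 = rng.normal(size=N)
  x_j2 = rng.normal(size=N)
  x_i = 1.0 + 2.0 * x_j1 - 0.5 * x_j2 + rng.normal(scale=0.3, size=N)   # linear model plus noise

  A = np.column_stack([np.ones(N), x_j1, x_j2])           # design matrix with intercept column
  a_hat, _, _, _ = np.linalg.lstsq(A, x_i, rcond=None)    # least-squares estimates

  x_i_hat = A @ a_hat                                     # average forecast
  residuals = x_i - x_i_hat
  ss_total = np.sum((x_i - x_i.mean()) ** 2)
  r_squared = (ss_total - np.sum(residuals ** 2)) / ss_total

  print(a_hat, r_squared)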

By definition, the multiple regression model is only relevant for linear relationships, as in the simple example X 2 =a 0 +a 1 X 1 .

In a second one-dimensional example where the relation has an exponential shape, the linear model is not relevant as such, but a linear approach remains useful on the transformed problem X 2 =a 0 +a 1 exp(X 1 ). In other words, what matters is that the relationship between X i and the variables X j 1 ,...,X j K is linear with respect to the regression coefficients a j .

The value of R 2 is a good indication of the goodness-of-fit of the linear model. However, several other verifications have to be carried out before concluding that the linear model is satisfactory. For instance, one has to pay attention to the "residuals" {u 1 ,...,u N } of the regression:

u_k = x_k^i - \hat{x}_k^i

A residual is thus equal to the difference between the observed value of X i and the average forecast provided by the linear model. A key assumption for the robustness of the model is that the characteristics of the residuals do not depend on the values of X i ,X j 1 ,...,X j K : the mean value should be close to 0 and the standard deviation should be constant. Plotting the residuals against these variables can therefore be fruitful.

In the following example, the behaviour of the residuals is satisfactory: no particular trend can be detected, either in the mean or in the standard deviation.

The next example illustrates a less favourable situation: the mean value of the residuals seems to be close to 0 but the standard deviation tends to increase with X. In such a situation, the linear model should be abandoned, or at least used very cautiously.

Other notations

Link with OpenTURNS methodology

Multiple linear regression can be used in step B "Quantifying Sources of Uncertainty". Having defined the vector X ̲ of input variables in step A "Specifying Criteria and the Case Study", linear regression makes it possible to detect a linear type of dependency between uncertain variables. Such a relationship should in fact be taken into account so as not to bias the results of step C "Propagation of Uncertainty".
References and theoretical basics
As seen in the mathematical description, a substantial list of verifications has to be carried out to validate the linear model. In particular, the underlying assumptions on the residuals are important to ensure the robustness of the average forecast. Detecting non-conforming behaviour of the residuals can also provide leads on transformations that could be carried out before applying linear regression (such as considering the logarithm of a variable instead of the variable itself).

The following bibliographical references provide main starting points for further study of this method:

  • Saporta, G. (1990). "Probabilités, Analyse de données et Statistique", Technip

  • Dixon, W.J. & Massey, F.J. (1983) "Introduction to statistical analysis (4th ed.)", McGraw-Hill

  • NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/

  • Bhattacharyya, G.K., and R.A. Johnson, (1997). "Statistical Concepts and Methods", John Wiley and Sons, New York.

