# 4 OpenTURNS' methods for Step C: uncertainty propagation

This section is organized according to the different uncertainty criteria defined in Step A: the deterministic min-max criterion, the probabilistic criterion on central dispersion (expectation and variance), the probability of exceeding a threshold / failure probability, and the probabilistic criterion based on quantiles. Each method proposed for these criteria is described at the end of the section.

In order to generate random simulations according to a particular distribution, it is first necessary to generate random simulations according to the Uniform(0,1) distribution: OpenTURNS uses the Mersenne Twister algorithm. Several techniques then exist to transform these uniform random simulations into random simulations according to a particular distribution (refer to [Distribution realizations]).

Furthermore, low-discrepancy sequences are commonly used as a replacement for uniformly distributed random numbers. They are also called quasi-random or sub-random sequences: the "quasi" modifier denotes that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables, and in certain applications, such as the quasi-Monte Carlo method, their lower discrepancy is an important advantage (source: Wikipedia).
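As an illustration of the uniform generation step, CPython's `random` module is itself built on the Mersenne Twister (MT19937), so it can be used to sketch the behaviour described here, in particular the reproducibility obtained by fixing the initial state (the seed value 42 is an arbitrary choice):

```python
import random

# CPython's random module implements the Mersenne Twister (MT19937),
# so it can illustrate uniform generation and state restoration.
gen = random.Random(42)          # the seed fixes the initial state
first = [gen.random() for _ in range(3)]

gen.seed(42)                     # restoring the state replays the same stream
replay = [gen.random() for _ in range(3)]

assert first == replay
assert all(0.0 <= u < 1.0 for u in first)
```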

## 4.1 Min-max criterion

The min or max of the output variable of interest can be found using different techniques: [Min-Max Approach] – see page 4.3.4, based on:

• a design of experiments, which can be deterministic or probabilistic: [Design of Experiments] – see page 4.3.5

• an optimization algorithm, which searches for the extreme values of the limit state function in a specified interval (which may be infinite): [Optimization Algorithms] – see page 4.3.6

## 4.2 Probabilistic criterion

### 4.2.1 Central dispersion

Two categories of method are proposed: approximation methods and sampling methods.

### 4.2.2 Probability of exceeding a threshold / failure probability / probability of an event

Again, two categories of methods are proposed: approximation methods and sampling methods.

### 4.2.3 Quantile of a variable of interest

Only one sampling approach is available in the current version of OpenTURNS.

## 4.3 Methods description

### 4.3.1 Step C  – Uniform Random Generator

Mathematical description

Goal

Generating simulations according to a distribution is based on generating simulations according to the Uniform distribution on $\left[0,1\right]$: several techniques then exist to transform a realization of a uniform distribution into a realization of a distribution whose cumulative distribution function is $F$ (refer to [Distribution realizations] for each OpenTURNS distribution).

Thus, the quality of the random simulations is entirely based on the quality of the deterministic algorithm which simulates realizations of the Uniform(0,1) distribution.

OpenTURNS uses the DSFMT algorithm described here, which is the acronym of Double precision SIMD-oriented Fast Mersenne Twister.

Principle

Each letter of the acronym is detailed below:

• S = SIMD = Single Instruction Multiple Data: the DSFMT algorithm is able to detect and take advantage of the capacity of the microprocessor to perform several operations at a time.

• F = Fast: the transformation of the $k$-th state vector of the random generator into the $\left(k+1\right)$-th state vector is written in order to optimize its performance.

• MT = Mersenne Twister: the algorithm characteristics are the following ones :

1. the algorithm is initialized with a high Mersenne number, of the form ${2}^{n}-1$, with $n=19937$.

2. the algorithm period $T$ is linked to that Mersenne number: $T={2}^{19937}\simeq {10}^{6000}$. As a rule of thumb, the bad effects of the periodicity of the algorithm arise as soon as the number of simulations is greater than $\simeq \sqrt{T}$. Here, we have $\sqrt{T}\simeq {2}^{9968}\simeq {10}^{3000}$.

3. the realizations of the DSFMT algorithm are uniformly distributed within ${\left[0,1\right]}^{n}$ for $n$ up to $624$.

Other notations

The uniform random generator is called as soon as some random realizations are required. It is thus part of Step C, "Propagating Uncertainties", of the methodology.

The User of OpenTURNS is able to save the generator's state and to specify its initial state.

References and theoretical basics

In order to appreciate the quality of the DSFMT algorithm, a comparison with some other algorithms is interesting:
• Crystal Ball: the random generator of versions 7.1 and 7.3, as well as the random generator of the current version as of 30/06/2008, are multiplicative congruential generators, whose simulations ${R}_{n}$ are given by the relation ${R}_{n+1}=\left(62089911{R}_{n}\right)\phantom{\rule{0.166667em}{0ex}}mod\left({2}^{31}-1\right)$. Their period is $T={2}^{31}-2$.

It is therefore recommended not to generate numerical samples of size greater than $\sqrt{T}\simeq 46340$!

Furthermore, realizations cannot be considered as independent and uniformly spread within ${\left[0,1\right]}^{n}$ as soon as $n=3$.

• Excel: the random generator has a period equal to $T=65000$: it is recommended not to generate numerical samples of size greater than $\sqrt{T}\simeq 255$.
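As a rough sketch, the multiplicative congruential recurrence quoted above for Crystal Ball can be reproduced in a few lines (the seed value is an arbitrary choice for illustration):

```python
# Multiplicative congruential generator: R_{n+1} = (62089911 * R_n) mod (2^31 - 1).
M = 2**31 - 1
A = 62089911

def mcg_stream(seed, n):
    """Return n uniform variates in (0, 1) from the congruential recurrence."""
    r = seed
    out = []
    for _ in range(n):
        r = (A * r) % M          # the state never reaches 0 since M is prime
        out.append(r / M)
    return out

sample = mcg_stream(12345, 1000)
assert len(sample) == 1000
assert all(0.0 < u < 1.0 for u in sample)
```

Its short period and lattice structure are precisely what the comparison above warns about.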

Examples

### 4.3.2 Step C  – Distribution realizations

Mathematical description

Goal

The objective is to transform a random realization of the Uniform(0,1) distribution into a random realization of a distribution whose cumulative distribution function is $F$.

Principle

Several classical techniques exist:

• The inversion of the CDF: if $U$ is distributed according to the uniform distribution over $\left[0,1\right]$ (the bounds 0 and 1 may or may not be included), then $X={F}^{-1}\left(U\right)$ is distributed according to the CDF $F$. If ${F}^{-1}$ has a simple analytical expression, it provides an efficient way to generate realizations of $X$. Two points need to be mentioned:

1. If the expression of ${F}^{-1}\left(U\right)$ involves the quantity $1-U$ instead of $U$, it can be replaced by $U$ as $U$ and $1-U$ are identically distributed;

2. The numerical range of $F$ is always bounded (i.e. the interval over which it is invertible) even if its mathematical range is unbounded, and the numerical range may not preserve the symmetry of its mathematical counterpart. It can lead to biased nonuniform generators, even if this bias is usually small. For example, using standard double precision, the CDF of the standard normal distribution is numerically invertible over $\left[A,B\right]$ with $A\simeq -17.3$ and $B\simeq 8.5$.
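As a minimal sketch of the CDF inversion technique: for the exponential distribution with rate $\lambda$ one has ${F}^{-1}\left(u\right)=-ln\left(1-u\right)/\lambda$ (the rate value below is an illustrative choice; here we keep $1-u$ rather than replacing it by $u$ so that the argument of the logarithm stays in $(0,1]$):

```python
import math
import random

rng = random.Random(0)
lam = 2.0          # illustrative rate parameter
n = 100_000

# Inversion of F(x) = 1 - exp(-lam*x): F^{-1}(u) = -ln(1-u)/lam.
# random() returns values in [0, 1), so 1 - random() lies in (0, 1].
sample = [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

mean = sum(sample) / n
assert abs(mean - 1.0 / lam) < 0.02   # E[X] = 1/lam = 0.5
```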

• The rejection method: suppose that one wants to sample a random variable $X$ with a continuous distribution with PDF ${p}_{X}$, and that we know how to sample a random variable $Y$ with a continuous distribution with PDF ${p}_{Y}$. We suppose that there exists a positive scalar $k$ such that $\forall t\in ℝ,{p}_{X}\left(t\right)\le k{p}_{Y}\left(t\right)$. The rejection method consists of the following steps:

1. Generate $y$ according to ${p}_{Y}$;

2. Generate $u$ according to a random variable $U$ independent of $Y$ and uniformly distributed over $\left[0,1\right]$;

3. If $u\le {p}_{X}\left(y\right)/\left(k{p}_{Y}\left(y\right)\right)$, accept $y$ as a realization of $X$, else return to step 1.
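These three steps can be sketched for a simple case; the Beta(2,2) target density and the uniform proposal are illustrative choices (with ${p}_{Y}=1$, any $k\ge sup\phantom{\rule{0.166667em}{0ex}}{p}_{X}=1.5$ works):

```python
import random

rng = random.Random(1)

def beta22_pdf(t):
    return 6.0 * t * (1.0 - t)        # Beta(2,2) density on [0, 1]

K = 1.5  # sup of p_X / p_Y for a Uniform(0,1) proposal (p_Y = 1)

def rejection_sample():
    while True:
        y = rng.random()              # step 1: y drawn from p_Y
        u = rng.random()              # step 2: u ~ Uniform(0,1), independent of y
        if u <= beta22_pdf(y) / K:    # step 3: accept, otherwise retry
            return y

sample = [rejection_sample() for _ in range(50_000)]
mean = sum(sample) / len(sample)
assert abs(mean - 0.5) < 0.01         # E[Beta(2,2)] = 1/2
```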

The rejection method can be improved in several ways:

• If the evaluation of $\rho \left(y\right)={p}_{X}\left(y\right)/\left(k{p}_{Y}\left(y\right)\right)$ is costly, and if one knows a cheap function $h$ such that $h\left(y\right)\le \rho \left(y\right)$, then one can first check whether $u\le h\left(y\right)$ and directly accept $y$ if the test is positive (quick acceptance step). This is very effective if $h\left(y\right)$ can be evaluated from quantities that have to be computed to evaluate $\rho \left(y\right)$: $h$ is a kind of cheap version of $\rho$. The same trick can be used if one knows a cheap function $m$ such that $m\left(y\right)\ge \rho \left(y\right)$: one checks whether $u\ge m\left(y\right)$ and directly rejects $y$ if the test is positive (quick rejection test). The combination of quick acceptance and quick rejection is called a squeeze.

• The test $u\le \rho \left(y\right)$ can be replaced by an equivalent one that is much more computationally efficient.

• The transformation method: suppose that one wants to sample a random variable that is the image, by a simple transformation, of another random variable (or random vector) that can easily be sampled. It suffices to sample this latter random variable (or vector) and then transform the realization to get the needed realization. This method can be combined with the rejection method, for example to build $h$ or $m$ implicitly.

• The sequential search method (discrete distributions): this is a particular version of the CDF inversion method, dedicated to discrete random variables. One generates a realization $u$ of a random variable $U$ uniformly distributed over $\left[0,1\right]$, then searches for the smallest integer $k$ such that $u\le {\sum }_{i=0}^{k}{p}_{i}$, where ${p}_{i}=ℙ\left(X=i\right)$.
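A minimal sketch of the sequential search, with an arbitrary four-point distribution chosen for illustration:

```python
import random

rng = random.Random(7)
p = [0.1, 0.2, 0.3, 0.4]          # p_i = P(X = i), an arbitrary example

def sequential_search():
    u = rng.random()
    cum = 0.0
    for k, pk in enumerate(p):
        cum += pk                  # cum = sum_{i <= k} p_i
        if u <= cum:               # smallest k with u <= cumulative sum
            return k
    return len(p) - 1              # guard against floating-point rounding

n = 100_000
counts = [0] * len(p)
for _ in range(n):
    counts[sequential_search()] += 1

freqs = [c / n for c in counts]
assert all(abs(f - pk) < 0.01 for f, pk in zip(freqs, p))
```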

• The stable distribution method (Archimedean copulas): to be detailed.

• The conditional CDF inversion (general copulas or general multivariate distributions): this method is a general procedure to sample a multivariate distribution. One generates in sequence a realization of the first marginal distribution, then a realization of the distribution of the second component conditionally on the value taken by the first component, and so on. Each step is done by inversion of the conditional CDF of component $i$ with respect to the values taken by the preceding components.

• The ratio of uniforms method: this is a special combination of the rejection method and the transformation method that has gained great popularity due to its concision and its versatility. Let $𝒜=\left\{\left(u,v\right):\phantom{\rule{1.em}{0ex}}0\le u\le \sqrt{f\left(\frac{v}{u}\right)}\right\}$. If $\left(U,V\right)$ is a random vector uniformly distributed over $𝒜$, then $\frac{V}{U}$ has density $f/\int f$. The generation of $\left(U,V\right)$ is done by a rejection method, using a bounded enclosing region of $𝒜$, which exists if and only if both $f\left(x\right)$ and ${x}^{2}f\left(x\right)$ are bounded. This method can be enhanced by using quick acceptance and quick rejection steps.
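A sketch of the method for the standard normal density $f\left(x\right)={e}^{-{x}^{2}/2}$: the enclosing box $(0,1]\times \left[-\sqrt{2/e},\sqrt{2/e}\right]$ is the classical choice, and the acceptance test $u\le \sqrt{f\left(v/u\right)}$ simplifies to ${\left(v/u\right)}^{2}\le -4lnu$:

```python
import math
import random

rng = random.Random(3)
B = math.sqrt(2.0 / math.e)   # sup over x of |x| * sqrt(f(x)) for f(x) = exp(-x^2/2)

def normal_ratio_of_uniforms():
    while True:
        u = rng.random()
        if u == 0.0:
            continue                              # random() may return exactly 0
        v = (2.0 * rng.random() - 1.0) * B        # (u, v) uniform over the box
        if v * v <= -4.0 * u * u * math.log(u):   # (v/u)^2 <= -4 ln u
            return v / u                          # accepted: return the ratio

sample = [normal_ratio_of_uniforms() for _ in range(50_000)]
mean = sum(sample) / len(sample)
var = sum(x * x for x in sample) / len(sample) - mean * mean
assert abs(mean) < 0.02 and abs(var - 1.0) < 0.05
```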

• The ziggurat method: this method allows for a very fast generation of positive random variates with decreasing PDF. The graph of the PDF is partitioned into horizontal slices of equal mass, the bottom slice covering the whole support of the PDF. All these slices have a maximal enclosed rectangle (the top one being empty) and a minimal enclosing rectangle (the bottom one not being defined). One generates a discrete uniform random variable $d$ over the number of slices, which selects a slice; if this slice has both an enclosed and an enclosing rectangle, one generates a realization of a continuous uniform random variable on $\left[0,{L}_{d}\right]$, ${L}_{d}$ being the length of the enclosing rectangle of slice $d$. The enclosing and the enclosed rectangles define an efficient squeeze for a rejection method. If the bottom slice is selected, one has to sample the tail distribution beyond the enclosed rectangle: it is the only case where a costly non-uniform random number has to be computed. If the number of slices is large enough, this case occurs only marginally, which is the main reason for the method's efficiency.

We describe here the techniques involved in OpenTURNS.

| Distribution | Method |
|---|---|
| Arcsine | CDF inversion. |
| Bernoulli | CDF inversion. |
| Beta | CDF inversion if $r=1$ or $t-r=1$; rejection (Johnk's method) for $t\le 1$; rejection (Cheng's method) for $r>1$, $t-r>1$; rejection (Atkinson and Whittaker method 1) for $t>1$, $max\left(r,t-r\right)<1$; rejection (Atkinson and Whittaker method 2) for $t>1$, $max\left(r,t-r\right)>1$. |
| Binomial | Squeeze and reject method; see the Hormann reference. |
| Burr | CDF inversion. |
| Chi | Transformation. |
| ChiSquare | See the Gamma distribution. |
| ClaytonCopula | Conditional CDF inversion. |
| ComposedCopula | Simulation of the copulas one by one, then association. |
| ComposedDistribution | Simulation of the copula and of the marginals with the CDF inversion technique. |
| Dirac | Return of the supporting point. |
| Dirichlet | Transformation. |
| Epanechnikov | CDF inversion. |
| Exponential | CDF inversion. |
| Fisher-Snedecor | Transformation. |
| FrankCopula | Conditional CDF inversion. |
| Gamma | Transformation and rejection (Marsaglia and Tsang method). |
| Geometric | CDF inversion. |
| Generalized Pareto | CDF inversion. |
| GumbelCopula | Stable distribution. |
| Gumbel | CDF inversion. |
| Histogram | CDF inversion. |
| IndependentCopula | Transformation. |
| InverseNormal | Transformation. |
| KernelMixture | Transformation. |
| KPermutations | Knuth's algorithm. |
| Laplace | CDF inversion. |
| Logistic | CDF inversion. |
| LogNormal | Transformation. |
| LogUniform | Transformation. |
| Meixner | Ratio of uniforms method. |
| MinCopula | Transformation. |
| Mixture | Transformation. |
| MultiNomial | Conditional CDF inversion. |
| Non Central Chi Square | Transformation. |
| NegativeBinomial | Conditional simulation (Poisson \| Gamma). |
| Non Central Student | Transformation. |
| NormalCopula | Transformation of independent Normal realizations. |
| Normal | 1D: ziggurat method; nD: transformation of independent Normal realizations. |
| Poisson | Sequential search for $\mu <6$; ratio of uniforms for $\mu \ge 6$. |
| RandomMixture | Transformation. |
| Rayleigh | CDF inversion. |
| Rice | Transformation. |
| Skellam | Transformation. |
| SklarCopula | Conditional CDF inversion by Gaussian quadrature and numerical inversion. |
| Student | Transformation. |
| Trapezoidal | CDF inversion. |
| Triangular | CDF inversion. |
| TruncatedDistribution on $\left[a,b\right]$ | With $F$ the CDF of the non-truncated distribution: CDF inversion if $F\left(b\right)-F\left(a\right)<s$; rejection if $F\left(b\right)-F\left(a\right)\ge s$. By default, $s=0.5$ (modifiable). |
| TruncatedNormal | Small truncation interval: CDF inversion; large truncation interval: rejection. |
| Uniform | Transformation. |
| UserDefined | Sequential search. |
| Weibull | CDF inversion. |
| Zipf-Mandelbrot | Bisection search. |
Other notations

These techniques are performed as soon as some random realizations are required. They are thus part of Step C, "Propagating Uncertainties", of the methodology.
References and theoretical basics
The following bibliographical references provide main starting points for further study of non-uniform random variate generation:
• Luc Devroye, "Non-Uniform Random Variate Generation", Springer-Verlag, 1986, available online at http://cg.scs.carleton.ca/~luc/nonuniformrandomvariates.zip, with important errata at http://cg.scs.carleton.ca/~luc/errors.pdf

• Wolfgang Hormann, "The Generation of Binomial Random Variates"

• George Marsaglia and Wai Wan Tsang, "A Simple Method for Generating Gamma Variables", ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, pp. 363-372.

• Doornik, J.A. (2005), "An Improved Ziggurat Method to Generate Normal Random Samples", mimeo, Nuffield College, University of Oxford.

• Kjersti Aas, "Modelling the dependence structure of financial assets: a survey of four copulas", Norwegian Computing Center report nr. SAMBA/22/04, December 2004.

• E. Stadlober, "The ratio of uniforms approach for generating discrete random variates". Journal of Computational and Applied Mathematics, vol. 31, no. 1, pp. 181-189, 1990.

Examples

### 4.3.3 Step C  – Low Discrepancy Sequence

Mathematical description

Goal

This text was extracted from (2).

A low-discrepancy sequence is a sequence with the property that for all values of $N$, its subsequence $\left({x}_{1},\cdots ,{x}_{N}\right)$ has a low discrepancy.

The discrepancy of a sequence is low if the number of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of a uniform distribution. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).

Low-discrepancy sequences are also called quasi-random or sub-random sequences, due to their common use as a replacement of uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables and in certain applications such as the quasi-Monte Carlo method their lower discrepancy is an important advantage.

At least three methods of numerical integration can be phrased as follows. Given a set $\left\{{x}_{1},\cdots ,{x}_{N}\right\}$ in the interval [0,1], approximate the integral of a function f as the average of the function evaluated at those points:

 $\begin{array}{c}\hfill {\int }_{0}^{1}f\left(u\right)\phantom{\rule{0.166667em}{0ex}}du\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}f\left({x}_{i}\right).\end{array}$
• If the points are chosen as ${x}_{i}=i/N$, this is the rectangle rule.

• If the points are chosen to be randomly distributed, this is the Monte Carlo method : [Standard Monte-Carlo simulation] – see page 4.3.7.

• If the points are chosen as elements of a low-discrepancy sequence, this is the quasi-Monte Carlo method : [Quasi-Monte Carlo simulation] – see page 4.3.20.
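The quasi-Monte Carlo rule above can be sketched with a hand-rolled 2-dimensional Halton sequence (van der Corput radical inverses in bases 2 and 3); the integrand $f\left(x,y\right)=xy$, with exact integral $1/4$, is an illustrative choice:

```python
def van_der_corput(i, base):
    """Radical inverse of the integer i in the given base, in [0, 1)."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def halton_2d(n):
    # 2-dimensional Halton sequence: van der Corput in coprime bases 2 and 3
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n + 1)]

# Quasi-Monte Carlo estimate of the integral of x*y over [0,1]^2 (exact: 1/4)
pts = halton_2d(10_000)
estimate = sum(x * y for x, y in pts) / len(pts)
assert abs(estimate - 0.25) < 0.01
```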

Principle

The discrepancy of a set $P=\left\{{x}_{1},\cdots ,{x}_{N}\right\}$ is defined, using Niederreiter's notation, as

 $\begin{array}{c}\hfill {D}_{N}\left(P\right)=\underset{B\in J}{sup}\left|\frac{A\left(B;P\right)}{N}-{\lambda }_{s}\left(B\right)\right|\end{array}$

where ${\lambda }_{s}$ is the $s$-dimensional Lebesgue measure, $A\left(B;P\right)$ is the number of points in $P$ that fall into $B$, and $J$ is the set of $s$-dimensional intervals or boxes of the form:

 $\begin{array}{c}\hfill \prod _{i=1}^{s}\left[{a}_{i},{b}_{i}\right)=\left\{𝐱\in {𝐑}^{s}:{a}_{i}\le {x}_{i}<{b}_{i}\right\}\phantom{\rule{0.166667em}{0ex}}\end{array}$

where $0\le {a}_{i}<{b}_{i}\le 1$ .

The star-discrepancy ${D}_{N}^{*}\left(P\right)$ is defined similarly, except that the supremum is taken over the set ${J}^{*}$ of intervals of the form

 $\begin{array}{c}\hfill \prod _{i=1}^{s}\left[0,{u}_{i}\right)\end{array}$

where ${u}_{i}$ is in the half-open interval $\left[0,1\right)$.

The two are related by

 $\begin{array}{c}\hfill {D}_{N}^{*}\le {D}_{N}\le {2}^{s}{D}_{N}^{*}.\phantom{\rule{0.166667em}{0ex}}\end{array}$

The Koksma-Hlawka inequality shows that the error of such a method can be bounded by the product of two terms, one of which depends only on $f$, while the other is the discrepancy of the set $\left\{{x}_{1},\cdots ,{x}_{N}\right\}$.

Let ${\overline{I}}^{s}$ be the s-dimensional unit cube, ${\overline{I}}^{s}=\left[0,1\right]×...×\left[0,1\right]$. Let $f$ have bounded variation $V\left(f\right)$ on ${I}^{s}$ in the sense of Hardy and Krause. Then for any $\left({x}_{1},\cdots ,{x}_{N}\right)$ in ${I}^{s}=\left[0,1\right)×...×\left[0,1\right)$,

 $\begin{array}{c}\hfill \left|\frac{1}{N}\sum _{i=1}^{N}f\left({x}_{i}\right)-{\int }_{{\overline{I}}^{s}}f\left(u\right)\phantom{\rule{0.166667em}{0ex}}du\right|\le V\left(f\right)\phantom{\rule{0.166667em}{0ex}}{D}_{N}^{*}\left({x}_{1},...,{x}_{N}\right).\end{array}$

The Koksma-Hlawka inequality is sharp in the following sense: for any point set $\left\{{x}_{1},\cdots ,{x}_{N}\right\}$ in ${I}^{s}$ and any $\epsilon >0$, there is a function $f$ with bounded variation and $V\left(f\right)=1$ such that:

 $\begin{array}{c}\hfill \left|\frac{1}{N}\sum _{i=1}^{N}f\left({x}_{i}\right)-{\int }_{{\overline{I}}^{s}}f\left(u\right)\phantom{\rule{0.166667em}{0ex}}du\right|>{D}_{N}^{*}\left({x}_{1},...,{x}_{N}\right)-\epsilon .\end{array}$

Therefore, the quality of a numerical integration rule depends only on the discrepancy ${D}_{N}^{*}\left({x}_{1},\cdots ,{x}_{N}\right)$.

Constructions of sequences are known, due to Faure, Halton, Hammersley, Sobol', Niederreiter and van der Corput, such that:

 $\begin{array}{c}\hfill {D}_{N}^{*}\left({x}_{1},...,{x}_{N}\right)\le C\frac{{\left(lnN\right)}^{s}}{N}.\end{array}$

where $C$ is a certain constant depending on the sequence. These sequences are believed to have the best possible order of convergence. See also: van der Corput sequence, Halton sequences, Sobol sequences. In the case of the Haselgrove sequence, we have:

 $\begin{array}{c}\hfill \forall \epsilon >0,\exists {C}_{\epsilon }>0\phantom{\rule{4.pt}{0ex}}\text{such}\phantom{\rule{4.pt}{0ex}}\text{that}\phantom{\rule{4.pt}{0ex}}{D}_{N}^{*}\left({x}_{1},...,{x}_{N}\right)\le \frac{{C}_{\epsilon }}{{N}^{1-\epsilon }}.\end{array}$

which means a worse asymptotic performance than the previous sequences, but which can be interesting for finite sample sizes.

Remark 1:

If $\left\{{x}_{1},\cdots ,{x}_{N}\right\}$ is a low-discrepancy sequence, then $\frac{1}{N}\sum _{i=1}^{N}{\delta }_{{x}_{i}}$ converges weakly towards ${\lambda }^{s}$, the $s$-dimensional Lebesgue measure on ${\left[0,1\right]}^{s}$, which guarantees that for every test function (continuous and bounded) $\phi$, $<\frac{1}{N}\sum _{i=1}^{N}{\delta }_{{x}_{i}},\phi >$ converges towards $<{\lambda }^{s},\phi >=\int \phi \phantom{\rule{0.166667em}{0ex}}d{\lambda }^{s}$. We then obtain:

 $\begin{array}{c}\hfill \frac{1}{N}\sum _{i=1}^{N}\phi \left({x}_{i}\right)⟶\int \phi \phantom{\rule{0.166667em}{0ex}}dx\end{array}$

Be careful: using low-discrepancy sequences instead of randomly distributed points does not lead to the same control of the variance of the approximation. In the case of randomly distributed points, this control is given by the Central Limit Theorem, which provides confidence intervals; in the case of low-discrepancy sequences, it is given by the Koksma-Hlawka inequality.

Remark 2:

It is possible to generate low-discrepancy sequences according to measures different from the Lebesgue one, by using the inverse CDF technique. But be careful: in OpenTURNS, the inverse CDF technique is not the one used in all cases (some distributions are generated thanks to the rejection method, for example); that is why it is not recommended, in the general case, to substitute a low-discrepancy sequence for the uniform random generator.
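A sketch of this inverse CDF technique in dimension 1, mapping a van der Corput sequence through the inverse CDF of the standard normal distribution (Python's `statistics.NormalDist`, available from Python 3.8, plays the role of the inverse CDF here; the sequence length is an arbitrary choice):

```python
import statistics

def van_der_corput(i, base=2):
    """Radical inverse of the integer i in the given base, in [0, 1)."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

# Map the low-discrepancy points through the inverse normal CDF to get a
# low-discrepancy sample from N(0, 1); starting at i = 1 avoids u = 0.
nd = statistics.NormalDist()
n = 1024
sample = [nd.inv_cdf(van_der_corput(i)) for i in range(1, n + 1)]

mean = sum(sample) / n
assert abs(mean) < 0.05   # the transformed points are well balanced around 0
```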

Remark 3:

Low-discrepancy sequences have performances that deteriorate rapidly with the problem dimension, as the bound on the discrepancy increases exponentially with the dimension. This behaviour is shared by all low-discrepancy sequences, even if the standard low-discrepancy sequences do not all exhibit it with the same intensity. According to the given reference, the following recommendations can be made:

• Use Halton or ReverseHalton sequences for dimensions not greater than 8;

• Use Faure sequences for dimensions not greater than 25;

• Use Haselgrove sequences for dimensions not greater than 50;

• Use Sobol sequences for dimensions up to several hundreds (but OpenTURNS implementation of the Sobol sequence is limited to dimension less or equal to 40).

Other notations

Low-discrepancy sequences are also called quasi-random or sub-random sequences, but this can be confusing as they are deterministic and do not have the same statistical properties as traditional pseudo-random sequences.

Low-discrepancy sequences may be used to evaluate the probability that the output variable of interest exceeds a given threshold, through the quasi-Monte Carlo method. They are thus part of Step C, "Propagating Uncertainties", of the methodology.

More generally, they can be used to approximate the integral of a function as the average of the function evaluated at the points of the sequences.

References and theoretical basics

For some information on the performance of the different low-discrepancy sequences for high dimensional applications, one can find useful material in:
• Inna Krykova, "Evaluating of path-dependent securities with low discrepancy methods", Master of Science Thesis, Worcester Polytechnic Institute, 2003.

• Wikipedia contributors, "Low-discrepancy sequence.", Wikipedia, The Free Encyclopedia, 10 April 2012, 17:48 UTC, <en.wikipedia.org/wiki/Low-discrepancy_sequence>

Examples

To illustrate this method, we consider the sampling strategy of an input vector of dimension 2, both components following a uniform distribution $𝒰\left(0,1\right)$. The figures compare populations of 300 points obtained with the Sobol, Faure, Halton, Reverse Halton and Haselgrove sequences, and points uniformly generated with the Mersenne Twister generator ([Uniform Random Generator] – see page 4.3.1). These figures are for illustration purposes and do not suggest any systematic superiority of one sequence over the others.
Figure 54: Sobol sequence; Faure sequence.
Figure 55: Reverse Halton sequence; Halton sequence.
Figure 56: Haselgrove sequence; uniform random sequence.

### 4.3.4 Step C  – Min-Max Approach

Mathematical description

Goal

The method is used in the following context: $\underline{x}=\left({x}^{1},...,{x}^{{n}_{X}}\right)$ is a vector of unknown variables, $\underline{d}$ a vector considered to be well known or where uncertainty is negligible, and $\underline{y}=h\left(\underline{x},\underline{d}\right)=\left({y}^{1},...,{y}^{{n}_{Y}}\right)$ describes the variables of interest. The objective here is to determine the extreme (minimum and maximum) values of the components of $\underline{y}$ for all possible values of $\underline{x}$.

Principle

Several techniques can be used to determine the extreme (minimum and maximum) values of the variables $\underline{y}$ over the set of all possible values of $\underline{x}$:

• techniques based on a design of experiments: the extreme values of $\underline{y}$ are sought over a finite set of combinations $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ only,

• techniques using optimization algorithms.

Techniques based on design of experiments

In that case, the min-max approach consists of three steps:

• choice of the design of experiments used to determine the combinations $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of the input random variables,

• calculation of ${\underline{y}}_{i}=h\left({\underline{x}}_{i},\underline{d}\right)$ for $i=1,...,N$,

• calculation of ${min}_{1\le i\le N}{y}_{i}^{k}$ and of ${max}_{1\le i\le N}{y}_{i}^{k}$, together with the combinations related to these extreme values: ${\underline{x}}_{k,min}={\mathrm{argmin}}_{1\le i\le N}{y}_{i}^{k}$ and ${\underline{x}}_{k,max}={\mathrm{argmax}}_{1\le i\le N}{y}_{i}^{k}$.

The type of design of experiments influences the quality of the response surface and hence the evaluation of its extreme values. OpenTURNS proposes different kinds of design of experiments: [Design of Experiments] – see page 4.3.5.
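The three steps above can be sketched with a full factorial grid; the function $h$, the levels and the bounds are illustrative choices:

```python
import itertools

def h(x1, x2):
    # hypothetical limit state function, for illustration only
    return x1 ** 2 - x2

# Step 1: full factorial design of experiments over [-1, 1]^2, 5 levels per axis
levels = [-1.0, -0.5, 0.0, 0.5, 1.0]
design = list(itertools.product(levels, repeat=2))

# Step 2: evaluate h on every combination of the design
values = [h(*x) for x in design]

# Step 3: extreme values and the combinations that realize them
y_min, y_max = min(values), max(values)
x_min = design[values.index(y_min)]
x_max = design[values.index(y_max)]

assert (y_min, x_min) == (-1.0, (0.0, 1.0))   # h(0, 1) = -1 is the grid minimum
assert y_max == 2.0                           # reached at x1 = +/-1, x2 = -1
```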

Techniques based on an optimization algorithm

The min or max value of the output variable of interest is sought using an optimization algorithm: [Optimization Algorithms] – see page 4.3.6.

Other notations

### 4.3.5 Step C  – Design of Experiments

Mathematical description

Goal

The method is used in the following context: $\underline{x}=\left({x}^{1},...,{x}^{{n}_{X}}\right)$ is a vector of input parameters. We want to determine a particular set of values of $\underline{x}$ according to a particular design of experiments.

Principle

Different types of design of experiments can be determined:

• some stratified patterns: axial, composite, factorial or box patterns,

• some weighted patterns that we can split into different categories: the random patterns, the low discrepancy sequences and the Gauss product.

Stratified design of experiments

All stratified designs of experiments are defined from a center point and some discretization levels. OpenTURNS proposes the same number of levels for each direction: let us denote by ${n}_{level}$ that discretization number.

The axial pattern contains points only along the axes, so it is not suitable for modelling interactions between variables. The pattern is obtained by discretizing each direction according to the specified levels, symmetrically with respect to the center of the design of experiments. The number of points generated is $1+2d{n}_{level}$.

The factorial pattern contains points only on the diagonals, so it is not suitable for modelling the influence of single input variables. The pattern is obtained by discretizing each principal diagonal according to the specified levels, symmetrically with respect to the center of the design of experiments. The number of points generated is $1+{2}^{d}{n}_{level}$.

The composite pattern is the union of both previous ones. The number of points generated is $1+2d{n}_{level}+{2}^{d}{n}_{level}$.

The box pattern is a simple regular discretization of a pavement around the center of the design of experiments, with the number of intermediate points specified for each direction (denoted ${n}_{level}\left(direction\phantom{\rule{0.166667em}{0ex}}i\right)$). The number of points generated is $\prod _{i=1}^{d}\left(2+{n}_{level}\left(direction\phantom{\rule{0.166667em}{0ex}}i\right)\right)$.
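The point-count formulas of the four stratified patterns can be checked directly (the dimension and level values below are arbitrary examples):

```python
from math import prod

def axial_count(d, n_level):
    return 1 + 2 * d * n_level                 # center + points along each axis

def factorial_count(d, n_level):
    return 1 + 2 ** d * n_level                # center + points on the diagonals

def composite_count(d, n_level):
    return 1 + 2 * d * n_level + 2 ** d * n_level   # union of the two patterns

def box_count(levels_per_direction):
    return prod(2 + n for n in levels_per_direction)

# dimension 2, one level per direction
assert axial_count(2, 1) == 5        # center + 4 axis points
assert factorial_count(2, 1) == 5    # center + 4 diagonal points
assert composite_count(2, 1) == 9
assert box_count([1, 1]) == 9        # a 3 x 3 regular grid
```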

The following figures illustrate the different patterns obtained.

Weighted design of experiments

The first category is the random patterns, where the set of input data is generated from the joint distribution of the input random vector, according to the Monte Carlo sampling technique or the LHS one (refer to [Monte Carlo Simulation] and [Latin Hypercube Sampling] ).

Care: the LHS sampling method requires the independence of the input random variables.

The second category is the low discrepancy sequences (refer to [Low Discrepancy Sequence] ). OpenTURNS proposes the Faure, Halton, Haselgrove, Reverse Halton and Sobol sequences.

The third category is the Gauss product, which is the set of points whose components are the respective Gauss points (i.e. the roots of the orthogonal polynomials with respect to the univariate distributions).

Combinatorial generators

In some situations, one wants to explore all the possibilities related to constrained discrete uncertainties. In this case, we need to obtain all the sets of indices fulfilling the constraints. Examples of constraints are:

• being a subset with $k$ elements of a set with $n$ elements, with $k\le n$;

• being a permutation of $k$ elements taken into a set of $n$ elements, with $k\le n$;

• being an element of a Cartesian product of sets with ${n}_{1},\cdots ,{n}_{d}$ elements.

It is important to get indices and not real-valued vectors. The distinction is made explicit by calling these design of experiments Combinatorial Generators, which produce collections of indices instead of samples.
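As a sketch, Python's `itertools` module produces exactly these three collections of indices (OpenTURNS exposes analogous combinatorial generators, e.g. the KPermutations design mentioned in the table above); the set sizes are arbitrary examples:

```python
import itertools

# Index collections corresponding to the three constraints above:
subsets = list(itertools.combinations(range(4), 2))        # k-subsets of an n-set
arrangements = list(itertools.permutations(range(4), 2))   # k-permutations of an n-set
tuples = list(itertools.product(range(2), range(3)))       # Cartesian product

assert len(subsets) == 6        # C(4, 2) = 6
assert len(arrangements) == 12  # 4 * 3 = 12
assert len(tuples) == 6         # 2 * 3 = 6
```

Note that every generated element is a tuple of indices, not a real-valued vector, matching the distinction made above.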


Other notations

This method is used in step C "Propagation of uncertainty" to evaluate a deterministic minimum-maximum type of criterion for the output value defined in step A "Specifying the Criteria and the Case Study".

It can also be used within the quasi-Monte Carlo technique to approximate the probability of exceeding a given threshold.

References and theoretical basics

### 4.3.6 Step C  – Optimization Algorithms

Mathematical description

Goal

The method is used in the following context: $\underline{x}=\left({x}^{1},...,{x}^{{n}_{X}}\right)$ is a vector of unknown variables, $\underline{d}$ a vector considered to be well known or where uncertainty is negligible, and $y=h\left(\underline{x},\underline{d}\right)$ is the scalar variable of interest. The objective here is to determine the extreme (minimum and maximum) values of $y$ when $\underline{x}$ varies.

Principle

It is possible to use some optimization algorithms. We give the principle of the TNC (Truncated Newton Constrained) algorithm, which minimizes a function of variables subject to bounds, using gradient information.

Truncated-Newton methods are a family of methods suitable for solving large nonlinear optimization problems. At each iteration, the current estimate of the solution is updated by approximately solving the Newton equations using an iterative algorithm. This results in a doubly iterative method : an outer iteration for the nonlinear optimization problem and an inner iteration for the Newton equations. The inner iteration is typically stopped or truncated before the solution to the Newton equations is obtained.

The TNC algorithm solves ${min}_{\underline{x}\in \left[\underline{a},\underline{b}\right]\subset {\overline{ℝ}}^{n}}f\left(\underline{x}\right)$ and, under suitable regularity of the objective function $f$, seeks a point satisfying the optimality conditions:

 $\begin{array}{c}\hfill \left\{\begin{array}{l}\underline{\nabla f}\left(\underline{x}\right)=\underline{0}\hfill \\ \underline{\underline{{\nabla }^{2}}}f\left(\underline{x}\right)\phantom{\rule{4.pt}{0ex}}\text{is}\phantom{\rule{4.pt}{0ex}}\text{positive}\phantom{\rule{4.pt}{0ex}}\text{definite}\hfill \end{array}\right.\end{array}$

The second-order Taylor expansion of $f$ around ${\underline{x}}_{k}$ leads to the determination of the iterate ${\underline{x}}_{k+1}$ such that:

 $\left\{\begin{array}{ccc}{\underline{\Delta }}_{k}\hfill & =& {\underline{x}}_{k+1}-{\underline{x}}_{k}\hfill \\ \underline{\underline{{\nabla }^{2}}}f\left({\underline{x}}_{k}\right){\underline{\Delta }}_{k}\hfill & =& -\underline{\nabla f}\left({\underline{x}}_{k}\right)\hfill \end{array}\right.$ (104)

Equation (104) is truncated: the iterative search for ${\underline{\Delta }}_{k}$ is stopped as soon as ${\underline{\Delta }}_{k}$ satisfies:

 $\begin{array}{c}\hfill ||\underline{\underline{{\nabla }^{2}}}f\left({\underline{x}}_{k}\right){\underline{\Delta }}_{k}+\underline{\nabla f}\left({\underline{x}}_{k}\right)||\le \eta ||\underline{\nabla f}\left({\underline{x}}_{k}\right)||\end{array}$

Finally, iteration $k+1$ is defined by:

 $\begin{array}{c}\hfill {\underline{x}}_{k+1}={\underline{x}}_{k}+{\alpha }_{k}{\underline{\Delta }}_{k}\end{array}$

where ${\alpha }_{k}$ is the step length, whose maximum value is given by the parameter stepmx.
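To make the doubly iterative scheme concrete, here is a minimal pure-Python sketch of a truncated-Newton iteration on an illustrative quadratic function (the function, the value of $\eta$ and the fixed step ${\alpha }_{k}=1$ are assumptions of this sketch; a production solver such as the TNC code also handles the bound constraints, which are omitted here):

```python
import math

# f(x) = (x0 - 1)^2 + 2*(x1 + 0.5)^2, an illustrative objective whose
# gradient and Hessian are known analytically.
def grad(x):
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]

def hess_vec(d):
    # Hessian-times-vector product; here H = diag(2, 4)
    return [2.0 * d[0], 4.0 * d[1]]

def truncated_newton(x, eta=0.1, outer_iters=20):
    for _ in range(outer_iters):
        g = grad(x)
        gnorm = math.hypot(g[0], g[1])
        if gnorm < 1e-10:          # outer convergence: gradient ~ 0
            break
        # Inner iteration: conjugate gradient on H.Delta = -g, truncated
        # as soon as ||H.Delta + g|| <= eta * ||g|| (the criterion above).
        d = [0.0, 0.0]
        r = [-g[0], -g[1]]         # residual of H.Delta = -g at Delta = 0
        p = r[:]
        while math.hypot(r[0], r[1]) > eta * gnorm:
            Hp = hess_vec(p)
            rr = r[0] * r[0] + r[1] * r[1]
            a = rr / (p[0] * Hp[0] + p[1] * Hp[1])
            d = [d[0] + a * p[0], d[1] + a * p[1]]
            r = [r[0] - a * Hp[0], r[1] - a * Hp[1]]
            b = (r[0] * r[0] + r[1] * r[1]) / rr
            p = [r[0] + b * p[0], r[1] + b * p[1]]
        x = [x[0] + d[0], x[1] + d[1]]   # alpha_k = 1 in this sketch
    return x

x_min = truncated_newton([5.0, 5.0])
print(x_min)   # close to the exact minimizer (1.0, -0.5)
```

The maximum of $f$ can be searched the same way by minimizing $-f$.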

Other notations

-

This method is used in step C "Propagation of uncertainty" to evaluate the minimum-maximum type of criterion for the output value defined in step A "Specifying the Criteria and the Case Study".

References and theoretical basics

More details may be found in the following reference :
• Stephen G. Nash, 1999, "A survey of Truncated-Newton methods", Systems Engineering and Operations Research Dept., George Mason University, Fairfax, VA 22030.

### 4.3.7 Step C  – Taylor variance decomposition / Perturbation Method

Mathematical description

Goal

The quadratic cumul is a probabilistic approach designed to propagate the uncertainties of the input variables $\underline{X}$ through the model $h$ towards the output variables $\underline{Y}$. It gives access to the central dispersion (expectation and variance) of the output variables.

Principles

This method is based on a Taylor expansion of the output variable $\underline{Y}$ with respect to the random vector $\underline{X}$ around the mean point ${\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}$. Depending on the order of the Taylor expansion (classically first or second order), different formulas are obtained. For ease of reading, we first present the formulas for ${n}_{Y}=1$ before those obtained for ${n}_{Y}>1$.

Case ${n}_{Y}=1$

As $Y=h\left(\underline{X}\right)$, the Taylor expansion around $\underline{x}={\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}$ at the second order yields:

 $\begin{array}{c}\hfill Y=h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)+<\underline{\nabla }h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>+\frac{1}{2}<<{\underline{\underline{\nabla }}}^{2}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X},\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>,\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>+o\left(\mathrm{Cov}\left[\underline{X}\right]\right)\end{array}$

where:

• ${\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}=𝔼\left[\underline{X}\right]$ is the vector of the input variables at the mean values of each component.

• $\mathrm{Cov}\left[\underline{X}\right]$ is the covariance matrix of the random vector $\underline{X}$. Its elements are: ${\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}=𝔼\left[\left({X}^{i}-𝔼\left[{X}^{i}\right]\right)×\left({X}^{j}-𝔼\left[{X}^{j}\right]\right)\right]$

• $\underline{\nabla }h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)={\phantom{\rule{0.222222em}{0ex}}}^{t}{\left(\frac{\partial y}{\partial {x}^{j}}\right)}_{\underline{x}\phantom{\rule{0.222222em}{0ex}}=\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}}={\phantom{\rule{0.222222em}{0ex}}}^{t}{\left(\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{j}}\right)}_{\underline{x}\phantom{\rule{0.222222em}{0ex}}=\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}}$ is the gradient vector taken at the value $\underline{x}={\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}$ and $j=1,...,{n}_{X}$.

• ${\underline{\underline{\nabla }}}^{2}h\left(\underline{x},\underline{x}\right)$ is a matrix composed of the second-order derivatives of the output variable with respect to the ${i}^{\mathrm{th}}$ and ${j}^{\mathrm{th}}$ components of $\underline{x}$, taken at $\underline{x}={\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}$: ${\left({\nabla }^{2}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X},{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\right)}_{ij}={\left(\frac{{\partial }^{2}h\left(\underline{x},\underline{x}\right)}{\partial {x}^{i}\partial {x}^{j}}\right)}_{\underline{x}\phantom{\rule{0.222222em}{0ex}}=\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}}$

• $<,>$ is a scalar product between two vectors.

Approximation at the order 1 - Case ${n}_{Y}=1$

Expectation:

 $\begin{array}{c}\hfill 𝔼\left[Y\right]\approx h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\end{array}$

Variance:

 $\begin{array}{c}\hfill \mathrm{Var}\left[Y\right]\approx \sum _{i,j=1}^{{n}_{X}}\frac{\partial h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)}{\partial {X}^{i}}.\frac{\partial h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)}{\partial {X}^{j}}.{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}\end{array}$
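A minimal numerical sketch of these order-1 formulas (the model $h$, the mean vector and the covariance matrix below are arbitrary illustrative choices; the gradient is taken by central finite differences):

```python
def h(x):
    # illustrative scalar model Y = h(X)
    return x[0] ** 2 + 3.0 * x[1]

mu_X = [1.0, 2.0]                  # mean point of the input vector
cov_X = [[0.04, 0.01],
         [0.01, 0.09]]             # covariance matrix of X

def gradient(f, x, eps=1e-6):
    # central finite-difference gradient of f at x
    g = []
    for j in range(len(x)):
        xp, xm = x[:], x[:]
        xp[j] += eps
        xm[j] -= eps
        g.append((f(xp) - f(xm)) / (2.0 * eps))
    return g

g = gradient(h, mu_X)

mean_taylor = h(mu_X)              # E[Y] ~ h(mu_X)
var_taylor = sum(g[i] * g[j] * cov_X[i][j]
                 for i in range(2) for j in range(2))

print(mean_taylor, var_taylor)     # 7.0 and ~1.09
```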

Approximation at the order 2 - Case ${n}_{Y}=1$

Expectation:

 $\begin{array}{c}\hfill 𝔼\left[Y\right]\approx h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)+\frac{1}{2}.\sum _{i,j=1}^{{n}_{X}}\frac{{\partial }^{2}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X},{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)}{\partial {x}^{i}\partial {x}^{j}}.{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}\end{array}$

Variance:

The decomposition of the variance at the order 2 is not implemented in the standard version of OpenTURNS. It requires both the knowledge of higher order derivatives of the model and the knowledge of moments of order strictly greater than 2 of the pdf.

Case ${n}_{Y}>1$

The quadratic cumul approach can be developed at different orders from the Taylor expansion of the random vector $\underline{Y}$. As $\underline{Y}=h\left(\underline{X}\right)$, the Taylor expansion around $\underline{x}={\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}$ at the second order yields:

 $\begin{array}{c}\hfill \underline{Y}=h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)+<\underline{\underline{\nabla }}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>+\frac{1}{2}<<{\underline{\underline{\underline{\nabla }}}}^{2}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X},\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>,\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>+o\left(\mathrm{Cov}\left[\underline{X}\right]\right)\end{array}$

where:

• ${\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}=𝔼\left[\underline{X}\right]$ is the vector of the input variables at the mean values of each component.

• $\mathrm{Cov}\left[\underline{X}\right]$ is the covariance matrix of the random vector $\underline{X}$. Its elements are: ${\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}=𝔼\left[\left({X}^{i}-𝔼\left[{X}^{i}\right]\right)×\left({X}^{j}-𝔼\left[{X}^{j}\right]\right)\right]$

• $\underline{\underline{\nabla }}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)={\phantom{\rule{0.222222em}{0ex}}}^{t}{\left(\frac{\partial {y}^{i}}{\partial {x}^{j}}\right)}_{\underline{x}\phantom{\rule{0.222222em}{0ex}}=\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}}={\phantom{\rule{0.222222em}{0ex}}}^{t}{\left(\frac{\partial {h}^{i}\left(\underline{x}\right)}{\partial {x}^{j}}\right)}_{\underline{x}\phantom{\rule{0.222222em}{0ex}}=\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}}$ is the transposed Jacobian matrix with $i=1,...,{n}_{Y}$ and $j=1,...,{n}_{X}$.

• $\underline{\underline{\underline{{\nabla }^{2}}}}h\left(\underline{x}\phantom{\rule{0.222222em}{0ex}},\underline{x}\right)$ is a tensor of order 3, composed of the second-order derivatives with respect to the ${i}^{\mathrm{th}}$ and ${j}^{\mathrm{th}}$ components of $\underline{x}$ of the ${k}^{\mathrm{th}}$ component of the output vector $h\left(\underline{x}\right)$: ${\left({\nabla }^{2}h\left(\underline{x}\right)\right)}_{ijk}=\frac{{\partial }^{2}\left({h}^{k}\left(\underline{x}\right)\right)}{\partial {x}^{i}\partial {x}^{j}}$

• $<\underline{\underline{\nabla }}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>={\sum }_{j=1}^{{n}_{X}}{\left(\frac{\partial \underline{y}}{\partial {x}^{j}}\right)}_{\underline{x}={\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}}.\left({X}^{j}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}^{j}\right)$

• $<<{\underline{\underline{\underline{\nabla }}}}^{2}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X},\phantom{\rule{0.222222em}{0ex}}{\underline{\mu }}_{X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>,\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}>={\left({}^{t}\left({\underline{X}}^{i}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}^{i}\right).{\left(\frac{{\partial }^{2}{y}^{k}}{\partial {x}^{i}\partial {x}^{j}}\right)}_{\underline{x}={\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}}.\left({\underline{X}}^{j}-{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}^{j}\right)\right)}_{ijk}$

Approximation at the order 1 - Case ${n}_{Y}>1$

Expectation:

 $\begin{array}{c}\hfill 𝔼\left[\underline{Y}\right]\approx \underline{h}\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\end{array}$

Note that $𝔼\left[\underline{Y}\right]$ is a vector: its ${k}^{\mathrm{th}}$ component is the ${k}^{\mathrm{th}}$ component of the output vector computed by the model $h$ at the mean input value. $𝔼\left[\underline{Y}\right]$ is thus approximated by the evaluation of the model at the mean.

Variance:

 $\begin{array}{c}\hfill \mathrm{Cov}\left[\underline{Y}\right]{\approx }^{t}\underline{\underline{\nabla }}\phantom{\rule{0.222222em}{0ex}}\underline{h}\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right).\mathrm{Cov}\left[\underline{X}\right].\underline{\underline{\nabla }}\phantom{\rule{0.222222em}{0ex}}\underline{h}\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\end{array}$

Approximation at the order 2 - Case ${n}_{Y}>1$

Expectation:

 $\begin{array}{c}\hfill 𝔼\left[\underline{Y}\right]\approx \underline{h}\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)+\frac{1}{2}.{\underline{\underline{\underline{\nabla }}}}^{2}\underline{\underline{h}}\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X},{\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\odot \mathrm{Cov}\left[\underline{X}\right]\end{array}$

This last formulation is shorthand for the following expression:

 $\begin{array}{c}\hfill {\left(𝔼\left[\underline{Y}\right]\right)}_{k}\approx {\left(\underline{h}\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\right)}_{k}+{\left(\sum _{i=1}^{{n}_{X}}\frac{1}{2}{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ii}.{\left({\nabla }^{2}\phantom{\rule{0.222222em}{0ex}}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\right)}_{iik}+\sum _{i=1}^{{n}_{X}}\sum _{j=1}^{i-1}{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}.{\left({\nabla }^{2}\phantom{\rule{0.222222em}{0ex}}h\left({\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\right)}_{ijk}\right)}_{k}\end{array}$

Variance:

The decomposition of the variance at the order 2 is not implemented in the standard version of OpenTURNS. It requires both the knowledge of higher order derivatives of the model and the knowledge of moments of order strictly greater than 2 of the pdf.

Other notations

Perturbation methods

This method is part of the step C 'Propagation of Uncertainties' of the global methodology. It requires the definition of the input random vector $\underline{X}$ and of the model of interest $h$ (both should have been done in the step 'Specification of model and criteria').

References and theoretical basics

This method is well suited when one wants to obtain the parameters of the central dispersion. Be careful: if the model is strongly non-linear or non-monotonic, the Taylor approximation at order 2 may not be accurate over the domain of the input variables, and the estimates of the first and second order moments may then be completely wrong. Besides, this method is generally not justified for computing low probabilities: the mean and variance obtained by quadratic decomposition should not be used to deduce them. For instance, the 95 % quantile of ${Y}^{i}$ is generally not equal to ${\mu }_{Y}^{i}+1.64\phantom{\rule{0.166667em}{0ex}}{\sigma }^{i}$ (except if one can prove that ${Y}^{i}$ follows a Gaussian distribution) and the error is potentially huge.

### 4.3.8 Step C  – Estimating the mean and variance using the Monte Carlo Method

Mathematical description

Goal

Let us denote $\underline{Y}=h\left(\underline{X},\underline{d}\right)=\left({Y}^{1},...,{Y}^{{n}_{Y}}\right)$, where $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ is a random vector, and $\underline{d}$ a deterministic vector. We seek here to evaluate the characteristics of the central part (central tendency and spread, i.e. mean and variance) of the probability distribution of a variable ${Y}^{i}$, using the probability distribution of the random vector $\underline{X}$.

Principle

The Monte Carlo method is a numerical integration method using sampling, which can be used, for example, to determine the mean and standard deviation of a random variable ${Y}^{i}$ (if these quantities exist, which is not the case for all probability distributions):

 $\begin{array}{c}\hfill {m}_{{Y}^{i}}=\int u\phantom{\rule{0.166667em}{0ex}}{f}_{{Y}^{i}}\left(u\right)\phantom{\rule{0.166667em}{0ex}}du,\phantom{\rule{4pt}{0ex}}{\sigma }_{{Y}^{i}}=\sqrt{\int {\left(u-{m}_{{Y}^{i}}\right)}^{2}\phantom{\rule{0.166667em}{0ex}}{f}_{{Y}^{i}}\left(u\right)\phantom{\rule{0.166667em}{0ex}}du}\end{array}$

where ${f}_{{Y}^{i}}$ represents the probability density function of ${Y}^{i}$.

Suppose now that we have the sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$ of $N$ values randomly and independently sampled from the probability distribution ${f}_{{Y}^{i}}$; this sample can be obtained by drawing a sample $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of size $N$ of the random vector $\underline{X}$ (the distribution of which is known) and by computing ${\underline{y}}_{j}=h\left({\underline{x}}_{j},\underline{d}\right)\phantom{\rule{4pt}{0ex}}\forall 1\le j\le N$. Then, the Monte Carlo estimates of the mean and standard deviation are the empirical mean and standard deviation of the sample:

 $\begin{array}{c}\hfill {\stackrel{^}{m}}_{{Y}^{i}}=\frac{1}{N}\sum _{j=1}^{N}{y}_{j}^{i},\phantom{\rule{4pt}{0ex}}{\stackrel{^}{\sigma }}_{{Y}^{i}}=\sqrt{\frac{1}{N}{\sum }_{j=1}^{N}{\left({y}_{j}^{i}-{\stackrel{^}{m}}_{{Y}^{i}}\right)}^{2}}\end{array}$

These are only estimates, but by the law of large numbers their convergence to the true values ${m}_{{Y}^{i}}$ and ${\sigma }_{{Y}^{i}}$ is assured as the sample size $N$ tends to infinity. The Central Limit Theorem enables the difference between the estimated value and the sought value to be controlled by means of a confidence interval (provided $N$ is sufficiently large, typically more than a few dozen, even if there is no way to say for sure whether the asymptotic behaviour is reached). For a probability $\alpha$ strictly between 0 and 1 chosen by the user, one can, for example, be sure with a confidence $\alpha$, that the true value of ${m}_{{Y}^{i}}$ is between ${\stackrel{^}{m}}_{i,inf}$ and ${\stackrel{^}{m}}_{i,sup}$ calculated analytically from simple formulae. To illustrate, for $\alpha =0.95$:

 $\begin{array}{c}\hfill {\stackrel{^}{m}}_{i,inf}={\stackrel{^}{m}}_{{Y}^{i}}-1.96\frac{{\stackrel{^}{\sigma }}_{{Y}^{i}}}{\sqrt{N}},\phantom{\rule{4pt}{0ex}}{\stackrel{^}{m}}_{i,sup}={\stackrel{^}{m}}_{{Y}^{i}}+1.96\frac{{\stackrel{^}{\sigma }}_{{Y}^{i}}}{\sqrt{N}},\phantom{\rule{4pt}{0ex}}\mathrm{that}\mathrm{is}\mathrm{to}\mathrm{say}\phantom{\rule{4pt}{0ex}}\mathrm{Pr}\left({\stackrel{^}{m}}_{i,inf}\le {m}_{{Y}^{i}}\le {\stackrel{^}{m}}_{i,sup}\right)=0.95\end{array}$

The size of the confidence interval, which represents the uncertainty of this mean estimation, decreases as $N$ increases, but only slowly (its length is proportional to $1/\sqrt{N}$: multiplying $N$ by 100 reduces the length of the confidence interval $\left|{\stackrel{^}{m}}_{i,inf}-{\stackrel{^}{m}}_{i,sup}\right|$ by a factor 10).
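These estimators and the 95 % confidence interval can be sketched in a few lines of Python (the model $h$ and the input distributions below are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)                       # reproducible sketch

def h(x1, x2):
    # illustrative model: Y = X1 + 2*X2 with X1, X2 ~ N(0, 1),
    # so the exact mean is 0 and the exact standard deviation is sqrt(5)
    return x1 + 2.0 * x2

N = 100000
y = [h(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(N)]

m_hat = sum(y) / N                                        # empirical mean
s_hat = math.sqrt(sum((v - m_hat) ** 2 for v in y) / N)   # empirical std

# 95% confidence interval on the mean (1.96 = 97.5% normal quantile)
half = 1.96 * s_hat / math.sqrt(N)
m_inf, m_sup = m_hat - half, m_hat + half

print(m_hat, s_hat, (m_inf, m_sup))
```

Rerunning with a larger $N$ shows the interval shrinking at the $1/\sqrt{N}$ rate described above.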

Other notations

Direct sampling, crude Monte Carlo method, Classical Monte Carlo integration

In the overall process, the Monte Carlo simulation method for estimating the variance appears in step C "Propagation of Uncertainty" when the study of uncertainty deals with the dispersion of the variable of interest ${Y}^{i}$ defined in step A "Specifying Criteria and the Case Study". To be more precise, this method requires that the following steps have previously been completed:
• step A: specification of input variables $\underline{X}$ and $\underline{d}$ and the output variable of interest $\underline{Y}=h\left(\underline{X},\underline{d}\right)$,

• step B: use of one of the proposed techniques for determining the probability distribution of the variable $\underline{X}$,

The method's parameters are the following:

• number $N$ of simulations,

• probability $\alpha$ giving the required confidence level for the confidence intervals,

The method described here returns the following results:

• the Monte-Carlo estimates ${\stackrel{^}{m}}_{{Y}^{i}}$ and ${\stackrel{^}{\sigma }}_{{Y}^{i}}$ for the mean and standard deviations of the variable of interest ${Y}^{i}$,

• the confidence interval $\left[{\stackrel{^}{m}}_{i,inf},{\stackrel{^}{m}}_{i,sup}\right]$ for the mean ${m}_{{Y}^{i}}$.

References and theoretical basics

The Monte-Carlo method does not require any assumptions on the form of the function $h$ which relates $\underline{X}$ and $\underline{Y}$, except that the expected value and standard deviation of ${Y}^{i}$ should exist (which is not the case, for example, if ${Y}^{i}$ follows the Cauchy distribution).

Actually, the only limitation resides in $N$, the number of simulations, which if not sufficiently high (because of the CPU time required for an evaluation of $\underline{Y}=h\left(\underline{X},\underline{d}\right)$), can result in too great an uncertainty in the estimates ${\stackrel{^}{m}}_{{Y}^{i}}$ and ${\stackrel{^}{\sigma }}_{{Y}^{i}}$. It is fitting then to verify the convergence of the estimators, especially by plotting the graph of the coefficient of variation ${\stackrel{^}{\sigma }}_{{Y}^{i}}/{\stackrel{^}{m}}_{{Y}^{i}}$ as a function of $N$: if convergence is not visible, it is necessary to increase $N$ or, if needed, to choose another propagation method to estimate the central uncertainty of $\underline{Y}$ (see [Quadratic combination / Perturbation method] ).

The following references provide a bibliographic starting point for interested readers for further study of the method described here:

• Robert C.P., Casella G. (2004). "Monte Carlo Statistical Methods", Springer, ISBN 0-387-21239-6, 2nd ed.

• Rubinstein R.Y. (1981). "Simulation and The Monte Carlo methods", John Wiley & Sons

• "Guide to the expression of Uncertainty in Measurements (GUM)", ISO publication

### 4.3.9 Step C  – Isoprobabilistic transformation preliminary to FORM-SORM methods

Mathematical description

Goal

The isoprobabilistic transformation is used in the following context: $\underline{X}$ is the input random vector, ${F}_{i}$ the cumulative distribution functions of its components and $C$ its copula.

Let us denote by $\underline{d}$ a deterministic vector, $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)=0$ its boundary.

One way to evaluate the probability content ${P}_{f}$ of the event ${𝒟}_{f}$:

 ${P}_{f}=ℙ\left(g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$ (105)

is to introduce an isoprobabilistic transformation $T$ which is a diffeomorphism from $supp\left(\underline{X}\right)$ into ${ℝ}^{n}$, such that the distribution of the random vector $\underline{U}=T\left(\underline{X}\right)$ has the following property: $\underline{U}$ and $\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{U}$ have the same distribution for all rotations $\underline{\underline{R}}\in {𝒮𝒫}_{n}\left(ℝ\right)$.

Such transformations exist; the most widely used ones are listed below.

Principle

If we suppose that the numerical model $g$ has suitable properties of differentiability, the evaluation of the probability (105) can be transformed into the evaluation of the probability:

 ${P}_{f}=ℙ\left(G\left(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{ℝ}^{n}}{1}_{G\left(\underline{u}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}\phantom{\rule{0.166667em}{0ex}}{f}_{\underline{U}}\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}$ (106)

where $T$ is a ${C}^{1}$-diffeomorphism called an isoprobabilistic transformation, ${f}_{\underline{U}}$ the probability density function of $\underline{U}=T\left(\underline{X}\right)$ and $G=g\circ {T}^{-1}$.

The vector $\underline{U}$ is said to be in the standard space, whereas $\underline{X}$ is in the physical space.

The interest of such a transformation is the rotational invariance of the distributions in the standard space: the random vector $\underline{U}$ has a spherical distribution, which means that the density function ${f}_{\underline{U}}$ is a function of $\parallel \underline{u}\parallel$. Thus, without loss of generality, it is possible to map the general failure domain $𝒟$ to a domain ${𝒟}^{\text{'}}$ for which the design point ${{\underline{u}}^{*}}^{\text{'}}$ (the point of the event boundary at minimal distance from the center of the standard space) is supported by the last axis (see Figure 4.3.9).

The following transformations verify that property, under some specific conditions on the dependence structure of the random vector $\underline{X}$ :

• the Nataf transformation (see [A. Nataf]) : $\underline{X}$ must have a normal copula,

• the Generalized Nataf transformation (see [R. Lebrun, A. Dutfoy, 2008, b]) : $\underline{X}$ must have an elliptical copula,

• the Rosenblatt transformation (see [M. Rosenblatt]) : there is no condition on the copula of $\underline{X}$ .

OpenTURNS automatically uses the Generalized Nataf transformation when the copula is elliptical and the Rosenblatt transformation in any other case.
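As a minimal illustration of the idea in the simplest situation, the independent case of the Rosenblatt transformation maps each component through ${U}^{i}={\Phi }^{-1}\left({F}_{i}\left({X}^{i}\right)\right)$. The sketch below assumes exponential marginals and independent components (with dependence, conditional distribution functions would be required instead); it is not OpenTURNS' own implementation:

```python
import math
from statistics import NormalDist

std_normal = NormalDist()        # provides Phi (cdf) and Phi^{-1} (inv_cdf)

def exp_cdf(x, lam):
    # cumulative distribution function of the Exponential(lam) marginal
    return 1.0 - math.exp(-lam * x)

def rosenblatt_independent(x, lams):
    # U^i = Phi^{-1}(F_i(x^i)), valid when the components are independent
    return [std_normal.inv_cdf(exp_cdf(xi, li)) for xi, li in zip(x, lams)]

u = rosenblatt_independent([0.5, 1.2], [1.0, 2.0])
print(u)
```

The resulting point lives in the standard space, where each component follows the standard normal distribution.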

Other notations

Within the global methodology, the isoprobabilistic transformation is used in the First and Second Order Reliability Method to evaluate the probability content of the event ${𝒟}_{f}$ (refer to [FORM] and [SORM] ).
References and theoretical basics
Depending on the isoprobabilistic transformation used, the standard space differs ([R. Lebrun, A. Dutfoy, 2008, b]): in the standard space of the Generalized Nataf transformation, distributions are spherical, with zero mean, unit variance and unit correlation matrix. The type of the spherical distribution is the type of the elliptical copula of the initial random vector $\underline{X}$. In the standard space of the Rosenblatt transformation, distributions are normal distributions with zero mean, unit variance and independent components (which is equivalent, in that normal case, to a unit correlation matrix).

The reference [R. Lebrun, A. Dutfoy, 2008, c] compares both isoprobabilistic transformations, Generalized Nataf and Rosenblatt. The conclusions are the following: in the case where the copula of the random vector $\underline{X}$ is normal, the canonical Rosenblatt transformation (i.e. when the conditioning order follows the natural order of the components) and the Nataf transformation are identical; if the conditioning order is not the canonical one, both transformations only differ by an orthogonal transformation.

In the case where the copula of the random vector $\underline{X}$ is elliptical without being normal, both transformations differ and both standard spaces cannot be compared.

Let's note some useful references:

• A. Der Kiureghian, P.L. Liu, 1986,"Structural Reliability Under Incomplete Probabilistic Information", Journal of Engineering Mechanics, vol 112, no. 1, pp85-104.

• R. Lebrun, A. Dutfoy, 2008, b, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics 24 (2009), pp. 172-178, doi:10.1016/j.probengmech.2008.05.001.

• R. Lebrun, A. Dutfoy, 2008, c, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, under review at the time of writing.

• A. Nataf, "Détermination des distributions de probabilités dont les marges sont données", Comptes Rendus de l'Académie des Sciences, 225, 42-43.

• M. Rosenblatt, 1952, "Remarks on a Multivariate Transformation", The Annals of Mathematical Statistics, Vol. 23, No 3, pp. 470-472.

Examples

Refer to [Generalized Nataf] or [Rosenblatt] to have some examples of isoprobabilistic transformations.

### 4.3.10 Step C  – Generalized Nataf Transformation

Mathematical description

Goal

The Generalized Nataf transformation is an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ) which is used in the following context: $\underline{X}$ is the input random vector, ${F}_{i}$ the cumulative distribution functions of its components and $C$ its copula, which is supposed to be elliptical.

Let us denote by $\underline{d}$ a deterministic vector, $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)=0$ its boundary.

One way to evaluate the probability content of the event ${𝒟}_{f}$:

 ${P}_{f}=ℙ\left(g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$ (107)

is to use the Generalized Nataf transformation $T$ which is a diffeomorphism from $supp\left(\underline{X}\right)$ into the standard space ${ℝ}^{n}$, where distributions are spherical, with zero mean, unit variance and unit correlation matrix. The type of the spherical distribution is the type of the elliptical copula $C$.

The Generalized Nataf transformation presented here is a generalisation of the traditional Nataf transformation (see [A. Nataf]) : the reference [R. Lebrun, A. Dutfoy a] shows that the Nataf transformation can be used only if the copula of $\underline{X}$ is normal. The Generalized Nataf transformation (see [R. Lebrun, A. Dutfoy b]) extends the Nataf transformation to elliptical copulas.

Principle

Let us recall some definitions.

A random vector $\underline{X}$ in ${ℝ}^{n}$ has an elliptical distribution if and only if there exists a deterministic vector $\underline{\mu }$ such that the characteristic function of $\underline{X}-\underline{\mu }$ is a scalar function of the quadratic form ${\underline{u}}^{t}\underline{\underline{\Sigma }}\phantom{\rule{0.166667em}{0ex}}\underline{u}$:

 $\begin{array}{c}\hfill {\varphi }_{\underline{X}-\underline{\mu }}\left(\underline{u}\right)=\psi \left({\underline{u}}^{t}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{\Sigma }}\phantom{\rule{0.166667em}{0ex}}\underline{u}\right)\end{array}$

with $\underline{\underline{\Sigma }}$ a symmetric positive definite matrix of rank $p$. As $\underline{\underline{\Sigma }}$ is symmetric positive, it can be written in the form $\underline{\underline{\Sigma }}=\underline{\underline{D}}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{D}}$, where $\underline{\underline{D}}$ is the diagonal matrix $\mathrm{diag}\left({\sigma }_{i}\right)$ with ${\sigma }_{i}=\sqrt{{\Sigma }_{ii}}$ and ${R}_{ij}=\frac{{\Sigma }_{ij}}{\sqrt{{\Sigma }_{ii}{\Sigma }_{jj}}}$.

With a specific choice of normalisation for $\psi$, in the case of finite second moment, the covariance matrix of $\underline{X}$ is $\underline{\underline{\Sigma }}$ and $\underline{\underline{R}}$ is then its linear correlation matrix. The matrix $\underline{\underline{R}}$ is always well-defined, even if the distribution has no finite second moment: even in this case, we call it the correlation matrix of the distribution. We note $\underline{\sigma }=\left({\sigma }_{1},\cdots ,{\sigma }_{n}\right)$.
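The decomposition $\underline{\underline{\Sigma }}=\underline{\underline{D}}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{D}}$ can be checked on a small example (the 2×2 matrix below is an arbitrary illustrative choice):

```python
import math

Sigma = [[4.0, 1.2],
         [1.2, 9.0]]     # illustrative symmetric positive definite matrix

# D = diag(sigma_i) with sigma_i = sqrt(Sigma_ii)
sigma = [math.sqrt(Sigma[i][i]) for i in range(2)]

# R_ij = Sigma_ij / (sigma_i * sigma_j): the correlation matrix
R = [[Sigma[i][j] / (sigma[i] * sigma[j]) for j in range(2)]
     for i in range(2)]

# check that D.R.D reconstructs Sigma
back = [[sigma[i] * R[i][j] * sigma[j] for j in range(2)]
        for i in range(2)]
print(R, back)
```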

We denote by ${E}_{\underline{\mu },\underline{\sigma },\underline{\underline{R}},\psi }$ the cumulative distribution function of the elliptical distribution ${ℰ}_{\underline{\mu },\underline{\sigma },\underline{\underline{R}},\psi }$.

An elliptical copula ${C}_{\underline{\underline{R}},\psi }^{E}$ is the copula of an elliptical distribution ${ℰ}_{\underline{\mu },\underline{\sigma },\underline{\underline{R}},\psi }$.

The generic elliptical representative of an elliptical distribution family ${ℰ}_{\underline{\mu },\underline{\sigma },\underline{\underline{R}},\psi }$ is the elliptical distribution whose cumulative distribution function is ${E}_{\underline{0},\underline{1},\underline{\underline{R}},\psi }$.

The standard spherical representative of an elliptical distribution family ${ℰ}_{\underline{\mu },\underline{\sigma },\underline{\underline{R}},\psi }$ is the spherical distribution whose cumulative distribution function is ${E}_{\underline{0},\underline{1},{\underline{\underline{I}}}_{n},\psi }$.

The distribution whose marginal cumulative distribution functions are ${F}_{1},\cdots ,{F}_{n}$ and whose copula is the elliptical copula ${C}_{\underline{\underline{R}},\psi }^{E}$ is denoted by ${𝒟}_{{F}_{1},\cdots ,{F}_{n},{C}_{\underline{\underline{R}},\psi }^{E}}$. Its cumulative distribution function is denoted ${D}_{{F}_{1},\cdots ,{F}_{n},{C}_{\underline{\underline{R}},\psi }^{E}}$.

The random vector $\underline{X}$ is supposed to be continuous and with full rank. It is also supposed that its cumulative marginal distribution functions ${F}_{i}$ are strictly increasing (so they are bijective) and that the matrix $\underline{\underline{R}}$ of its elliptical copula is symmetric positive definite.

Generalized Nataf transformation: Let $\underline{X}$ in ${ℝ}^{n}$ be a continuous random vector following the distribution ${D}_{{F}_{1},\cdots ,{F}_{n},{C}_{\underline{\underline{R}},\psi }^{E}}$. The Generalized Nataf transformation ${T}_{Nataf}^{gen}$ is defined by:

 $\underline{u}={T}_{Nataf}^{gen}\left(\underline{X}\right)={T}_{3}\circ {T}_{2}\circ {T}_{1}\left(\underline{X}\right)$ (108)

where the three transformations ${T}_{1}$, ${T}_{2}$ and ${T}_{3}$ are given by:

 $\begin{array}{c}\begin{array}{ccc}\hfill {T}_{1}:{ℝ}^{n}& \to & {ℝ}^{n}\hfill \\ \hfill \underline{x}& ↦& \underline{w}={\left({F}_{1}\left({x}_{1}\right),\cdots ,{F}_{n}\left({x}_{n}\right)\right)}^{t}\hfill \end{array}\hfill \\ \begin{array}{ccc}\hfill {T}_{2}:{ℝ}^{n}& \to & {ℝ}^{n}\hfill \\ \hfill \underline{w}& ↦& \underline{v}={\left({E}^{-1}\left({w}_{1}\right),\cdots ,{E}^{-1}\left({w}_{n}\right)\right)}^{t}\hfill \end{array}\hfill \\ \begin{array}{ccc}\hfill {T}_{3}:{ℝ}^{n}& \to & {ℝ}^{n}\hfill \\ \hfill \underline{v}& ↦& \underline{u}=\underline{\underline{\Gamma }}\phantom{\rule{0.166667em}{0ex}}\underline{v}\hfill \end{array}\hfill \end{array}$ (109)

where $E$ is the cumulative distribution function of the standard 1-dimensional elliptical distribution with characteristic generator $\psi$ and $\underline{\underline{\Gamma }}$ is the inverse of the Cholesky factor of $\underline{\underline{R}}$.
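The matrix $\underline{\underline{\Gamma }}$ of the step ${T}_{3}$ can be computed as sketched below (plain Python; helper names are ours). For a 2-dimensional $\underline{\underline{R}}$ with off-diagonal coefficient $\rho$, one recovers $\underline{\underline{\Gamma }}=\left(\begin{array}{cc}1& 0\\ -\rho /\sqrt{1-{\rho }^{2}}& 1/\sqrt{1-{\rho }^{2}}\end{array}\right)$:

```python
import math

def cholesky_lower(R):
    """Lower-triangular L such that L L^t = R (R must be SPD)."""
    n = len(R)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(R[i][i] - s)
            else:
                L[i][j] = (R[i][j] - s) / L[j][j]
    return L

def inverse_lower(L):
    """Invert a lower-triangular matrix by forward substitution."""
    n = len(L)
    G = [[0.0] * n for _ in range(n)]
    for j in range(n):
        G[j][j] = 1.0 / L[j][j]
        for i in range(j + 1, n):
            s = sum(L[i][k] * G[k][j] for k in range(j, i))
            G[i][j] = -s / L[i][i]
    return G

rho = 0.5
Gamma = inverse_lower(cholesky_lower([[1.0, rho], [rho, 1.0]]))
```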

The distribution of $\underline{W}={T}_{2}\circ {T}_{1}\left(\underline{X}\right)$ is the generic elliptical representative associated to the copula of $\underline{X}$. The step ${T}_{3}$ maps this distribution into its standard representative, following exactly the same algebra as in the normal copula case. Thus, in the Generalized Nataf standard space, the random vector $\underline{U}$ follows the standard representative distribution of the copula of the physical random vector $\underline{X}$.

If the copula of $\underline{X}$ is normal, $\underline{U}$ follows the standard normal distribution with independent components.

Other notations

Within the global methodology, the isoprobabilistic transformation is used in the First and Second Order Reliability Methods to evaluate the probability content of the event ${𝒟}_{f}$ (refer to [FORM] and [SORM] ). It is an alternative to the Rosenblatt transformation (refer to [Rosenblatt] ) in the case where the copula is elliptical (refer to [Isoprobabilistic Transformation] for more details on isoprobabilistic transformations).

OpenTURNS uses the Generalized Nataf transformation to map into the standard space any random vector whose copula is elliptical.

References and theoretical basics
Traditionally, the Nataf transformation is used in the case where the information on the random vector $\underline{X}$ is limited to the distributions of its components ${F}_{i}$ and its linear correlation matrix (or Pearson correlation coefficient, refer to [Pearson correlation coefficient] ). It is important to note that the Nataf transformation adds some information on the dependence structure by modelling it with a normal copula. This implicit hypothesis might have a great impact on the precision of the evaluation of the event probability (see [R. Lebrun, A. Dutfoy, 2008, a]). That is why the Nataf transformation should be used only after verifying that the normal copula is a proper model of the dependence structure of the random vector $\underline{X}$.

Furthermore, the normal copula of the random vector $\underline{X}$ is parameterized by its linear correlation matrix, which raises some problems because the linear correlation matrix is not a reliable indicator of the dependence between two correlated variables (see [R. Lebrun, A. Dutfoy, 2008, a]). Better adapted indicators of the dependence structure are the Spearman rank correlation coefficients (refer to [Spearman correlation coefficient] ), Kendall's tau or the upper and lower tail dependence coefficients (see [R. Lebrun, A. Dutfoy, 2008, c]).

Some useful references:

• O. Ditlevsen and H.O. Madsen, 2004, "Structural reliability methods," Department of Mechanical Engineering, Technical University of Denmark - Maritime engineering, internet publication.

• J. Goyet, 1998, "Sécurité probabiliste des structures - Fiabilité d'un élément de structure," Collège de Polytechnique.

• A. Der Kiureghian, P.L. Liu, 1986, "Structural Reliability Under Incomplete Probabilistic Information", Journal of Engineering Mechanics, vol 112, no. 1, pp. 85-104.

• R. Lebrun, A. Dutfoy, 2008, a, "An innovating analysis of the Nataf transformation from the copula viewpoint", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.08.001.

• R. Lebrun, A. Dutfoy, 2008, b, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.05.001.

• R. Lebrun, A. Dutfoy, 2008, c, "A practical approach to dependence modelling using copulas", submitted to the Risk and Reliability Journal in November 2008, under review.

• H.O. Madsen, Krenk, S., Lind, N. C., 1986, "Methods of Structural Safety," Prentice Hall.

• A. Nataf, "Détermination des distributions de probabilités dont les marges sont données", Comptes Rendus de l'Académie des Sciences, 225, 42-43.

Examples

Let us consider a 2-dimensional random vector $\underline{X}=\left({X}_{1},{X}_{2}\right)$ defined by its marginal cumulative distribution functions $\left({F}^{1},{F}^{2}\right)$ and its copula $C$. We suppose that each component follows an exponential distribution defined as ${X}_{1}\sim ℰ\left({\lambda }_{1}\right)$ and ${X}_{2}\sim ℰ\left({\lambda }_{2}\right)$. The copula ${C}_{\underline{\underline{R}}}^{N}$ is supposed to be normal, where $\underline{\underline{R}}=\left(\begin{array}{cc}1\phantom{\rule{1.em}{0ex}}& \rho \\ \rho \phantom{\rule{1.em}{0ex}}& 1\end{array}\right)$.

We consider the following limit state function $g$ defined by:

 $g\left({X}_{1},{X}_{2}\right)=8{X}_{1}+2{X}_{2}-1$ (110)

The Generalized Nataf transformation (which is equivalent to the traditional Nataf transformation here since the copula is normal) leads to the normal random vector $\underline{U}$ defined as:

 $\underline{U}=\underline{\underline{\Gamma }}\left(\begin{array}{c}{\Phi }^{-1}\circ {F}^{1}\left({X}_{1}\right)\hfill \\ {\Phi }^{-1}\circ {F}^{2}\left({X}_{2}\right)\hfill \end{array}\right)$ (111)

As $\underline{\underline{\Gamma }}=\left(\begin{array}{cc}1& 0\\ -\frac{\rho }{\sqrt{1-{\rho }^{2}}}& \frac{1}{\sqrt{1-{\rho }^{2}}}\end{array}\right)$, we have:

 $\left\{\begin{array}{ccc}{U}_{1}\hfill & =& {\Phi }^{-1}\circ {F}^{1}\left({X}_{1}\right)\hfill \\ {U}_{2}\hfill & =& -\frac{\rho {\Phi }^{-1}\circ {F}^{1}\left({X}_{1}\right)}{\sqrt{1-{\rho }^{2}}}+\frac{{\Phi }^{-1}\circ {F}^{2}\left({X}_{2}\right)}{\sqrt{1-{\rho }^{2}}}\hfill \end{array}\right\$ (112)

The limit state surface has the following parametric expression in the standard space, where $\xi \in \left[0,1/8\right]$ so that ${x}_{2}=\left(1-8\xi \right)/2$ remains in the support of ${X}_{2}$:

 $\left\{\begin{array}{ccc}{u}_{1}\hfill & =& {\Phi }^{-1}\circ {F}^{1}\left(\xi \right)\hfill \\ {u}_{2}\hfill & =& \frac{{\Phi }^{-1}\circ {F}^{2}\left(\frac{1-8\xi }{2}\right)-\rho {\Phi }^{-1}\circ {F}^{1}\left(\xi \right)}{\sqrt{1-{\rho }^{2}}}\hfill \end{array}\right\$ (113)
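The transformation (112) can be checked numerically. The sketch below (plain Python; the standard library's `NormalDist.inv_cdf` plays the role of ${\Phi }^{-1}$, and the numerical values are ours, chosen for illustration) maps a point of the physical space to the standard space:

```python
import math
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf  # quantile function of the standard normal

def nataf_2d(x1, x2, lambda1, lambda2, rho):
    """Eq. (112): map (X1, X2) with exponential marginals
    F_i(x) = 1 - exp(-lambda_i x) and a normal copula of
    correlation rho to the standard space."""
    v1 = Phi_inv(-math.expm1(-lambda1 * x1))  # Phi^-1(F^1(x1))
    v2 = Phi_inv(-math.expm1(-lambda2 * x2))  # Phi^-1(F^2(x2))
    u1 = v1
    u2 = (v2 - rho * v1) / math.sqrt(1.0 - rho * rho)
    return u1, u2

# illustrative values, not taken from the text
u1, u2 = nataf_2d(0.5, 0.2, lambda1=1.0, lambda2=3.0, rho=0.5)
```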

### 4.3.11 Step C  – Rosenblatt Transformation

Mathematical description

Goal

The Rosenblatt transformation is an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ) which is used in the following context : $\underline{X}$ is the input random vector, ${F}_{i}$ the cumulative distribution functions of its components and $C$ its copula, with no condition on its type.

Let us denote by $\underline{d}$ a deterministic vector, $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)=0$ its boundary.

One way to evaluate the probability content of the event ${𝒟}_{f}$:

 ${P}_{f}=ℙ\left(g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$ (114)

is to use the Rosenblatt transformation $T$ which is a diffeomorphism from $supp\left(\underline{X}\right)$ into the standard space ${ℝ}^{n}$, where distributions are normal, with zero mean, unit variance and unit correlation matrix (which is equivalent in that normal case to independent components).

Principle

Let us recall some definitions.

The cumulative distribution function ${F}_{1,k}$ of the $k$-dimensional random vector $\left({X}_{1},\cdots ,{X}_{k}\right)$ is defined by its marginal distributions ${F}_{i}$ and the copula ${C}_{1,k}$ through the relation :

 ${F}_{1,k}\left({x}_{1},\cdots ,{x}_{k}\right)={C}_{1,k}\left({F}_{1}\left({x}_{1}\right),\cdots ,{F}_{k}\left({x}_{k}\right)\right)$ (115)

with

 ${C}_{1,k}\left({u}_{1},\cdots ,{u}_{k}\right)=C\left({u}_{1},\cdots ,{u}_{k},1,\cdots ,1\right)$ (116)

The cumulative distribution function of the conditional variable ${X}_{k}|{X}_{1},\cdots ,{X}_{k-1}$ is defined by:

 ${F}_{k|1,\cdots ,k-1}\left({x}_{k}|{x}_{1},\cdots ,{x}_{k-1}\right)=\frac{{\partial }^{k-1}{F}_{1,k}\left({x}_{1},\cdots ,{x}_{k}\right)}{\partial {x}_{1}\cdots \partial {x}_{k-1}}/\frac{{\partial }^{k-1}{F}_{1,k-1}\left({x}_{1},\cdots ,{x}_{k-1}\right)}{\partial {x}_{1}\cdots \partial {x}_{k-1}}$ (117)

Rosenblatt transformation: Let $\underline{X}$ in ${ℝ}^{n}$ be a continuous random vector defined by its marginal cumulative distribution functions ${F}_{i}$ and its copula $C$. The Rosenblatt transformation ${T}_{Ros}$ of $\underline{X}$ is defined by:

 $\underline{U}={T}_{Ros}\left(\underline{X}\right)={T}_{2}\circ {T}_{1}\left(\underline{X}\right)$ (118)

where both transformations ${T}_{1}$, and ${T}_{2}$ are given by:

 $\begin{array}{ccc}\hfill {T}_{1}:{ℝ}^{n}& \to & {ℝ}^{n}\hfill \\ \hfill \underline{X}& ↦& \underline{Y}=\left(\begin{array}{c}{F}_{1}\left({X}_{1}\right)\hfill \\ \cdots \hfill \\ {F}_{k|1,\cdots ,k-1}\left({X}_{k}|{X}_{1},\cdots ,{X}_{k-1}\right)\hfill \\ \cdots \hfill \\ {F}_{n|1,\cdots ,n-1}\left({X}_{n}|{X}_{1},\cdots ,{X}_{n-1}\right)\hfill \end{array}\right)\hfill \end{array}$ (119)
 $\begin{array}{ccc}\hfill {T}_{2}:{ℝ}^{n}& \to & {ℝ}^{n}\hfill \\ \hfill \underline{Y}& ↦& \underline{U}=\left(\begin{array}{c}{\Phi }^{-1}\left({Y}_{1}\right)\hfill \\ \cdots \hfill \\ {\Phi }^{-1}\left({Y}_{n}\right)\hfill \end{array}\right)\hfill \end{array}$ (120)

where ${F}_{k|1,\cdots ,k-1}$ is the cumulative distribution function of the conditional random variable ${X}_{k}|{X}_{1},\cdots ,{X}_{k-1}$ and $\Phi$ the cumulative distribution function of the standard 1-dimensional Normal distribution.

Other notations

Within the global methodology, the Rosenblatt transformation is used in the First and Second Order Reliability Methods to evaluate the probability content of the event ${𝒟}_{f}$ (refer to [FORM] and [SORM] ). It is an alternative to the Generalized Nataf transformation ( [Generalized Nataf] ) in the case where the copula is not elliptical (refer to [Isoprobabilistic Transformation] for more details on isoprobabilistic transformations).

OpenTURNS uses the Rosenblatt transformation to map into the standard space any random vector whose copula is not elliptical (for which the Generalized Nataf transformation cannot be used).

References and theoretical basics
The first step of the Rosenblatt transformation requires a choice of the conditioning order (in the ${T}_{1}$ transformation). The order presented here is called the canonical order as it follows the natural order of the components. It is interesting to study the impact of that choice on the Rosenblatt standard space and thus on the FORM and SORM approximations of the event probability.

The article [R. Lebrun, A. Dutfoy, 2008, c] shows that if the initial vector $\underline{X}$ has a normal copula, the effect of changing the order of conditioning in the Rosenblatt transformation is to apply an orthogonal transformation to the canonical Rosenblatt standard space. In the context of the FORM or SORM approximations, it will change the location of the design point, but due to the rotational invariance of the distribution of $\underline{U}$ in the standard space (this distribution is spherical), it will not change the key quantities evaluated using these approximations :

• the Hohenbichler reliability index, which is the distance of the limit state surface from the center of the standard space,

• the FORM approximation of the probability of the failure space which relies only on the Hohenbichler reliability index,

• the SORM approximations of the probability of the failure space, which rely on both the Hohenbichler reliability index and the curvatures of the limit state function at the design point (the point of the failure space the nearest the center of the standard space).

If the copula of the input vector $\underline{X}$ is not normal, the conditioning order generally has an impact on the results given by the FORM and SORM approximations.

Let us recall that only the approximate values are impacted by the conditioning order, not the exact value of the probability : whatever the chosen conditioning order, the associated Rosenblatt transformation remains an isoprobabilistic transformation!

Some useful references:

• O. Ditlevsen and H.O. Madsen, 2004, "Structural reliability methods," Department of Mechanical Engineering, Technical University of Denmark - Maritime engineering, internet publication.

• J. Goyet, 1998,"Sécurité probabiliste des structures - Fiabilité d'un élément de structure," Collège de Polytechnique.

• A. Der Kiureghian, P.L. Liu, 1986, "Structural Reliability Under Incomplete Probabilistic Information", Journal of Engineering Mechanics, vol 112, no. 1, pp. 85-104.

• R. Lebrun, A. Dutfoy, 2008, a, "An innovating analysis of the Nataf transformation from the copula viewpoint", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.08.001.

• R. Lebrun, A. Dutfoy, 2008, b, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.05.001.

• R. Lebrun, A. Dutfoy, 2008, c, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, under review.

• H.O. Madsen, Krenk, S., Lind, N. C., 1986, "Methods of Structural Safety," Prentice Hall.

• A. Nataf, "Détermination des distributions de probabilités dont les marges sont données", Comptes Rendus de l'Académie des Sciences, 225, 42-43.

• M. Rosenblatt, "Remarks on a Multivariate Transformation", The Annals of Mathematical Statistics, Vol. 23, No 3, pp. 470-472.

Examples

Let us consider a 2-dimensional random vector $\underline{X}=\left({X}_{1},{X}_{2}\right)$ defined by its marginal cumulative distribution functions $\left({F}_{1},{F}_{2}\right)$ and its copula $C$. We suppose that each component follows an exponential distribution defined as ${X}_{1}\sim ℰ\left({\lambda }_{1}\right)$ and ${X}_{2}\sim ℰ\left({\lambda }_{2}\right)$. The copula ${C}_{\theta }$ is supposed to be a Frank copula, where $\theta \in ℝ$, which is defined by :
 ${C}_{\theta }\left(u,v\right)=-\frac{1}{\theta }ln\left(1+\frac{\left({e}^{-\theta u}-1\right)\left({e}^{-\theta v}-1\right)}{{e}^{-\theta }-1}\right),\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}\left(u,v\right)\in {\left[0,1\right]}^{2}$ (121)

This copula is not elliptical but Archimedean.

The cumulative distribution function of the conditional variable ${X}_{2}|{X}_{1}$ is defined by :

 ${F}_{2|1}\left({x}_{2}|{x}_{1}\right)={C}_{2|1}\left({F}_{2}\left({x}_{2}\right)|{F}_{1}\left({x}_{1}\right)\right)={C}_{2|1}\left(1-{e}^{-{\lambda }_{2}{x}_{2}},1-{e}^{-{\lambda }_{1}{x}_{1}}\right)$ (122)

where the conditional copula of ${X}_{2}|{X}_{1}$ is defined by :

 ${C}_{2|1}\left(v|u\right)=-\frac{{e}^{-\theta u}\left({e}^{-\theta v}-1\right)}{{e}^{-\theta v}-{e}^{-\theta u}\left({e}^{-\theta v}-1\right)-{e}^{-\theta }}$ (123)

The Rosenblatt transformation leads to the normal random vector $\underline{U}$ defined as:

 $\underline{U}=\left(\begin{array}{c}{\Phi }^{-1}\left(1-{e}^{-{\lambda }_{1}{x}_{1}}\right)\hfill \\ {\Phi }^{-1}\left[{C}_{2|1}\left(1-{e}^{-{\lambda }_{2}{x}_{2}},1-{e}^{-{\lambda }_{1}{x}_{1}}\right)\right]\hfill \end{array}\right)$ (124)
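A minimal numerical sketch of this transformation (plain Python; the function names and the evaluation point are ours). The conditional copula of eq. (123) can also be cross-checked against a finite-difference derivative of eq. (121):

```python
import math
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf  # quantile function of the standard normal

def frank_copula(u, v, theta):
    """Frank copula C_theta(u, v), eq. (121)."""
    return -1.0 / theta * math.log(
        1.0 + math.expm1(-theta * u) * math.expm1(-theta * v)
        / math.expm1(-theta))

def frank_conditional(v, u, theta):
    """Conditional copula C_{2|1}(v|u) = dC_theta/du, eq. (123)."""
    eu, ev, e1 = math.exp(-theta * u), math.exp(-theta * v), math.exp(-theta)
    return -eu * (ev - 1.0) / (ev - eu * (ev - 1.0) - e1)

def rosenblatt(x1, x2, lambda1, lambda2, theta):
    """Rosenblatt transformation of eq. (124), exponential marginals."""
    w1 = -math.expm1(-lambda1 * x1)   # F_1(x1) = 1 - exp(-lambda_1 x1)
    w2 = -math.expm1(-lambda2 * x2)   # F_2(x2) = 1 - exp(-lambda_2 x2)
    return Phi_inv(w1), Phi_inv(frank_conditional(w2, w1, theta))

u1, u2 = rosenblatt(1.0, 0.5, lambda1=1.0, lambda2=3.0, theta=10.0)
```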

If we consider the following limit state function $g$ defined by:

 $g\left({X}_{1},{X}_{2}\right)=8{X}_{1}+2{X}_{2}-1$ (125)

it is possible to draw the transformations of the limit state surface into the standard space when using the canonical order in the Rosenblatt transformation and its inverse. The following figure draws them in the case where $\theta =10$, ${\lambda }_{1}=1$ and ${\lambda }_{2}=3$.

### 4.3.12 Step C  – FORM

Mathematical description

Goal

The First Order Reliability Method is used under the following context: $\underline{X}$ is the input random vector and ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density function.

Let us denote by $\underline{d}$ a deterministic vector, $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)=0$ its boundary.

The objective of FORM is to evaluate the probability content of the event ${𝒟}_{f}$:

 ${P}_{f}=ℙ\left(g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$ (126)

Principle

The method proceeds in three steps :

1. Map the probabilistic model in terms of $\underline{X}$ thanks to an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ) $T$ which is a diffeomorphism from $supp\left(\underline{X}\right)$ into ${ℝ}^{n}$, such that the distribution of the random vector $\underline{U}=T\left(\underline{X}\right)$ has the following properties : $\underline{U}$ and $\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{U}$ have the same distribution for all rotations $\underline{\underline{R}}\in {𝒮𝒫}_{n}\left(ℝ\right)$.

The usual isoprobabilistic transformations are the Generalized Nataf transformation (refer to [Generalized Nataf] ) and the Rosenblatt one (refer to [Rosenblatt] ).

The mapping of the limit state function is $G\left(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)=g\left({T}^{-1}\left(\underline{U}\right)\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$. The event probability ${P}_{f}$ then becomes :

 ${P}_{f}=ℙ\left(G\left(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{ℝ}^{n}}{1}_{G\left(\underline{u}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}\phantom{\rule{0.166667em}{0ex}}{f}_{\underline{U}}\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}$ (127)

where ${f}_{\underline{U}}$ is the density function of the distribution in the standard space : that distribution is spherical (invariant by rotation, by definition). That property implies that ${f}_{\underline{U}}$ is a function of $||\underline{U}{||}^{2}$ only. Furthermore, we suppose that ${f}_{\underline{U}}$ is decreasing outside the sphere tangent to the limit state surface in the standard space.

2. Find the design point ${\underline{u}}^{*}$, which is the point of maximum likelihood within the failure event : the hypothesis that the standard density ${f}_{\underline{U}}$ decreases outside the sphere tangent to the limit state surface implies that the design point is the point of the limit state boundary nearest to the origin of the standard space. Thus, ${\underline{u}}^{*}$ is the result of a constrained optimization problem.

3. Approximate the limit state surface in the standard space by a linear surface (hyperplane) at the design point ${\underline{u}}^{*}$. The probability ${P}_{f}$ (127), with the limit state surface replaced by this hyperplane, can then be obtained exactly, thanks to the rotation invariance of the standard distribution ${f}_{\underline{U}}$ :

 ${P}_{f,FORM}=\left|\begin{array}{cc}E\left(-{\beta }_{HL}\right)\hfill & \text{if}\phantom{\rule{4.pt}{0ex}}\text{the}\phantom{\rule{4.pt}{0ex}}\text{origin}\phantom{\rule{4.pt}{0ex}}\text{of}\phantom{\rule{4.pt}{0ex}}\text{the}\phantom{\rule{4.pt}{0ex}}\underline{u}\text{-space}\phantom{\rule{4.pt}{0ex}}\text{does}\phantom{\rule{4.pt}{0ex}}\text{not}\phantom{\rule{4.pt}{0ex}}\text{lie}\phantom{\rule{4.pt}{0ex}}\text{in}\phantom{\rule{4.pt}{0ex}}\text{the}\phantom{\rule{4.pt}{0ex}}\text{domain}\phantom{\rule{4.pt}{0ex}}{𝒟}_{f}\hfill \\ E\left(+{\beta }_{HL}\right)\hfill & \text{otherwise}\hfill \end{array}\right$ (128)

where ${\beta }_{HL}$ is the Hasofer-Lind reliability index (refer to [Reliability index] ), defined as the distance of the design point ${\underline{u}}^{*}$ to the origin of the standard space, and $E$ the marginal cumulative distribution function of the spherical distribution in the standard space.

Let us recall here (refer to [Isoprobabilistic Transformation] ) that in the Rosenblatt standard space, random vectors follow the standard normal distribution (with zero mean, unit variance and unit correlation matrix), which implies that $E=\Phi$. In the Generalized Nataf standard space, random vectors follow a spherical distribution, with zero mean, unit variance and unit correlation matrix, whose type $\psi$ is that of the copula of the physical random vector $\underline{X}$ : in that case, $E$ is the 1D cumulative distribution function with zero mean, unit variance and type $\psi$.

Other notations

Here, the event considered is expressed directly from the limit state function $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$ : this is the classical structural reliability formulation.

However, if the event is a threshold exceedance, it is useful to make explicit the variable of interest $Z=\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$, evaluated from the model $\stackrel{˜}{g}\left(.\right)$. In that case, the event considered, associated with the threshold ${z}_{s}$, has the formulation: ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}Z=\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)>{z}_{s}\right\}$ and the limit state function is : $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)={z}_{s}-Z={z}_{s}-\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$. ${P}_{f}$ is the threshold exceedance probability, defined as : ${P}_{f}=P\left(Z\ge {z}_{s}\right)={\int }_{g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$.
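This reformulation amounts to a simple wrapper around the model (a sketch; the names and the model are hypothetical, chosen for illustration):

```python
def limit_state_from_model(model, z_s):
    """Recast the threshold exceedance event Z = model(x) > z_s into the
    structural reliability form g(x) <= 0, with g = z_s - model(x)."""
    return lambda x: z_s - model(x)

# hypothetical model and threshold, for illustration only
g = limit_state_from_model(lambda x: x[0] + x[1], 3.0)
# g([1, 1]) = 1 > 0 (safe), g([2, 2]) = -1 <= 0 (failure)
```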

Within the global methodology, the First Order Reliability Method is used in the step C: "Uncertainty propagation" in the case of the evaluation of the probability of an event by an approximation method.

It requires to have fulfilled the following steps beforehand:

• step A: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\stackrel{˜}{g}\left(\underline{X},\underline{d}\right)$, result of the model $\stackrel{˜}{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or equivalently a failure event $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0$,

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,

• step C: select an appropriate optimization algorithm among those proposed.

The First Order Reliability Method provides the following results:

• the FORM probability calculated by eq. (128),

• the importance factors associated to the event (refer to [Importance Factors] ),

• if asked by the user, the sensitivity factors associated to the event (refer to [Sensitivity Factors] ).

References and theoretical basics

One is usually interested in the evaluation of a very small probability ${P}_{f}$ where the evaluation of the limit state function of the model requires computationally expensive subroutines. The FORM method has been designed specifically for such cases for which simulation techniques (see for instance [standard sampling approach] ) are computationally prohibitive.

The quality of the results obtained by the First Order Reliability Method depends on:

• the quality of the optimization algorithm used to find the design point: it is important that the optimization converges towards the global minimum of the distance function

• the quality of the computation of the gradients of the limit state function. It is important to choose an optimization algorithm adapted to the model considered

• the quality of the design point in the $\underline{u}$-space, which covers several aspects:

• the shape of the limit state surface: the boundary is supposed to be well approximated by a plane near the design point,

• the unicity of the design point in the $\underline{u}$-space: FORM is valid when there is only one point on the limit state surface at minimal distance from the origin,

• the strongness of the design point: FORM is valid under the hypothesis that most of the contribution to ${P}_{f}$ is concentrated in the vicinity of the design point, which is the case when, around ${\underline{P}}^{*}$, the contribution decreases rapidly with the distance to ${\underline{P}}^{*}$, and when there is no local maximum with a comparable density.

The first hypothesis can be checked by testing another method to evaluate ${P}_{f}$ : SORM (refer to [SORM] ), which takes into account the curvatures of the surface, or importance sampling techniques (refer to [Importance sampling] ), which make no hypothesis on the shape of the surface.

The unicity and the strongness of the design point can be checked thanks to the Strong Maximum Test (refer to [Strong Max Test] ).

Accelerated sampling techniques such as directional sampling (refer to [Directional sampling] ) are also still valid if the unicity or strongness are doubtful.

A limitation of the FORM (or SORM) approximation is that it is generally impossible to quantify the approximation error. Although the method has been used satisfactorily in many circumstances, it is generally useful, if computationally possible, to validate FORM/SORM using at least one of the techniques mentioned above.

Some useful references:

• O. Ditlevsen and H.O. Madsen, 2004, "Structural reliability methods," Department of Mechanical Engineering, Technical University of Denmark - Maritime engineering, internet publication.

• R. Lebrun, A. Dutfoy, 2008, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics 24 (2009), pp. 172-178, doi:10.1016/j.probengmech.2008.05.001.

• R. Lebrun, A. Dutfoy, 2008, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, tentatively accepted.

• H. O. Madsen, Krenk, S., Lind, N. C., 1986, "Methods of Structural Safety," Prentice Hall.

Examples

Example 1 : Let's apply this method to the following analytical example which considers a cantilever beam, of Young's modulus E, length L and moment of inertia I. A concentrated force F is applied at the free end of the beam. The vertical displacement $y$ of the free end is equal to :
 $\begin{array}{c}\hfill y\left(E,F,L,I\right)=\frac{F{L}^{3}}{3EI}\end{array}$

The objective is to propagate the uncertainties of the variables $\left(E,F,L,I\right)$ to $y$.

The input random vector is $\underline{X}=\left(E,F,L,I\right)$, whose probabilistic model is (units are not provided):

 $\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal\left(50,1\right)\hfill \\ F\hfill & =& Normal\left(1,1\right)\hfill \\ L\hfill & =& Normal\left(10,1\right)\hfill \\ I\hfill & =& Normal\left(5,1\right)\hfill \end{array}\right\\end{array}$

The event considered is the threshold exceedance : ${𝒟}_{f}=\left\{\left(E,F,L,I\right)\in {ℝ}^{4}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}y\left(E,F,L,I\right)\ge 3\right\}$. We obtain the following results, where the Generalized Nataf transformation has been used :

• design point in the $\underline{x}$-space, ${P}^{*}=\left({E}^{*}=49.97,{F}^{*}=1.842,{L}^{*}=10.45,{I}^{*}=4.668\right)$

• the generalized and the Hasofer-Lind reliability indices : ${\beta }_{g}={\beta }_{HL}=1.009$

• the FORM probability : ${P}_{f,FORM}=1.564\phantom{\rule{0.166667em}{0ex}}{10}^{-1}$
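Assuming independent components (so that the standard-space mapping reduces to ${u}_{i}={x}_{i}-{\mu }_{i}$, all standard deviations being 1), the design-point search of this example can be sketched with the classical Hasofer-Lind-Rackwitz-Fiessler iteration (plain Python; OpenTURNS actually lets the user choose among several optimization algorithms for this step):

```python
import math
from statistics import NormalDist

MU = (50.0, 1.0, 10.0, 5.0)   # means of (E, F, L, I); all sigmas equal 1

def G(u):
    """Limit state in the standard space: failure when y(E,F,L,I) >= 3."""
    E, F, L, I = (m + ui for m, ui in zip(MU, u))
    return 3.0 - F * L**3 / (3.0 * E * I)

def grad(f, u, h=1e-6):
    """Central finite-difference gradient."""
    return [(f(u[:i] + [u[i] + h] + u[i+1:]) -
             f(u[:i] + [u[i] - h] + u[i+1:])) / (2.0 * h)
            for i in range(len(u))]

# HLRF iteration: u_{k+1} = ((grad.u - G(u)) / |grad|^2) * grad
u = [0.0, 0.0, 0.0, 0.0]
for _ in range(100):
    g_val, g_grad = G(u), grad(G, u)
    factor = (sum(c * ui for c, ui in zip(g_grad, u)) - g_val) \
             / sum(c * c for c in g_grad)
    u = [factor * c for c in g_grad]

beta = math.sqrt(sum(ui * ui for ui in u))   # ~ 1.009, as in the text
p_form = NormalDist().cdf(-beta)             # ~ 1.564e-1, as in the text
```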

Example 2 : If we evaluate the FORM probability in the example described in [Rosenblatt] , in the case where $\theta =10$, ${\lambda }_{1}=1$ and ${\lambda }_{2}=3$, and where we consider the canonical conditioning order and its inverse in the Rosenblatt transformation, we obtain :

 $\left\{\begin{array}{ccc}{\beta }_{CanOrd}& =& 1.24\\ {\beta }_{InvOrd}& =& 1.17\end{array}\right.$ (129)

which leads to different FORM approximations of the failure probability:

 $\left\{\begin{array}{ccc}{P}_{CanOrd}^{FORM}& =& 1.07\,{10}^{-1}\\ {P}_{InvOrd}^{FORM}& =& 1.22\,{10}^{-1}\end{array}\right.$ (130)

### 4.3.13 Step C  – SORM

Mathematical description

Goal

The Second Order Reliability Method is used in the same context as the First Order Reliability Method: refer to [FORM] for further details. The objective of SORM is to evaluate the probability content of the event ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\,/\,g\left(\underline{X},\underline{d}\right)\le 0\right\}$ :

 ${P}_{f}=ℙ\left(g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$ (131)

Principle

The principle is the same as for FORM. After having mapped the physical space into the standard through an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ), eq. (131) becomes :

 ${P}_{f}=ℙ\left(G\left(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right)={\int }_{{ℝ}^{n}}{1}_{G\left(\underline{u}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}\phantom{\rule{0.166667em}{0ex}}{f}_{\underline{U}}\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}$ (132)

where ${f}_{\underline{U}}$ is the density function of the distribution in the standard space : that distribution is spherical (invariant under rotation by definition). That property implies that ${f}_{\underline{U}}$ is a function of $||\underline{U}{||}^{2}$ only. Furthermore, we suppose that ${f}_{\underline{U}}$ is decreasing outside the sphere tangent to the limit state surface in the standard space.

The difference with FORM comes from the approximation of the limit state surface at the design point ${\underline{P}}^{*}$ in the standard space : SORM approximates it by a quadratic surface that has the same main curvatures at the design point.

Let us denote by $n$ the dimension of the random vector $\underline{X}$ and ${\left({\kappa }_{i}\right)}_{1\le i\le n-1}$ the $n-1$ main curvatures of the limit state function at the design point in the standard space.

Several approximations are available in the standard version of OpenTURNS, detailed here in the case where the origin of the standard space does not belong to the failure domain :

• Breitung's formula is an asymptotic result (see [Breitung]) : the usual formula used in the normal standard space has been generalized in [R. Lebrun, A. Dutfoy, 2008, b] to standard spaces where the distribution is spherical, with $E$ the marginal cumulative distribution function of the spherical distribution in the standard space (see [Generalized Nataf] ) :

 ${P}_{Breitung}^{generalized}\stackrel{\beta \to \infty }{=}E\left(-\beta \right)\prod _{i=1}^{n-1}\frac{1}{\sqrt{1+\beta {\kappa }_{i}}}$ (133)

In the normal standard space, $E=\Phi$, the cumulative distribution function of the standard 1D normal distribution, and eq. (133) reduces to the usual Breitung formula.

• Hohenbichler's formula is an approximation of equation (133):

 ${P}_{Hohenbichler}=\Phi \left(-{\beta }_{HL}\right)\prod _{i=1}^{n-1}{\left(1+\frac{\phi \left(-{\beta }_{HL}\right)}{\Phi \left(-{\beta }_{HL}\right)}{\kappa }_{i}\right)}^{-1/2}$ (134)

This formula is valid only in the normal standard space and if $\forall i,1+\frac{\phi \left(-{\beta }_{HL}\right)}{\Phi \left(-{\beta }_{HL}\right)}{\kappa }_{i}>0$.

• Tvedt's formula (Tvedt, 1988) :

 $\left\{\begin{array}{ccc}{P}_{Tvedt}& =& {A}_{1}+{A}_{2}+{A}_{3}\\ {A}_{1}& =& \Phi \left(-{\beta }_{HL}\right)\prod _{j=1}^{N-1}{\left(1+{\beta }_{HL}{\kappa }_{j}\right)}^{-1/2}\\ {A}_{2}& =& \left[{\beta }_{HL}\Phi \left(-{\beta }_{HL}\right)-\phi \left({\beta }_{HL}\right)\right]\left[\prod _{j=1}^{N-1}{\left(1+{\beta }_{HL}{\kappa }_{j}\right)}^{-1/2}-\prod _{j=1}^{N-1}{\left(1+\left(1+{\beta }_{HL}\right){\kappa }_{j}\right)}^{-1/2}\right]\\ {A}_{3}& =& \left(1+{\beta }_{HL}\right)\left[{\beta }_{HL}\Phi \left(-{\beta }_{HL}\right)-\phi \left({\beta }_{HL}\right)\right]\left[\prod _{j=1}^{N-1}{\left(1+{\beta }_{HL}{\kappa }_{j}\right)}^{-1/2}-ℛe\left(\prod _{j=1}^{N-1}{\left(1+\left(i+{\beta }_{HL}\right){\kappa }_{j}\right)}^{-1/2}\right)\right]\end{array}\right.$ (135)

where $ℛe\left(z\right)$ is the real part of the complex number $z$ and $i$ the complex number such that ${i}^{2}=-1$ and $\Phi$ the cumulative distribution function of the standard 1D normal distribution.

This formula is valid only in the normal standard space and if $\forall j,1+{\beta }_{HL}{\kappa }_{j}>0$ and $\forall j,1+\left(1+{\beta }_{HL}\right){\kappa }_{j}>0$.
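The three approximations above can be sketched directly from eqs. (133), (134) and (135) in the normal standard space, where $E=\Phi$. The following plain-Python sketch takes the Hasofer-Lind index and the $n-1$ main curvatures as inputs; the function names are illustrative and not part of the OpenTURNS API:

```python
import math

def _prod(kappas, shift):
    """Product of (1 + shift*kappa_j)^(-1/2) over the n-1 main curvatures.

    Computed with complex arithmetic so that the complex shift (i + beta)
    of Tvedt's third term is handled uniformly."""
    p = complex(1.0, 0.0)
    for k in kappas:
        p *= (1.0 + shift * k) ** (-0.5)
    return p

def Phi(x):
    """CDF of the standard 1D normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pdf(x):
    """PDF of the standard 1D normal distribution."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def sorm_breitung(beta, kappas):
    # eq. (133) in the normal standard space, where E = Phi
    return Phi(-beta) * _prod(kappas, beta).real

def sorm_hohenbichler(beta, kappas):
    # eq. (134); the pdf is even, so phi(-beta) == phi(beta)
    r = pdf(beta) / Phi(-beta)
    return Phi(-beta) * _prod(kappas, r).real

def sorm_tvedt(beta, kappas):
    # eq. (135); i is the imaginary unit in the third term
    a1 = Phi(-beta) * _prod(kappas, beta).real
    c = beta * Phi(-beta) - pdf(beta)
    a2 = c * (_prod(kappas, beta).real - _prod(kappas, 1.0 + beta).real)
    a3 = (1.0 + beta) * c * (_prod(kappas, beta).real
                             - _prod(kappas, complex(beta, 1.0)).real)
    return a1 + a2 + a3
```

When all curvatures are zero the limit state surface is a hyperplane and all three formulas reduce to the FORM result $\Phi \left(-{\beta }_{HL}\right)$, which provides a quick sanity check.
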

Other notations

Within the global methodology, the Second Order Reliability Method is used in the step C : "Uncertainty propagation" in the case of the evaluation of the probability of an event by an approximation method.

It requires to have fulfilled the following steps beforehand:

• step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\stackrel{˜}{g}\left(\underline{X},\underline{d}\right)$, result of the model $\stackrel{˜}{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or equivalently a failure event $g\left(\underline{X},\underline{d}\right)\le 0$,

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,

• step C: select an appropriate optimization algorithm among those proposed.

The Second Order Reliability Method provides the following results :

• the SORM probabilities calculated in Eqs. (133),(134), (135)

• the importance factors associated to the event : refer to [Importance Factors] to obtain details,

• if asked by the user, the sensitivity factors associated to the event : refer to [Sensitivity Factors] to obtain details.

References and theoretical basics

The motivations for using SORM are similar to the motivations for using FORM. As it takes into account the curvatures of the limit state surface, SORM is usually more accurate than FORM, especially when the event boundary is highly curved.

The evaluation of the previous formulas requires that the limit state function be twice differentiable at the design point.

The quality of the results obtained by the Second Order Reliability Method depends on the quality of the design point (the same point as in the FORM approximation), on the shape of the event boundary, which must be well approximated by a quadratic surface near the design point, and on the accuracy of the computed curvatures, which in turn depends on the accuracy of both the gradient and the Hessian evaluated at the design point.

The Tvedt formula is exact for a quadratic surface and asymptotically exact for other types of surfaces. The Hohenbichler formula is a variant of the Breitung one.

Finally, let us note that the SORM approximation depends on the geometry of the limit state function in the standard space and thus on the isoprobabilistic transformation used. In the normal copula case, [R. Lebrun, A. Dutfoy, 2008, c] showed that both isoprobabilistic transformations are identical, so the SORM approximations are identical too. In all other cases, the two isoprobabilistic transformations lead to different standard spaces and thus to different SORM approximations. In the same way, the conditioning order of the Rosenblatt transformation also has an impact on the SORM approximations when the copula of the random vector $\underline{X}$ is not normal.

Let us note some useful references :

• Breitung K. a, 1989, "Asymptotic approximation for probability integral," Probabilistic Engineering Mechanics, Vol 4, No. 4.

• Breitung K. b, 1984, "Asymptotic Approximation for multinormal Integrals," Journal of Engineering Mechanics, ASCE, 110(3), 357-366.

• Hohenbichler M., Rackwitz R., 1988, "Improvement of second order reliability estimates by importance sampling," Journal of Engineering Mechanics, ASCE,114(12), pp 2195-2199.

• R. Lebrun, A. Dutfoy, b, 2008, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics 24 (2009), pp. 172-178, doi:10.1016/j.probengmech.2008.05.001.

• R. Lebrun, A. Dutfoy, 2008, c, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, under review so far.

• Tvedt L. 1988, "Second order reliability by an exact integral," proc. of the IFIP Working Conf. Reliability and Optimization of Structural Systems, Thoft-Christensen (Ed), pp377-384.

• Zhao Y. G., Ono T., 1999, "New approximations for SORM : part 1", Journal of Engineering Mechanics, ASCE,125(1), pp 79-85.

• Zhao Y. G., Ono T., 1999, "New approximations for SORM : part 2", Journal of Engineering Mechanics, ASCE,125(1), pp 86-93.

• Adhikari S., 2004, "Reliability analysis using parabolic failure surface approximation", Journal of Engineering Mechanics, ASCE,130(12), pp 1407-1427.

Examples

Let's apply this method to the following analytical example: a cantilever beam of Young's modulus E, length L and section modulus I, with a concentrated bending force F applied at its free end. The vertical displacement $y$ of the free end is equal to :
 $\begin{array}{c}\hfill y\left(E,F,L,I\right)=\frac{F{L}^{3}}{3EI}\end{array}$

The objective is to propagate the uncertainties of the variables $\left(E,F,L,I\right)$ through to $y$.

The input random vector is $\underline{X}=\left(E,F,L,I\right)$, whose probabilistic model is (units are not provided):

 $\left\{\begin{array}{ccc}E& =& Normal\left(50,1\right)\\ F& =& Normal\left(1,1\right)\\ L& =& Normal\left(10,1\right)\\ I& =& Normal\left(5,1\right)\end{array}\right.$

The four random variables are independent.

The event considered is the threshold exceedance : ${𝒟}_{f}=\left\{\left(E,F,L,I\right)\in {ℝ}^{4}\,/\,y\left(E,F,L,I\right)\ge 3\right\}$. We obtain the following results :

 $\left\{\begin{array}{ccc}{P}_{Breitung}& =& 2.5491{e}^{-1}\\ {P}_{Hohenbichler}& =& 2.648{e}^{-1}\\ {P}_{Tvedt}& =& 2.601{e}^{-1}\end{array}\right.$

These three approximations are consistent with one another, which increases confidence in these results.

### 4.3.14 Step C  – Reliability Index

Mathematical description

Goal

The generalised reliability index $\beta$ is used in the following context : $\underline{X}$ is a probabilistic input vector, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density, $\underline{d}$ a deterministic vector, $g\left(\underline{X},\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\,/\,g\left(\underline{X},\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X},\underline{d}\right)=0$ its boundary.

The probability content of the event ${𝒟}_{f}$ is ${P}_{f}$:

 $\begin{array}{c}\hfill {P}_{f}={\int }_{g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\end{array}$ (136)

The generalised reliability index is defined as :

 $\begin{array}{c}\hfill {\beta }_{g}={\Phi }^{-1}\left(1-{P}_{f}\right)=-{\Phi }^{-1}\left({P}_{f}\right).\end{array}$

As ${\beta }_{g}$ increases, ${P}_{f}$ decreases rapidly.
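The definition above is straightforward to evaluate numerically. A small illustrative sketch in plain Python (`statistics.NormalDist` provides $\Phi$ and ${\Phi }^{-1}$); the function names are ours:

```python
from statistics import NormalDist

_N01 = NormalDist()  # standard 1D normal distribution

def generalized_beta(p_f):
    """Generalised reliability index: beta_g = -Phi^{-1}(P_f)."""
    return -_N01.inv_cdf(p_f)

def probability_from_beta(beta_g):
    """Inverse relation: P_f = Phi(-beta_g)."""
    return _N01.cdf(-beta_g)
```

For the FORM example reported later in this section, `generalized_beta(1.564e-1)` is close to 1.009, and `probability_from_beta` confirms that ${P}_{f}$ decreases rapidly as ${\beta }_{g}$ grows.
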

Principle

OpenTURNS standard version evaluates :

• ${\beta }_{FORM}$ the FORM reliability index, where ${P}_{f}$ is obtained with a FORM approximation (refer to [FORM] ): in this case, the generalised reliability index is equal to the Hasofer-Lind reliability index ${\beta }_{HL}$, which is the distance of the design point from the origin of the standard space,

• ${\beta }_{SORM}$ the SORM reliability index, where ${P}_{f}$ is obtained with a SORM approximation : Breitung, Hohen-Bichler or Tvedt (refer to [SORM] ),

• ${\beta }_{g}$ the generalised reliability index, where ${P}_{f}$ is obtained with another technique : Monte Carlo simulation, importance sampling, ... (refer to [Monte Carlo] , [LHS] , [Importance sampling] and [Directional Simulation] ).

Other notations

Within the global methodology, the reliability index is used in the step C: "Uncertainty propagation" in the case of the evaluation of the probability of an event.

It requires to have fulfilled the following steps beforehand:

• step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\stackrel{˜}{g}\left(\underline{X},\underline{d}\right)$, result of the model $\stackrel{˜}{g}\left(\right)$,

• step A22: identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or equivalently a failure event $g\left(\underline{X},\underline{d}\right)\le 0$,

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,

• step C3: select a method to evaluate the probability content of the event : the FORM or SORM approximation (step C31) or a simulation method (step C32).

References and theoretical basics

Interesting literature on the subject is :
• Cornell, "A probability-based structural code," Journal of the American Concrete Institute, 1969, 66(12), 974-985.

• O. Ditlevsen, 1979, "Generalised Second moment reliability index," Journal of Structural Mechanics, ASCE, Vol.7, pp. 453-472.

• O. Ditlevsen and H.O. Madsen, 2004, "Structural reliability methods," Department of mechanical engineering technical university of Denmark - Maritime engineering, internet publication.

• Hasofer and Lind, 1974, "Exact and invariant second moment code format," Journal of Engineering Mechanics Division, ASCE, Vol. 100, pp. 111-121.

Examples

Let's apply this method to the following analytical example: a cantilever beam of Young's modulus E, length L and section modulus I, with a concentrated bending force F applied at its free end. The vertical displacement $y$ of the free end is equal to :
 $\begin{array}{c}\hfill y\left(E,F,L,I\right)=\frac{F{L}^{3}}{3EI}\end{array}$

The objective is to propagate the uncertainties of the variables $\left(E,F,L,I\right)$ through to $y$.

The input random vector is $\underline{X}=\left(E,F,L,I\right)$, whose probabilistic model is (units are not provided):

 $\left\{\begin{array}{ccc}E& =& Normal\left(50,1\right)\\ F& =& Normal\left(1,1\right)\\ L& =& Normal\left(10,1\right)\\ I& =& Normal\left(5,1\right)\end{array}\right.$

The four random variables are independent.

The event considered is the threshold exceedance : ${𝒟}_{f}=\left\{\left(E,F,L,I\right)\in {ℝ}^{4}\,/\,y\left(E,F,L,I\right)\ge 3\right\}$. We obtain the following results :

• design point in the $\underline{x}$-space, ${P}^{*}=\left({E}^{*}=49.97,{F}^{*}=1.842,{l}^{*}=10.45,{I}^{*}=4.668\right)$

• generalized and Hasofer-Lind reliability index : ${\beta }_{g}={\beta }_{HL}=1.009$

• Breitung generalized reliability index ${\beta }_{Breitung}=6.591{e}^{-1}$

• HohenBichler generalized reliability index ${\beta }_{HohenBichler}=6.285{e}^{-1}$

• Tvedt generalized reliability index ${\beta }_{Tvedt}=6.429{e}^{-1}$

We note here that the three SORM approximations are consistent with one another and differ from the FORM one. This suggests that the curvatures of the limit state surface should be taken into account in the evaluation of the event probability.

### 4.3.15 Step C  – Sphere sampling method

Mathematical description

Goal

Within the context of the First and Second Order of the Reliability Method (refer to [FORM] and [SORM] ), the Strong Maximum Test (refer to [Strong Maximum Test] ) helps to check whether the design point computed is :

• the true design point, which means a global maximum point,

• a strong design point, which means that there is no other local maximum verifying the event and associated to a value near the global maximum.

The Strong Maximum Test samples a sphere in the standard space. OpenTURNS standard version uses the Gaussian random sampling technique described hereafter.

Principle

OpenTURNS standard version uses the Gaussian random sampling technique:

1. sampling of points in ${ℝ}^{N}$ according to a radial distribution : we generate points whose $N$ coordinates are independent standard normal samples,

2. projection of the points onto ${𝒮}^{*}$ : we map the points different from the origin using the transformation $M↦m$ such that $\mathrm{𝐎𝐦}=R\frac{\mathrm{𝐎𝐌}}{\parallel \mathrm{𝐎𝐌}\parallel }$ where $R$ is the radius of the sphere of interest. This transformation does not depend on the angular coordinates. Thus, the generated points follow a uniform distribution on ${𝒮}^{*}$.

A result of such an algorithm is drawn on the following figure 4.3.15.
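The two steps above can be sketched as follows (plain Python with illustrative names). Normalizing a standard Gaussian vector makes the direction uniform on the sphere because the standard normal density is spherically symmetric:

```python
import math
import random

def sample_sphere(n_points, dim, radius, seed=0):
    """Uniform points on the sphere of given radius in R^dim,
    obtained by projecting standard Gaussian vectors."""
    rng = random.Random(seed)
    points = []
    while len(points) < n_points:
        # step 1: a point whose dim coordinates are independent N(0,1) samples
        m = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in m))
        if norm == 0.0:  # a point equal to the origin cannot be projected
            continue
        # step 2: projection onto the sphere: Om = R * OM / ||OM||
        points.append([radius * c / norm for c in m])
    return points
```
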

Other notations

-

Within the global methodology, the sphere sampling method is used in the step C, within the Strong Maximum Test (refer to [Strong Max Test] ).

It requires to have fulfilled the following steps beforehand:

• step A: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\stackrel{˜}{g}\left(\underline{X},\underline{d}\right)$, result of the model $\stackrel{˜}{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or equivalently a failure event $g\left(\underline{X},\underline{d}\right)\le 0$,

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,

• step C: select an appropriate optimization algorithm among those proposed to evaluate the FORM or SORM approximations of ${P}_{f}$; evaluate the quality of the design point resulting from the previous step thanks to the Strong Maximum Test.

References and theoretical basics

Other methods can be used to sample the hypersphere of dimension $N-1$ in ${ℝ}^{N}$ : the exclusion method and the parametric method.

The parametric method uses polar coordinates : the angle parameters are discretized uniformly. Its drawback is that it generates points mainly in the two polar zones.

The exclusion method generates points uniformly within the hypercube containing exactly the sphere. Then we keep only the points located inside the sphere, and we project them onto the sphere. The drawback of this method is its inefficiency in high dimensions : the ratio between the volume of the hypersphere and the volume of the hypercube is less than $0.7%$ as soon as the dimension is greater than 9. It means that, for a dimension greater than 9, $99.3%$ of the generated points are rejected.
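The $0.7%$ figure can be checked directly: the acceptance rate of the exclusion method is the volume of the unit hypersphere divided by the volume of the enclosing hypercube, ${\pi }^{n/2}/\left(\Gamma \left(n/2+1\right){2}^{n}\right)$ in dimension $n$. A short plain-Python sketch (the function name is ours):

```python
import math

def acceptance_ratio(dim):
    """Volume of the unit hypersphere divided by the volume of the
    enclosing hypercube [-1, 1]^dim: pi^(dim/2) / (Gamma(dim/2 + 1) * 2^dim)."""
    return math.pi ** (dim / 2.0) / (math.gamma(dim / 2.0 + 1.0) * 2.0 ** dim)
```

For example, `acceptance_ratio(2)` is the familiar circle-in-square ratio $\pi /4$, and the ratio drops below 0.7 % from dimension 9 onwards, as stated above.
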

Let's note some useful references:

• Luban, Marshall, Staunton, 1988, "An efficient method for generating a uniform distribution of points within a hypersphere," Computer in Physics, 2(6), 55.

### 4.3.16 Step C  – Strong Maximum Test : a design point validation

Mathematical description

Goal

The Strong Maximum Test is used under the following context: $\underline{X}$ denotes a random input vector, representing the sources of uncertainties, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density, $\underline{d}$ a deterministic vector representing the fixed variables, $g\left(\underline{X},\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\,/\,g\left(\underline{X},\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X},\underline{d}\right)=0$ its boundary (also called limit state surface).

The probability content of the event ${𝒟}_{f}$ :

 $\begin{array}{ccc}\hfill {P}_{f}& =& {\int }_{g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\hfill \end{array}$ (4.3)

may be evaluated with the FORM (refer to [FORM] ) or SORM method (refer to [SORM] ).

In order to evaluate an approximation of ${P}_{f}$, these analytical methods use the Nataf isoprobabilistic transformation which maps the probabilistic model in terms of $\underline{X}$ onto an equivalent model in terms of $n$ independent standard normal random variables $\underline{U}$ (refer to [Isoprobabilistic Transformation] for details on the transformation). In that new $\underline{u}$-space, the event has the new expression defined from the transformed limit state function of the model $G$ : ${𝒟}_{f}=\left\{\underline{U}\in {ℝ}^{n}\,/\,G\left(\underline{U},\underline{d}\right)\le 0\right\}$ and its boundary : $\left\{\underline{U}\in {ℝ}^{n}\,/\,G\left(\underline{U},\underline{d}\right)=0\right\}$.

These analytical methods rely on the assumption that most of the contribution to ${P}_{f}$ comes from points located in the vicinity of a particular point ${P}^{*}$, the design point, defined in the $\underline{u}$-space as the point located on the limit state surface and of maximal likelihood. Given the probabilistic characteristics of the $\underline{u}$-space, ${P}^{*}$ has a geometrical interpretation : it is the point located on the event boundary and at minimal distance from the center of the $\underline{u}$-space. Thus, the design point ${P}^{*}$ is the result of a constrained optimization problem.

The FORM/SORM methods suppose that ${P}^{*}$ is unique.

One important difficulty comes from the fact that the numerical method involved in the determination of ${P}^{*}$ gives no guarantee of a global optimum : the point to which it converges might be a local optimum only. In that case, the contribution of the points in the vicinity of the real design point, which is the most important one, is not taken into account.

Furthermore, even in the case where the global optimum has really been found, there may exist another local optimum ${\stackrel{˜}{P}}^{*}$ whose likelihood is only slightly lower than that of the design point, which means its distance from the center of the $\underline{u}$-space is slightly greater than that of the design point. Thus, points in the vicinity of ${\stackrel{˜}{P}}^{*}$ may contribute significantly to the probability ${P}_{f}$ and are not taken into account in the FORM and SORM approximations.

In both cases, the FORM and SORM approximations are of bad quality because they neglect important contributions to ${P}_{f}$.

The Strong Maximum Test helps to evaluate the quality of the design point resulting from the optimization algorithm. It checks whether the design point computed is :

• the true design point, which means a global maximum point,

• a strong design point, which means that there is no other local maximum located on the event boundary whose likelihood is only slightly lower than that of the design point.

This verification is very important in order to give sense to the FORM and SORM approximations.

Principle

The principle of the Strong Maximum Test, illustrated in figure (57), relies on the geometrical definition of the design point.

The objective is to detect all the points ${\stackrel{˜}{P}}^{*}$ in the ball of radius ${R}_{\epsilon }=\beta \left(1+{\delta }_{\epsilon }\right)$ which are potentially the real design point (case of ${\stackrel{˜}{P}}_{2}^{*}$) or whose contribution to ${P}_{f}$ is not negligible with regard to the FORM and SORM approximations (case of ${\stackrel{˜}{P}}_{1}^{*}$). The contribution of a point is considered negligible when its likelihood in the $\underline{u}$-space is smaller than $\epsilon$ times the likelihood of the design point. The radius ${R}_{\epsilon }$ is the distance to the $\underline{u}$-space center beyond which points are considered negligible in the evaluation of ${P}_{f}$.

In order to catch potential points located on the sphere of radius ${R}_{\epsilon }$ (frontier of the zone of prospection), it is necessary to go a little further : that is why the test samples the sphere of radius $R=\beta \left(1+\tau {\delta }_{\epsilon }\right)$, with $\tau >0$.

Points on the sampled sphere (refer to [Sample Sphere] for details on the generation of the sample) which are in the vicinity of the design point ${P}^{*}$ are less interesting than those verifying the event and located far from the design point : these last ones might reveal a potential ${\stackrel{˜}{P}}^{*}$ whose contribution to ${P}_{f}$ has to be taken into account. The vicinity of the design point is defined with the angular parameter $\alpha$ as the cone centered on ${P}^{*}$ and of half-angle $\alpha$.

The number $N$ of simulations sampling the sphere of radius $R$ is determined to ensure that the test detects, with a probability greater than $\left(1-q\right)$, any point verifying the event and outside the design point vicinity.

 The Strong Maximum Test to validate the quality of the design point : unicity and strongness
Figure 57

The vicinity of the design point is the arc of the sampled sphere which is inside the half space whose frontier is the linearized limit state function at the design point (see figure (58) : the vicinity is the arc included in the half space ${D}_{1}$).

 Vicinity of the Design Point in the standard space : the half space D 1
Figure 58

Other notations

Within the global methodology, the Strong Maximum Test is used in the step C: "Uncertainty propagation" in the case of the evaluation of the probability of an event by an approximation method.

The Strong Maximum Test proceeds as follows. The User selects the parameters :

• the importance level $\epsilon$, where $0<\epsilon <1$,

• the accuracy level $\tau$, where $\tau >0$,

• the confidence level $\left(1-q\right)$ where $0<q<1$, or the number of points $N$ used to sample the sphere. Each of these two parameters can be deduced from the other.

The Strong Maximum Test will sample the sphere of radius $\beta \left(1+\tau {\delta }_{\epsilon }\right)$, where ${\delta }_{\epsilon }=\sqrt{1-2\frac{ln\left(\epsilon \right)}{{\beta }^{2}}}-1$.
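The quantities involved follow directly from this definition. A plain-Python sketch computing ${\delta }_{\epsilon }$ and the sampled radius (illustrative names; the values can be checked against the tables given further below):

```python
import math

def delta_eps(beta, eps):
    """delta_epsilon = sqrt(1 - 2*ln(eps)/beta^2) - 1."""
    return math.sqrt(1.0 - 2.0 * math.log(eps) / beta ** 2) - 1.0

def sampled_radius(beta, eps, tau):
    """Radius of the sampled sphere: beta * (1 + tau * delta_epsilon)."""
    return beta * (1.0 + tau * delta_eps(beta, eps))
```

For instance, with $\beta =3$ and $\epsilon =0.01$ this gives ${\delta }_{\epsilon }\approx 4.224{e}^{-1}$, matching the tabulated value.
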

The test will detect with a probability greater than $\left(1-q\right)$ any point of ${𝒟}_{f}$ whose contribution to ${P}_{f}$ is not negligible (i.e. whose density value in the $\underline{u}$-space is greater than $\epsilon$ times the density value at the design point).

The Strong Maximum Test provides :

• set 1 : all the points detected on the sampled sphere that are in ${𝒟}_{f}$ and outside the design point vicinity, with the corresponding value of the limit state function,

• set 2 : all the points detected on the sampled sphere that are in ${𝒟}_{f}$ and in the design point vicinity, with the corresponding value of the limit state function ,

• set 3 : all the points detected on the sampled sphere that are outside ${𝒟}_{f}$ and outside the design point vicinity, with the corresponding value of the limit state function,

• set 4 : all the points detected on the sampled sphere that are outside ${𝒟}_{f}$ but in the vicinity of the design point, with the corresponding value of the limit state function.

Points are described by their coordinates in the $\underline{x}$-space.

References and theoretical basics
The parameter $\tau$ is directly linked to the hypothesis according to which the boundary of the space ${𝒟}_{f}$ is supposed to be well approximated by a plane near the design point, which is essential for a FORM approximation of the probability content of ${𝒟}_{f}$. Increasing $\tau$ increases the area where the FORM approximation is applied.

The parameter $\tau$ also serves as a measure of distance from the design point ${\underline{OP}}^{*}$ for a hypothetical local maximum : the greater it is, the further we search for another local maximum.

Numerical experiments show that it is recommended to take $\tau \le 4$ (see the given reference below).

The following tables help to quantify the parameters of the test for a problem of dimension 5.

| ${\beta }_{g}$ | $\epsilon$ | $\tau$ | $1-q$ | ${\delta }_{\epsilon }$ | $N$ |
|-----|------|-----|------|------|------|
| 3.0 | 0.01 | 2.0 | 0.9 | $4.224{e}^{-1}$ | 62 |
| 3.0 | 0.01 | 2.0 | 0.99 | $4.224{e}^{-1}$ | 124 |
| 3.0 | 0.01 | 4.0 | 0.9 | $4.224{e}^{-1}$ | 15 |
| 3.0 | 0.01 | 4.0 | 0.99 | $4.224{e}^{-1}$ | 30 |
| 3.0 | 0.1 | 2.0 | 0.9 | $2.295{e}^{-1}$ | 130 |
| 3.0 | 0.1 | 2.0 | 0.99 | $2.295{e}^{-1}$ | 260 |
| 3.0 | 0.1 | 4.0 | 0.9 | $2.295{e}^{-1}$ | 26 |
| 3.0 | 0.1 | 4.0 | 0.99 | $2.295{e}^{-1}$ | 52 |
| 5.0 | 0.01 | 2.0 | 0.9 | $1.698{e}^{-1}$ | 198 |
| 5.0 | 0.01 | 2.0 | 0.99 | $1.698{e}^{-1}$ | 397 |
| 5.0 | 0.01 | 4.0 | 0.9 | $1.698{e}^{-1}$ | 36 |
| 5.0 | 0.01 | 4.0 | 0.99 | $1.698{e}^{-1}$ | 72 |
| 5.0 | 0.1 | 2.0 | 0.9 | $8.821{e}^{-2}$ | 559 |
| 5.0 | 0.1 | 2.0 | 0.99 | $8.821{e}^{-2}$ | 1118 |
| 5.0 | 0.1 | 4.0 | 0.9 | $8.821{e}^{-2}$ | 85 |
| 5.0 | 0.1 | 4.0 | 0.99 | $8.821{e}^{-2}$ | 169 |

| ${\beta }_{g}$ | $\epsilon$ | $\tau$ | $N$ | ${\delta }_{\epsilon }$ | $1-q$ |
|-----|------|-----|------|------|------|
| 3.0 | 0.01 | 2.0 | 100 | $4.224{e}^{-1}$ | 0.97 |
| 3.0 | 0.01 | 2.0 | 1000 | $4.224{e}^{-1}$ | 1.0 |
| 3.0 | 0.01 | 4.0 | 100 | $4.224{e}^{-1}$ | 1.0 |
| 3.0 | 0.01 | 4.0 | 1000 | $4.224{e}^{-1}$ | 1.0 |
| 3.0 | 0.1 | 2.0 | 100 | $2.295{e}^{-1}$ | 0.83 |
| 3.0 | 0.1 | 2.0 | 1000 | $2.295{e}^{-1}$ | 1.0 |
| 3.0 | 0.1 | 4.0 | 100 | $2.295{e}^{-1}$ | 1.0 |
| 3.0 | 0.1 | 4.0 | 1000 | $2.295{e}^{-1}$ | 1.0 |
| 5.0 | 0.01 | 2.0 | 100 | $1.698{e}^{-1}$ | 0.69 |
| 5.0 | 0.01 | 2.0 | 1000 | $1.698{e}^{-1}$ | 1.0 |
| 5.0 | 0.01 | 4.0 | 100 | $1.698{e}^{-1}$ | 1.0 |
| 5.0 | 0.01 | 4.0 | 1000 | $1.698{e}^{-1}$ | 1.0 |
| 5.0 | 0.1 | 2.0 | 100 | $8.821{e}^{-2}$ | 0.34 |
| 5.0 | 0.1 | 2.0 | 1000 | $8.821{e}^{-2}$ | 0.98 |
| 5.0 | 0.1 | 4.0 | 100 | $8.821{e}^{-2}$ | 0.93 |
| 5.0 | 0.1 | 4.0 | 1000 | $8.821{e}^{-2}$ | 0.99 |

As the Strong Maximum Test involves the computation of $N$ values of the limit state function, which is computationally intensive, it is interesting to have more than just an indication about the quality of ${\underline{OP}}^{*}$. In fact, the test gives some information about the trace of the limit state function on the sphere of radius $\beta \left(1+\tau {\delta }_{\epsilon }\right)$ centered on the origin of the $\underline{u}$-space. Two cases can be distinguished:

• Case 1: set 1 is empty. We are confident that ${\underline{OP}}^{*}$ is a design point verifying the hypothesis according to which most of the contribution to ${P}_{f}$ is concentrated in the vicinity of ${\underline{OP}}^{*}$. By using the values of the limit state function on the sample $\left({\underline{U}}_{1},\cdots ,{\underline{U}}_{N}\right)$, we can check whether the limit state function is reasonably linear in the vicinity of ${\underline{OP}}^{*}$, which can validate the second hypothesis of FORM.

If the behaviour of the limit state function is not linear, we can decide to use an importance sampling version of the Monte Carlo method to compute the probability of failure (refer to [Importance sampling] ). However, the information obtained through the Strong Maximum Test, according to which ${\underline{OP}}^{*}$ is the actual design point, is quite essential : it allows one to construct an effective importance sampling density, e.g. a multidimensional Gaussian distribution centered on ${\underline{OP}}^{*}$.

• Case 2: set 1 is not empty. There are two possibilities:

1. We have found some points that suggest that ${\underline{OP}}^{*}$ is not a strong maximum, because for some points of the sampled sphere, the value taken by the limit state function is slightly negative;

2. We have found some points that suggest that ${\underline{OP}}^{*}$ is not even the global maximum, because for some points of the sampled sphere, the value taken by the limit state function is very negative.

In the first case, we can decide to use an importance sampling version of the Monte Carlo method to compute the probability of failure, but with a mixture of, e.g., multidimensional Gaussian distributions centered on the ${U}_{i}$ in ${𝒟}_{f}$ (refer to [Importance Sampling] ). In the second case, we can restart the search for the design point from the detected ${U}_{i}$.

More details can be found in the following reference:

• A. Dutfoy, R. Lebrun, 2006, "The Strong Maximum Test: an efficient way to assess the quality of a design point," PSAM8, New Orleans.

• A. Dutfoy, R. Lebrun, 2007, "Le test du maximum fort : une façon efficace de valider la qualité d'un point de conception" ("The strong maximum test: an efficient way to validate the quality of a design point"), CFM 07, Grenoble, France.

### 4.3.17 Step C  – Monte Carlo Simulation

Mathematical description

Goal

Using the probability distribution of a random vector $\underline{X}$, we seek to evaluate the following probability:

 $\begin{array}{c}\hfill {P}_{f}=ℙ\left(g\left(\underline{X},\underline{d}\right)\le 0\right)\end{array}$

Here, $\underline{X}$ is a random vector, $\underline{d}$ a deterministic vector, $g\left(\underline{X},\underline{d}\right)$ the function known as "limit state function" which enables the definition of the event ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X},\underline{d}\right)\le 0\right\}$.

Principle

If we have a set $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of $N$ independent realizations of the random vector $\underline{X}$, we can estimate ${P}_{f}$ as follows:

 $\begin{array}{c}\hfill {\stackrel{^}{P}}_{f}=\frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\left\{g\left({\underline{x}}_{i},\underline{d}\right)\le 0\right\}}\end{array}$

where ${\mathbf{1}}_{\left\{g\left({\underline{x}}_{i},\underline{d}\right)\le 0\right\}}$ denotes the indicator function, equal to 1 if $g\left({\underline{x}}_{i},\underline{d}\right)\le 0$ and to 0 otherwise; the idea is to estimate the required probability by the proportion of the $N$ samples of $\underline{X}$ for which the event ${𝒟}_{f}$ occurs.

By the law of large numbers, we know that this estimation converges to the required value ${P}_{f}$ as the sample size $N$ tends to infinity.

The Central Limit Theorem enables an asymptotic confidence interval to be built using the normal limit distribution, as follows:

 $\begin{array}{c}\hfill \underset{N\to \infty }{lim}ℙ\left({P}_{f}\in \left[{\stackrel{^}{P}}_{f,inf},{\stackrel{^}{P}}_{f,sup}\right]\right)=\alpha \end{array}$

with ${\stackrel{^}{P}}_{f,inf}={\stackrel{^}{P}}_{f}-{q}_{\alpha }\sqrt{\frac{{\stackrel{^}{P}}_{f}\left(1-{\stackrel{^}{P}}_{f}\right)}{N}}$, ${\stackrel{^}{P}}_{f,sup}={\stackrel{^}{P}}_{f}+{q}_{\alpha }\sqrt{\frac{{\stackrel{^}{P}}_{f}\left(1-{\stackrel{^}{P}}_{f}\right)}{N}}$ and ${q}_{\alpha }$ is the $\left(1+\alpha \right)/2$-quantile of the standard normal distribution.

One can also use a convergence indicator that is independent of the confidence level $\alpha$: the coefficient of variation, which is the ratio between the asymptotic standard deviation of the estimate and its mean value. It is a relative measure of dispersion given by:

 $\begin{array}{c}\hfill {\mathrm{CV}}_{{\stackrel{^}{P}}_{f}}=\sqrt{\frac{1-{\stackrel{^}{P}}_{f}}{N{\stackrel{^}{P}}_{f}}}\simeq \frac{1}{\sqrt{N{\stackrel{^}{P}}_{f}}}\phantom{\rule{4.pt}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}{\stackrel{^}{P}}_{f}\ll 1\end{array}$

It must be emphasized that these results are asymptotic and as such require $N\gg 1$. The convergence to the standard normal distribution is governed by the skewness of ${\mathbf{1}}_{\left\{g\left({\underline{x}}_{i},\underline{d}\right)\le 0\right\}}$ divided by the sample size $N$, that is, $\frac{1-2{P}_{f}}{N\sqrt{{P}_{f}\left(1-{P}_{f}\right)}}$. In the usual case ${P}_{f}\ll 1$, the distribution is highly skewed and this term is approximately equal to $\frac{1}{N\sqrt{{P}_{f}}}\simeq {\mathrm{CV}}_{{\stackrel{^}{P}}_{f}}/\sqrt{N}$. A rule of thumb is that if ${\mathrm{CV}}_{{\stackrel{^}{P}}_{f}}<0.1$ with $N>{10}^{2}$, the asymptotic nature of the Central Limit Theorem is not problematic.
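To make the estimator, confidence interval and coefficient of variation concrete, here is a minimal sketch in plain Python/NumPy (not OpenTURNS code); the linear limit state $g\left(\underline{x},\underline{d}\right)=d-{x}_{1}-{x}_{2}$ with standard normal inputs is a hypothetical example chosen so that the exact answer is known.

```python
import numpy as np
from statistics import NormalDist

def monte_carlo_pf(indicators, alpha=0.95):
    """Estimate P_f from a 0/1 sample of 1{g(x_i, d) <= 0}, with the
    asymptotic confidence interval and coefficient of variation."""
    ind = np.asarray(indicators, dtype=float)
    n = ind.size
    pf = ind.mean()                                # hat{P}_f
    q = NormalDist().inv_cdf((1.0 + alpha) / 2.0)  # (1 + alpha)/2 quantile
    half = q * np.sqrt(pf * (1.0 - pf) / n)        # half-width of the CI
    cv = np.sqrt((1.0 - pf) / (n * pf))            # coefficient of variation
    return pf, (pf - half, pf + half), cv

# Hypothetical limit state g(x, d) = d - x1 - x2 with standard normal inputs:
# the exact failure probability is 1 - Phi(3 / sqrt(2)), about 0.017
rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 2))
ind = (3.0 - x[:, 0] - x[:, 1] <= 0.0)
pf, (lo, hi), cv = monte_carlo_pf(ind)
```

With $N={10}^{5}$ samples, the coefficient of variation is well below the 0.1 rule of thumb for this example.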

Other notations

Direct sampling, Crude Monte Carlo method, Classical Monte Carlo integration

This method is used in step C and enables the probability of exceeding the threshold of an output variable to be estimated (we refer to the probability of exceeding the threshold (critical region) because the inequality $g\left(\underline{X},\underline{d}\right)\le 0$ by convention defines the critical region, and is in the general case a rewriting of an inequality of the type $Z\ge \mathrm{threshold}$, where $Z$ is a random variable function of $\underline{X}$ and $\underline{d}$).

This amounts to calculating the cumulative distribution function of the output variable at a point and thus propagating the uncertainty defined in step B using the model defined in step A.

Input data:

• $\underline{X}$: random vector modelling the unknown variables defined in step A and for which the joint probability density function has been defined in step B,

• $\underline{d}$: vector of deterministic calculation parameters,

• $g\left(\underline{X},\underline{d}\right)\le 0$: probabilistic criterion specified in step A,

Parameters:

• $N$: number of simulations to be carried out (a maximum number in the case where ${\left({\mathrm{CV}}_{{\stackrel{^}{P}}_{f}}\right)}_{max}$ is specified, see next parameter),

• ${\left({\mathrm{CV}}_{{\stackrel{^}{P}}_{f}}\right)}_{max}$: maximal coefficient of variation of the probability estimator (optional),

• $\alpha$: confidence level required for the confidence interval.

Outputs:

• ${\stackrel{^}{P}}_{f}$: estimation of the probability of exceeding the threshold (critical value/region),

• $\mathrm{Var}\left({\stackrel{^}{P}}_{f}\right)$: estimation of the variance of the probability estimator,

• ${\stackrel{^}{P}}_{f,sup}-{\stackrel{^}{P}}_{f,inf}$: length of the confidence interval.

References and theoretical basics

The standard Monte-Carlo method requires very few properties of the function $g$: it should be measurable and integrable, but it can be irregular, non-convex, etc. On the other hand, this method is not suitable when the probability to be estimated is small or when the CPU time needed to evaluate the criterion $g\left(\underline{X},\underline{d}\right)\le 0$ is considerable. In practice, the standard Monte-Carlo method is not recommended unless one has (for ${P}_{f}<{10}^{-2}$):
 $\begin{array}{c}\hfill \frac{{t}_{\mathrm{CPU}}\left\{g\left(\underline{X},\underline{d}\right)\le 0\right\}}{{P}_{f}×{\left(\text{estimation precision}\right)}^{2}}\le \text{available machine time}\end{array}$

where:

• ${t}_{\mathrm{CPU}}\left\{g\left(\underline{X},\underline{d}\right)\le 0\right\}$: CPU time needed to evaluate the criterion $\left\{g\left(\underline{X},\underline{d}\right)\le 0\right\}$ for given values of $\underline{X}$ and $\underline{d}$,

• estimation precision: desired limit for the coefficient of variation of the estimator,

• available machine time: desired limit on the total duration of the estimation.

Readers interested in the problem of estimating the probability of exceeding a threshold are referred to [FORM] , [SORM] , [LHS] , [Importance Sampling] and [Directional Simulation] .

The following references provide an interesting bibliographical starting point for further study of this method:

• Robert C.P., Casella G. (2004). Monte-Carlo Statistical Methods, Springer, ISBN 0-387-21239-6, 2nd ed.

• Rubinstein R.Y. (1981). Simulation and The Monte-Carlo methods, John Wiley & Sons

### 4.3.18 Step C  – Importance Simulation

Mathematical description

Goal

Let us note ${𝒟}_{f}=\left\{\underline{x}\in {ℝ}^{n}|g\left(\underline{x},\underline{d}\right)\le 0\right\}$. The goal is to estimate the following probability:

 $\begin{array}{ccc}\hfill {P}_{f}& =& {\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int }_{{ℝ}^{n}}{\mathbf{1}}_{\left\{g\left(\underline{x},\underline{d}\right)\phantom{\rule{0.222222em}{0ex}}\le 0\phantom{\rule{0.222222em}{0ex}}\right\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& ℙ\left(\left\{g\left(\underline{X},\underline{d}\right)\le 0\right\}\right)\hfill \end{array}$

Principles

This is a sampling-based method. The main idea of the Importance Sampling method is to replace the initial probability distribution of the input variables by a more "efficient" one. "Efficient" means that more events will be counted in the failure domain ${𝒟}_{f}$, thus reducing the variance of the estimator of the probability of exceeding a threshold. Let $\underline{Y}$ be a random vector whose probability density function satisfies ${f}_{\underline{Y}}\left(\underline{y}\right)>0$ almost everywhere in the domain ${𝒟}_{f}$; then

 $\begin{array}{ccc}\hfill {P}_{f}& =& {\int }_{{ℝ}^{n}}{\mathbf{1}}_{\left\{g\left(\underline{x},\underline{d}\right)\le 0\right\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int }_{{ℝ}^{n}}{\mathbf{1}}_{\left\{g\left(\underline{x},\underline{d}\right)\le 0\right\}}\frac{{f}_{\underline{X}}\left(\underline{x}\right)}{{f}_{\underline{Y}}\left(\underline{x}\right)}{f}_{\underline{Y}}\left(\underline{x}\right)d\underline{x}\hfill \end{array}$

The estimator built by Importance Sampling method is:

 $\begin{array}{c}\hfill {\stackrel{^}{P}}_{f,IS}^{N}=\frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\left\{g\left({\underline{Y}}_{i},\underline{d}\right)\le 0\right\}}\frac{{f}_{\underline{X}}\left({\underline{Y}}_{i}\right)}{{f}_{\underline{Y}}\left({\underline{Y}}_{i}\right)}\end{array}$

where:

• $N$ is the total number of computations,

• the random vectors $\left\{{\underline{Y}}_{i},i=1\cdots N\right\}$ are independent, identically distributed and following the probability density function ${f}_{\underline{Y}}$

Confidence Intervals

With the notations,

 $\begin{array}{ccc}\hfill {\mu }_{N}& =& \frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\left\{g\left({\underline{y}}_{i},\underline{d}\right)\le 0\right\}}\frac{{f}_{\underline{X}}\left({\underline{y}}_{i}\right)}{{f}_{\underline{Y}}\left({\underline{y}}_{i}\right)}\hfill \\ \hfill {\sigma }_{N}^{2}& =& \frac{1}{N}\sum _{i=1}^{N}{\left({\mathbf{1}}_{\left\{g\left({\underline{y}}_{i},\underline{d}\right)\le 0\right\}}\frac{{f}_{\underline{X}}\left({\underline{y}}_{i}\right)}{{f}_{\underline{Y}}\left({\underline{y}}_{i}\right)}-{\mu }_{N}\right)}^{2}\hfill \end{array}$

The asymptotic confidence interval of order $1-\alpha$ associated to the estimator ${P}_{f,IS}^{N}$ is

 $\begin{array}{c}\hfill \left[{\mu }_{N}-\frac{{q}_{1-\alpha /2}.{\sigma }_{N}}{\sqrt{N}}\phantom{\rule{0.222222em}{0ex}};\phantom{\rule{0.222222em}{0ex}}{\mu }_{N}+\frac{{q}_{1-\alpha /2}.{\sigma }_{N}}{\sqrt{N}}\right]\end{array}$

where ${q}_{1-\alpha /2}$ is the $1-\alpha /2$ quantile of the standard normal distribution $𝒩\left(0,1\right)$.
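As a rough illustration of the estimator and its confidence interval (a plain NumPy sketch, not the OpenTURNS implementation; the linear limit state failing when ${u}_{1}\ge \beta$ in the standard space, and the importance density ${f}_{\underline{Y}}$ taken as a unit-covariance Gaussian centered on the design point, are hypothetical choices with a known exact answer):

```python
import numpy as np
from statistics import NormalDist

# Hypothetical linear limit state in the standard space: failure when u1 >= beta.
# Importance density f_Y: unit-covariance Gaussian centered on the design point.
beta, N, alpha = 3.0, 20_000, 0.05
mu = np.array([beta, 0.0])

rng = np.random.default_rng(1)
y = rng.normal(size=(N, 2)) + mu              # samples Y_i from f_Y

# Likelihood ratio f_X(y)/f_Y(y) for two unit-covariance Gaussians:
# exp(-mu . y + ||mu||^2 / 2)
w = np.exp(-y @ mu + 0.5 * mu @ mu)
contrib = (y[:, 0] >= beta) * w               # indicator times weight

pf_is = contrib.mean()                        # mu_N in the text
sigma = contrib.std()                         # sigma_N in the text
q = NormalDist().inv_cdf(1.0 - alpha / 2.0)
ci = (pf_is - q * sigma / np.sqrt(N), pf_is + q * sigma / np.sqrt(N))
# Exact value for this example: 1 - Phi(3), about 1.35e-3
```

For this example, a crude Monte Carlo estimate with the same $N$ would see only a few dozen failures, while the shifted density places about half of the samples in the failure domain.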

Other notations

This method may also be found under the names "Strategic Sampling", "Weighted Sampling" or "Biased Sampling" (even though the estimator itself is unbiased).

This method is part of step C of the global methodology. It requires the specification of the joint probability density function of the input variables, the value of the threshold and the comparison operator.
References and theoretical basics
There is no general result concerning the reduction of variance of ${\stackrel{^}{P}}_{f,IS}^{N}$ compared with the variance of the crude Monte Carlo estimator ${\stackrel{^}{P}}_{f,MC}^{N}$. Nevertheless, if one knows the model well (regularity, monotonicity, ...), it is possible to define a more efficient joint probability density function. In particular, there is a reduction of variance if one chooses a density ${f}_{\underline{Y}}\left(\underline{y}\right)$ such that ${f}_{\underline{Y}}\left(\underline{y}\right)>{f}_{\underline{X}}\left(\underline{y}\right)$ almost everywhere in the failure space. Indeed, in this case $\frac{{f}_{\underline{X}}\left(\underline{y}\right)}{{f}_{\underline{Y}}\left(\underline{y}\right)}<1$ on the whole failure domain, the (per-sample) variances being equal to:
 $\begin{array}{c}\hfill \mathrm{Var}\left[{\stackrel{^}{P}}_{f,IS}\right]={\int }_{{𝒟}_{f}}\frac{{f}_{\underline{X}}{\left(\underline{y}\right)}^{2}}{{f}_{\underline{Y}}\left(\underline{y}\right)}d\underline{y}-{P}_{f}^{2}\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}<\phantom{\rule{4pt}{0ex}}\phantom{\rule{4pt}{0ex}}\mathrm{Var}\left[{\stackrel{^}{P}}_{f,MC}\right]={P}_{f}-{P}_{f}^{2}\end{array}$

Moreover, one must make sure that the support of ${f}_{\underline{Y}}$ covers the support of ${f}_{\underline{X}}$ over the failure domain in order to ensure convergence.

The following references are a first introduction to the Monte Carlo methods:

W.G. Cochran. Sampling Techniques. John Wiley and Sons, 1977.

M.H. Kalos and P.A. Whitlock. Monte Carlo Methods, Volume I: Basics. John Wiley and Sons, 1986.

R.Y. Rubinstein. Simulation and the Monte Carlo Method. John Wiley and Sons, 1981.

### 4.3.19 Step C  – Directional Simulation

Mathematical description

Goal

Using the probability distribution of a random vector $\underline{X}$, we seek to evaluate the following probability:

 $\begin{array}{c}\hfill {P}_{f}=ℙ\left(g\left(\underline{X},\underline{d}\right)<0\right)\end{array}$

Here, $\underline{X}$ is a random vector, $\underline{d}$ a deterministic vector, $g\left(\underline{X},\underline{d}\right)$ the function known as "limit state function" which enables the definition of the event ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X},\underline{d}\right)\le 0\right\}$.

Principle

The directional simulation method is an accelerated sampling method. It implies a preliminary [iso-probabilistic transformation] , as for [FORM] and [SORM] methods; however, it remains based on sampling and is thus not an approximation method. In the transformed space, the (transformed) uncertain variables $\underline{U}$ are independent standard Gaussian variables (zero mean and unit standard deviation).

Roughly speaking, each simulation of the directional simulation algorithm is made of three steps. For the ${i}^{\mathrm{th}}$ iteration, these steps are the following:

• Let $𝒮=\left\{\underline{u}|||\underline{u}||=1\right\}$. A point ${P}_{i}$ is drawn randomly on $𝒮$ according to a uniform distribution.

• In the direction starting from the origin and passing through ${P}_{i}$, the solutions of the equation $g\left(\underline{X},\underline{d}\right)=0$ (i.e. the boundary of ${𝒟}_{f}$) are sought. The set of values of $\underline{u}$ that belong to ${𝒟}_{f}$ is deduced from these solutions: it is a subset ${I}_{i}\subset ℝ$.

• Then, one calculates the probability ${q}_{i}=ℙ\left(||\underline{U}||\in {I}_{i}\right)$. Since the components of $\underline{U}$ are independent standard Gaussian variables, $||\underline{U}{||}^{2}$ follows a chi-square distribution (with $n$ degrees of freedom), which makes the computation straightforward.

Finally, the estimate of the probability ${P}_{f}$ after $N$ simulations is the following:

 $\begin{array}{c}\hfill {\stackrel{^}{P}}_{f,DS}=\frac{1}{N}\sum _{i=1}^{N}{q}_{i}\end{array}$

The following figure illustrates the principle of an iteration in dimension 2.

The Central Limit Theorem enables the difference between the estimated value and the sought value to be controlled by means of a confidence interval (if $N$ is sufficiently large, typically $N$ > a few dozen, even though there is no way to say for sure whether the asymptotic behaviour is reached). For a probability $\alpha$ strictly between 0 and 1 chosen by the user, one can, for example, be sure with confidence $\alpha$ that the true value of ${P}_{f}$ lies between ${\stackrel{^}{P}}_{f,inf}$ and ${\stackrel{^}{P}}_{f,sup}$, calculated analytically from simple formulae. To illustrate, for $\alpha =0.95$:

 $\begin{array}{c}\hfill {\stackrel{^}{P}}_{f,inf}={\stackrel{^}{P}}_{f,DS}-1.96\frac{{\sigma }_{q}}{\sqrt{N}},\phantom{\rule{4pt}{0ex}}{\stackrel{^}{P}}_{f,sup}={\stackrel{^}{P}}_{f,DS}+1.96\frac{{\sigma }_{q}}{\sqrt{N}}\end{array}$
 $\begin{array}{c}\hfill \mathrm{that}\mathrm{is}\mathrm{to}\mathrm{say}\phantom{\rule{4pt}{0ex}}ℙ\left({\stackrel{^}{P}}_{f,inf}\le {P}_{f}\le {\stackrel{^}{P}}_{f,sup}\right)=0.95\end{array}$

where ${\sigma }_{q}$ denotes the empirical standard deviation of the sample $\left\{{q}_{1},...,{q}_{N}\right\}$.
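The three steps above can be sketched as follows in dimension 2, where $||\underline{U}{||}^{2}$ follows a chi-square distribution with 2 degrees of freedom, so that $ℙ\left(||\underline{U}||\ge r\right)={e}^{-{r}^{2}/2}$; the linear limit state $g\left(\underline{u}\right)=\beta -{u}_{1}$ is a hypothetical example with a known exact answer (a plain-Python sketch, not the OpenTURNS implementation):

```python
import numpy as np

# Hypothetical linear limit state g(u) = beta - u1 in the standard space,
# so the failure domain is {u1 >= beta}; exact P_f = 1 - Phi(3), about 1.35e-3.
beta, n_dir = 3.0, 5_000
rng = np.random.default_rng(2)

# Step 1 -- uniform directions on the unit circle (RandomDirection strategy)
theta = rng.uniform(0.0, 2.0 * np.pi, size=n_dir)
a1 = np.cos(theta)

q = np.zeros(n_dir)
hits = a1 > 0.0                  # the ray meets {u1 = beta} only if a1 > 0
# Step 2 -- root of g(r * a) = 0 along each hitting ray
r_root = beta / a1[hits]
# Step 3 -- q_i = P(||U|| >= r_root); in dimension 2, ||U||^2 ~ chi2(2),
# hence P(||U|| >= r) = exp(-r^2 / 2)
q[hits] = np.exp(-0.5 * r_root**2)

pf_ds = q.mean()                 # average of the directional contributions
```

Note how each direction contributes a conditional probability rather than a 0/1 indicator, which is the source of the variance reduction.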

In practice in OpenTURNS, the Directional Sampling simulation requires the choice of:

• a Root Strategy :

• RiskyAndFast : for each direction, we check whether there is a sign change of the standard limit state function between the maximum distant point (at distance MaximumDistance from the center of the standard space) and the center of the standard space.

In case of a sign change, we search for one root in the segment [origin, maximum distant point] with the selected non-linear solver.

As soon as it is found, the segment [root, infinity point] is considered to be within the failure space.

• MediumSafe : for each direction, we go along the direction by steps of length stepSize from the origin to the maximum distant point (at distance MaximumDistance from the center of the standard space) and we check whether there is a sign change on each segment so formed.

At the first sign change, we search for one root in the segment concerned with the selected non-linear solver. Then, the segment [root, maximum distant point] is considered to be within the failure space.

If stepSize is small enough, this strategy guarantees that we find the root nearest to the origin.

• SafeAndSlow : for each direction, we go along the direction by steps of length stepSize from the origin to the maximum distant point (at distance MaximumDistance from the center of the standard space) and we check whether there is a sign change on each segment so formed.

We go all the way to the maximum distant point. Then, for each segment where we detected the presence of a root, we search for the root with the selected non-linear solver and we evaluate the contribution of that segment to the failure probability.

If stepSize is small enough, this strategy guarantees that we find all the roots in the direction, and the contribution of this direction to the failure probability is precisely evaluated.

• a Non Linear Solver :

• Bisection : bisection algorithm,

• Secant : based on the secant line through the last two iterates,

• Brent : mix of Bisection, Secant and inverse quadratic interpolation.

• and a Sampling Strategy :

• RandomDirection : we generate points on the unit sphere according to the uniform distribution and we consider the two opposite directions so formed.

• OrthogonalDirection : this strategy is parameterized by $k\in \left\{1,\cdots ,n\right\}$, where $n$ is the dimension of the input random vector $\underline{X}$. We generate one direct orthonormalized basis $\left({e}_{1},\cdots ,{e}_{n}\right)$ uniformly distributed in the set of direct orthonormal bases. We consider all the normalized linear combinations of $k$ vectors chosen within the $n$ vectors of the basis, where the coefficients of the linear combinations are in $\left\{+1,-1\right\}$. This generates ${C}_{n}^{k}{2}^{k}$ new vectors ${v}_{i}$. We sample according to all the directions defined by the vectors ${v}_{i}$.

If $k=1$, we consider all the axes of the standard space.
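The direction set of the OrthogonalDirection strategy can be sketched as follows (a plain-Python illustration, not the OpenTURNS implementation; the canonical basis stands in for the randomly generated orthonormal basis):

```python
import numpy as np
from itertools import combinations, product

def orthogonal_directions(basis, k):
    """All normalized linear combinations of k vectors of the basis with
    coefficients in {+1, -1}: C(n, k) * 2^k directions in total."""
    n = basis.shape[0]
    dirs = []
    for idx in combinations(range(n), k):
        for signs in product((1.0, -1.0), repeat=k):
            v = sum(s * basis[i] for s, i in zip(signs, idx))
            dirs.append(v / np.linalg.norm(v))
    return np.array(dirs)

# Canonical basis of R^3 standing in for a random orthonormal basis:
# k = 1 gives the 6 signed axes, k = 2 gives C(3,2) * 2^2 = 12 directions
axes = orthogonal_directions(np.eye(3), k=1)
diag = orthogonal_directions(np.eye(3), k=2)
```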

Other notations

This method is used in step C and enables the probability of exceeding the threshold of an output variable to be estimated (we refer to the probability of exceeding the threshold (critical region) because the inequality $g\left(\underline{X},\underline{d}\right)\le 0$ by convention defines the critical region, and is in the general case a rewriting of an inequality of the type $Z\ge \mathrm{threshold}$, where $Z$ is a random variable function of $\underline{X}$ and $\underline{d}$).

This amounts to calculating the cumulative distribution function of the output variable at a point and thus propagating the uncertainty defined in step B using the model defined in step A.

Input data:

• $\underline{X}$: random vector modelling the unknown variables defined in step A and for which the joint probability density function has been defined in step B,

• $\underline{d}$: vector of deterministic calculation parameters,

• $g\left(\underline{X},\underline{d}\right)<0$: probabilistic criterion specified in step A,

Parameters:

• $N$: number of simulations,

• $\alpha$: confidence level required for the confidence interval,

• Root Strategy,

• Non-linear Solver,

• Sampling Strategy.

Outputs:

• ${\stackrel{^}{P}}_{f,DS}$: estimation of the probability of exceeding the threshold,

• $\mathrm{Var}\left({\stackrel{^}{P}}_{f}\right)$: estimation of the variance of the probability estimator,

• ${\stackrel{^}{P}}_{f,sup}-{\stackrel{^}{P}}_{f,inf}$: length of the confidence interval.

References and theoretical basics

Readers interested in the problem of estimating the probability of exceeding a threshold are also referred to [FORM] , [SORM] , [LHS] , [Importance Sampling] and [Crude Monte-Carlo sampling] .

The following references provide an interesting bibliographical starting point for further study of this method:

• Robert C.P., Casella G. (2004). Monte-Carlo Statistical Methods, Springer, ISBN 0-387-21239-6, 2nd ed.

• Rubinstein R.Y. (1981). Simulation and The Monte-Carlo methods, John Wiley & Sons

• Bjerager, P. (1988). "Probability integration by Directional Simulation". Journal of Engineering Mechanics, vol. 114, no. 8

### 4.3.20 Step C  – Latin Hypercube Simulation

Mathematical description

Goal

Let us note ${𝒟}_{f}=\left\{\underline{x}\in {ℝ}^{n}|g\left(\underline{x},\underline{d}\right)\le 0\right\}$. The goal is to estimate the following probability:

 $\begin{array}{ccc}\hfill {P}_{f}& =& {\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int }_{{ℝ}^{n}}{\mathbf{1}}_{\left\{g\left(\underline{x},\underline{d}\right)\le 0\right\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& ℙ\left(\left\{g\left(\underline{X},\underline{d}\right)\le 0\right\}\right)\hfill \end{array}$

Principles

LHS, or Latin Hypercube Sampling, is a sampling method that covers the domain of variation of the input variables more evenly, thanks to a stratified sampling strategy. This method is applicable in the case of independent input variables. The sampling procedure is based on dividing the range of each variable into several intervals of equal probability. The sampling is undertaken as follows:

• Step 1 The range of each input variable is stratified into isoprobabilistic cells,

• Step 2 A cell is uniformly chosen among all the available cells,

• Step 3 The random number is obtained by inverting the Cumulative Distribution Function locally in the chosen cell,

• Step 4 All the cells sharing a stratum with the chosen cell are removed from the list of available cells.

The estimator of the probability of failure with LHS is given by:

 $\begin{array}{c}\hfill {\stackrel{^}{P}}_{f,LHS}^{N}=\frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\left\{g\left({\underline{X}}^{i},\underline{d}\right)\le 0\right\}}\end{array}$

where the sample of $\left\{{\underline{X}}^{i},i=1\cdots N\right\}$ is obtained as described previously.
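For uniform marginals, steps 1 to 4 reduce to one random permutation per dimension; a minimal sketch (not the OpenTURNS implementation; non-uniform marginals would additionally require inverting each marginal CDF):

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """LHS on [0,1)^dim: for each dimension a random permutation assigns
    every point to a distinct stratum (steps 1, 2 and 4), then a uniform
    draw places the point inside its cell (step 3)."""
    strata = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (strata + rng.uniform(size=(n, dim))) / n

rng = np.random.default_rng(3)
sample = latin_hypercube(10, 2, rng)
# Every row and every column of the 10 x 10 grid holds exactly one point
```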

One can show that:

 $\begin{array}{c}\hfill \mathrm{Var}\left[{\stackrel{^}{P}}_{f,LHS}^{N}\right]\le \frac{N}{N-1}.\mathrm{Var}\left[{\stackrel{^}{P}}_{f,MC}^{N}\right]\end{array}$

where:

• $\mathrm{Var}\left[{\stackrel{^}{P}}_{f,LHS}^{N}\right]$ is the variance of the estimator of the probability of exceeding a threshold computed by the LHS technique,

• $\mathrm{Var}\left[{\stackrel{^}{P}}_{f,MC}^{N}\right]$ is the variance of the estimator of the probability of exceeding a threshold computed by a crude Monte Carlo method.

Confidence Interval

With the notations,

 $\begin{array}{ccc}\hfill {\mu }_{N}& =& \frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\left\{g\left({\underline{x}}^{i},\underline{d}\right)\le 0\right\}}\hfill \\ \hfill {\sigma }_{N}^{2}& =& \frac{1}{N}\sum _{i=1}^{N}{\left({\mathbf{1}}_{\left\{g\left({\underline{x}}^{i},\underline{d}\right)\le 0\right\}}-{\mu }_{N}\right)}^{2}\hfill \end{array}$

the asymptotic confidence interval of order $1-\alpha$ associated to the estimator ${P}_{f,LHS}^{N}$ is

 $\begin{array}{c}\hfill \left[{\mu }_{N}-\frac{{q}_{1-\alpha /2}.{\sigma }_{N}}{\sqrt{N}};{\mu }_{N}+\frac{{q}_{1-\alpha /2}.{\sigma }_{N}}{\sqrt{N}}\right]\end{array}$

where ${q}_{1-\alpha /2}$ is the $1-\alpha /2$ quantile of the standard normal distribution $𝒩\left(0,1\right)$.

The LHS estimator is unbiased for ${P}_{f}$ (recall that all input variables must be independent).

Other notations

This method is derived from a more general method called 'Stratified Sampling'.

This method is part of step C of the global methodology. It requires the specification of the joint probability density function of the input variables and the definition of the threshold. The PDF must have an independent copula.
References and theoretical basics
• This method a priori enables a better exploration of the domain of variation of the input variables. No general rule can guarantee a better efficiency of LHS sampling than of classical Monte Carlo sampling. Nevertheless, one can show that the LHS strategy leads to a variance reduction if the model is monotonic in each variable.

• Be careful, this method is valid only if the input random variables are independent!

• Moreover, for reliability problems with a low failure probability, the tails of the distributions usually contain the regions that are most influential in terms of reliability.

• A fruitful link with the global approach can be established with [Importance Sampling to evaluate a probability to exceed a threshold] coming from the methodology.

The following references provide an interesting bibliographical starting point for further study of this method:

• McKay M.D., Beckman R.J., Conover W.J., A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, 21 (2), 1979

• Latin Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems, J. Helton, F.J. Davis, 2002, SAND 2001-0417

• The Design and Analysis of Computer Experiments by Thomas J. Santner, Brian J. Williams, and William I. Notz, Springer Verlag, New York 2003

• A Central Limit Theorem for Latin Hypercube Sampling, Art B. Owen, 1992, Journal of the Royal Statistical Society. Series B (Methodological), Vol. 54, No. 2, pp. 541-551

• Large Sample Properties of Simulations Using Latin Hypercube Sampling, Michael Stein, Technometrics, Vol. 29, No. 2 (May, 1987), pp. 143-151

Examples

To illustrate this method, we consider the sampling strategy for an input vector of dimension 2. Both components follow a uniform distribution $𝒰\left(0,1\right)$. The unit square ${\left(0,1\right)}^{2}$ is divided into $10×10=100$ cells of equal probability. The figure compares a population of 10 points obtained by Latin Hypercube Sampling with one obtained by Monte Carlo Sampling.

For the LHS sampling, each column and each row contains exactly one blue point. For the Monte Carlo sampling, there are some columns which contain no red point (e.g. the $\left(0.7,0.8\right)$ column), while some others contain two red points (e.g. the $\left(0.4,0.5\right)$ column). The same is true for the rows of the Monte Carlo sampling. Hence, the LHS sampling fills the sample space more evenly.

### 4.3.21 Step C  – Quasi Monte Carlo

Mathematical description

Goal

Let us note ${𝒟}_{f}=\left\{\underline{x}\in {ℝ}^{n}\phantom{\rule{0.222222em}{0ex}}|\phantom{\rule{0.222222em}{0ex}}g\left(\underline{x},\underline{d}\right)\le 0\right\}$. The goal is to estimate the following probability:

 $\begin{array}{ccc}\hfill {P}_{f}& =& {\int }_{{𝒟}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int }_{{ℝ}^{n}}{\mathbf{1}}_{\left\{g\left(\underline{x},\underline{d}\right)\le 0\right\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& ℙ\left(\left\{\phantom{\rule{0.222222em}{0ex}}g\left(\underline{X},\underline{d}\right)\le 0\right\}\right)\hfill \end{array}$ (4.3)

Quasi-Monte Carlo is a technique which approximates the integral (4.3) using low discrepancy sequences $\left\{{\underline{x}}^{1},...,{\underline{x}}^{N}\right\}$ instead of randomly generated sequences, as follows :

 $\begin{array}{c}\hfill {P}_{f}\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}{\mathbf{1}}_{{𝒟}_{f}}\left({\underline{x}}_{i}\right)f\left({\underline{x}}_{i}\right).\end{array}$

To have information on low discrepancy sequences, refer to [Low Discrepancy Sequence] – see page 4.3.2.

Principles

In general, the integral of a function $f$ on $\Delta ={\left[0,1\right]}^{s}$ can be approximated by using some low discrepancy sequence $\left\{{\underline{x}}_{1},\cdots ,{\underline{x}}_{N}\right\}$ as follows :

 $\begin{array}{c}\hfill {\int }_{\Delta }f\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}f\left({\underline{x}}_{i}\right).\end{array}$

The low-discrepancy sequence is generated on $\Delta$ according to the Lebesgue measure and may then be transformed to any measure $\mu$ thanks to the inverse CDF technique, in order to approximate the integral :

 $\begin{array}{c}\hfill {\int }_{{ℝ}^{s}}f\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(\underline{u}\right)\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}f\left({\underline{x}}_{i}\right).\end{array}$

As the sequence is not randomly generated, we cannot use the Central Limit Theorem to control the quality of the approximation. This quality is given by the Koksma-Hlawka inequality, which we recall here:

 $\begin{array}{c}\hfill \left|\frac{1}{N}\sum _{i=1}^{N}f\left({\underline{x}}_{i}\right)-{\int }_{I}f\left(\underline{x}\right)d\underline{x}\right|\le Var\left(f\right){D}^{N}\left({\underline{x}}_{1},...,{\underline{x}}_{N}\right)\end{array}$

where ${D}^{N}\left({\underline{x}}_{1},...,{\underline{x}}_{N}\right)$ is the discrepancy of the sequence $\left\{{\underline{x}}_{1},\cdots ,{\underline{x}}_{N}\right\}$ and $Var\left(f\right)$ is the total variation of $f$ in the sense of Hardy and Krause.

For Halton, Inverse Halton and Sobol sequences, we have :

 $\begin{array}{c}\hfill {D}^{N}=O\left(\frac{{log}^{s}N}{N}\right)\end{array}$

Thus, asymptotically the error converges in $O\left(\frac{{log}^{s}N}{N}\right)$, instead of $O\left(\frac{1}{\sqrt{N}}\right)$ for traditional Monte Carlo. The convergence rate depends on the dimension $s$, so one must have $N\gg {e}^{s}$.
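As an illustration, the following sketch builds a Halton sequence (bases 2 and 3) from the radical-inverse function and uses it to approximate the integral of the indicator of the quarter disc, whose exact value is $\pi /4$ (a plain-Python sketch, not the OpenTURNS implementation):

```python
import numpy as np

def van_der_corput(i, base):
    """Radical inverse of the positive integer i in the given base."""
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += (i % base) * f
        i //= base
        f /= base
    return x

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence on [0,1)^len(bases)."""
    return np.array([[van_der_corput(i, b) for b in bases]
                     for i in range(1, n + 1)])

# Quasi-Monte Carlo estimate of the area of the quarter disc, pi/4 ~ 0.7854
pts = halton(10_000)
qmc_estimate = np.mean(np.sum(pts**2, axis=1) <= 1.0)
```

The deterministic Halton points fill the unit square far more evenly than pseudo-random draws, which is what drives the improved convergence rate.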

Other notations

This method is part of step C of the global methodology. It requires the specification of the joint probability density function of the input variables and the definition of the threshold. The PDF must have an independent copula.
References and theoretical basics
This method a priori provides an asymptotically higher convergence rate than traditional Monte Carlo, but no general rule can guarantee that Quasi-Monte Carlo sampling is more efficient than classical Monte Carlo sampling.

The advantages of Quasi-Monte Carlo sampling tend to disappear as the dimension of the input vector increases.

It is therefore recommended to use the Quasi-Monte Carlo technique with large sample sizes or in very low dimension.

This method is valid only if the input random variables are independent.


### 4.3.22 Step C  – Estimating a quantile by Sampling / Wilks' Method

Mathematical description

Goal

Let us denote $\underline{Y}=h\left(\underline{X},\underline{d}\right)=\left({Y}^{1},...,{Y}^{{n}_{Y}}\right)$, where $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ is a random vector, and $\underline{d}$ a deterministic vector. We seek here to evaluate, using the probability distribution of the random vector $\underline{X}$, the $\alpha$-quantile ${q}_{{Y}^{i}}\left(\alpha \right)$ of ${Y}^{i}$, where $\alpha \in \left(0,1\right)$:

 $\begin{array}{c}\hfill ℙ\left({Y}^{i}\le {q}_{{Y}^{i}}\left(\alpha \right)\right)=\alpha \end{array}$

Principle

Given a sample $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of $N$ independent realizations of the random vector $\underline{X}$, ${q}_{{Y}^{i}}\left(\alpha \right)$ can be estimated as follows:

• the sample $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of the vector $\underline{X}$ is first transformed into a sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$ of the variable ${Y}^{i}$, where ${y}_{j}^{i}$ is the ${i}^{\mathrm{th}}$ component of $h\left({\underline{x}}_{j},\underline{d}\right)$,

• the sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$ is then placed in ascending order, which gives the sample $\left\{{y}^{\left(1\right)},...,{y}^{\left(N\right)}\right\}$,

• the empirical estimate of the quantile is then calculated by the formula:

 $\begin{array}{c}\hfill {\stackrel{^}{q}}_{{y}^{i}}\left(\alpha \right)={y}^{\left(\left[N\alpha \right]+1\right)}\end{array}$

where $\left[N\alpha \right]$ denotes the integer part of $N\alpha$.

For example, if $N=100$ and $\alpha =0.95$, ${\stackrel{^}{q}}_{{Y}^{i}}\left(0.95\right)$ is equal to ${y}^{\left(96\right)}$, which is the ${5}^{\mathrm{th}}$ largest value of the sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$. Note that this estimate is meaningful only if $1/N\le \alpha \le 1-1/N$. For example, if $N=100$, one can only consider values of $\alpha$ between 1% and 99%.
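The three steps above can be sketched as follows; the shuffled copy of the integers 1 to 100 is an illustrative stand-in for the sample of ${Y}^{i}$, not the output of a real model $h$:

```python
import random

N, alpha = 100, 0.95

# Illustrative stand-in for the sample {y_1, ..., y_N} of Y^i: a shuffled copy
# of the integers 1..100, so that order statistics can be read off directly.
y = list(range(1, N + 1))
random.seed(0)
random.shuffle(y)

y_sorted = sorted(y)                 # ascending order: y^(1) <= ... <= y^(N)
q_hat = y_sorted[int(N * alpha)]     # y^([N*alpha]+1); [N*alpha] in 0-based indexing
# q_hat == 96, the 5th largest value, matching the example in the text
```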

It is also possible to calculate an upper limit for the quantile with a confidence level $\beta$ chosen by the user; one can then be sure with a $\beta$ level of confidence that the real value of ${q}_{{Y}^{i}}\left(\alpha \right)$ is less than or equal to ${\stackrel{^}{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}$:

 $\begin{array}{c}\hfill ℙ\left({q}_{{Y}^{i}}\left(\alpha \right)\le {\stackrel{^}{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}\right)=\beta \end{array}$

The most robust method for calculating this upper limit consists of taking ${\stackrel{^}{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}={y}^{\left(j\left(\alpha ,\beta ,N\right)\right)}$ where $j\left(\alpha ,\beta ,N\right)$ is an integer between 2 and $N$ found by solving the equation:

 $\begin{array}{c}\hfill \sum _{k=0}^{j\left(\alpha ,\beta ,N\right)-1}{C}_{N}^{k}{\alpha }^{k}{\left(1-\alpha \right)}^{N-k}=\beta \end{array}$

This equation does not necessarily admit a solution, i.e. there may be no integer $j\left(\alpha ,\beta ,N\right)$ satisfying the equality; in this case one chooses the smallest integer $j$ such that:

 $\begin{array}{c}\hfill \sum _{k=0}^{j\left(\alpha ,\beta ,N\right)-1}{C}_{N}^{k}{\alpha }^{k}{\left(1-\alpha \right)}^{N-k}>\beta \end{array}$

which ensures that $ℙ\left({q}_{{Y}^{i}}\left(\alpha \right)\le {\stackrel{^}{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}\right)>\beta$; in other words, the level of confidence of the quantile estimation is greater than that initially required.

This formula of the confidence interval can be used in two ways:

• either directly to determine $j\left(\alpha ,\beta ,N\right)$ for the values $\alpha ,\beta ,N$ chosen by the user,

• or in reverse to determine the number $N$ of simulations to be carried out for the values $\alpha ,\beta$ and $j\left(\alpha ,\beta ,N\right)$ chosen by the user; this is known as Wilks' formula.

For example, for $\alpha =\beta =95%$, one takes $j=59$ with $N=59$ simulations (that is, the maximum value of the 59 samples) or $j=92$ with $N=93$ simulations (that is, the second largest value of the 93 samples). For values of $N$ between 59 and 92, the upper limit is the maximum value of the sample. The following table presents the complete results for $N\le 1000$, still for $\alpha =\beta =95%$.
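The rank $j\left(\alpha ,\beta ,N\right)$ can be computed by scanning the binomial cumulative distribution, as in the sketch below; the function name `wilks_rank` is ours for illustration, not an OpenTURNS API:

```python
from math import comb

def wilks_rank(alpha, beta, N):
    """Smallest rank j such that P(Binomial(N, alpha) <= j - 1) > beta, so that the
    order statistic y^(j) upper-bounds the alpha-quantile with confidence > beta.
    Returns None when even j = N does not reach the confidence level beta."""
    cdf = 0.0
    for j in range(1, N + 1):
        # Add the binomial term C(N, j-1) * alpha^(j-1) * (1-alpha)^(N-j+1).
        cdf += comb(N, j - 1) * alpha ** (j - 1) * (1 - alpha) ** (N - j + 1)
        if cdf > beta:
            return j
    return None
```

For $\alpha =\beta =0.95$ this reproduces the first rows of the table: rank 59 for $N=59$ and rank 92 for $N=93$, while $N=58$ admits no rank at all, which is why Wilks' formula requires at least 59 simulations.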

| $N$ | Rank of the upper bound of the quantile | Rank of the empirical quantile |
|-----|-----------------------------------------|--------------------------------|
| 59  | 59  | 57  |
| 93  | 92  | 89  |
| 124 | 122 | 118 |
| 153 | 150 | 146 |
| 181 | 177 | 172 |
| 208 | 203 | 198 |
| 234 | 228 | 223 |
| 260 | 253 | 248 |
| 286 | 278 | 272 |
| 311 | 302 | 296 |
| 336 | 326 | 320 |
| 361 | 350 | 343 |
| 386 | 374 | 367 |
| 410 | 397 | 390 |
| 434 | 420 | 413 |
| 458 | 443 | 436 |
| 482 | 466 | 458 |
| 506 | 489 | 481 |
| 530 | 512 | 504 |
| 554 | 535 | 527 |
| 577 | 557 | 549 |
| 601 | 580 | 571 |
| 624 | 602 | 593 |
| 647 | 624 | 615 |
| 671 | 647 | 638 |
| 694 | 669 | 660 |
| 717 | 691 | 682 |
| 740 | 713 | 704 |
| 763 | 735 | 725 |
| 786 | 757 | 747 |
| 809 | 779 | 769 |
| 832 | 801 | 791 |
| 855 | 823 | 813 |
| 877 | 844 | 834 |
| 900 | 866 | 856 |
| 923 | 888 | 877 |
| 945 | 909 | 898 |
| 968 | 931 | 920 |
| 991 | 953 | 942 |

Other notations

${\stackrel{^}{q}}_{{Y}^{i}}\left(\alpha \right)$ is often called the "empirical $\alpha$-quantile" for the variable ${Y}^{i}$.

In the overall process, the Monte Carlo simulation method for estimating a quantile appears in step C "Propagation of Uncertainty" when the study of uncertainty deals with a quantile of the variable of interest ${Y}^{i}$ defined in step A "Specifying Criteria and the Case Study".

Input data:

• $\underline{X}$: random vector modelling the unknown variables defined in step A and for which the joint probability density function has been defined in step B,

• $\underline{d}$: vector of deterministic calculation parameters,

• $\underline{Y}=h\left(\underline{X},\underline{d}\right)$: output variable / variable of interest specified in step A

Parameters:

• $\alpha$: quantile level ($\alpha$-quantile),

• $\beta$: confidence level for the quantile's upper bound,

• $N$: number of simulations to be carried out (which can be computed by OpenTURNS using Wilks' formula)

Outputs:

• ${\stackrel{^}{q}}_{{Y}^{i}}\left(\alpha \right)$: quantile estimate,

• ${\stackrel{^}{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}$: quantile upper bound with confidence $\beta$

References and theoretical basics

The standard Monte Carlo method does not require the function $h$ to have any special property (it can be non-linear, non-monotonic, non-differentiable, discontinuous, etc.), and the number of necessary simulations does not depend on the number of components of the vector $\underline{X}$. On the other hand, this method is not suitable (for the estimation of the quantile), or is very conservative (for the estimation of the upper limit), if $N$ is small and if $\alpha$ and $\beta$ are very close to 1.

The following references provide an interesting bibliographical starting point for further study of the method described here:

• Wilks, S.S. (1962). "Mathematical Statistics", New York, John Wiley.

• Robert C.P., Casella G. (2004). Monte-Carlo Statistical Methods, Springer, ISBN 0-387-21239-6, 2nd ed.

• Rubinstein R.Y. (1981). Simulation and the Monte Carlo Method, John Wiley & Sons.
