This section is organized according to the different uncertainty criteria defined in step A: the deterministic min-max criterion, the probabilistic criterion on central dispersion (expectation and variance), the probability of exceeding a threshold / failure probability, and the probabilistic criterion based on quantiles. Each method proposed for these criteria is described at the end of the section.
In order to generate random simulations according to a particular distribution, it is necessary to first generate random simulations according to the Uniform(0,1) distribution: OpenTURNS uses the Mersenne Twister algorithm for this. Then, several techniques exist to transform these uniform random simulations into random simulations according to a particular distribution:
[Uniform Random Generator] – see page 4.3
[Distribution Realisations] – see page 4.3.1
Furthermore, low-discrepancy sequences are commonly used as a replacement for uniformly distributed random numbers. They are also called quasi-random or sub-random sequences: the "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables and, in certain applications such as the quasi-Monte Carlo method, their lower discrepancy is an important advantage (source Wikipedia).
[Low Discrepancy Sequence] – see page 4.3.2
The min or max of the output variable of interest can be found using different techniques: [MinMax Approach] – see page 4.3.3, based on:
a design of experiments, which can be deterministic or probabilistic: [Design of Experiments] – see page 4.3.4
an optimization algorithm, which searches for the extreme values of the limit state function in a specified interval (which may be infinite): [Optimization algorithm] – see page 4.3.5
Two categories of method are proposed: approximation methods and sampling methods.
Approximation methods
[Quadratic combination / Perturbation method] – see page 4.3.6
Sampling methods
[Standard Monte Carlo simulation] – see page 4.3.7
Again, two categories of methods are proposed: approximation methods and sampling methods.
Approximation methods
FORM-SORM methods
[Preliminary isoprobabilistic transformation] – see page 4.3.8
[Generalized Nataf isoprobabilistic transformation] – see page 4.3.9
[Rosenblatt isoprobabilistic transformation] – see page 4.3.10
[FORM algorithm] – see page 4.3.11
[SORM algorithm] – see page 4.3.12
[Reliability index] – see page 4.3.13
Validation of the FORM-SORM underlying hypotheses
[Preliminary sphere sampling] – see page 4.3.14
[Strong maximum test] – see page 4.3.15
Sampling methods
[Monte Carlo] – see page 4.3.16
Accelerated simulation
[Importance Simulation] – see page 4.3.17
[Directional Simulation] – see page 4.3.18
[Latin Hypercube Simulation] – see page 4.3.19
[Quasi-Monte Carlo Simulation] – see page 4.3.20
Only one sampling approach is available in the current version of OpenTURNS.
[Monte Carlo Simulation and Wilks' formula] – see page 4.3.21
Mathematical description
Generating simulations according to a distribution is based on generating simulations according to the Uniform distribution on $[0,1]$: several techniques then exist to transform a realization of the uniform distribution into a realization of a distribution whose cumulative distribution function is $F$ (refer to [Distribution realizations] for each OpenTURNS distribution).
Thus, the quality of the random simulation generation rests entirely on the quality of the deterministic algorithm that simulates realizations of the Uniform(0,1) distribution.
OpenTURNS uses the DSFMT algorithm described here, which is the acronym of Double precision SIMD oriented Fast Mersenne Twister.
Principle
Each letter of the acronym is detailed below:
D = Double precision: the algorithm natively produces double precision floating point numbers.
S = SIMD = Single Instruction Multiple Data: the DSFMT algorithm is able to detect and take advantage of the capacity of the microprocessor to perform several operations at a time.
F = Fast: the transformation of the $k$th state vector of the random generator into the $(k+1)$th state vector is written in order to optimize its performance.
MT = Mersenne Twister: the characteristics of the algorithm are the following:
the algorithm is initialized with a large Mersenne number, of the form ${2}^{n}-1$, with $n=19937$.
the algorithm period is $T={2}^{19937}-1\simeq {10}^{6000}$. As a general rule, the bad effects of the periodicity of the algorithm arise as soon as the number of simulations exceeds $\simeq \sqrt{T}$. Here, $\sqrt{T}\simeq {2}^{9968}\simeq {10}^{3000}$.
the realizations of the DSFMT algorithm are uniformly distributed within ${[0,1]}^{n}$ up to dimension $n=624$.
Other notations
Link with OpenTURNS methodology
The user of OpenTURNS is able to save the internal state of the generator and to specify an initial state.
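This behaviour can be sketched with Python's standard random module, whose generator is itself a Mersenne Twister (MT19937, not the DSFMT variant used by OpenTURNS); the seed value and variable names are illustrative.

```python
import random

# Python's random.Random is itself a Mersenne Twister generator (MT19937);
# it serves here as a stand-in to illustrate saving and restoring the
# generator state, as OpenTURNS allows for its own DSFMT generator.
rng = random.Random(2008)
state = rng.getstate()                    # save the current internal state
first_run = [rng.random() for _ in range(5)]

rng.setstate(state)                       # restore the saved state
second_run = [rng.random() for _ in range(5)]
# The two streams coincide exactly: the experiment is reproducible.
```

Restoring the saved state replays exactly the same stream of uniform realizations, which is what makes a simulation study repeatable.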
References and theoretical basics
Crystal Ball: the random generator of the versions 7.1 and 7.3, and also the random generator of the current version at 30/06/2008, are multiplicative congruential generators, whose simulations ${R}_{n}$ are given by the relation ${R}_{n+1}=\left(62089911\phantom{\rule{0.166667em}{0ex}}{R}_{n}\right)\phantom{\rule{0.166667em}{0ex}}mod\phantom{\rule{0.166667em}{0ex}}({2}^{31}-1)$. Their period is $T={2}^{31}-2$.
So, it is recommended not to generate numerical samples of size greater than $\sqrt{T}\simeq 46340$!
Furthermore, the realizations cannot be considered as independent and uniformly spread within ${[0,1]}^{n}$ as soon as $n=3$.
Excel: the random generator has a period of $T=65000$: it is recommended not to generate numerical samples of size greater than $\sqrt{T}\simeq 255$.
Examples
Mathematical description
The objective is to transform a random realization of the Uniform(0,1) distribution into a random realization of a distribution whose cumulative distribution function is $F$.
Principle
Several classical techniques exist :
The inversion of the CDF: if $U$ is distributed according to the uniform distribution over $[0,1]$ (the bounds 0 and 1 may or may not be included), then $X={F}^{-1}\left(U\right)$ is distributed according to the CDF $F$. If ${F}^{-1}$ has a simple analytical expression, it provides an efficient way to generate realizations of $X$. Two points need to be mentioned:
If the expression of ${F}^{-1}\left(U\right)$ involves the quantity $1-U$ instead of $U$, it can be replaced by $U$, as $U$ and $1-U$ are identically distributed;
The numerical range of $F$ (i.e. the interval over which it is invertible) is always bounded even if its mathematical range is unbounded, and the numerical range may not preserve the symmetry of its mathematical counterpart. This can lead to biased nonuniform generators, even if this bias is usually small. For example, using standard double precision, the CDF of the standard normal distribution is numerically invertible over $[A,B]$ with $A\simeq -17.3$ and $B\simeq 8.5$.
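A minimal sketch of the CDF-inversion technique on the exponential distribution (plain Python, illustrative names; not the OpenTURNS implementation):

```python
import math
import random

# CDF inversion for the Exponential(lam) distribution:
# F(x) = 1 - exp(-lam*x), hence F^{-1}(u) = -log(1 - u)/lam; since U and
# 1 - U are identically distributed, -log(u)/lam is an equally valid form.
def exp_inverse_cdf(u, lam):
    return -math.log(u) / lam

rng = random.Random(0)
lam = 2.0
# 1.0 - rng.random() lies in (0, 1], which keeps the logarithm finite.
sample = [exp_inverse_cdf(1.0 - rng.random(), lam) for _ in range(20000)]
mean = sum(sample) / len(sample)          # close to E[X] = 1/lam = 0.5
```

The sample mean approaches $1/\lambda$, as expected for the exponential distribution.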
The rejection method: suppose that one wants to sample a random variable $X$ with a continuous distribution with PDF ${p}_{X}$ and that we know how to sample a random variable $Y$ with a continuous distribution with PDF ${p}_{Y}$. We suppose that there exists a positive scalar $k$ such that $\forall t\in \mathbb{R},{p}_{X}\left(t\right)\le k\,{p}_{Y}\left(t\right)$. The rejection method consists of the following steps:
Generate $y$ according to ${p}_{Y}$;
Generate $u$ according to a random variable $U$ independent of $Y$ and uniformly distributed over $[0,1]$;
If $u\le {p}_{X}\left(y\right)/\left(k\,{p}_{Y}\left(y\right)\right)$, accept $y$ as a realization of $X$, else return to step 1.
The rejection method can be improved in several ways:
If the evaluation of $\rho \left(y\right)={p}_{X}\left(y\right)/\left(k\,{p}_{Y}\left(y\right)\right)$ is costly, and if one knows a cheap function $h$ such that $h\left(y\right)\le \rho \left(y\right)$, then one can first check whether $u\le h\left(y\right)$ and directly accept $y$ if the test is positive (quick acceptance step). This is very effective if $h\left(y\right)$ can be evaluated from quantities that have to be computed to evaluate $\rho \left(y\right)$: $h$ is a kind of cheap version of $\rho $. The same trick can be used if one knows a cheap function $m$ such that $m\left(y\right)\ge \rho \left(y\right)$: one checks whether $u\ge m\left(y\right)$ and directly rejects $y$ if the test is positive (quick rejection test). The combination of quick acceptance and quick rejection is called a squeeze.
The test $u\le \rho \left(y\right)$ can be replaced by an equivalent one that is much more computationally efficient.
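The three steps above can be sketched on a concrete pair of densities; the half-normal target and exponential proposal below are illustrative choices, not taken from OpenTURNS.

```python
import math
import random

# Rejection-method sketch: sample a half-normal X (pdf p_X) with an
# Exponential(1) proposal Y (pdf p_Y). The constant k = sqrt(2/pi)*exp(1/2)
# satisfies p_X(t) <= k*p_Y(t) for all t >= 0, and the acceptance ratio
# p_X(y)/(k*p_Y(y)) simplifies to exp(-(y-1)^2/2).
def half_normal(rng):
    while True:
        y = -math.log(1.0 - rng.random())          # step 1: y ~ Exponential(1)
        u = rng.random()                           # step 2: u ~ Uniform(0,1)
        if u <= math.exp(-0.5 * (y - 1.0) ** 2):   # step 3: accept or retry
            return y

rng = random.Random(1)
sample = [half_normal(rng) for _ in range(20000)]
mean = sum(sample) / len(sample)   # close to E[X] = sqrt(2/pi) ~ 0.798
```

The acceptance probability is $1/k\approx 0.76$ here, so on average fewer than 1.4 proposals are needed per accepted realization.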
The transformation method: suppose that one wants to sample a random variable that is the image, by a simple transformation, of another random variable (or random vector) that can easily be sampled. One samples this latter random variable (or vector) and then maps the realization through the transformation to get the needed realization. This method can be combined with the rejection method, for example to build $h$ or $m$ implicitly.
The sequential search method (discrete distributions): it is a particular version of the CDF inversion method, dedicated to discrete random variables. One generates a realization $u$ of a random variable $U$ uniformly distributed over $[0,1]$, then searches for the smallest integer $k$ such that $u\le {\sum}_{i=0}^{k}{p}_{i}$, where ${p}_{i}=\mathbb{P}\left(X=i\right)$.
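The sequential search can be sketched in a few lines (plain Python; the probability vector is an arbitrary example):

```python
import random

# Sequential-search sampling of a discrete distribution: return the smallest
# k with u <= p_0 + ... + p_k. The probabilities below are arbitrary examples.
probs = [0.1, 0.2, 0.4, 0.3]              # P(X = k) for k = 0, 1, 2, 3

def discrete_sample(rng, probs):
    u = rng.random()
    cumulative = 0.0
    for k, p in enumerate(probs):
        cumulative += p
        if u <= cumulative:
            return k
    return len(probs) - 1                 # guard against round-off

rng = random.Random(3)
counts = [0] * len(probs)
for _ in range(20000):
    counts[discrete_sample(rng, probs)] += 1
freqs = [c / 20000 for c in counts]       # close to [0.1, 0.2, 0.4, 0.3]
```

The empirical frequencies converge to the prescribed probabilities as the sample size grows.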
The stable distribution method (archimedean copulas): to be detailed.
The conditional CDF inversion (general copula or general multivariate distributions): this method is a general procedure to sample a multivariate distribution. One generate in sequence a realization of the first marginal distribution, then a realization of the distribution of the second component conditionally to the value taken by the first component, and so on. Each step is done by inversion of the conditional CDF of the component $i$ with respect to the value taken by the preceding components.
The ratio of uniforms method: this is a special combination of the rejection method and the transformation method that has gained a great popularity due to its concision and its versatility. Let $\mathcal{A}=\{(u,v):\phantom{\rule{1.em}{0ex}}0\le u\le \sqrt{f\left(\frac{v}{u}\right)}\}$. If $(U,V)$ is a random vector uniformly distributed over $\mathcal{A}$, then $\frac{V}{U}$ has density $f/\int f$. The generation of $(U,V)$ is done by a rejection method, using a bounded enclosing region of $\mathcal{A}$. It can be done if and only if both $f\left(x\right)$ and ${x}^{2}f\left(x\right)$ are bounded. This method can be enhanced by using quick acceptance and quick rejection steps.
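A compact sketch of the ratio-of-uniforms method for the standard normal kernel (plain Python; the enclosing rectangle follows from the boundedness of $f$ and $x^2 f(x)$):

```python
import math
import random

# Ratio-of-uniforms sketch for the standard normal kernel f(x) = exp(-x^2/2):
# (u, v) is drawn uniformly in the enclosing rectangle [0,1] x [-b, b] with
# b = sqrt(2/e), and accepted when u <= sqrt(f(v/u)), i.e. v^2 <= -4*u^2*ln(u);
# then v/u is a standard normal realization.
B = math.sqrt(2.0 / math.e)

def normal_ratio_of_uniforms(rng):
    while True:
        u = rng.random()
        if u == 0.0:
            continue                       # avoid log(0)
        v = (2.0 * rng.random() - 1.0) * B
        if v * v <= -4.0 * u * u * math.log(u):
            return v / u

rng = random.Random(7)
sample = [normal_ratio_of_uniforms(rng) for _ in range(20000)]
mean = sum(sample) / len(sample)                  # close to 0
var = sum(x * x for x in sample) / len(sample)    # close to 1
```

The rectangle $[0,1]\times[-\sqrt{2/e},\sqrt{2/e}]$ is the minimal enclosing box of $\mathcal{A}$ for this kernel, which keeps the rejection rate moderate.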
The ziggurat method: this method allows for a very fast generation of positive random variates with decreasing PDF. The graph of the PDF is partitioned into horizontal slices of equal mass, the bottom slice covering the whole support of the PDF. All these slices have a maximal enclosed rectangle (the top one being empty) and a minimal enclosing rectangle (the bottom one not being defined). One generates a discrete uniform random variable $d$ over the number of slices to select a slice, and if this slice has both an enclosed and an enclosing rectangle, one generates a realization of a continuous uniform random variable on $[0,{L}_{d}]$, ${L}_{d}$ being the length of the enclosing rectangle of slice $d$. The enclosing and the enclosed rectangles define an efficient squeeze for a rejection method. If the bottom slice is selected, one has to sample the tail distribution beyond the length of the enclosed rectangle: it is the only case where a costly nonuniform random number has to be computed. If the number of slices is large enough, this case occurs only marginally, which is the main reason for the method's efficiency.
We describe here the techniques involved in OpenTURNS.
Distribution – Method
Arcsine – CDF inversion.
Bernoulli – CDF inversion.
Beta –
Binomial – Squeeze and reject method. See the Hörmann reference.
Burr – CDF inversion.
Chi – Transformation.
ChiSquare – See the Gamma distribution.
ClaytonCopula – Conditional CDF inversion.
ComposedCopula – Simulation of the copulas one by one, then association.
ComposedDistribution – Simulation of the copula and of the marginals with the CDF inversion technique.
Dirac – Return the supporting point.
Dirichlet – Transformation.
Epanechnikov – CDF inversion.
Exponential – CDF inversion.
FisherSnedecor – Transformation.
FrankCopula – Conditional CDF inversion.
Gamma – Transformation and rejection (Marsaglia and Tsang method).
Geometric – CDF inversion.
Generalized Pareto – CDF inversion.
GumbelCopula – Stable distribution.
Gumbel – CDF inversion.
Histogram – CDF inversion.
IndependentCopula – Transformation.
InverseNormal – Transformation.
KernelMixture – Transformation.
KPermutations – Knuth's algorithm.
Laplace – CDF inversion.
Logistic – CDF inversion.
LogNormal – Transformation.
LogUniform – Transformation.
Meixner – Ratio of uniforms method.
MinCopula – Transformation.
Mixture – Transformation.
MultiNomial – Conditional CDF inversion.
Non Central Chi Square – Transformation.
NegativeBinomial – Conditional simulation (Poisson mixed over a Gamma distribution).
Non Central Student – Transformation.
NormalCopula – Transformation of independent Normal realizations.
Normal –
Poisson –
RandomMixture – Transformation.
Rayleigh – CDF inversion.
Rice – Transformation.
Skellam – Transformation.
SklarCopula – Conditional CDF inversion by Gaussian quadrature and numerical inversion.
Student – Transformation.
Trapezoidal – CDF inversion.
Triangular – CDF inversion.
TruncatedDistribution on $[a,b]$ –
TruncatedNormal –
Uniform – Transformation.
UserDefined – Sequential search.
Weibull – CDF inversion.
ZipfMandelbrot – Bisection search.
Link with OpenTURNS methodology
Luc Devroye, "Non-Uniform Random Variate Generation", Springer-Verlag, 1986, available online at: http://cg.scs.carleton.ca/~luc/nonuniformrandomvariates.zip and with the important errata at: http://cg.scs.carleton.ca/~luc/errors.pdf
Wolfgang Hörmann, "The generation of binomial random variates", Journal of Statistical Computation and Simulation, vol. 46, pp. 101-110, 1993.
George Marsaglia and Wai Wan Tsang, "A Simple Method for Generating Gamma Variables", ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, pp. 363-372.
Doornik, J.A. (2005), "An Improved Ziggurat Method to Generate Normal Random Samples", mimeo, Nuffield College, University of Oxford.
Kjersti Aas, "Modelling the dependence structure of financial assets: a survey of four copulas", Norwegian Computing Center report nr. SAMBA/22/04, December 2004.
E. Stadlober, "The ratio of uniforms approach for generating discrete random variates", Journal of Computational and Applied Mathematics, vol. 31, no. 1, pp. 181-189, 1990.
Examples
Mathematical description
This text was extracted from (2).
A low-discrepancy sequence is a sequence with the property that for all values of $N$, its subsequence $({x}_{1},\cdots ,{x}_{N})$ has a low discrepancy.
The discrepancy of a sequence is low if the number of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of a uniform distribution. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value).
Low-discrepancy sequences are also called quasi-random or sub-random sequences, due to their common use as a replacement of uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables and in certain applications such as the quasi-Monte Carlo method their lower discrepancy is an important advantage.
At least three methods of numerical integration can be phrased as follows. Given a set $\{{x}_{1},\cdots ,{x}_{N}\}$ in the interval [0,1], approximate the integral of a function f as the average of the function evaluated at those points:
$$\begin{array}{c}\hfill {\int}_{0}^{1}f\left(u\right)\phantom{\rule{0.166667em}{0ex}}du\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}f\left({x}_{i}\right).\end{array}$$ 
If the points are chosen as ${x}_{i}=i/N$, this is the rectangle rule.
If the points are chosen to be randomly distributed, this is the Monte Carlo method: [Standard Monte-Carlo simulation] – see page 4.3.7.
If the points are chosen as elements of a low-discrepancy sequence, this is the quasi-Monte Carlo method: [Quasi-Monte Carlo simulation] – see page 4.3.20.
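The three rules can be compared on a toy integral; the van der Corput sequence used below is one of the classical low-discrepancy constructions mentioned later in this section (plain Python, illustrative names):

```python
import random

def van_der_corput(i, base=2):
    """Radical-inverse (van der Corput) sequence, a classical one-dimensional
    low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, r = divmod(i, base)
        q += r / denom
    return q

def f(x):
    return x * x          # the exact integral over [0,1] is 1/3

N = 1024
# Quasi-Monte Carlo: average f over the first N van der Corput points.
qmc = sum(f(van_der_corput(i + 1)) for i in range(N)) / N

# Standard Monte Carlo at the same sample size, for comparison.
rng = random.Random(5)
mc = sum(f(rng.random()) for _ in range(N)) / N
```

At equal sample size, the quasi-Monte Carlo estimate is typically closer to the exact value $1/3$ than the plain Monte Carlo one, reflecting the lower discrepancy of the point set.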
Principle
The discrepancy of a set $P=\{{x}_{1},\cdots ,{x}_{N}\}$ is defined, using Niederreiter's notation, as
$$\begin{array}{c}\hfill {D}_{N}\left(P\right)=\underset{B\in J}{sup}\left|\frac{A(B;P)}{N}-{\lambda}_{s}\left(B\right)\right|\end{array}$$ 
where ${\lambda}_{s}$ is the $s$-dimensional Lebesgue measure, $A(B;P)$ is the number of points in $P$ that fall into $B$, and $J$ is the set of $s$-dimensional intervals or boxes of the form:
$$\begin{array}{c}\hfill \prod _{i=1}^{s}[{a}_{i},{b}_{i})=\{\mathbf{x}\in {\mathbf{R}}^{s}:{a}_{i}\le {x}_{i}<{b}_{i}\}\phantom{\rule{0.166667em}{0ex}}\end{array}$$ 
where $0\le {a}_{i}<{b}_{i}\le 1$ .
The star-discrepancy ${D}_{N}^{*}(P)$ is defined similarly, except that the supremum is taken over the set $J^*$ of intervals of the form
$$\begin{array}{c}\hfill \prod _{i=1}^{s}[0,{u}_{i})\end{array}$$ 
where ${u}_{i}$ is in the half-open interval $[0,1)$.
The two are related by
$$\begin{array}{c}\hfill {D}_{N}^{*}\le {D}_{N}\le {2}^{s}{D}_{N}^{*}.\phantom{\rule{0.166667em}{0ex}}\end{array}$$ 
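In dimension $s=1$ the star discrepancy admits a classical closed form for a sorted sample, which the following sketch implements (plain Python, illustrative names):

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy D*_N in dimension s = 1: for the sorted sample,
    D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N), a classical closed form."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# Equally spaced midpoints achieve the optimal value 1/(2N)...
regular = [(i + 0.5) / 10 for i in range(10)]
d_regular = star_discrepancy_1d(regular)      # 1/(2*10) = 0.05

# ...while a fully clustered sample has a large discrepancy.
clustered = [0.5] * 10
d_clustered = star_discrepancy_1d(clustered)  # 0.5
```

The two examples bracket the behaviour: well-spread points have discrepancy of order $1/N$, clustered points of order 1.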
The Koksma-Hlawka inequality shows that the error of such a method can be bounded by the product of two terms, one of which depends only on $f$, while the other is the discrepancy of the set $\{{x}_{1},\cdots ,{x}_{N}\}$.
Let ${\overline{I}}^{s}$ be the sdimensional unit cube, ${\overline{I}}^{s}=[0,1]\times ...\times [0,1]$. Let $f$ have bounded variation $V\left(f\right)$ on ${I}^{s}$ in the sense of Hardy and Krause. Then for any $({x}_{1},\cdots ,{x}_{N})$ in ${I}^{s}=[0,1)\times ...\times [0,1)$,
$$\begin{array}{c}\hfill \left|\frac{1}{N}\sum _{i=1}^{N}f\left({x}_{i}\right)-{\int}_{{\overline{I}}^{s}}f\left(u\right)\phantom{\rule{0.166667em}{0ex}}du\right|\le V\left(f\right)\phantom{\rule{0.166667em}{0ex}}{D}_{N}^{*}({x}_{1},...,{x}_{N}).\end{array}$$ 
The Koksma-Hlawka inequality is sharp in the following sense: for any point set $\{{x}_{1},\cdots ,{x}_{N}\}$ in ${I}^{s}$ and any $\epsilon >0$, there is a function $f$ with bounded variation and $V\left(f\right)=1$ such that:
$$\begin{array}{c}\hfill \left|\frac{1}{N}\sum _{i=1}^{N}f\left({x}_{i}\right)-{\int}_{{\overline{I}}^{s}}f\left(u\right)\phantom{\rule{0.166667em}{0ex}}du\right|>{D}_{N}^{*}({x}_{1},...,{x}_{N})-\epsilon .\end{array}$$ 
Therefore, the quality of a numerical integration rule depends only on the discrepancy ${D}_{N}^{*}({x}_{1},\cdots ,{x}_{N})$.
Constructions of sequences are known, due to Faure, Halton, Hammersley, Sobol', Niederreiter and van der Corput, such that:
$$\begin{array}{c}\hfill {D}_{N}^{*}({x}_{1},...,{x}_{N})\le C\frac{{(lnN)}^{s}}{N}.\end{array}$$ 
where $C$ is a certain constant, depending on the sequence. These sequences are believed to have the best possible order of convergence. See also: van der Corput sequence, Halton sequences, Sobol sequences. In the case of the Haselgrove sequence, we have:
$$\begin{array}{c}\hfill \forall \epsilon >0,\exists {C}_{\epsilon}>0\phantom{\rule{4.pt}{0ex}}\text{such}\phantom{\rule{4.pt}{0ex}}\text{that}\phantom{\rule{4.pt}{0ex}}{D}_{N}^{*}({x}_{1},...,{x}_{N})\le \frac{{C}_{\epsilon}}{{N}^{1-\epsilon}}.\end{array}$$ 
which means a worse asymptotic performance than the previous sequences, but which can be interesting for finite sample sizes.
Remark 1:
If $\{{x}_{1},\cdots ,{x}_{N}\}$ is a low-discrepancy sequence, then $\frac{1}{N}\sum _{i=1}^{N}{\delta}_{{x}_{i}}$ converges weakly towards ${\lambda}_{s}$, the $s$-dimensional Lebesgue measure on ${[0,1]}^{s}$, which guarantees that for every test function (continuous and bounded) $\phi $, $<\frac{1}{N}\sum _{i=1}^{N}{\delta}_{{x}_{i}},\phi >$ converges towards $<{\lambda}_{s},\phi >=\int \phi \phantom{\rule{0.166667em}{0ex}}d{\lambda}_{s}$. We then obtain:
$$\begin{array}{c}\hfill {\displaystyle \frac{1}{N}\sum _{i=1}^{N}\phi \left({x}_{i}\right)\longrightarrow \int \phi \phantom{\rule{0.166667em}{0ex}}dx}\end{array}$$ 
Be careful: using low-discrepancy sequences instead of randomly distributed points does not lead to the same control of the variance of the approximation. In the case of randomly distributed points, this control is given by the Central Limit Theorem, which provides confidence intervals. In the case of low-discrepancy sequences, it is given by the Koksma-Hlawka inequality.
Remark 2:
It is possible to generate low-discrepancy sequences according to measures different from the Lebesgue one, by using the inverse CDF technique. But be careful: in OpenTURNS, the inverse CDF technique is not the one used in all cases (some distributions are generated with the rejection method, for example). That is why it is not recommended, in the general case, to substitute a low-discrepancy sequence for the uniform random generator.
Remark 3:
The low-discrepancy sequences have performances that deteriorate rapidly as the problem dimension grows, since the bound on the discrepancy increases exponentially with the dimension. This behaviour is shared by all low-discrepancy sequences, even if the standard low-discrepancy sequences do not all exhibit it with the same intensity. According to the given reference, the following recommendations can be made:
Use Halton or ReverseHalton sequences for dimensions not greater than 8;
Use Faure sequences for dimensions not greater than 25;
Use Haselgrove sequences for dimensions not greater than 50;
Use Sobol sequences for dimensions up to several hundred (but the OpenTURNS implementation of the Sobol sequence is limited to dimensions less than or equal to 40).
Other notations
Link with OpenTURNS methodology
More generally, they can be used to approximate the integral of a function as the average of the function evaluated at the points of the sequence.
References and theoretical basics
Inna Krykova, "Evaluating of pathdependent securities with low discrepancy methods", Master of Science Thesis, Worcester Polytechnic Institute, 2003.
Wikipedia contributors, "Low-discrepancy sequence", Wikipedia, The Free Encyclopedia, 10 April 2012, 17:48 UTC, <en.wikipedia.org/wiki/Low-discrepancy_sequence>
Examples
Figures: Sobol sequence, Faure sequence, Reverse Halton sequence, Halton sequence, Haselgrove sequence, and a uniform random sequence.
Mathematical description
The method is used in the following context: $\underline{x}=\left({x}^{1},...,{x}^{{n}_{X}}\right)$ is a vector of unknown variables, $\underline{d}$ a vector considered to be well known or where uncertainty is negligible, and $\underline{y}=h(\underline{x},\underline{d})=\left({y}^{1},...,{y}^{{n}_{Y}}\right)$ describes the variables of interest. The objective here is to determine the extreme (minimum and maximum) values of the components of $\underline{y}$ for all possible values of $\underline{x}$.
Principle
Several techniques make it possible to determine the extreme (minimum and maximum) values of the variables $\underline{y}$ over the set of all possible values of $\underline{x}$:
techniques based on design of experiments: the extreme values of $\underline{y}$ are sought over only a finite set of combinations $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$,
techniques using optimization algorithms.
Techniques based on design of experiments
In that case, the min-max approach consists of three steps:
choice of the design of experiments used to determine the combinations $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of the input random variables,
calculation of ${\underline{y}}_{i}=h({\underline{x}}_{i},\underline{d})$ for $i=1,...,N$,
calculation of ${min}_{1\le i\le N}{y}_{i}^{k}$ and of ${max}_{1\le i\le N}{y}_{i}^{k}$, together with the combinations related to these extreme values: ${\underline{x}}_{k,min}={\mathrm{argmin}}_{1\le i\le N}{y}_{i}^{k}$ and ${\underline{x}}_{k,max}={\mathrm{argmax}}_{1\le i\le N}{y}_{i}^{k}$.
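The three steps above can be sketched as follows (plain Python; the model $h$, the vector $\underline{d}$ and the grid are arbitrary illustrative choices):

```python
import itertools

# Illustrative model y = h(x, d); the function and the grid are arbitrary.
def h(x, d):
    return d[0] * x[0] ** 2 - d[1] * x[1]

d = (1.0, 2.0)
levels = [-1.0, -0.5, 0.0, 0.5, 1.0]

# Step 1: a deterministic (factorial-like) design of experiments.
design = list(itertools.product(levels, repeat=2))

# Step 2: evaluation of the model on every combination.
values = [h(x, d) for x in design]

# Step 3: extreme values and the combinations that produce them.
y_min, y_max = min(values), max(values)
x_min = design[values.index(y_min)]    # argmin over the design
x_max = design[values.index(y_max)]    # argmax over the design
```

The quality of the extrema found this way is of course limited by how finely the design covers the input domain, which is the point made just below.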
The type of design of experiments influences the quality of the response surface and hence the evaluation of its extreme values. OpenTURNS proposes different kinds of design of experiments: [Design of Experiments] – see page 4.3.4.
Techniques based on optimization algorithm
The min or max value of the output variable of interest is searched for using an optimization algorithm: [Optimization Algorithms] – see page 4.3.5.
Other notations
Mathematical description
The method is used in the following context: $\underline{x}=\left({x}^{1},...,{x}^{{n}_{X}}\right)$ is a vector of input parameters. We want to determine a particular set of values of $\underline{x}$ according to a particular design of experiments.
Principle
Different types of design of experiments can be determined:
some stratified patterns: axial, composite, factorial or box patterns,
some weighted patterns that we can split into different categories: the random patterns, the low discrepancy sequences and the Gauss product.
Stratified design of experiments
All stratified designs of experiments are defined from a center point and from discretization levels. OpenTURNS proposes the same number of levels for each direction: let us denote by ${n}_{level}$ that discretization number.
The axial pattern contains points only along the axes. It is not suitable for modelling interactions between variables. The pattern is obtained by discretizing each direction according to the specified levels, symmetrically with respect to the center of the design of experiments. The number of points generated is $1+2d{n}_{level}$.
The factorial pattern contains points only on the diagonals. It is not suitable for modelling the influence of single input variables. The pattern is obtained by discretizing each principal diagonal according to the specified levels, symmetrically with respect to the center of the design of experiments. The number of points generated is $1+{2}^{d}{n}_{level}$.
The composite pattern is the union of both previous ones. The number of points generated is $1+2d{n}_{level}+{2}^{d}{n}_{level}$.
The box pattern is a simple regular discretization of a hyperrectangle around the center of the design of experiments, with the number of intermediate points specified for each direction (denoted ${n}_{level}\left(direction\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}i\right)$). The number of points generated is $\prod _{i=1}^{d}(2+{n}_{level}\left(direction\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}i\right))$.
The following figures illustrate the different patterns obtained.
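The point counts given above can be cross-checked with a minimal sketch of the axial and factorial patterns (plain Python, centered at the origin with unit steps; OpenTURNS builds them around a user-chosen center):

```python
import itertools

# Sketch of the axial and factorial stratified patterns, centered at the
# origin with unit steps (illustrative, not the OpenTURNS implementation).
def axial(d, n_level):
    points = [(0.0,) * d]                      # the center point
    for axis in range(d):
        for level in range(1, n_level + 1):
            for sign in (1.0, -1.0):           # symmetric about the center
                p = [0.0] * d
                p[axis] = sign * level
                points.append(tuple(p))
    return points

def factorial(d, n_level):
    points = [(0.0,) * d]                      # the center point
    for level in range(1, n_level + 1):
        for signs in itertools.product((1.0, -1.0), repeat=d):
            points.append(tuple(s * level for s in signs))
    return points

axial_pts = axial(3, 2)          # 1 + 2*3*2   = 13 points
factorial_pts = factorial(3, 2)  # 1 + 2^3 * 2 = 17 points
```

The composite pattern would simply be the union of the two point sets, matching the count $1+2d{n}_{level}+{2}^{d}{n}_{level}$.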
Weighted design of experiments
The first category is the random patterns, where the set of input data is generated from the joint distribution of the input random vector, according to the Monte Carlo sampling technique or the LHS one (refer to [Monte Carlo Simulation] and [Latin Hypercube Sampling] ).
Careful: the LHS sampling method requires the independence of the input random variables.
The second category is the low discrepancy sequences (refer to [Low Discrepancy Sequence] ). OpenTURNS proposes the Faure, Halton, Haselgrove, Reverse Halton and Sobol sequences.
The third category is the Gauss product, which is the set of points whose components are the respective Gauss points (i.e. the roots of the orthogonal polynomials with respect to the univariate distribution).
Combinatorial generators
In some situations, one wants to explore all the possibilities related to constrained discrete uncertainties. In this case, we need to obtain all the sets of indices fulfilling the constraints. Examples of constraints are:
being a subset with $k$ elements of a set with $n$ elements, with $k\le n$;
being a permutation of $k$ elements taken from a set of $n$ elements, with $k\le n$;
being an element of a Cartesian product of sets with ${n}_{1},\cdots ,{n}_{d}$ elements.
It is important to get indices and not real-valued vectors. The distinction is made explicit by calling these designs of experiments Combinatorial Generators, which produce collections of indices instead of samples.
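The three kinds of constrained index collections listed above can be sketched with the standard itertools module (plain Python; the sizes are arbitrary examples):

```python
import itertools
from math import comb, perm

# The three combinatorial generators map directly onto itertools: k-subsets,
# k-permutations and Cartesian products of index sets.
n, k = 5, 3
subsets = list(itertools.combinations(range(n), k))      # C(5,3) = 10 subsets
k_perms = list(itertools.permutations(range(n), k))      # 5*4*3  = 60 tuples
cartesian = list(itertools.product(range(2), range(3)))  # 2*3    = 6 pairs
```

Each generated element is a tuple of integer indices, not a real-valued sample point, which is exactly the distinction made above.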
Other notations
Link with OpenTURNS methodology
It can also be used within the Quasi-Monte Carlo technique to approximate the probability of exceeding a given threshold.
Mathematical description
The method is used in the following context: $\underline{x}=\left({x}^{1},...,{x}^{{n}_{X}}\right)$ is a vector of unknown variables, $\underline{d}$ a vector considered to be well known or where uncertainty is negligible, and $y=h(\underline{x},\underline{d})$ is the scalar variable of interest. The objective here is to determine the extreme (minimum and maximum) values of $y$ when $\underline{x}$ varies.
Principle
It is possible to use some optimization algorithms. We present the principle of the TNC (Truncated Newton Constrained) algorithm, which minimizes a function of variables subject to bounds, using gradient information.
Truncated-Newton methods are a family of methods suitable for solving large nonlinear optimization problems. At each iteration, the current estimate of the solution is updated by approximately solving the Newton equations using an iterative algorithm. This results in a doubly iterative method: an outer iteration for the nonlinear optimization problem and an inner iteration for the Newton equations. The inner iteration is typically stopped or truncated before the solution to the Newton equations is obtained.
The TNC algorithm solves ${min}_{\underline{x}\in [\underline{a},\underline{b}]\subset {\overline{\mathbb{R}}}^{n}}f\left(\underline{x}\right)$ and, under proper regularity of the objective function $f$, seeks a point satisfying the optimality conditions:
$$\begin{array}{c}\hfill \left\{\begin{array}{c}\underline{\nabla f}\left(\underline{x}\right)=\underline{0}\hfill \\ \underline{\underline{{\nabla}_{2}}}f\left(\underline{x}\right)\phantom{\rule{4.pt}{0ex}}\text{is}\phantom{\rule{4.pt}{0ex}}\text{positive}\phantom{\rule{4.pt}{0ex}}\text{definite}\hfill \end{array}\right.\end{array}$$ 
The second-order Taylor expansion of $f$ around ${\underline{x}}_{k}$ leads to the determination of the iterate ${\underline{x}}_{k+1}$ such that:
$$\left\{\begin{array}{ccc}{\underline{\Delta}}_{k}\hfill & =& {\underline{x}}_{k+1}-{\underline{x}}_{k}\hfill \\ \underline{\underline{{\nabla}_{2}}}f\left({\underline{x}}_{k}\right){\underline{\Delta}}_{k}\hfill & =& -\underline{\nabla f}\left({\underline{x}}_{k}\right)\hfill \end{array}\right.$$  (104) 
Equation (104) is truncated: the iterative search for ${\underline{\Delta}}_{k}$ is stopped as soon as ${\underline{\Delta}}_{k}$ verifies:
$$\begin{array}{c}\hfill \left\|\underline{\underline{{\nabla}_{2}}}f\left({\underline{x}}_{k}\right){\underline{\Delta}}_{k}+\underline{\nabla f}\left({\underline{x}}_{k}\right)\right\|\le \eta \left\|\underline{\nabla f}\left({\underline{x}}_{k}\right)\right\|\end{array}$$ 
Finally, the iterate $k+1$ is defined by:
$$\begin{array}{c}\hfill {\underline{x}}_{k+1}={\underline{x}}_{k}+{\alpha}_{k}{\underline{\Delta}}_{k}\end{array}$$ 
where ${\alpha}_{k}$ is a step length whose maximum is controlled by the stepmx parameter.
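The inner/outer structure and the truncation rule can be sketched on a toy problem (plain Python; the test function, $\eta$ and the tolerances are illustrative choices, not TNC's actual settings, and bound constraints are omitted):

```python
import math

# Truncated-Newton sketch on f(x1, x2) = (x1-1)^2 + 10*(x2+2)^2: the inner
# conjugate-gradient loop solving the Newton system is stopped as soon as
# ||Hessian * Delta + grad|| <= eta * ||grad||, the truncation rule above.
def grad(x):
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]

def hess_vec(v):                 # the Hessian of f is diag(2, 20)
    return [2.0 * v[0], 20.0 * v[1]]

def truncated_newton(x, eta=0.5, tol=1e-8, max_outer=60):
    for _ in range(max_outer):                     # outer nonlinear loop
        g = grad(x)
        gnorm = math.hypot(g[0], g[1])
        if gnorm <= tol:
            break
        delta, r = [0.0, 0.0], [-g[0], -g[1]]      # CG on H*delta = -g
        p = [r[0], r[1]]
        while math.hypot(r[0], r[1]) > eta * gnorm:   # truncation criterion
            hp = hess_vec(p)
            rr = r[0] * r[0] + r[1] * r[1]
            alpha = rr / (p[0] * hp[0] + p[1] * hp[1])
            delta = [delta[0] + alpha * p[0], delta[1] + alpha * p[1]]
            r = [r[0] - alpha * hp[0], r[1] - alpha * hp[1]]
            beta = (r[0] * r[0] + r[1] * r[1]) / rr
            p = [r[0] + beta * p[0], r[1] + beta * p[1]]
        x = [x[0] + delta[0], x[1] + delta[1]]     # unit step length here
    return x

x_opt = truncated_newton([5.0, 5.0])   # converges to the minimizer (1, -2)
```

Because the residual of the Newton system equals the next gradient for a quadratic objective, each outer iteration here reduces the gradient norm by at least the factor $\eta$, which is why the truncated inner solve is sufficient for convergence.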
Other notations
Link with OpenTURNS methodology
References and theoretical basics
Stephen G. Nash, 1999, "A survey of Truncated-Newton methods", Systems Engineering and Operations Research Dept., George Mason University, Fairfax, VA 22030.
Mathematical description
The quadratic cumul is a probabilistic approach designed to propagate the uncertainties of the input variables $\underline{X}$ through the model $h$ towards the output variables $\underline{Y}$. It gives access to the central dispersion (expectation and variance) of the output variables.
Principles
This method is based on a Taylor expansion of the output variable $\underline{Y}$ with respect to the random vector $\underline{X}$ around the mean point ${\underline{\mu}}_{X}$. Depending on the order of the Taylor expansion (classically first or second order), different formulas are obtained. For ease of reading, we first present the formulas for ${n}_{Y}=1$ before those obtained for ${n}_{Y}>1$.
Case ${n}_{Y}=1$
As $Y=h\left(\underline{X}\right)$, the second-order Taylor expansion around $\underline{x}={\underline{\mu}}_{X}$ yields:
$$\begin{array}{c}\hfill Y=h\left({\underline{\mu}}_{X}\right)+<\underline{\nabla}h\left({\underline{\mu}}_{X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>+\frac{1}{2}<<{\underline{\underline{\nabla}}}^{2}h({\underline{\mu}}_{X},\phantom{\rule{0.222222em}{0ex}}{\underline{\mu}}_{X}),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>,\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>+o\left(\mathrm{Cov}\left[\underline{X}\right]\right)\end{array}$$ 
where:
${\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}=\mathbb{E}\left[\underline{X}\right]$ is the vector of the input variables at the mean values of each component.
$\mathrm{Cov}\left[\underline{X}\right]$ is the covariance matrix of the random vector $\underline{X}$, whose elements are ${\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}=\mathbb{E}\left[\left({X}^{i}-\mathbb{E}\left[{X}^{i}\right]\right)\left({X}^{j}-\mathbb{E}\left[{X}^{j}\right]\right)\right]$
$\underline{\nabla}h\left({\underline{\mu}}_{X}\right)={\phantom{\rule{0.222222em}{0ex}}}^{t}{\left(\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{j}}\right)}_{\underline{x}={\underline{\mu}}_{X}}$ is the gradient vector evaluated at $\underline{x}={\underline{\mu}}_{X}$, with $j=1,...,{n}_{X}$.
${\underline{\underline{\nabla}}}^{2}h(\underline{x},\underline{x})$ is a matrix whose entries are the second-order derivatives of the output variable with respect to the ${i}^{\mathrm{th}}$ and ${j}^{\mathrm{th}}$ components of $\underline{x}$, evaluated at $\underline{x}={\underline{\mu}}_{X}$: ${\left({\nabla}^{2}h({\underline{\mu}}_{X},{\underline{\mu}}_{X})\right)}_{ij}={\left(\frac{{\partial}^{2}h\left(\underline{x}\right)}{\partial {x}^{i}\partial {x}^{j}}\right)}_{\underline{x}={\underline{\mu}}_{X}}$
$<\cdot ,\cdot >$ denotes the scalar product of two vectors.
Approximation at the order 1 - Case ${n}_{Y}=1$
Expectation:
$$\begin{array}{c}\hfill \mathbb{E}\left[Y\right]\approx h\left({\underline{\mu}}_{X}\right)\end{array}$$ 
Variance:
$$\begin{array}{c}\hfill \mathrm{Var}\left[Y\right]\approx \sum _{i,j=1}^{{n}_{X}}\frac{\partial h\left({\underline{\mu}}_{X}\right)}{\partial {X}^{i}}.\frac{\partial h\left({\underline{\mu}}_{X}\right)}{\partial {X}^{j}}.{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}\end{array}$$ 
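The first-order formulas above can be sketched numerically with a finite-difference gradient. This is an illustrative sketch, not the OpenTURNS API; the function names and step size are assumptions.

```python
import numpy as np

def taylor_order1_moments(h, mu, cov, eps=1e-6):
    """First-order Taylor moments of Y = h(X):
    E[Y] ~ h(mu) and Var[Y] ~ grad(mu)' Cov[X] grad(mu),
    with the gradient estimated by central finite differences."""
    mu = np.asarray(mu, dtype=float)
    n = mu.size
    grad = np.empty(n)
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        grad[j] = (h(mu + e) - h(mu - e)) / (2.0 * eps)
    return h(mu), grad @ cov @ grad

# Linear model Y = 2 X1 + 3 X2: the first-order formulas are exact here
mu = [1.0, 2.0]
cov = np.array([[1.0, 0.5], [0.5, 2.0]])
m, v = taylor_order1_moments(lambda x: 2.0 * x[0] + 3.0 * x[1], mu, cov)
# v = 4*1 + 2*(2*3*0.5) + 9*2 = 28
```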
Approximation at the order 2 - Case ${n}_{Y}=1$
Expectation:
$$\begin{array}{c}\hfill \mathbb{E}\left[Y\right]\approx h\left({\underline{\mu}}_{X}\right)+\frac{1}{2}.\sum _{i,j=1}^{{n}_{X}}\frac{{\partial}^{2}h({\underline{\mu}}_{X},{\underline{\mu}}_{X})}{\partial {x}^{i}\partial {x}^{j}}.{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}\end{array}$$ 
Variance:
The decomposition of the variance at the order 2 is not implemented in the standard version of OpenTURNS. It requires both the knowledge of higher order derivatives of the model and the knowledge of moments of order strictly greater than 2 of the pdf.
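The second-order expectation correction above equals $\frac{1}{2}\mathrm{trace}\left(\underline{\underline{H}}\phantom{\rule{0.166667em}{0ex}}\mathrm{Cov}\left[\underline{X}\right]\right)$, where $\underline{\underline{H}}$ is the Hessian of $h$ at the mean. A minimal sketch with a finite-difference Hessian follows; it is illustrative only (not OpenTURNS API), and names and step size are assumptions.

```python
import numpy as np

def taylor_order2_mean(h, mu, cov, eps=1e-4):
    """Second-order Taylor mean: E[Y] ~ h(mu) + 0.5 * trace(H(mu) Cov[X]),
    with the Hessian H estimated by central finite differences."""
    mu = np.asarray(mu, dtype=float)
    n = mu.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (h(mu + ei + ej) - h(mu + ei - ej)
                       - h(mu - ei + ej) + h(mu - ei - ej)) / (4.0 * eps ** 2)
    return h(mu) + 0.5 * np.trace(H @ cov)

# Quadratic model Y = X1^2: the exact mean is mu1^2 + Var[X1] = 1 + 0.25
mu = [1.0, 0.0]
cov = np.diag([0.25, 1.0])
m2 = taylor_order2_mean(lambda x: x[0] ** 2, mu, cov)
```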
Case ${n}_{Y}>1$
The quadratic cumul approach can be developed at different orders from the Taylor expansion of the random vector $\underline{Y}$. As $\underline{Y}=h\left(\underline{X}\right)$, the second-order Taylor expansion around $\underline{x}={\underline{\mu}}_{X}$ yields:
$$\begin{array}{c}\hfill \underline{Y}=h\left({\underline{\mu}}_{X}\right)+<\underline{\underline{\nabla}}h\left({\underline{\mu}}_{X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>+\frac{1}{2}<<{\underline{\underline{\underline{\nabla}}}}^{2}h({\underline{\mu}}_{X},\phantom{\rule{0.222222em}{0ex}}{\underline{\mu}}_{X}),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>,\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>+o\left(\mathrm{Cov}\left[\underline{X}\right]\right)\end{array}$$ 
where:
${\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}=\mathbb{E}\left[\underline{X}\right]$ is the vector of the input variables at the mean values of each component.
$\mathrm{Cov}\left[\underline{X}\right]$ is the covariance matrix of the random vector $\underline{X}$, whose elements are ${\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}=\mathbb{E}\left[\left({X}^{i}-\mathbb{E}\left[{X}^{i}\right]\right)\left({X}^{j}-\mathbb{E}\left[{X}^{j}\right]\right)\right]$
$\underline{\underline{\nabla}}h\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right)={\phantom{\rule{0.222222em}{0ex}}}^{t}{\left(\frac{\partial {y}^{i}}{\partial {x}^{j}}\right)}_{\underline{x}\phantom{\rule{0.222222em}{0ex}}=\phantom{\rule{0.222222em}{0ex}}{\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}}={\phantom{\rule{0.222222em}{0ex}}}^{t}{\left(\frac{\partial {h}^{i}\left(\underline{x}\right)}{\partial {x}^{j}}\right)}_{\underline{x}\phantom{\rule{0.222222em}{0ex}}=\phantom{\rule{0.222222em}{0ex}}{\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}}$ is the transposed Jacobian matrix with $i=1,...,{n}_{Y}$ and $j=1,...,{n}_{X}$.
$\underline{\underline{\underline{{\nabla}^{2}}}}h(\underline{x}\phantom{\rule{0.222222em}{0ex}},\underline{x})$ is a tensor of order 3 whose entries are the second-order derivatives of the ${k}^{\mathrm{th}}$ component of the output vector $h\left(\underline{x}\right)$ with respect to the ${i}^{\mathrm{th}}$ and ${j}^{\mathrm{th}}$ components of $\underline{x}$: ${\left({\nabla}^{2}h\left(\underline{x}\right)\right)}_{ijk}=\frac{{\partial}^{2}{h}^{k}\left(\underline{x}\right)}{\partial {x}^{i}\partial {x}^{j}}$
$<\underline{\underline{\nabla}}h\left({\underline{\mu}}_{X}\right),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>={\sum}_{j=1}^{{n}_{X}}{\left(\frac{\partial \underline{y}}{\partial {x}^{j}}\right)}_{\underline{x}={\underline{\mu}}_{X}}.\left({X}^{j}-{\mu}_{X}^{j}\right)$
${\left(<<{\underline{\underline{\underline{\nabla}}}}^{2}h({\underline{\mu}}_{X},\phantom{\rule{0.222222em}{0ex}}{\underline{\mu}}_{X}),\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>,\phantom{\rule{0.222222em}{0ex}}\underline{X}-{\underline{\mu}}_{X}>\right)}_{k}=\sum _{i,j=1}^{{n}_{X}}\left({X}^{i}-{\mu}_{X}^{i}\right).{\left(\frac{{\partial}^{2}{y}^{k}}{\partial {x}^{i}\partial {x}^{j}}\right)}_{\underline{x}={\underline{\mu}}_{X}}.\left({X}^{j}-{\mu}_{X}^{j}\right)$
Approximation at the order 1 - Case ${n}_{Y}>1$
Expectation:
$$\begin{array}{c}\hfill \mathbb{E}\left[\underline{Y}\right]\approx \underline{h}\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\end{array}$$ 
Note that $\mathbb{E}\left[\underline{Y}\right]$ is a vector: its ${k}^{\mathrm{th}}$ component is the ${k}^{\mathrm{th}}$ component of the output vector computed by the model $h$ at the mean value ${\underline{\mu}}_{X}$. $\mathbb{E}\left[\underline{Y}\right]$ is thus obtained by a single evaluation of the model at the mean.
Variance:
$$\begin{array}{c}\hfill \mathrm{Cov}\left[\underline{Y}\right]{\approx}^{t}\underline{\underline{\nabla}}\phantom{\rule{0.222222em}{0ex}}\underline{h}\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right).\mathrm{Cov}\left[\underline{X}\right].\underline{\underline{\nabla}}\phantom{\rule{0.222222em}{0ex}}\underline{h}\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\end{array}$$ 
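For a linear model this first-order covariance propagation is exact, which gives a convenient check. The sketch below assumes the Jacobian at the mean is available; it is illustrative only, not the OpenTURNS API.

```python
import numpy as np

def propagate_covariance(jacobian, cov_x):
    """First-order covariance propagation for a vector model Y = h(X):
    Cov[Y] ~ J Cov[X] J', with J the Jacobian of h at the mean."""
    J = np.asarray(jacobian, dtype=float)
    return J @ cov_x @ J.T

# Linear model Y = A X: here Cov[Y] = A Cov[X] A' exactly
A = np.array([[1.0, 2.0], [0.0, 1.0]])
cov_x = np.array([[1.0, 0.3], [0.3, 2.0]])
cov_y = propagate_covariance(A, cov_x)
```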
Approximation at the order 2 - Case ${n}_{Y}>1$
Expectation:
$$\begin{array}{c}\hfill \mathbb{E}\left[\underline{Y}\right]\approx \underline{h}\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right)+\frac{1}{2}.{\underline{\underline{\underline{\nabla}}}}^{2}\underline{\underline{h}}({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X},{\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X})\odot \mathrm{Cov}\left[\underline{X}\right]\end{array}$$ 
This last formulation is shorthand for the following componentwise expression:
$$\begin{array}{c}\hfill {\left(\mathbb{E}\left[\underline{Y}\right]\right)}_{k}\approx {\left(\underline{h}\left({\underline{\mu}}_{X}\right)\right)}_{k}+\sum _{i=1}^{{n}_{X}}\frac{1}{2}{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ii}.{\left({\nabla}^{2}\phantom{\rule{0.222222em}{0ex}}h\left({\underline{\mu}}_{X}\right)\right)}_{iik}+\sum _{i=1}^{{n}_{X}}\sum _{j=1}^{i-1}{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}.{\left({\nabla}^{2}\phantom{\rule{0.222222em}{0ex}}h\left({\underline{\mu}}_{X}\right)\right)}_{ijk}\end{array}$$ 
Variance:
The decomposition of the variance at the order 2 is not implemented in the standard version of OpenTURNS. It requires both the knowledge of higher order derivatives of the model and the knowledge of moments of order strictly greater than 2 of the pdf.
Other notations
Link with OpenTURNS methodology
References and theoretical basics
Mathematical description
Let us denote $\underline{Y}=h\left(\underline{X},\underline{d}\right)=\left({Y}^{1},...,{Y}^{{n}_{Y}}\right)$, where $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ is a random vector, and $\underline{d}$ a deterministic vector. We seek here to evaluate the characteristics of the central part (central tendency and spread, i.e. mean and variance) of the probability distribution of a variable ${Y}^{i}$, using the probability distribution of the random vector $\underline{X}$.
Principle
The Monte Carlo method is a numerical integration method using sampling, which can be used, for example, to determine the mean and standard deviation of a random variable ${Y}^{i}$ (if these quantities exist, which is not the case for all probability distributions):
$$\begin{array}{c}\hfill {m}_{{Y}^{i}}=\int u\phantom{\rule{0.166667em}{0ex}}{f}_{{Y}^{i}}\left(u\right)\phantom{\rule{0.166667em}{0ex}}du,\phantom{\rule{4pt}{0ex}}{\sigma}_{{Y}^{i}}=\sqrt{\int {\left(u{m}_{{Y}^{i}}\right)}^{2}\phantom{\rule{0.166667em}{0ex}}{f}_{{Y}^{i}}\left(u\right)\phantom{\rule{0.166667em}{0ex}}du}\end{array}$$ 
where ${f}_{{Y}^{i}}$ represents the probability density function of ${Y}^{i}$.
Suppose now that we have the sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$ of $N$ values randomly and independently sampled from the probability distribution ${f}_{{Y}^{i}}$; this sample can be obtained by drawing an $N$-sample $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of the random vector $\underline{X}$ (the distribution of which is known) and by computing ${\underline{y}}_{j}=h\left({\underline{x}}_{j},\underline{d}\right)\phantom{\rule{4pt}{0ex}}\forall 1\le j\le N$. Then, the Monte Carlo estimates of the mean and standard deviation are the empirical mean and standard deviation of the sample:
$$\begin{array}{c}\hfill {\widehat{m}}_{{Y}^{i}}=\frac{1}{N}\sum _{j=1}^{N}{y}_{j}^{i},\phantom{\rule{4pt}{0ex}}{\widehat{\sigma}}_{{Y}^{i}}=\sqrt{\frac{1}{N}{\sum}_{j=1}^{N}{\left({y}_{j}^{i}{\widehat{m}}_{{Y}^{i}}\right)}^{2}}\end{array}$$ 
These are only estimates, but by the law of large numbers their convergence to the true values ${m}_{{Y}^{i}}$ and ${\sigma}_{{Y}^{i}}$ is assured as the sample size $N$ tends to infinity. The Central Limit Theorem enables the difference between the estimated value and the sought value to be controlled by means of a confidence interval (provided $N$ is sufficiently large, typically more than a few dozen, even though there is no way to say for sure whether the asymptotic regime is reached). For a probability $\alpha $ strictly between 0 and 1 chosen by the user, one can, for example, be sure with confidence $\alpha $ that the true value of ${m}_{{Y}^{i}}$ lies between ${\widehat{m}}_{i,inf}$ and ${\widehat{m}}_{i,sup}$, calculated analytically from simple formulae. To illustrate, for $\alpha =0.95$:
$$\begin{array}{c}\hfill {\widehat{m}}_{i,inf}={\widehat{m}}_{{Y}^{i}}-1.96\frac{{\widehat{\sigma}}_{{Y}^{i}}}{\sqrt{N}},\phantom{\rule{4pt}{0ex}}{\widehat{m}}_{i,sup}={\widehat{m}}_{{Y}^{i}}+1.96\frac{{\widehat{\sigma}}_{{Y}^{i}}}{\sqrt{N}},\phantom{\rule{4pt}{0ex}}\text{that is to say}\phantom{\rule{4pt}{0ex}}\mathrm{Pr}\left({\widehat{m}}_{i,inf}\le {m}_{{Y}^{i}}\le {\widehat{m}}_{i,sup}\right)=0.95\end{array}$$ 
The size of the confidence interval, which represents the uncertainty of this mean estimation, decreases as $N$ increases, but only gradually (the rate is proportional to $1/\sqrt{N}$: multiplying $N$ by 100 reduces the length of the confidence interval $\left|{\widehat{m}}_{i,sup}-{\widehat{m}}_{i,inf}\right|$ by a factor 10).
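The estimators and the 95% confidence interval above can be sketched in plain NumPy. This is an illustrative sketch, not the OpenTURNS API; the toy model and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_mean(h, sample_x, z_alpha=1.96):
    """Monte Carlo estimate of E[h(X)] with a 95% confidence interval
    built from the Central Limit Theorem (z_alpha = 1.96 for alpha = 0.95)."""
    y = np.array([h(x) for x in sample_x])
    m_hat = y.mean()
    sigma_hat = y.std()                       # empirical standard deviation
    half_width = z_alpha * sigma_hat / np.sqrt(len(y))
    return m_hat, (m_hat - half_width, m_hat + half_width)

# Toy model h(x) = x1 + x2 with X ~ N(0, I): the true mean is 0
N = 10000
sample = rng.standard_normal((N, 2))
m_hat, (lo, hi) = monte_carlo_mean(lambda x: x[0] + x[1], sample)
```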
Other notations
Link with OpenTURNS methodology
step A: specification of input variables $\underline{X}$ and $\underline{d}$ and the output variable of interest $\underline{Y}=h(\underline{X},\underline{d})$,
step B: use of one of the proposed techniques for determining the probability distribution of the variable $\underline{X}$,
The method's parameters are the following:
number $N$ of simulations,
probability $\alpha $ giving the required confidence level for the confidence intervals,
The method described here returns the following results:
the Monte Carlo estimates ${\widehat{m}}_{{Y}^{i}}$ and ${\widehat{\sigma}}_{{Y}^{i}}$ of the mean and standard deviation of the variable of interest ${Y}^{i}$,
the confidence interval $\left[{\widehat{m}}_{i,inf},{\widehat{m}}_{i,sup}\right]$ for the mean ${m}_{{Y}^{i}}$.
References and theoretical basics
Actually, the only limitation resides in the number of simulations $N$: if it is not sufficiently high (because of the CPU time required for each evaluation of $\underline{Y}=h(\underline{X},\underline{d})$), the estimates ${\widehat{m}}_{{Y}^{i}}$ and ${\widehat{\sigma}}_{{Y}^{i}}$ can carry too great an uncertainty. It is then advisable to verify the convergence of the estimators, especially by plotting the coefficient of variation ${\widehat{\sigma}}_{{Y}^{i}}/{\widehat{m}}_{{Y}^{i}}$ as a function of $N$: if convergence is not apparent, it is necessary to increase $N$ or, if needed, to choose another propagation method to estimate the central uncertainty of $\underline{Y}$ (see [Quadratic combination / Perturbation method] ).
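The convergence check suggested above can be sketched by computing the running coefficient of variation as the sample grows; one would normally plot this curve against $N$. This is an illustrative helper (not OpenTURNS API); the stand-in outputs are an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def running_cv(y):
    """Coefficient of variation sigma_hat / m_hat computed on the first
    n outputs, for every n = 1..len(y)."""
    y = np.asarray(y, dtype=float)
    n = np.arange(1, len(y) + 1)
    means = np.cumsum(y) / n
    sq_means = np.cumsum(y ** 2) / n
    stds = np.sqrt(np.maximum(sq_means - means ** 2, 0.0))
    return stds / means

# Stand-in for model outputs h(x_j): N(10, 2), so the true CV is 0.2
y = rng.normal(10.0, 2.0, size=5000)
cv = running_cv(y)
```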
The following references provide a bibliographic starting point for interested readers for further study of the method described here:
Robert C.P., Casella G. (2004). "Monte Carlo Statistical Methods", Springer, ISBN 0387212396, 2nd ed.
Rubinstein R.Y. (1981). "Simulation and The Monte Carlo methods", John Wiley & Sons
"Guide to the expression of Uncertainty in Measurements (GUM)", ISO publication
Mathematical description
The isoprobabilistic transformation is used in the following context : $\underline{X}$ is the input random vector, ${F}_{i}$ are the cumulative distribution functions of its components and $C$ its copula.
Let us denote by $\underline{d}$ a deterministic vector, by $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$ the limit state function of the model, by ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\}$ the event considered here and by $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})=0$ its boundary.
One way to evaluate the probability content ${P}_{f}$ of the event ${\mathcal{D}}_{f}$:
$${P}_{f}=\mathbb{P}\left(g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$$  (105) 
is to introduce an isoprobabilistic transformation $T$, a diffeomorphism from $supp\left(\underline{X}\right)$ into ${\mathbb{R}}^{n}$, such that the distribution of the random vector $\underline{U}=T\left(\underline{X}\right)$ has the following property : $\underline{U}$ and $\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{U}$ have the same distribution for all rotations $\underline{\underline{R}}\in {\mathcal{S}\mathcal{P}}_{n}\left(\mathbb{R}\right)$.
Such transformations exist and the most widely used are :
the Generalized Nataf transformation (refer to [Generalized Nataf] ),
the Rosenblatt transformation (refer to [Rosenblatt] ).
Principle
If we suppose that the numerical model $g$ has suitable differentiability properties, the evaluation of the probability (105) can be transformed into the evaluation of the probability:
$${P}_{f}=\mathbb{P}\left(G(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathbb{R}}^{n}}{1}_{G(\underline{u}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}\phantom{\rule{0.166667em}{0ex}}{f}_{\underline{U}}\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}$$  (106) 
where $T$ is a ${C}^{1}$-diffeomorphism called an isoprobabilistic transformation, ${f}_{\underline{U}}$ is the probability density function of $\underline{U}=T\left(\underline{X}\right)$ and $G=g\circ {T}^{-1}$.
The vector $\underline{U}$ is said to be in the standard space, whereas $\underline{X}$ is in the physical space.
The interest of such a transformation is the rotational invariance of the distributions in the standard space : the random vector $\underline{U}$ has a spherical distribution, which means that the density function ${f}_{\underline{U}}$ is a function of $\parallel \underline{u}\parallel $. Thus, without loss of generality, it is possible to map the general failure domain $\mathcal{D}$ to a domain ${\mathcal{D}}^{\text{'}}$ for which the design point ${{\underline{u}}^{*}}^{\text{'}}$ (the point of the event boundary at minimal distance from the center of the standard space) is supported by the last axis (see Figure 4.3.9).
The following transformations verify that property, under some specific conditions on the dependence structure of the random vector $\underline{X}$ :
the Nataf transformation (see [A. Nataf]) : $\underline{X}$ must have a normal copula,
the Generalized Nataf transformation (see [R. Lebrun, A. Dutfoy, 2008, b]) : $\underline{X}$ must have an elliptical copula,
the Rosenblatt transformation (see [M. Rosenblatt]) : there is no condition on the copula of $\underline{X}$ .
OpenTURNS automatically uses the Generalized Nataf transformation when the copula is elliptical and the Rosenblatt transformation in any other case.
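In the simplest case where the components of $\underline{X}$ are independent, both transformations reduce componentwise to ${u}_{i}={\Phi}^{-1}\left({F}_{i}\left({x}_{i}\right)\right)$. A minimal sketch using only the Python standard library follows; the exponential marginals are illustrative assumptions, and this is not the OpenTURNS API.

```python
import math
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: cdf is Phi, inv_cdf is Phi^{-1}

def to_standard_space(x, marginal_cdfs):
    """Map a physical point to the standard space, component by component,
    assuming independent components: u_i = Phi^{-1}(F_i(x_i))."""
    return [std_normal.inv_cdf(F(xi)) for xi, F in zip(x, marginal_cdfs)]

# Illustrative exponential marginals with rates 1 and 0.5
F1 = lambda t: 1.0 - math.exp(-t)
F2 = lambda t: 1.0 - math.exp(-0.5 * t)
# Both inputs below are the marginal medians, so they map to the origin
u = to_standard_space([math.log(2.0), 2.0 * math.log(2.0)], [F1, F2])
```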
Link with OpenTURNS methodology
The reference [R. Lebrun, A. Dutfoy, 2008, c] compares both isoprobabilistic transformations : Generalized Nataf and Rosenblatt. The conclusions are the following : in the case where the copula of the random vector $\underline{X}$ is normal, the canonical Rosenblatt transformation (i.e. when the conditioning order follows the natural order of the components) and the Nataf transformation are identical; if the conditioning order is not the canonical one, both transformations only differ by an orthogonal transformation.
In the case where the copula of the random vector $\underline{X}$ is elliptical without being normal, both transformations differ and both standard spaces cannot be compared.
Let's note some useful references:
A. Der Kiureghian, P.L. Liu, 1986, "Structural Reliability Under Incomplete Probabilistic Information", Journal of Engineering Mechanics, vol. 112, no. 1, pp. 85-104.
R. Lebrun, A. Dutfoy, 2008, b, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics 24 (2009), pp. 172-178, doi:10.1016/j.probengmech.2008.05.001.
R. Lebrun, A. Dutfoy, 2008, c, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in august 2008, under review so far.
A. Nataf, "Détermination des distributions de probabilités dont les marges sont données", Comptes Rendus de l'Académie des Sciences, 225, 42-43.
M. Rosenblatt, 1952, "Remarks on a Multivariate Transformation", The Annals of Mathematical Statistics, Vol. 23, No. 3, pp. 470-472.
Examples
Mathematical description
The Generalized Nataf transformation is an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ) which is used under the following context : $\underline{X}$ is the input random vector, ${F}_{i}$ are the cumulative distribution functions of its components and $C$ its copula, which is supposed to be elliptical.
Let us denote by $\underline{d}$ a deterministic vector, by $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$ the limit state function of the model, by ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\}$ the event considered here and by $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})=0$ its boundary.
One way to evaluate the probability content of the event ${\mathcal{D}}_{f}$:
$${P}_{f}=\mathbb{P}\left(g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$$  (107) 
is to use the Generalized Nataf transformation $T$ which is a diffeomorphism from $supp\left(\underline{X}\right)$ into the standard space ${\mathbb{R}}^{n}$, where distributions are spherical, with zero mean, unit variance and unit correlation matrix. The type of the spherical distribution is the type of the elliptical copula $C$.
The Generalized Nataf transformation presented here is a generalisation of the traditional Nataf transformation (see [A. Nataf]) : the reference [R. Lebrun, A. Dutfoy a] shows that the Nataf transformation can be used only if the copula of $\underline{X}$ is normal. The Generalized Nataf transformation (see [R. Lebrun, A. Dutfoy b]) extends the Nataf transformation to elliptical copulas.
Principle
Let us recall some definitions.
A random vector $\underline{X}$ in ${\mathbb{R}}^{n}$ has an elliptical distribution if and only if there exists a deterministic vector $\underline{\mu}$ such that the characteristic function of $\underline{X}-\underline{\mu}$ is a scalar function of the quadratic form ${\underline{u}}^{t}\underline{\underline{\Sigma}}\phantom{\rule{0.166667em}{0ex}}\underline{u}$:
$$\begin{array}{c}\hfill {\varphi}_{\underline{X}-\underline{\mu}}\left(\underline{u}\right)=\psi \left({\underline{u}}^{t}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{\Sigma}}\phantom{\rule{0.166667em}{0ex}}\underline{u}\right)\end{array}$$ 
with $\underline{\underline{\Sigma}}$ a symmetric positive definite matrix of rank $p$. As $\underline{\underline{\Sigma}}$ is symmetric positive, it can be written in the form $\underline{\underline{\Sigma}}=\underline{\underline{D}}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{D}}$, where $\underline{\underline{D}}$ is the diagonal matrix $\mathrm{diag}\left({\sigma}_{i}\right)$ with ${\sigma}_{i}=\sqrt{{\Sigma}_{ii}}$ and ${R}_{ij}=\frac{{\Sigma}_{ij}}{\sqrt{{\Sigma}_{ii}{\Sigma}_{jj}}}$.
With a specific choice of normalisation for $\psi $, in the case of finite second moment, the covariance matrix of $\underline{X}$ is $\underline{\underline{\Sigma}}$ and $\underline{\underline{R}}$ is then its linear correlation matrix. The matrix $\underline{\underline{R}}$ is always well-defined, even if the distribution has no finite second moment: in this case as well, we call it the correlation matrix of the distribution. We note $\underline{\sigma}=({\sigma}_{1},\cdots ,{\sigma}_{n})$.
We denote by ${E}_{\underline{\mu},\underline{\sigma},\underline{\underline{R}},\psi}$ the cumulative distribution function of the elliptical distribution ${\mathcal{E}}_{\underline{\mu},\underline{\sigma},\underline{\underline{R}},\psi}$.
An elliptical copula ${C}_{\underline{\underline{R}},\psi}^{E}$ is the copula of an elliptical distribution ${\mathcal{E}}_{\underline{\mu},\underline{\sigma},\underline{\underline{R}},\psi}$.
The generic elliptical representative of an elliptical distribution family ${\mathcal{E}}_{\underline{\mu},\underline{\sigma},\underline{\underline{R}},\psi}$ is the elliptical distribution whose cumulative distribution function is ${E}_{\underline{0},\underline{1},\underline{\underline{R}},\psi}$.
The standard spherical representative of an elliptical distribution family ${\mathcal{E}}_{\underline{\mu},\underline{\sigma},\underline{\underline{R}},\psi}$ is the spherical distribution whose cumulative distribution function is ${E}_{\underline{0},\underline{1},{\underline{\underline{I}}}_{n},\psi}$.
The family of distributions whose marginal cumulative distribution functions are ${F}_{1},\cdots ,{F}_{n}$ and whose copula is the elliptical copula ${C}_{\underline{\underline{R}},\psi}^{E}$ is denoted by ${\mathcal{D}}_{{F}_{1},\cdots ,{F}_{n},{C}_{\underline{\underline{R}},\psi}^{E}}$. The cumulative distribution function of this distribution is denoted ${D}_{{F}_{1},\cdots ,{F}_{n},{C}_{\underline{\underline{R}},\psi}^{E}}$.
The random vector $\underline{X}$ is supposed to be continuous and with full rank. It is also supposed that its cumulative marginal distribution functions ${F}_{i}$ are strictly increasing (so they are bijective) and that the matrix $\underline{\underline{R}}$ of its elliptical copula is symmetric positive definite.
Generalized Nataf transformation: Let $\underline{X}$ in ${\mathbb{R}}^{n}$ be a continuous random vector following the distribution ${D}_{{F}_{1},\cdots ,{F}_{n},{C}_{\underline{\underline{R}},\psi}^{E}}$. The Generalized Nataf transformation ${T}_{Nataf}^{gen}$ is defined by:
$$\underline{u}={T}_{Nataf}^{gen}\left(\underline{X}\right)={T}_{3}\circ {T}_{2}\circ {T}_{1}\left(\underline{X}\right)$$  (108) 
where the three transformations ${T}_{1}$, ${T}_{2}$ and ${T}_{3}$ are given by:
$$\begin{array}{c}\begin{array}{ccc}\hfill {T}_{1}:{\mathbb{R}}^{n}& \to & {\mathbb{R}}^{n}\hfill \\ \hfill \underline{x}& \mapsto & \underline{w}={({F}_{1}\left({x}_{1}\right),\cdots ,{F}_{n}\left({x}_{n}\right))}^{t}\hfill \end{array}\hfill \\ \begin{array}{ccc}\hfill {T}_{2}:{\mathbb{R}}^{n}& \to & {\mathbb{R}}^{n}\hfill \\ \hfill \underline{w}& \mapsto & \underline{v}={({E}^{-1}\left({w}_{1}\right),\cdots ,{E}^{-1}\left({w}_{n}\right))}^{t}\hfill \end{array}\hfill \\ \begin{array}{ccc}\hfill {T}_{3}:{\mathbb{R}}^{n}& \to & {\mathbb{R}}^{n}\hfill \\ \hfill \underline{v}& \mapsto & \underline{u}=\underline{\underline{\Gamma}}\phantom{\rule{0.166667em}{0ex}}\underline{v}\hfill \end{array}\hfill \end{array}$$  (109) 
where $E$ is the cumulative distribution function of the standard 1dimensional elliptical distribution with characteristic generator $\psi $ and $\underline{\underline{\Gamma}}$ is the inverse of the Cholesky factor of $\underline{\underline{R}}$.
The distribution of $\underline{W}={T}_{2}\circ {T}_{1}\left(\underline{X}\right)$ is the generic elliptical representative associated to the copula of $\underline{X}$. The step ${T}_{3}$ maps this distribution into its standard representative, following exactly the same algebra as the normal copula. Thus, in the Generalized Nataf standard space, the random vector $\underline{U}$ follows the standard representative distribution of the copula of the physical random vector $\underline{X}$.
If the copula of $\underline{X}$ is normal, $\underline{U}$ follows the standard normal distribution with independent components.
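The three steps ${T}_{1}$, ${T}_{2}$, ${T}_{3}$ can be sketched for the normal-copula case, where $E=\Phi$ is the standard normal cumulative distribution function. This is an illustrative sketch, not the OpenTURNS API; the marginals and correlation matrix below are assumptions.

```python
import numpy as np
from statistics import NormalDist

std_normal = NormalDist()

def nataf_to_standard(x, marginal_cdfs, R):
    """Nataf transformation for a normal copula (E = Phi):
    T1 applies the marginal CDFs, T2 applies Phi^{-1} componentwise,
    T3 applies Gamma, the inverse of the Cholesky factor of R."""
    w = np.array([F(xi) for xi, F in zip(x, marginal_cdfs)])   # T1
    v = np.array([std_normal.inv_cdf(wi) for wi in w])         # T2
    gamma = np.linalg.inv(np.linalg.cholesky(R))               # T3
    return gamma @ v

R = np.array([[1.0, 0.5], [0.5, 1.0]])
# Standard normal marginals, so T2 o T1 is the identity and v = x
F = [std_normal.cdf, std_normal.cdf]
u = nataf_to_standard([0.3, 0.3], F, R)
```

With these assumptions the first component is unchanged and the second becomes $(0.3-0.5\times 0.3)/\sqrt{1-{0.5}^{2}}$, as prescribed by the inverse Cholesky factor.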
Other notations
Link with OpenTURNS methodology
OpenTURNS uses the Generalized Nataf transformation to map into the standard space any random vector whose copula is elliptical.
Furthermore, the normal copula of the random vector $\underline{X}$ is parameterized by its linear correlation matrix, which raises some problems since the linear correlation matrix is not a useful indicator of the dependence between two correlated variables (see [R. Lebrun, A. Dutfoy, 2008, a]). More suitable indicators of the dependence structure are the Spearman rank correlation coefficients (refer to [Spearman correlation coefficient] ), the Kendall rank correlation coefficient, or the upper and lower tail dependence coefficients (see [R. Lebrun, A. Dutfoy, 2008, c]).
Let's note some useful references:
O. Ditlevsen and H.O. Madsen, 2004, "Structural Reliability Methods", Department of Mechanical Engineering, Technical University of Denmark - Maritime Engineering, internet publication.
J. Goyet, 1998, "Sécurité probabiliste des structures  Fiabilité d'un élément de structure," Collège de Polytechnique.
A. Der Kiureghian, P.L. Liu, 1986, "Structural Reliability Under Incomplete Probabilistic Information", Journal of Engineering Mechanics, vol. 112, no. 1, pp. 85-104.
R. Lebrun, A. Dutfoy, 2008, a, "An innovating analysis of the Nataf transformation from the copula viewpoint", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.08.001.
R. Lebrun, A. Dutfoy, 2008, b, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.05.001.
R. Lebrun, A. Dutfoy, 2008, c, "A practical approach to dependence modelling using copulas", submitted to Risk and Reliability Journal in november 2008, under review so far.
H.O. Madsen, Krenk, S., Lind, N. C., 1986, "Methods of Structural Safety," Prentice Hall.
A. Nataf, "Détermination des distributions de probabilités dont les marges sont données", Comptes Rendus de l'Académie des Sciences, 225, 42-43.
Examples
We consider the following limit state function $g$ defined by:
$$g({X}_{1},{X}_{2})=8{X}_{1}+2{X}_{2}-1$$  (110) 
The Generalized Nataf transformation (which coincides with the traditional Nataf transformation here since the copula is normal) leads to the normal random vector $\underline{U}$ defined as:
$$\underline{U}=\underline{\underline{\Gamma}}\left(\begin{array}{c}{\Phi}^{-1}\circ {F}_{1}\left({X}_{1}\right)\\ {\Phi}^{-1}\circ {F}_{2}\left({X}_{2}\right)\end{array}\right)$$  (111) 
As $\underline{\underline{\Gamma}}=\left(\begin{array}{cc}1& 0\\ {\displaystyle \frac{-\rho}{\sqrt{1-{\rho}^{2}}}}& {\displaystyle \frac{1}{\sqrt{1-{\rho}^{2}}}}\end{array}\right)$, we have:
$$\left\{\begin{array}{ccc}{U}_{1}& =& {\Phi}^{-1}\circ {F}_{1}\left({X}_{1}\right)\\ {U}_{2}& =& {\displaystyle \frac{-\rho \,{\Phi}^{-1}\circ {F}_{1}\left({X}_{1}\right)}{\sqrt{1-{\rho}^{2}}}+\frac{{\Phi}^{-1}\circ {F}_{2}\left({X}_{2}\right)}{\sqrt{1-{\rho}^{2}}}}\end{array}\right.$$  (112) 
The limit state in the standard space then has the following parametric expression, where $\xi \in [0,+\infty [$:
$$\left\{\begin{array}{ccc}{u}_{1}& =& {\Phi}^{-1}\circ {F}_{1}\left(\xi \right)\\ {u}_{2}& =& {\displaystyle \frac{{\Phi}^{-1}\circ {F}_{2}\left(\frac{1-8\xi}{2}\right)-\rho \,{\Phi}^{-1}\circ {F}_{1}\left(\xi \right)}{\sqrt{1-{\rho}^{2}}}}\end{array}\right.$$  (113) 
Mathematical description
The Rosenblatt transformation is an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ) which is used in the following context: $\underline{X}$ is the input random vector, ${F}_{i}$ the cumulative distribution functions of its components and $C$ its copula, with no condition on its type.
Let us denote by $\underline{d}$ a deterministic vector, $g(\underline{X}\,,\,\underline{d})$ the limit state function of the model, ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\,/\,g(\underline{X}\,,\,\underline{d})\le 0\}$ the event considered here and $g(\underline{X}\,,\,\underline{d})=0$ its boundary.
One way to evaluate the probability content of the event ${\mathcal{D}}_{f}$:
$${P}_{f}=\mathbb{P}\left(g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$$  (114) 
is to use the Rosenblatt transformation $T$, which is a diffeomorphism from $supp\left(\underline{X}\right)$ into the standard space ${\mathbb{R}}^{n}$, where the distribution is normal, with zero mean, unit variance and unit correlation matrix (which is equivalent, in the normal case, to independent components).
Principle
Let us recall some definitions.
The cumulative distribution function ${F}_{1,k}$ of the $k$-dimensional random vector $({X}_{1},\cdots ,{X}_{k})$ is defined by its marginal distributions ${F}_{i}$ and the copula ${C}_{1,k}$ through the relation :
$${F}_{1,k}({x}_{1},\cdots ,{x}_{k})={C}_{1,k}({F}_{1}\left({x}_{1}\right),\cdots ,{F}_{k}\left({x}_{k}\right))$$  (115) 
with
$${C}_{1,k}({u}_{1},\cdots ,{u}_{k})=C({u}_{1},\cdots ,{u}_{k},1,\cdots ,1)$$  (116) 
The cumulative distribution function of the conditional variable ${X}_{k}|{X}_{1},\cdots ,{X}_{k-1}$ is defined by:
$${F}_{k|1,\cdots ,k-1}({x}_{k}\,|\,{x}_{1},\cdots ,{x}_{k-1})=\frac{{\partial}^{k-1}{F}_{1,k}({x}_{1},\cdots ,{x}_{k})}{\partial {x}_{1}\cdots \partial {x}_{k-1}}\left/\frac{{\partial}^{k-1}{F}_{1,k-1}({x}_{1},\cdots ,{x}_{k-1})}{\partial {x}_{1}\cdots \partial {x}_{k-1}}\right.$$  (117) 
Rosenblatt transformation: Let $\underline{X}$ in ${\mathbb{R}}^{n}$ be a continuous random vector defined by its marginal cumulative distribution functions ${F}_{i}$ and its copula $C$. The Rosenblatt transformation ${T}_{Ros}$ of $\underline{X}$ is defined by:
$$\underline{U}={T}_{Ros}\left(\underline{X}\right)={T}_{2}\circ {T}_{1}\left(\underline{X}\right)$$  (118) 
where both transformations ${T}_{1}$, and ${T}_{2}$ are given by:
$$\begin{array}{ccc}{T}_{1}:{\mathbb{R}}^{n}& \to & {\mathbb{R}}^{n}\\ \underline{X}& \mapsto & \underline{Y}=\left(\begin{array}{c}{F}_{1}\left({X}_{1}\right)\\ \cdots \\ {F}_{k|1,\cdots ,k-1}\left({X}_{k}\,|\,{X}_{1},\cdots ,{X}_{k-1}\right)\\ \cdots \\ {F}_{n|1,\cdots ,n-1}\left({X}_{n}\,|\,{X}_{1},\cdots ,{X}_{n-1}\right)\end{array}\right)\end{array}$$  (119) 
$$\begin{array}{ccc}\hfill {T}_{2}:{\mathbb{R}}^{n}& \to & {\mathbb{R}}^{n}\hfill \\ \hfill \underline{Y}& \mapsto & \underline{U}=\left(\begin{array}{c}{\Phi}^{1}\left({Y}_{1}\right)\hfill \\ \cdots \hfill \\ {\Phi}^{1}\left({Y}_{n}\right)\hfill \end{array}\right)\hfill \end{array}$$  (120) 
where ${F}_{k|1,\cdots ,k-1}$ is the cumulative distribution function of the conditional random variable ${X}_{k}|{X}_{1},\cdots ,{X}_{k-1}$ and $\Phi $ the cumulative distribution function of the standard 1-dimensional normal distribution.
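A minimal sketch of ${T}_{Ros}={T}_{2}\circ {T}_{1}$ in dimension 2, written in plain Python. The marginal $F_1$ and conditional CDF $F_{2|1}$ passed in are hypothetical (independent Exponential(1) components), chosen so that the result can be checked against the componentwise map $\Phi^{-1}\circ F_i$.

```python
from math import exp
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf

def rosenblatt_2d(x1, x2, F1, F2_cond):
    """Eqs. (118)-(120) in dimension 2: T1 stacks F1(x1) and the
    conditional CDF F_{2|1}(x2 | x1); T2 applies Phi^{-1} componentwise."""
    y1 = F1(x1)                        # T1, first component
    y2 = F2_cond(x2, x1)               # T1, second component (conditional CDF)
    return Phi_inv(y1), Phi_inv(y2)    # T2

# Independent copula: the conditional CDF is just the marginal F2
F1 = lambda x: 1.0 - exp(-x)             # hypothetical Exponential(1) marginal
F2_cond = lambda x2, x1: 1.0 - exp(-x2)  # hypothetical independent Exponential(1)
```

With an independent copula the transformation reduces to $\Phi^{-1}\circ F_i$ applied component by component, which is the sanity check used below.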
Other notations
Link with OpenTURNS methodology
OpenTURNS uses the Rosenblatt transformation to map into the standard space any random vector whose copula is not elliptical (for which the Generalized Nataf transformation cannot be used).
The article [R. Lebrun, A. Dutfoy, 2008, c] shows that if the initial vector $\underline{X}$ has a normal copula, the effect of changing the order of conditioning in the Rosenblatt transformation is to apply an orthogonal transformation to the canonical Rosenblatt standard space. In the context of the FORM or SORM approximations, it will change the location of the design point, but due to the rotational invariance of the distribution of $\underline{U}$ in the standard space (this distribution is spherical), it will not change the key quantities evaluated using these approximations :
the Hohenbichler reliability index, which is the distance of the limit state surface from the center of the standard space,
the FORM approximation of the probability of the failure space which relies only on the Hohenbichler reliability index,
the SORM approximations of the probability of the failure space, which rely on both the Hohenbichler reliability index and the curvatures of the limit state function at the design point (the point of the failure space nearest to the center of the standard space).
If the copula of the input vector $\underline{X}$ is not normal, the conditioning order generally has an impact on the results given by the FORM and SORM approximations.
Let us recall that only the approximated values are impacted by the conditioning order, not the exact value of the probability : whatever the chosen conditioning order, the associated Rosenblatt transformation remains an isoprobabilistic transformation!
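The orthogonality result for the normal copula can be checked numerically: swapping the conditioning order changes the image of a point, but not its distance to the origin of the standard space. A sketch, with hypothetical exponential marginals and an illustrative correlation $\rho =0.5$:

```python
from math import exp, sqrt
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf
rho = 0.5                                 # illustrative correlation
F1 = lambda x: 1.0 - exp(-x)              # hypothetical marginals
F2 = lambda x: 1.0 - exp(-2.0 * x)

def rosenblatt_normal_copula(xa, xb, Fa, Fb):
    # Rosenblatt transformation for a 2D normal copula,
    # conditioning the second component on the first
    za, zb = Phi_inv(Fa(xa)), Phi_inv(Fb(xb))
    return za, (zb - rho * za) / sqrt(1.0 - rho**2)

x1, x2 = 1.3, 0.7
u = rosenblatt_normal_copula(x1, x2, F1, F2)   # canonical order
v = rosenblatt_normal_copula(x2, x1, F2, F1)   # inverse order
# the two images differ, but their norms coincide (orthogonal change of basis)
norm_u = sqrt(u[0]**2 + u[1]**2)
norm_v = sqrt(v[0]**2 + v[1]**2)
```

The equality of the norms follows from $\|\underline{U}\|^{2}=(z_1^2+z_2^2-2\rho z_1 z_2)/(1-\rho^2)$, which is symmetric in $z_1$ and $z_2$.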
Let us note some useful references:
O. Ditlevsen and H.O. Madsen, 2004, "Structural reliability methods," Department of Mechanical Engineering, Technical University of Denmark - Maritime Engineering, internet publication.
J. Goyet, 1998, "Sécurité probabiliste des structures - Fiabilité d'un élément de structure," Collège de Polytechnique.
A. Der Kiureghian, P.L. Liu, 1986, "Structural Reliability Under Incomplete Probabilistic Information", Journal of Engineering Mechanics, vol. 112, no. 1, pp. 85-104.
R. Lebrun, A. Dutfoy, 2008, a, "An innovating analysis of the Nataf transformation from the copula viewpoint", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.08.001.
R. Lebrun, A. Dutfoy, 2008, b, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics, doi:10.1016/j.probengmech.2008.05.001.
R. Lebrun, A. Dutfoy, 2008, c, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, under review so far.
H.O. Madsen, S. Krenk, N.C. Lind, 1986, "Methods of Structural Safety," Prentice Hall.
A. Nataf, 1962, "Détermination des distributions de probabilités dont les marges sont données", Comptes Rendus de l'Académie des Sciences, 225, 42-43.
M. Rosenblatt, 1952, "Remarks on a Multivariate Transformation", The Annals of Mathematical Statistics, Vol. 23, No. 3, pp. 470-472.
Examples
Let us consider a bidimensional random vector $({X}_{1},{X}_{2})$ with exponential marginals ${F}_{1}$, ${F}_{2}$ (with respective parameters ${\lambda}_{1}$, ${\lambda}_{2}$) and with the copula ${C}_{\theta}$ defined by:
$${C}_{\theta}(u,v)=-\frac{1}{\theta}\ln\left(1+\frac{({e}^{-\theta u}-1)({e}^{-\theta v}-1)}{{e}^{-\theta}-1}\right),\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}(u,v)\in {[0,1]}^{2}$$  (121) 
This copula (the Frank copula) is not elliptical but Archimedean.
The cumulative distribution function of the conditional variable ${X}_{2}|{X}_{1}$ is defined by :
$${F}_{2|1}({x}_{2}\,|\,{x}_{1})={C}_{2|1}\left({F}_{2}\left({x}_{2}\right)\,|\,{F}_{1}\left({x}_{1}\right)\right)={C}_{2|1}(1-{e}^{-{\lambda}_{2}{x}_{2}}\,|\,1-{e}^{-{\lambda}_{1}{x}_{1}})$$  (122) 
where the conditional copula of ${X}_{2}|{X}_{1}$ is defined by :
$${C}_{2|1}\left(v\,|\,u\right)=\frac{{e}^{-\theta u}({e}^{-\theta v}-1)}{({e}^{-\theta}-1)+({e}^{-\theta u}-1)({e}^{-\theta v}-1)}$$  (123) 
The Rosenblatt transformation leads to the normal random vector $\underline{U}$ defined as:
$$\underline{U}=\left(\begin{array}{c}{\Phi}^{-1}(1-{e}^{-{\lambda}_{1}{x}_{1}})\\ {\displaystyle {\Phi}^{-1}\left[{C}_{2|1}(1-{e}^{-{\lambda}_{2}{x}_{2}}\,|\,1-{e}^{-{\lambda}_{1}{x}_{1}})\right]}\end{array}\right)$$  (124) 
If we consider the following limit state function $g$ defined by:
$$g({X}_{1},{X}_{2})=8{X}_{1}+2{X}_{2}-1$$  (125) 
it is possible to draw the images of the limit state surface in the standard space when using the canonical order of the Rosenblatt transformation and its inverse. The following figure shows them in the case where $\theta =10$, ${\lambda}_{1}=1$ and ${\lambda}_{2}=3$.
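The canonical-order transformation (124) can be sketched directly, using the parameter values of the figure ($\theta =10$, ${\lambda}_{1}=1$, ${\lambda}_{2}=3$). This is an illustration written for this example, not the OpenTURNS implementation.

```python
from math import exp, expm1
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf
theta, lam1, lam2 = 10.0, 1.0, 3.0   # parameter values used in the example

def frank_cond(v, u):
    """Conditional Frank copula C_{2|1}(v|u), Eq. (123); expm1(x) = e^x - 1."""
    num = exp(-theta * u) * expm1(-theta * v)
    den = expm1(-theta) + expm1(-theta * u) * expm1(-theta * v)
    return num / den

def rosenblatt(x1, x2):
    """Canonical-order Rosenblatt transformation of Eq. (124)."""
    u = 1.0 - exp(-lam1 * x1)            # F1(x1)
    v = 1.0 - exp(-lam2 * x2)            # F2(x2)
    return Phi_inv(u), Phi_inv(frank_cond(v, u))
```

As a check, $C_{2|1}(\cdot \,|\,u)$ is a CDF on $[0,1]$: it vanishes at $v=0$, equals $1$ at $v=1$ and is increasing in $v$.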
Mathematical description
The First Order Reliability Method is used in the following context: $\underline{X}$ is the input random vector and ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density function.
Let us denote by $\underline{d}$ a deterministic vector, $g(\underline{X}\,,\,\underline{d})$ the limit state function of the model, ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\,/\,g(\underline{X}\,,\,\underline{d})\le 0\}$ the event considered here and $g(\underline{X}\,,\,\underline{d})=0$ its boundary.
The objective of FORM is to evaluate the probability content of the event ${\mathcal{D}}_{f}$:
$${P}_{f}=\mathbb{P}\left(g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$$  (126) 
Principle
The method proceeds in three steps :
Map the probabilistic model in terms of $\underline{X}$ thanks to an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ) $T$, which is a diffeomorphism from $supp\left(\underline{X}\right)$ into ${\mathbb{R}}^{n}$, such that the distribution of the random vector $\underline{U}=T\left(\underline{X}\right)$ has the following property : $\underline{U}$ and $\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{U}$ have the same distribution for all rotations $\underline{\underline{R}}\in {\mathcal{SO}}_{n}\left(\mathbb{R}\right)$.
The usual isoprobabilistic transformations are the Generalized Nataf transformation (refer to [Generalized Nataf] ) and the Rosenblatt one (refer to [Rosenblatt] ).
The mapping of the limit state function is $G(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})=g({T}^{1}\left(\underline{U}\right)\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$. Then, the event probability ${P}_{f}$ rewrites :
$${P}_{f}=\mathbb{P}\left(G(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathbb{R}}^{n}}{1}_{G(\underline{u}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}\phantom{\rule{0.166667em}{0ex}}{f}_{\underline{U}}\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}$$  (127) 
where ${f}_{\underline{U}}$ is the density function of the distribution in the standard space : that distribution is spherical (invariant by rotation by definition). That property implies that ${f}_{\underline{U}}$ is a function of $\|\underline{U}\|^{2}$ only. Furthermore, we suppose that, outside the sphere tangent to the limit state surface in the standard space, ${f}_{\underline{U}}$ is decreasing.
Find the design point ${\underline{u}}^{*}$, which is the point of maximum likelihood on the limit state boundary : the decreasing behaviour of the standard distribution ${f}_{\underline{U}}$ outside the sphere tangent to the limit state surface implies that the design point is the point of the limit state boundary nearest to the origin of the standard space. Thus, ${\underline{u}}^{*}$ is the solution of a constrained optimization problem.
Approximate the limit state surface in the standard space by a linear surface (hyperplane) at the design point ${\underline{u}}^{*}$. The probability ${P}_{f}$ (127) with this linearized limit state surface can then be obtained exactly, thanks to the rotation invariance of the standard distribution ${f}_{\underline{U}}$ :
$${P}_{f,FORM}=\left\{\begin{array}{ll}E(-{\beta}_{HL})& \text{if the origin of the standard space does not belong to the domain}\phantom{\rule{4.pt}{0ex}}{\mathcal{D}}_{f}\\ E(+{\beta}_{HL})& \text{otherwise}\end{array}\right.$$  (128) 
where ${\beta}_{HL}$ is the Hasofer-Lind reliability index (refer to [Reliability index] ), defined as the distance of the design point ${\underline{u}}^{*}$ to the origin of the standard space, and $E$ the marginal cumulative distribution function of the spherical distribution in the standard space.
Let us recall here (refer to [Isoprobabilistic Transformation] ) that in the Rosenblatt standard space, random vectors follow the standard normal distribution (with zero mean, unit variance and unit correlation matrix), which implies that $E=\Phi $. In the Generalized Nataf standard space, random vectors follow spherical distributions, with zero mean, unit variance, unit correlation matrix and whose type $\psi $ is that of the copula of the physical random vector $\underline{X}$ : in that case, $E$ is the 1D cumulative distribution function with zero mean, unit variance and type $\psi $.
Other notations
However, if the event is a threshold exceedance, it is useful to make explicit the variable of interest $Z=\tilde{g}(\underline{X}\,,\,\underline{d})$, evaluated from the model $\tilde{g}(.)$. In that case, the event considered, associated to the threshold ${z}_{s}$, has the formulation: ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\,/\,Z=\tilde{g}(\underline{X}\,,\,\underline{d})>{z}_{s}\}$ and the limit state function is : $g(\underline{X}\,,\,\underline{d})={z}_{s}-Z={z}_{s}-\tilde{g}(\underline{X}\,,\,\underline{d})$. ${P}_{f}$ is the threshold exceedance probability, defined as : ${P}_{f}=P(Z\ge {z}_{s})={\int}_{g(\underline{X}\,,\,\underline{d})\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\,d\underline{x}$.
Link with OpenTURNS methodology
It requires to have fulfilled the following steps beforehand:
step A: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\tilde{g}(\underline{X},\underline{d})$, result of the model $\tilde{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or, equivalently, a failure event $g(\underline{X}\,,\,\underline{d})\le 0$,
step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,
step C: select an appropriate optimization algorithm among those proposed.
The First Order Reliability Method provides the following results:
the FORM probability calculated by Eq. (128),
the importance factors associated to the event (refer to [Importance Factors] ),
if asked by the user, the sensitivity factors associated to the event (refer to [Sensitivity Factors] ).
References and theoretical basics
The quality of the results obtained by the First Order Reliability Method depends on:
the quality of the optimization algorithm used to find the design point: it is important that the optimization converges towards the global minimum of the distance function,
the quality of the computation of the gradients of the limit state function: it is important to choose an optimization algorithm adapted to the model considered,
the quality of the design point in the $\underline{u}$-space, which covers several aspects:
the shape of the limit state surface: the boundary is supposed to be well approximated by a plane near the design point,
the uniqueness of the design point in the $\underline{u}$-space: FORM is valid when there is only one point of the limit state surface at minimal distance from the origin,
the strength of the design point: FORM is valid under the hypothesis that most of the contribution to ${P}_{f}$ is concentrated in the vicinity of the design point, which is the case when, around ${\underline{P}}^{*}$, the contribution decreases rapidly with the distance to ${\underline{P}}^{*}$ and when there is no local maximum of the density with a comparable contribution.
The first hypothesis can be checked by using another method to evaluate ${P}_{f}$ : SORM (refer to [SORM] ), which takes into account the curvatures of the surface, or importance sampling techniques (refer to [Importance sampling] ), which make no hypothesis on the shape of the surface.
The uniqueness and the strength of the design point can be checked thanks to the Strong Maximum Test (refer to [Strong Max Test] ).
Accelerated sampling techniques such as directional sampling (refer to [Directional sampling] ) also remain valid if the uniqueness or the strength of the design point is doubtful.
A limitation of the FORM (or SORM) approximation is that it is generally impossible to quantify the approximation error. Although the method has been used satisfactorily in many circumstances, it is generally useful, if computationally possible, to validate FORM/SORM using at least one of the techniques mentioned above.
Let us note some useful references:
O. Ditlevsen and H.O. Madsen, 2004, "Structural reliability methods," Department of Mechanical Engineering, Technical University of Denmark - Maritime Engineering, internet publication.
R. Lebrun, A. Dutfoy, 2008, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics 24 (2009), pp. 172-178, doi:10.1016/j.probengmech.2008.05.001.
R. Lebrun, A. Dutfoy, 2008, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, tentatively accepted so far.
H.O. Madsen, S. Krenk, N.C. Lind, 1986, "Methods of Structural Safety," Prentice Hall.
Examples
$$\begin{array}{c}\hfill {\displaystyle y(E,F,L,I)=\frac{F{L}^{3}}{3EI}}\end{array}$$ 
The objective is to propagate the uncertainties of the variables $(E,F,L,I)$ to $y$.
The input random vector is $\underline{X}=(E,F,L,I)$, whose probabilistic model is (units are not specified):
$$\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal(50,1)\hfill \\ F\hfill & =& Normal(1,1)\hfill \\ L\hfill & =& Normal(10,1)\hfill \\ I\hfill & =& Normal(5,1)\hfill \end{array}\right.\end{array}$$ 
The event considered is the threshold exceedance : ${\mathcal{D}}_{f}=\{(E,F,L,I)\in {\mathbb{R}}^{4}\,/\,y(E,F,L,I)\ge 3\}$. We obtain the following results, where the Generalized Nataf transformation has been used :
the design point in the $\underline{x}$-space : ${P}^{*}=({E}^{*}=49.97,{F}^{*}=1.842,{L}^{*}=10.45,{I}^{*}=4.668)$,
the generalised and Hasofer-Lind reliability indices : ${\beta}_{g}={\beta}_{HL}=1.009$,
the FORM probability : ${P}_{f,FORM}=1.564\times {10}^{-1}$.
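The results above can be reproduced with a minimal design-point search. The sketch below uses the classical Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration in plain Python; it is not the OpenTURNS implementation, and the finite-difference gradients and fixed iteration count are simplifications. Since the four inputs are independent normals with unit standard deviation, the standard space is simply $x=\mu +u$.

```python
from math import sqrt
from statistics import NormalDist

def y(E, F, L, I):
    # beam deflection model of the example
    return F * L**3 / (3.0 * E * I)

z_s = 3.0
mu = [50.0, 1.0, 10.0, 5.0]  # means of (E, F, L, I); all standard deviations are 1

def g(u):
    # limit state g = z_s - y, failure when g <= 0; physical point x = mu + u
    x = [m + ui for m, ui in zip(mu, u)]
    return z_s - y(*x)

def grad(fun, u, h=1e-6):
    # forward finite differences (a simplification)
    g0 = fun(u)
    out = []
    for k in range(len(u)):
        up = list(u)
        up[k] += h
        out.append((fun(up) - g0) / h)
    return out

def hlrf(g, n, iters=50):
    """Plain HL-RF iteration: project onto the linearized limit state."""
    u = [0.0] * n
    for _ in range(iters):
        gu, dg = g(u), grad(g, u)
        c = (sum(d * ui for d, ui in zip(dg, u)) - gu) / sum(d * d for d in dg)
        u = [c * d for d in dg]
    return u

u_star = hlrf(g, 4)
beta = sqrt(sum(ui * ui for ui in u_star))     # Hasofer-Lind reliability index
p_form = NormalDist().cdf(-beta)               # Eq. (128), origin in the safe domain
```

On this example the iteration converges to $\beta \approx 1.009$ and ${P}_{f,FORM}\approx 1.56\times {10}^{-1}$, matching the values quoted above.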
Example 2 : if we evaluate the FORM probability in the example described in [Rosenblatt] , in the case where $\theta =10$, ${\lambda}_{1}=1$ and ${\lambda}_{2}=3$, considering the canonical order of the Rosenblatt transformation and its inverse, we obtain :
$$\left\{\begin{array}{ccc}{\beta}_{CanOrd}\hfill & =& 1.24\hfill \\ {\beta}_{InvOrd}\hfill & =& 1.17\hfill \end{array}\right.$$  (129) 
which leads to different FORM approximations of the failure probability:
$$\left\{\begin{array}{ccc}{P}_{CanOrd}^{FORM}& =& 1.07\times {10}^{-1}\\ {P}_{InvOrd}^{FORM}& =& 1.22\times {10}^{-1}\end{array}\right.$$  (130) 
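The probabilities in (130) follow from the reliability indices in (129) through ${P}^{FORM}=\Phi (-\beta )$, which can be checked directly:

```python
from statistics import NormalDist

Phi = NormalDist().cdf

beta_can, beta_inv = 1.24, 1.17   # reliability indices of Eq. (129)
p_can = Phi(-beta_can)            # FORM probability, canonical order
p_inv = Phi(-beta_inv)            # FORM probability, inverse order
```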
Mathematical description
The Second Order Reliability Method is used in the same context as the First Order Reliability Method: refer to [FORM] for further details. The objective of SORM is to evaluate the probability content of the event ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\,/\,g(\underline{X}\,,\,\underline{d})\le 0\}$ :
$${P}_{f}=\mathbb{P}\left(g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$$  (131) 
Principle
The principle is the same as for FORM. After the physical space has been mapped into the standard space through an isoprobabilistic transformation (refer to [Isoprobabilistic Transformation] ), Eq. (131) becomes :
$${P}_{f}=\mathbb{P}\left(G(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\right)={\int}_{{\mathbb{R}}^{n}}{1}_{G(\underline{u}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}\phantom{\rule{0.166667em}{0ex}}{f}_{\underline{U}}\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}$$  (132) 
where ${f}_{\underline{U}}$ is the density function of the distribution in the standard space : that distribution is spherical (invariant by rotation by definition). That property implies that ${f}_{\underline{U}}$ is a function of $\|\underline{U}\|^{2}$ only. Furthermore, we suppose that, outside the sphere tangent to the limit state surface in the standard space, ${f}_{\underline{U}}$ is decreasing.
The difference with FORM comes from the approximation of the limit state surface at the design point ${\underline{P}}^{*}$ in the standard space : SORM approximates it by a quadratic surface with the same main curvatures at the design point.
Let us denote by $n$ the dimension of the random vector $\underline{X}$ and ${\left({\kappa}_{i}\right)}_{1\le i\le n-1}$ the $n-1$ main curvatures of the limit state function at the design point in the standard space.
Several approximations are available in the standard version of OpenTURNS, detailed here in the case where the origin of the standard space does not belong to the failure domain :
Breitung's formula is an asymptotic result (see [Breitung]): the usual formula used in the normal standard space has been generalized in [R. Lebrun, A. Dutfoy, 2008, b] to standard spaces where the distribution is spherical, with $E$ the marginal cumulative distribution function of the spherical distribution in the standard space (see [Generalized Nataf] ) :
$${P}_{Breitung}^{generalized}\stackrel{\beta \to \infty}{=}E(-\beta )\prod _{i=1}^{n-1}\frac{1}{\sqrt{1+\beta {\kappa}_{i}}}$$  (133) 
In the normal standard space, $E=\Phi $, the cumulative distribution function of the standard 1D normal distribution.
Hohenbichler's formula is an approximation of equation (133):
$${P}_{Hohenbichler}=\Phi (-{\beta}_{HL})\prod _{i=1}^{n-1}{\left(1+\frac{\phi (-{\beta}_{HL})}{\Phi (-{\beta}_{HL})}{\kappa}_{i}\right)}^{-1/2}$$  (134) 
This formula is valid only in the normal standard space and if $\forall i,\ 1+\frac{\phi (-{\beta}_{HL})}{\Phi (-{\beta}_{HL})}{\kappa}_{i}>0$.
Tvedt's formula (Tvedt, 1988) :
$$\left\{\begin{array}{ccc}{P}_{Tvedt}& =& {A}_{1}+{A}_{2}+{A}_{3}\\ {A}_{1}& =& {\displaystyle \Phi (-{\beta}_{HL})\prod _{j=1}^{N-1}{\left(1+{\beta}_{HL}{\kappa}_{j}\right)}^{-1/2}}\\ {A}_{2}& =& {\displaystyle \left[{\beta}_{HL}\Phi (-{\beta}_{HL})-\phi \left({\beta}_{HL}\right)\right]\left[\prod _{j=1}^{N-1}{\left(1+{\beta}_{HL}{\kappa}_{j}\right)}^{-1/2}-\prod _{j=1}^{N-1}{\left(1+(1+{\beta}_{HL}){\kappa}_{j}\right)}^{-1/2}\right]}\\ {A}_{3}& =& {\displaystyle (1+{\beta}_{HL})\left[{\beta}_{HL}\Phi (-{\beta}_{HL})-\phi \left({\beta}_{HL}\right)\right]\left[\prod _{j=1}^{N-1}{\left(1+{\beta}_{HL}{\kappa}_{j}\right)}^{-1/2}-\mathcal{R}e\left(\prod _{j=1}^{N-1}{\left(1+(i+{\beta}_{HL}){\kappa}_{j}\right)}^{-1/2}\right)\right]}\end{array}\right.$$  (135) 
where $\mathcal{R}e\left(z\right)$ is the real part of the complex number $z$, $i$ the complex number such that ${i}^{2}=-1$ and $\Phi $ the cumulative distribution function of the standard 1D normal distribution.
This formula is valid only in the normal standard space and if $\forall i,\ 1+\beta {\kappa}_{i}>0$ and $\forall i,\ 1+(1+\beta ){\kappa}_{i}>0$.
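The three SORM formulas can be compared side by side. The sketch below assumes the normal standard space ($E=\Phi $) and takes the reliability index and the main curvatures as plain numbers; the curvature values used here are hypothetical. With all curvatures equal to zero, each formula must reduce to the FORM result $\Phi (-\beta )$.

```python
from cmath import sqrt as csqrt
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf
phi = NormalDist().pdf

def p_breitung(beta, kappas):
    """Eq. (133) in the normal standard space (E = Phi)."""
    prod = 1.0
    for k in kappas:
        prod /= sqrt(1.0 + beta * k)
    return Phi(-beta) * prod

def p_hohenbichler(beta, kappas):
    """Eq. (134); valid while every factor stays positive."""
    r = phi(-beta) / Phi(-beta)
    prod = 1.0
    for k in kappas:
        prod /= sqrt(1.0 + r * k)
    return Phi(-beta) * prod

def p_tvedt(beta, kappas):
    """Eq. (135); the third term involves a product over complex numbers."""
    def prod(shift):
        p = complex(1.0, 0.0)
        for k in kappas:
            p /= csqrt(1.0 + shift * k)
        return p
    a1 = Phi(-beta) * prod(beta).real
    c = beta * Phi(-beta) - phi(beta)
    a2 = c * (prod(beta).real - prod(1.0 + beta).real)
    a3 = (1.0 + beta) * c * (prod(beta).real - prod(complex(beta, 1.0)).real)
    return a1 + a2 + a3
```

For positive curvatures (failure domain on the convex side), all three corrections reduce the probability below the FORM value.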
Other notations
Link with OpenTURNS methodology
It requires to have fulfilled the following steps beforehand:
step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\tilde{g}(\underline{X},\underline{d})$, result of the model $\tilde{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or, equivalently, a failure event $g(\underline{X}\,,\,\underline{d})\le 0$,
step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,
step C: select an appropriate optimization algorithm among those proposed.
The Second Order Reliability Method provides the following results :
the SORM probabilities calculated by Eqs. (133), (134) and (135),
the importance factors associated to the event : refer to [Importance Factors] to obtain details,
if asked by the user, the sensitivity factors associated to the event : refer to [Sensitivity Factors] to obtain details.
References and theoretical basics
The evaluation of the previous formulas requires that the limit state function be twice differentiable at the design point.
The quality of the results obtained by the Second Order Reliability Method depends on the quality of the design point (same considerations as for the FORM approximation), on the shape of the event boundary, which must be well approximated by a quadratic surface near the design point, and on the accuracy of the computed curvatures, which depends on the accuracy of both the gradient and the hessian evaluated at the design point.
The Tvedt formula is exact for a quadratic surface and asymptotically exact for other types of surfaces. The Hohenbichler formula is a variant of the Breitung one.
Finally, let us note that the SORM approximation depends on the geometry of the limit state function in the standard space, and thus on the isoprobabilistic transformation used. In the normal copula case, [R. Lebrun, A. Dutfoy, 2008, c] has shown that both isoprobabilistic transformations are identical, and then the SORM approximations are identical too. In all other cases, the two isoprobabilistic transformations lead to different standard spaces and thus to different SORM approximations. In the same way, the conditioning order in the Rosenblatt transformation also has an impact on the SORM approximations when the copula of the random vector $\underline{X}$ is not normal.
Let us note some useful references :
Breitung K. a, 1989, "Asymptotic approximations for probability integrals," Probabilistic Engineering Mechanics, Vol. 4, No. 4.
Breitung K. b, 1984, "Asymptotic Approximations for Multinormal Integrals," Journal of Engineering Mechanics, ASCE, 110(3), pp. 357-366.
Hohenbichler M., Rackwitz R., 1988, "Improvement of second order reliability estimates by importance sampling," Journal of Engineering Mechanics, ASCE, 114(12), pp. 2195-2199.
R. Lebrun, A. Dutfoy, 2008, b, "A generalisation of the Nataf transformation to distributions with elliptical copula", Probabilistic Engineering Mechanics 24 (2009), pp. 172-178, doi:10.1016/j.probengmech.2008.05.001.
R. Lebrun, A. Dutfoy, 2008, c, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, under review so far.
Tvedt L., 1988, "Second order reliability by an exact integral," proc. of the IFIP Working Conf. on Reliability and Optimization of Structural Systems, Thoft-Christensen (Ed.), pp. 377-384.
Zhao Y. G., Ono T., 1999, "New approximations for SORM : part 1", Journal of Engineering Mechanics, ASCE, 125(1), pp. 79-85.
Zhao Y. G., Ono T., 1999, "New approximations for SORM : part 2", Journal of Engineering Mechanics, ASCE, 125(1), pp. 86-93.
Adhikari S., 2004, "Reliability analysis using parabolic failure surface approximation", Journal of Engineering Mechanics, ASCE, 130(12), pp. 1407-1427.
Examples
$$\begin{array}{c}\hfill {\displaystyle y(E,F,L,I)=\frac{F{L}^{3}}{3EI}}\end{array}$$ 
The objective is to propagate the uncertainties of the variables $(E,F,L,I)$ to $y$.
The input random vector is $\underline{X}=(E,F,L,I)$, whose probabilistic model is (units are not specified):
$$\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal(50,1)\hfill \\ F\hfill & =& Normal(1,1)\hfill \\ L\hfill & =& Normal(10,1)\hfill \\ I\hfill & =& Normal(5,1)\hfill \end{array}\right.\end{array}$$ 
The four random variables are independent.
The event considered is the threshold exceedance : ${\mathcal{D}}_{f}=\{(E,F,L,I)\in {\mathbb{R}}^{4}\,/\,y(E,F,L,I)\ge 3\}$. We obtain the following results :
$$\begin{array}{c}\hfill \left\{\begin{array}{ccc}{P}_{Breitung}& =& 2.5491\times {10}^{-1}\\ {P}_{Hohenbichler}& =& 2.648\times {10}^{-1}\\ {P}_{Tvedt}& =& 2.601\times {10}^{-1}\end{array}\right.\end{array}$$ 
These three approximations are consistent with one another, which increases confidence in these results.
Mathematical description
The generalised reliability index $\beta $ is used in the following context : $\underline{X}$ is a probabilistic input vector, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density function, $\underline{d}$ a deterministic vector, $g(\underline{X}\,,\,\underline{d})$ the limit state function of the model, ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\,/\,g(\underline{X}\,,\,\underline{d})\le 0\}$ the event considered here and $g(\underline{X}\,,\,\underline{d})=0$ its boundary.
The probability content of the event ${\mathcal{D}}_{f}$ is ${P}_{f}$:
$$\begin{array}{c}\hfill {P}_{f}={\int}_{g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\end{array}$$  (136) 
The generalised reliability index is defined as :
$${\beta}_{g}={\Phi}^{-1}(1-{P}_{f})=-{\Phi}^{-1}\left({P}_{f}\right).$$
As ${\beta}_{g}$ increases, ${P}_{f}$ decreases rapidly.
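The relation between $\beta_g$ and $P_f$ is a one-line computation. A minimal Python sketch (ours, not part of OpenTURNS) using the standard library's `statistics.NormalDist`:

```python
from statistics import NormalDist

def generalised_reliability_index(p_f):
    """beta_g = -Phi^{-1}(P_f): a large beta_g corresponds to a small P_f."""
    return -NormalDist().inv_cdf(p_f)

# A failure probability around 1.35e-3 corresponds to beta_g close to 3.
beta = generalised_reliability_index(0.00135)
```
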
Principle
OpenTURNS standard version evaluates :
${\beta}_{FORM}$, the FORM reliability index, where ${P}_{f}$ is obtained with a FORM approximation (refer to [FORM]): in this case, the generalised reliability index is equal to the Hasofer-Lind reliability index ${\beta}_{HL}$, which is the distance of the design point from the origin of the standard space,
${\beta}_{SORM}$, the SORM reliability index, where ${P}_{f}$ is obtained with a SORM approximation: Breitung, Hohenbichler or Tvedt (refer to [SORM]),
${\beta}_{g}$, the generalised reliability index, where ${P}_{f}$ is obtained with another technique: Monte Carlo simulation, importance sampling, ... (refer to [Monte Carlo], [LHS], [Importance Sampling] and [Directional Simulation]).
Other notations
Link with OpenTURNS methodology
It requires the following steps to have been fulfilled beforehand:
step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\tilde{g}(\underline{X},\underline{d})$, result of the model $\tilde{g}\left(\right)$,
step A22: identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or, equivalently, a failure event $g(\underline{X}\,,\,\underline{d})\le 0$,
step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,
step C3: select a method to evaluate the probability content of the event : the FORM or SORM approximation (step C31) or a simulation method (step C32).
References and theoretical basics
Cornell, 1969, "A probability-based structural code," Journal of the American Concrete Institute, 66(12), pp. 974-985.
O. Ditlevsen, 1979, "Generalised second moment reliability index," Journal of Structural Mechanics, ASCE, Vol. 7, pp. 453-472.
O. Ditlevsen and H.O. Madsen, 2004, "Structural reliability methods," Department of Mechanical Engineering, Technical University of Denmark - Maritime Engineering, internet publication.
Hasofer and Lind, 1974, "Exact and invariant second moment code format," Journal of the Engineering Mechanics Division, ASCE, Vol. 100, pp. 111-121.
Examples
$$\begin{array}{c}\hfill {\displaystyle y(E,F,L,I)=\frac{F{L}^{3}}{3EI}}\end{array}$$ 
The objective is to propagate the uncertainties of the variables $(E,F,L,I)$ through the model to $y$.
The input random vector is $\underline{X}=(E,F,L,I)$, whose probabilistic model is (units are not specified):
$$\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal(50,1)\hfill \\ F\hfill & =& Normal(1,1)\hfill \\ L\hfill & =& Normal(10,1)\hfill \\ I\hfill & =& Normal(5,1)\hfill \end{array}\right.\end{array}$$ 
The four random variables are independent.
The event considered is the threshold exceedance: ${\mathcal{D}}_{f}=\{(E,F,L,I)\in {\mathbb{R}}^{4}\,/\,y(E,F,L,I)\ge 3\}$. We obtain the following results:
design point in the $\underline{x}$-space: ${P}^{*}=({E}^{*}=49.97,\,{F}^{*}=1.842,\,{L}^{*}=10.45,\,{I}^{*}=4.668)$,
generalised and Hasofer-Lind reliability index: ${\beta}_{g}={\beta}_{HL}=1.009$,
Breitung generalised reliability index: ${\beta}_{Breitung}=6.591\times 10^{-1}$,
Hohenbichler generalised reliability index: ${\beta}_{Hohenbichler}=6.285\times 10^{-1}$,
Tvedt generalised reliability index: ${\beta}_{Tvedt}=6.429\times 10^{-1}$.
We note here that the three SORM approximations are consistent with one another but differ from the FORM one, which suggests that the curvatures of the limit state surface at the design point should not be neglected when evaluating the event probability.
Mathematical description
Within the context of the First and Second Order of the Reliability Method (refer to [FORM] and [SORM] ), the Strong Maximum Test (refer to [Strong Maximum Test] ) helps to check whether the design point computed is :
the true design point, which means a global maximum point,
a strong design point, which means that there is no other local maximum verifying the event and associated to a value near the global maximum.
The Strong Maximum Test samples a sphere in the standard space. The OpenTURNS standard version uses the Gaussian random sampling technique described hereafter.
Principle
The OpenTURNS standard version uses the Gaussian random sampling technique:
sampling of points in ${\mathbb{R}}^{N}$ according to a radial distribution : we generate $N$ independent standard normal samples,
projection of the points onto ${\mathcal{S}}^{*}$: we map the points different from the origin using the transformation $M\mapsto m$ such that $\mathbf{Om}=R\frac{\mathbf{OM}}{\parallel \mathbf{OM}\parallel}$, where $R$ is the radius of the sphere of interest. This transformation does not depend on the angular coordinates, so the generated points follow a uniform distribution on ${\mathcal{S}}^{*}$.
A result of such an algorithm is shown in figure 4.3.15.
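The two steps above can be sketched in a few lines of Python (a hedged illustration, not the OpenTURNS implementation):

```python
import math
import random

def sample_on_sphere(dim, radius, n_points, rng=random.Random(0)):
    """Uniform sampling on the sphere of given radius in R^dim:
    draw dim independent standard normal coordinates (isotropic, i.e.
    radial distribution), then project each point onto the sphere."""
    points = []
    for _ in range(n_points):
        m = [rng.gauss(0.0, 1.0) for _ in range(dim)]       # radial draw in R^dim
        norm = math.sqrt(sum(c * c for c in m))             # ||OM||
        points.append([radius * c / norm for c in m])       # Om = R * OM / ||OM||
    return points

pts = sample_on_sphere(dim=4, radius=2.5, n_points=100)
```

Because the standard normal vector is isotropic, the projection step preserves uniformity of the angular coordinates, which is exactly the property stated above.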
Other notations
Link with OpenTURNS methodology
It requires to have fulfilled the following steps beforehand:
step A: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\tilde{g}(\underline{X},\underline{d})$, result of the model $\tilde{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or, equivalently, a failure event $g(\underline{X}\,,\,\underline{d})\le 0$,
step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,
step C: select an appropriate optimization algorithm among those proposed to evaluate the FORM or SORM approximations of ${P}_{f}$; evaluate the quality of the design point resulting from the previous step thanks to the Strong Maximum Test.
References and theoretical basics
The parametric method uses polar coordinates: the angle parameters are discretized uniformly. Its drawback is that it generates points mainly in the two polar zones.
The exclusion method generates points uniformly within the hypercube that exactly contains the sphere. We then keep only the points located inside the sphere and project them onto it. The drawback of this method is its inefficiency in high dimensions: the ratio of the volume of the hypersphere to the volume of the hypercube is less than $0.7\%$ as soon as the dimension exceeds 9, which means that more than $99.3\%$ of the generated points are then rejected.
Let us note some useful references:
Luban, Marshall, Staunton, 1988, "An efficient method for generating a uniform distribution of points within a hypersphere," Computer in Physics, 2(6), 55.
Mathematical description
The Strong Maximum Test is used in the following context: $\underline{X}$ denotes a random input vector representing the sources of uncertainties, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density, $\underline{d}$ a deterministic vector representing the fixed variables, $g(\underline{X}\,,\,\underline{d})$ the limit state function of the model, ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\,/\,g(\underline{X}\,,\,\underline{d})\le 0\}$ the event considered here and $g(\underline{X}\,,\,\underline{d})=0$ its boundary (also called the limit state surface).
The probability content of the event ${\mathcal{D}}_{f}$ :
$$\begin{array}{ccc}\hfill {P}_{f}& =& {\int}_{g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\hfill \end{array}$$  (4.3) 
may be evaluated with the FORM (refer to [FORM] ) or SORM method (refer to [SORM] ).
In order to evaluate an approximation of ${P}_{f}$, these analytical methods use the Nataf isoprobabilistic transformation, which maps the probabilistic model in terms of $\underline{X}$ onto an equivalent model in terms of $n$ independent standard normal random variables $\underline{U}$ (refer to [Isoprobabilistic Transformation] for details on the transformation). In this new $\underline{u}$-space, the event is defined from the transformed limit state function $G$: ${\mathcal{D}}_{f}=\{\underline{U}\in {\mathbb{R}}^{n}\,/\,G(\underline{U}\,,\,\underline{d})\le 0\}$, with boundary $\{\underline{U}\in {\mathbb{R}}^{n}\,/\,G(\underline{U}\,,\,\underline{d})=0\}$.
These analytical methods rely on the assumption that most of the contribution to ${P}_{f}$ comes from points located in the vicinity of a particular point ${P}^{*}$, the design point, defined in the $\underline{u}$-space as the point located on the limit state surface with maximal likelihood. Given the probabilistic characteristics of the $\underline{u}$-space, ${P}^{*}$ has a geometrical interpretation: it is the point on the event boundary at minimal distance from the center of the $\underline{u}$-space. Thus, the design point ${P}^{*}$ is the result of a constrained optimization problem.
The FORM/SORM methods assume that ${P}^{*}$ is unique.
One important difficulty comes from the fact that the numerical method involved in the determination of ${P}^{*}$ gives no guarantee of a global optimum: the point to which it converges may only be a local optimum. In that case, the contribution of the points in the vicinity of the real design point, which is the most important one, is not taken into account.
Furthermore, even when the global optimum has been found, there may exist another local optimum ${\tilde{P}}^{*}$ whose likelihood is only slightly lower than that of the design point, which means its distance from the center of the $\underline{u}$-space is only slightly greater. Points in the vicinity of ${\tilde{P}}^{*}$ may then contribute significantly to the probability ${P}_{f}$ but are not taken into account in the FORM and SORM approximations.
In both cases, the FORM and SORM approximations are of poor quality because they neglect important contributions to ${P}_{f}$.
The Strong Maximum Test helps to evaluate the quality of the design point resulting from the optimization algorithm. It checks whether the design point computed is :
the true design point, which means a global maximum point,
a strong design point, which means that there is no other local maximum located on the event boundary whose likelihood is only slightly lower than that of the design point.
This verification is very important in order to give meaning to the FORM and SORM approximations.
Principle
The principle of the Strong Maximum Test, illustrated in figure (57), relies on the geometrical definition of the design point.
The objective is to detect all the points ${\tilde{P}}^{*}$ in the ball of radius ${R}_{\epsilon}=\beta (1+{\delta}_{\epsilon})$ which are potentially the real design point (case of ${\tilde{P}}_{2}^{*}$) or whose contribution to ${P}_{f}$ is not negligible with respect to the FORM and SORM approximations (case of ${\tilde{P}}_{1}^{*}$). The contribution of a point is considered negligible when its likelihood in the $\underline{u}$-space is more than $\epsilon$ times smaller than that of the design point. The radius ${R}_{\epsilon}$ is the distance to the center of the $\underline{u}$-space beyond which points are considered negligible in the evaluation of ${P}_{f}$.
In order to catch the potential points located on the sphere of radius ${R}_{\epsilon}$ (the frontier of the zone of prospection), it is necessary to go a little further: that is why the test samples the sphere of radius $R=\beta (1+\tau {\delta}_{\epsilon})$, with $\tau >0$.
Points on the sampled sphere (refer to [Sample Sphere] for details on the generation of the sample) which are in the vicinity of the design point ${P}^{*}$ are less interesting than those verifying the event and located far from the design point: the latter may reveal a potential ${\tilde{P}}^{*}$ whose contribution to ${P}_{f}$ has to be taken into account. The vicinity of the design point is defined with the angular parameter $\alpha$ as the cone centered on ${P}^{*}$ and of half-angle $\alpha$.
The number $N$ of simulations sampling the sphere of radius $R$ is determined so as to ensure that the test detects, with a probability greater than $(1-q)$, any point verifying the event and outside the design point vicinity.
The Strong Maximum Test to validate the quality of the design point: uniqueness and strength 
The vicinity of the design point is the arc of the sampled sphere that lies inside the half-space whose frontier is the linearized limit state function at the design point (see figure (58): the vicinity is the arc included in the half-space ${D}_{1}$).
Vicinity of the Design Point in the standard space : the half space D 1 
Other notations
Link with OpenTURNS methodology
The Strong Maximum Test proceeds as follows. The User selects the parameters :
the importance level $\epsilon$, where $0<\epsilon <1$,
the accuracy level $\tau$, where $\tau >0$,
the confidence level $(1-q)$, where $0<q<1$, or the number of points $N$ used to sample the sphere; these parameters are deducible from one another.
The Strong Maximum Test will sample the sphere of radius $\beta (1+\tau {\delta}_{\epsilon})$, where ${\delta}_{\epsilon}=\sqrt{1-2\frac{\ln\left(\epsilon \right)}{{\beta}^{2}}}-1$.
The test will detect, with a probability greater than $(1-q)$, any point of ${\mathcal{D}}_{f}$ whose contribution to ${P}_{f}$ is not negligible (i.e. whose density value in the $\underline{u}$-space is greater than $\epsilon$ times the density value at the design point).
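As an illustration, the radius of the sampled sphere follows directly from the three parameters. The Python sketch below (names are ours, not an OpenTURNS API) applies the formula for ${\delta}_{\epsilon}$:

```python
import math

def smt_sphere_radius(beta, epsilon, tau):
    """Radius beta*(1 + tau*delta_eps) of the sphere sampled by the
    Strong Maximum Test, with delta_eps = sqrt(1 - 2*ln(eps)/beta**2) - 1.
    Since 0 < eps < 1, ln(eps) < 0 and delta_eps > 0, so the radius
    always exceeds the reliability index beta."""
    delta_eps = math.sqrt(1.0 - 2.0 * math.log(epsilon) / beta**2) - 1.0
    return beta * (1.0 + tau * delta_eps)

# e.g. beta = 3, importance level eps = 0.01, accuracy level tau = 2
r = smt_sphere_radius(3.0, 0.01, 2.0)
```
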
The Strong Maximum Test provides :
set 1 : all the points detected on the sampled sphere that are in ${\mathcal{D}}_{f}$ and outside the design point vicinity, with the corresponding value of the limit state function,
set 2 : all the points detected on the sampled sphere that are in ${\mathcal{D}}_{f}$ and in the design point vicinity, with the corresponding value of the limit state function ,
set 3 : all the points detected on the sampled sphere that are outside ${\mathcal{D}}_{f}$ and outside the design point vicinity, with the corresponding value of the limit state function,
set 4 : all the points detected on the sampled sphere that are outside ${\mathcal{D}}_{f}$ but in the vicinity of the design point, with the corresponding value of the limit state function.
Points are described by their coordinates in the $\underline{x}$space.
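The classification into the four sets can be sketched as a simple decision on each sampled point, assuming the vicinity is the cone of half-angle $\alpha$ centered on the design point direction (illustrative Python, not the OpenTURNS API):

```python
import math

def classify(points, g_values, design_point, alpha):
    """Split sampled sphere points into the four SMT sets.
    A point is 'in D_f' when its limit state value g <= 0, and 'near' the
    design point when its angle to OP* is at most alpha (cone of half-angle alpha)."""
    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    sets = {1: [], 2: [], 3: [], 4: []}
    for p, g in zip(points, g_values):
        in_df = g <= 0.0
        near = angle(p, design_point) <= alpha
        if in_df and not near:
            sets[1].append((p, g))        # in D_f, outside the vicinity
        elif in_df and near:
            sets[2].append((p, g))        # in D_f, inside the vicinity
        elif not in_df and not near:
            sets[3].append((p, g))        # outside D_f, outside the vicinity
        else:
            sets[4].append((p, g))        # outside D_f, inside the vicinity
    return sets

# e.g. in dimension 2, with the design point along the first axis:
sets = classify([[1.0, 0.0], [0.0, 1.0]], [-1.0, 1.0], [1.0, 0.0], 0.5)
```

A non-empty set 1 is the warning signal discussed below: failure points far from the design point direction.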
The parameter $\tau$ also serves as a measure of the distance from the design point ${\underline{OP}}^{*}$ at which a hypothetical local maximum is searched for: the greater it is, the further we search for another local maximum.
Numerical experiments show that it is recommended to take $\tau \le 4$ (see the reference given below).
The following table helps to quantify the parameters of the test for a problem of dimension 5.


As the Strong Maximum Test involves the computation of $N$ values of the limit state function, which is computationally intensive, it is interesting to have more than just an indication about the quality of ${\underline{OP}}^{*}$. In fact, the test gives some information about the trace of the limit state function on the sphere of radius $\beta (1+\tau {\delta}_{\epsilon})$ centered on the origin of the $\underline{u}$space. Two cases can be distinguished:
Case 1: set 1 is empty. We are confident that ${\underline{OP}}^{*}$ is a design point verifying the hypothesis according to which most of the contribution to ${P}_{f}$ is concentrated in the vicinity of ${\underline{OP}}^{*}$. By using the values of the limit state function on the sample $({\underline{U}}_{1},\cdots ,{\underline{U}}_{N})$, we can check whether the limit state function is reasonably linear in the vicinity of ${\underline{OP}}^{*}$, which can validate the second hypothesis of FORM.
If the behaviour of the limit state function is not linear, we can decide to use an importance sampling version of the Monte Carlo method for computing the probability of failure (refer to [Importance Sampling]). However, the information obtained through the Strong Maximum Test, according to which ${\underline{OP}}^{*}$ is the actual design point, is quite essential: it allows us to construct an effective importance sampling density, e.g. a multidimensional Gaussian distribution centered on ${\underline{OP}}^{*}$.
Case 2: set 1 is not empty. There are two possibilities:
We have found some points that suggest that ${\underline{OP}}^{*}$ is not a strong maximum, because for some points of the sampled sphere, the value taken by the limit state function is slightly negative;
We have found some points that suggest that ${\underline{OP}}^{*}$ is not even the global maximum, because for some points of the sampled sphere, the value taken by the limit state function is very negative.
In the first case, we can decide to use an importance sampling version of the Monte Carlo method for computing the probability of failure, but with a mixture of, e.g., multidimensional Gaussian distributions centered on the ${U}_{i}$ located in ${\mathcal{D}}_{f}$ (refer to [Importance Sampling]). In the second case, we can restart the search for the design point from the detected ${U}_{i}$.
More details can be found in the following reference:
A. Dutfoy, R. Lebrun, 2006, "The Strong Maximum Test: an efficient way to assess the quality of a design point," PSAM8, New Orleans.
A. Dutfoy, R. Lebrun, 2007, 'Le test du maximum fort : une façon efficace de valider la qualité d'un point de conception', CFM 07, Grenoble, France.
Mathematical description
Using the probability distribution of a random vector $\underline{X}$, we seek to evaluate the following probability:
$$\begin{array}{c}\hfill {P}_{f}=\mathbb{P}\left(g\left(\underline{X},\underline{d}\right)\le 0\right)\end{array}$$ 
Here, $\underline{X}$ is a random vector, $\underline{d}$ a deterministic vector, $g(\underline{X},\underline{d})$ the function known as "limit state function" which enables the definition of the event ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g(\underline{X},\underline{d})\le 0\}$.
Principle
Having a set $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of $N$ independent samples of the random vector $\underline{X}$, we can estimate ${P}_{f}$ by ${\widehat{P}}_{f}$ as follows:
$$\begin{array}{c}\hfill {\widehat{P}}_{f}=\frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\left\{g({\underline{x}}_{i},\underline{d})\le 0\right\}}\end{array}$$ 
where ${\mathbf{1}}_{\left\{g({\underline{x}}_{i},\underline{d})\le 0\right\}}$ describes the indicator function equal to 1 if $g({\underline{x}}_{i},\underline{d})\le 0$ and equal to 0 otherwise; the idea here is in fact to estimate the required probability by the proportion of cases, among the $N$ samples of $\underline{X}$, for which the event ${\mathcal{D}}_{f}$ occurs.
By the law of large numbers, we know that this estimation converges to the required value ${P}_{f}$ as the sample size $N$ tends to infinity.
The Central Limit Theorem allows us to build an asymptotic confidence interval using the normal limit distribution as follows:
$$\begin{array}{c}\hfill \underset{N\to \infty}{lim}\mathbb{P}\left({P}_{f}\in [{\widehat{P}}_{f,inf},{\widehat{P}}_{f,sup}]\right)=\alpha \end{array}$$ 
with ${\widehat{P}}_{f,inf}={\widehat{P}}_{f}-{q}_{\alpha}\sqrt{\frac{{\widehat{P}}_{f}(1-{\widehat{P}}_{f})}{N}}$, ${\widehat{P}}_{f,sup}={\widehat{P}}_{f}+{q}_{\alpha}\sqrt{\frac{{\widehat{P}}_{f}(1-{\widehat{P}}_{f})}{N}}$ and ${q}_{\alpha}$ is the $(1+\alpha )/2$-quantile of the standard normal distribution.
One can also use a convergence indicator that is independent of the confidence level $\alpha$: the coefficient of variation, which is the ratio between the asymptotic standard deviation of the estimate and its mean value. It is a relative measure of dispersion given by:
$${\mathrm{CV}}_{{\widehat{P}}_{f}}=\sqrt{\frac{1-{\widehat{P}}_{f}}{N{\widehat{P}}_{f}}}\simeq \frac{1}{\sqrt{N{\widehat{P}}_{f}}}\quad\text{for }{\widehat{P}}_{f}\ll 1$$
It must be emphasized that these results are asymptotic and as such require $N\gg 1$. The convergence to the standard normal distribution is governed by the skewness of ${\mathbf{1}}_{\left\{g({\underline{x}}_{i},\underline{d})\le 0\right\}}$ divided by the sample size $N$, that is $\frac{1-2{P}_{f}}{N\sqrt{{P}_{f}(1-{P}_{f})}}$. In the usual case ${P}_{f}\ll 1$, the distribution is highly skewed and this term is approximately equal to $\frac{1}{N\sqrt{{P}_{f}}}\simeq {\mathrm{CV}}_{{\widehat{P}}_{f}}/\sqrt{N}$. A rule of thumb is that if ${\mathrm{CV}}_{{\widehat{P}}_{f}}<0.1$ with $N>{10}^{2}$, the asymptotic nature of the Central Limit Theorem is not problematic.
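The estimator, its coefficient of variation and the asymptotic confidence interval fit in a few lines. The sketch below (ours, not the OpenTURNS implementation) uses a toy limit state function $g(x)=2-x$ with $X\sim\mathcal{N}(0,1)$, whose exact failure probability is $1-\Phi(2)$:

```python
import math
import random
from statistics import NormalDist

def monte_carlo_pf(g, sampler, n, alpha=0.95, rng=random.Random(0)):
    """Crude Monte Carlo estimate of P_f = P(g(X) <= 0), together with the
    coefficient of variation and the asymptotic confidence interval."""
    hits = sum(1 for _ in range(n) if g(sampler(rng)) <= 0.0)
    pf = hits / n
    var = pf * (1.0 - pf) / n                       # asymptotic variance of pf
    cv = math.sqrt(var) / pf if pf > 0 else float("inf")
    q = NormalDist().inv_cdf((1.0 + alpha) / 2.0)   # (1+alpha)/2-quantile (~1.96)
    half = q * math.sqrt(var)
    return pf, cv, (pf - half, pf + half)

# Failure when X >= 2, i.e. g(x) = 2 - x <= 0; exact P_f = 1 - Phi(2) ~ 2.3e-2.
pf, cv, ci = monte_carlo_pf(lambda x: 2.0 - x, lambda r: r.gauss(0.0, 1.0), 100000)
```
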
Other notations
Link with OpenTURNS methodology
This amounts to calculating the cumulative distribution function of the output variable at a point and thus propagating the uncertainty defined in step B using the model defined in step A.
Input data:
$\underline{X}$: random vector modelling the unknown variables defined in step A and for which the joint probability density function has been defined in step B,
$\underline{d}$: vector of deterministic calculation parameters,
$g(\underline{X},\underline{d})\le 0$: probabilistic criterion specified in step A,
Parameters:
$N$: number of simulations to be carried out (samples to be taken); maximal number in the case where ${\left({\mathrm{CV}}_{{\widehat{P}}_{f}}\right)}_{max}$ is specified (see next parameter),
${\left({\mathrm{CV}}_{{\widehat{P}}_{f}}\right)}_{max}$: maximal coefficient of variation of the probability estimator (optional),
$\alpha $: confidence level required for the confidence interval.
Outputs:
${\widehat{P}}_{f}$: estimation of the probability of exceeding the threshold (critical value/region),
$\mathrm{Var}\left({\widehat{P}}_{f}\right)$: estimation of the variance of the probability estimator,
${\widehat{P}}_{f,sup}{\widehat{P}}_{f,inf}$: length of the confidence interval.
References and theoretical basics
$$\frac{{t}_{\mathrm{CPU}}\left\{g(\underline{X},\underline{d})\le 0\right\}}{{P}_{f}\times {\left(\text{estimation precision}\right)}^{2}}\le \text{available machine time}$$
where:
${t}_{\mathrm{CPU}}\left\{g(\underline{X},\underline{d})\le 0\right\}$: CPU time needed to evaluate the criterion $\left\{g(\underline{X},\underline{d})\le 0\right\}$ for given values of $\underline{X}$ and $\underline{d}$,
estimation precision: desired limit for the coefficient of variation of the estimator,
available machine time: desired limit on the total duration of the estimation.
Readers interested in the problem of estimating the probability of exceeding a threshold are referred to [FORM] , [SORM] , [LHS] , [Importance Sampling] and [Directional Simulation] .
The following provide an interesting bibliographical starting point to further study of this method:
Robert C.P., Casella G. (2004). Monte Carlo Statistical Methods, Springer, ISBN 0-387-21239-6, 2nd ed.
Rubinstein R.Y. (1981). Simulation and the Monte Carlo Method, John Wiley & Sons.
Mathematical description
Let us note ${\mathcal{D}}_{f}=\{\underline{x}\in {\mathbb{R}}^{n}\,/\,g(\underline{x},\underline{d})\le 0\}$. The goal is to estimate the following probability:
$$\begin{array}{ccc}\hfill {P}_{f}& =& {\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int}_{{\mathbb{R}}^{n}}{\mathbf{1}}_{\{g(\underline{x},\underline{d})\phantom{\rule{0.222222em}{0ex}}\le 0\phantom{\rule{0.222222em}{0ex}}\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& \mathbb{P}\left(\{g(\underline{X},\underline{d})\le 0\}\right)\hfill \end{array}$$ 
Principles
This is a samplingbased method. The main idea of the Importance Sampling method is to replace the initial probability distribution of the input variables by a more "efficient" one. "Efficient" means that more events will be counted in the failure domain ${\mathcal{D}}_{f}$ and thus reduce the variance of the estimator of the probability of exceeding a threshold. Let $\underline{Y}$ be a random vector such that its probability density function ${f}_{\underline{Y}}\left(\underline{y}\right)>0$ almost everywhere in the domain ${\mathcal{D}}_{f}$,
$$\begin{array}{ccc}\hfill {P}_{f}& =& {\int}_{{\mathbb{R}}^{n}}{\mathbf{1}}_{\{g(\underline{x},\underline{d})\le 0\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int}_{{\mathbb{R}}^{n}}{\mathbf{1}}_{\{g(\underline{x},\underline{d})\le 0\}}\frac{{f}_{\underline{X}}\left(\underline{x}\right)}{{f}_{\underline{Y}}\left(\underline{x}\right)}{f}_{\underline{Y}}\left(\underline{x}\right)d\underline{x}\hfill \end{array}$$ 
The estimator built by Importance Sampling method is:
$${\widehat{P}}_{f,IS}^{N}=\frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\{g({\underline{Y}}_{i},\underline{d})\le 0\}}\frac{{f}_{\underline{X}}\left({\underline{Y}}_{i}\right)}{{f}_{\underline{Y}}\left({\underline{Y}}_{i}\right)}$$
where:
$N$ is the total number of computations,
the random vectors $\{{\underline{Y}}_{i},i=1\cdots N\}$ are independent and identically distributed according to the probability density function ${f}_{\underline{Y}}$.
Confidence Intervals
With the notations,
$$\begin{array}{ccc}\hfill {\mu}_{N}& =& \frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\{g({\underline{y}}_{i},\underline{d})\le 0\}}\frac{{f}_{\underline{X}}\left({\underline{y}}_{i}\right)}{{f}_{\underline{Y}}\left({\underline{y}}_{i}\right)}\hfill \\ \hfill {\sigma}_{N}^{2}& =& \frac{1}{N}\sum _{i=1}^{N}{\left({\mathbf{1}}_{\{g({\underline{y}}_{i},\underline{d})\le 0\}}\frac{{f}_{\underline{X}}\left({\underline{y}}_{i}\right)}{{f}_{\underline{Y}}\left({\underline{y}}_{i}\right)}-{\mu}_{N}\right)}^{2}\hfill \end{array}$$
The asymptotic confidence interval of order $1-\alpha$ associated to the estimator ${\widehat{P}}_{f,IS}^{N}$ is
$$\left[{\mu}_{N}-\frac{{q}_{1-\alpha /2}\,{\sigma}_{N}}{\sqrt{N}}\,;\,{\mu}_{N}+\frac{{q}_{1-\alpha /2}\,{\sigma}_{N}}{\sqrt{N}}\right]$$
where ${q}_{1-\alpha /2}$ is the quantile of order $1-\alpha /2$ of the standard normal distribution $\mathcal{N}(0,1)$.
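The estimator $\mu_N$ and its confidence interval can be illustrated on a toy case. The sketch below (our example, not an OpenTURNS API) estimates $P(X\ge 2)$ for $X\sim\mathcal{N}(0,1)$, using an instrumental Gaussian density centered at the failure boundary:

```python
import math
import random
from statistics import NormalDist

def importance_sampling_pf(n, rng=random.Random(0)):
    """IS sketch for P(X >= 2) with X ~ N(0,1): sample Y ~ N(2,1) (centred
    near the failure boundary) and weight each failing point by f_X(y)/f_Y(y)."""
    phi = lambda x, mu: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)
    weights = []
    for _ in range(n):
        y = rng.gauss(2.0, 1.0)                       # draw from f_Y = N(2,1)
        w = phi(y, 0.0) / phi(y, 2.0) if y >= 2.0 else 0.0
        weights.append(w)                             # indicator * likelihood ratio
    mu_n = sum(weights) / n
    sigma_n = math.sqrt(sum((w - mu_n) ** 2 for w in weights) / n)
    q = NormalDist().inv_cdf(0.975)                   # 95% confidence interval
    half = q * sigma_n / math.sqrt(n)
    return mu_n, (mu_n - half, mu_n + half)

pf, ci = importance_sampling_pf(20000)                # exact value: 1 - Phi(2)
```

With far fewer samples than crude Monte Carlo, most draws land in the failure domain, which is exactly the variance reduction motivating the method.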
Other notations
Link with OpenTURNS methodology
$$\mathrm{Var}\left[{\widehat{P}}_{f,IS}\right]={\int}_{{\mathcal{D}}_{f}}\frac{{f}_{\underline{X}}^{2}\left(\underline{y}\right)}{{f}_{\underline{Y}}\left(\underline{y}\right)}\,d\underline{y}-{P}_{f}^{2}\;<\;\mathrm{Var}\left[{\widehat{P}}_{f,MC}\right]={P}_{f}-{P}_{f}^{2}$$
Moreover, one has to take care that the instrumental density ${f}_{\underline{Y}}$ has the same support as the joint pdf of the input variables, to ensure convergence.
The following references are a first introduction to the Monte Carlo methods:
W.G. Cochran. Sampling Techniques. John Wiley and Sons, 1977.
M.H. Kalos and P.A. Whitlock. Monte Carlo Methods, Volume I: Basics. John Wiley and Sons, 1986.
R.Y. Rubinstein. Simulation and the Monte Carlo Method. John Wiley and Sons, 1981.
Mathematical description
Using the probability distribution of a random vector $\underline{X}$, we seek to evaluate the following probability:
$$\begin{array}{c}\hfill {P}_{f}=\mathbb{P}\left(g\left(\underline{X},\underline{d}\right)<0\right)\end{array}$$ 
Here, $\underline{X}$ is a random vector, $\underline{d}$ a deterministic vector, $g(\underline{X},\underline{d})$ the function known as "limit state function" which enables the definition of the event ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g(\underline{X},\underline{d})\le 0\}$.
Principle
The directional simulation method is an accelerated sampling method. It implies a preliminary [isoprobabilistic transformation], as for the [FORM] and [SORM] methods; however, it remains based on sampling and is thus not an approximation method. In the transformed space, the (transformed) uncertain variables $\underline{U}$ are independent standard Gaussian variables (mean equal to zero and standard deviation equal to 1).
Roughly speaking, each simulation of the directional simulation algorithm is made of three steps. For the ${i}^{\mathrm{th}}$ iteration, these steps are the following:
Let $\mathcal{S}=\left\{\underline{u}\,/\,\parallel \underline{u}\parallel =1\right\}$. A point ${P}_{i}$ is drawn randomly on $\mathcal{S}$ according to a uniform distribution.
In the direction starting from the origin and passing through ${P}_{i}$, the solutions of the equation $g(\underline{X},\underline{d})=0$ (i.e. the limits of ${\mathcal{D}}_{f}$) are searched for. The set of values of $\underline{u}$ that belong to ${\mathcal{D}}_{f}$ is deduced from these solutions: it is a subset ${I}_{i}\subset \mathbb{R}$.
Then, one calculates the probability ${q}_{i}=\mathbb{P}\left(\parallel \underline{U}\parallel \in {I}_{i}\right)$. By the properties of independent standard normal variables, $\parallel \underline{U}{\parallel}^{2}$ is a random variable distributed according to a chi-square distribution, which makes the computation effortless.
Finally, the estimate of the probability ${P}_{f}$ after $N$ simulations is the following:
$$\begin{array}{c}\hfill {\widehat{P}}_{f,DS}=\frac{1}{N}\sum _{i=1}^{N}{q}_{i}\end{array}$$ 
The following figure illustrates the principle of an iteration in dimension 2.
The Central Limit Theorem enables the difference between the estimated value and the sought value to be controlled by means of a confidence interval (if $N$ is sufficiently large, typically $N>$ a few dozen, even if there is no way to say for sure whether the asymptotic behaviour is reached). For a probability $\alpha$ strictly between 0 and 1 chosen by the user, one can, for example, be sure with confidence $\alpha$ that the true value of ${P}_{f}$ is between ${\widehat{P}}_{f,inf}$ and ${\widehat{P}}_{f,sup}$, calculated analytically from simple formulae. To illustrate, for $\alpha =0.95$:
$${\widehat{P}}_{f,inf}={\widehat{P}}_{f,DS}-1.96\frac{{\sigma}_{q}}{\sqrt{N}},\quad {\widehat{P}}_{f,sup}={\widehat{P}}_{f,DS}+1.96\frac{{\sigma}_{q}}{\sqrt{N}}$$
$$\text{that is to say}\quad \mathbb{P}\left({\widehat{P}}_{f,inf}\le {P}_{f}\le {\widehat{P}}_{f,sup}\right)=0.95$$
where ${\sigma}_{q}$ denotes the empirical standard deviation of the sample $\left\{{q}_{1},...,{q}_{N}\right\}$.
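The three steps of an iteration can be illustrated in dimension 2, where $\parallel\underline{U}\parallel^2$ follows a chi-square distribution with 2 degrees of freedom, so $\mathbb{P}(\parallel\underline{U}\parallel\ge r)=e^{-r^2/2}$ in closed form. The sketch below (ours, with a deliberately simple linear limit state, not the OpenTURNS implementation):

```python
import math
import random
from statistics import NormalDist

def directional_simulation_2d(beta, n, rng=random.Random(0)):
    """Directional simulation sketch in the 2-D standard space for the linear
    limit state G(u) = beta - u_1 (failure for u_1 >= beta, exact P_f = Phi(-beta)).
    In dimension 2, ||U||^2 ~ chi^2(2), so P(||U|| >= r) = exp(-r^2/2)."""
    total = 0.0
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)   # step 1: uniform direction on S
        c = math.cos(theta)                        # component along the u_1 axis
        if c > 0.0:                                # step 2: the ray hits the surface
            r = beta / c                           # root of G along the ray
            total += math.exp(-r * r / 2.0)        # step 3: q_i = P(||U|| in I_i)
    return total / n                               # average of the q_i

pf = directional_simulation_2d(2.0, 50000)         # exact value: Phi(-2)
```

Because each direction contributes an exact conditional probability rather than a 0/1 indicator, the variance of the $q_i$ is much smaller than that of crude Monte Carlo.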
In practice in OpenTURNS, the Directional Sampling simulation requires the choice of:
a Root Strategy :
RiskyAndFast : for each direction, we check whether the standard limit state function changes sign between the maximum distant point (at distance MaximumDistance from the center of the standard space) and the center of the standard space.
If a sign change occurs, we search for a root in the segment [origin, maximum distant point] with the selected non-linear solver.
As soon as it is found, the half-line [root, $+\infty$) is considered to lie within the failure space.
MediumSafe : for each direction, we proceed along the direction by steps of length stepSize from the origin to the maximum distant point (at distance MaximumDistance from the center of the standard space) and we check whether a sign change occurs on each segment so formed.
At the first sign change, we search for a root in the segment concerned with the selected non-linear solver. Then, the segment [root, maximum distant point] is considered to lie within the failure space.
If stepSize is small enough, this strategy guarantees that we find the root nearest to the origin.
SafeAndSlow : for each direction, we proceed along the direction by steps of length stepSize from the origin to the maximum distant point (at distance MaximumDistance from the center of the standard space) and we check whether a sign change occurs on each segment so formed.
We go on until the maximum distant point. Then, for all the segments where the presence of a root was detected, we search for the root with the selected non-linear solver, and we evaluate the contribution of each segment to the failure probability.
If stepSize is small enough, this strategy guarantees that we find all the roots in the direction, and the contribution of this direction to the failure probability is precisely evaluated.
a Non Linear Solver :
Bisection : bisection algorithm,
Secant : based on the secant line through the last two iterates,
Brent : mix of Bisection, Secant and inverse quadratic interpolation.
and a Sampling Strategy :
RandomDirection : we generate points on the unit sphere according to the uniform distribution and we consider the two opposite directions so formed.
OrthogonalDirection : this strategy is parameterized by $k\in \{1,\cdots ,n\}$, where $n$ is the dimension of the input random vector $\underline{X}$. We generate one direct orthonormalized basis $({e}_{1},\cdots ,{e}_{n})$ uniformly distributed in the set of direct orthonormal bases. We consider all the normalized linear combinations of $k$ vectors chosen within the $n$ vectors of the basis, where the coefficients of the linear combinations are in $\{+1,-1\}$. This generates ${C}_{n}^{k}{2}^{k}$ new vectors ${v}_{i}$. We sample according to all the directions defined by the vectors ${v}_{i}$.
If $k=1$, we consider all the axes of the standard space.
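As an illustrative sketch of the OrthogonalDirection construction (hypothetical helper names, not the OpenTURNS API): Gram-Schmidt orthonormalization of independent Gaussian vectors yields a uniformly distributed orthonormal basis, and the restriction to direct bases is immaterial here since every sign pattern is enumerated, so both orientations of each direction are produced anyway.

```python
import itertools
import math
import random

def random_orthonormal_basis(n, rng):
    """Gram-Schmidt on independent Gaussian vectors: uniform orthonormal basis."""
    basis = []
    while len(basis) < n:
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for e in basis:
            dot = sum(vi * ei for vi, ei in zip(v, e))
            v = [vi - dot * ei for vi, ei in zip(v, e)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        if norm > 1e-10:  # reject (numerically) dependent draws
            basis.append([vi / norm for vi in v])
    return basis

def orthogonal_directions(n, k, rng):
    """All normalized combinations of k basis vectors with coefficients in {+1, -1}."""
    basis = random_orthonormal_basis(n, rng)
    directions = []
    for combo in itertools.combinations(range(n), k):
        for signs in itertools.product((1.0, -1.0), repeat=k):
            v = [0.0] * n
            for s, idx in zip(signs, combo):
                for j in range(n):
                    v[j] += s * basis[idx][j]
            # by orthonormality the norm equals sqrt(k)
            norm = math.sqrt(sum(x * x for x in v))
            directions.append([x / norm for x in v])
    return directions

# C(3,2) * 2^2 = 12 unit directions
dirs = orthogonal_directions(n=3, k=2, rng=random.Random(42))
```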
Other notations
Link with OpenTURNS methodology
This amounts to calculating the cumulative distribution function of the output variable at a point and thus propagating the uncertainty defined in step B using the model defined in step A.
Input data:
$\underline{X}$: random vector modelling the unknown variables defined in step A and for which the joint probability density function has been defined in step B,
$\underline{d}$: vector of deterministic calculation parameters,
$g(\underline{X},\underline{d})<0$: probabilistic criterion specified in step A,
Parameters:
$N$: number of simulations,
$\alpha $: confidence level required for the confidence interval,
Root Strategy,
Nonlinear Solver,
Sampling Strategy.
Outputs:
${\widehat{P}}_{f,DS}$: estimation of the probability of exceeding the threshold,
$\mathrm{Var}\left({\widehat{P}}_{f}\right)$: estimation of the variance of the probability estimator,
${\widehat{P}}_{f,sup}-{\widehat{P}}_{f,inf}$: length of the confidence interval.
References and theoretical basics
The following provide an interesting bibliographical starting point to further study of this method:
Robert C.P., Casella G. (2004). Monte-Carlo Statistical Methods, Springer, ISBN 0-387-21239-6, 2nd ed.
Rubinstein R.Y. (1981). Simulation and the Monte-Carlo Methods, John Wiley & Sons
Bjerager, P. (1988). "Probability integration by Directional Simulation". Journal of Engineering Mechanics, vol. 114, no. 8
Mathematical description
Let us note ${\mathcal{D}}_{f}=\{\underline{x}\in {\mathbb{R}}^{n}\mid g(\underline{x},\underline{d})\le 0\}$. The goal is to estimate the following probability:
$$\begin{array}{ccc}\hfill {P}_{f}& =& {\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int}_{{\mathbb{R}}^{n}}{\mathbf{1}}_{\{g(\underline{x},\underline{d})\le 0\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& \mathbb{P}\left(\{g(\underline{X},\underline{d})\le 0\}\right)\hfill \end{array}$$ 
Principles
LHS, or Latin Hypercube Sampling, is a sampling method that covers the domain of variation of the input variables more evenly, thanks to a stratified sampling strategy. This method is applicable in the case of independent input variables. The sampling procedure is based on dividing the range of each variable into several intervals of equal probability. The sampling is undertaken as follows:
Step 1 The range of each input variable is stratified into isoprobabilistic cells,
Step 2 A cell is uniformly chosen among all the available cells,
Step 3 The random number is obtained by inverting the Cumulative Distribution Function locally in the chosen cell,
Step 4 All the cells sharing a stratum with the chosen cell are removed from the list of available cells.
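The procedure above can be sketched for uniform marginals as follows (a minimal illustration; for other marginals, each coordinate would then be passed through the inverse CDF of the corresponding distribution, which is valid only for independent inputs):

```python
import random

def lhs_uniform(n_samples, dim, rng):
    """Latin Hypercube Sample on [0,1]^dim: one point per stratum in every coordinate."""
    sample = [[0.0] * dim for _ in range(n_samples)]
    for j in range(dim):
        # Random permutation of the strata indices for coordinate j
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # Uniform draw inside the selected stratum
            sample[i][j] = (strata[i] + rng.random()) / n_samples
    return sample

pts = lhs_uniform(10, 2, random.Random(0))
```

Each row and each column of the unit square then contains exactly one point, which is the stratification property illustrated in the Examples paragraph below.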
The estimator of the probability of failure with LHS is given by:
$$\begin{array}{c}\hfill {\widehat{P}}_{f,LHS}^{N}=\frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\{g({\underline{X}}^{i},\underline{d})\le 0\}}\end{array}$$ 
where the sample of $\{{\underline{X}}^{i},i=1\cdots N\}$ is obtained as described previously.
One can show that:
$$\begin{array}{c}\hfill \mathrm{Var}\left[{\widehat{P}}_{f,LHS}^{N}\right]\le \frac{N}{N-1}\,\mathrm{Var}\left[{\widehat{P}}_{f,MC}^{N}\right]\end{array}$$ 
where:
$\mathrm{Var}\left[{\widehat{P}}_{f,LHS}^{N}\right]$ is the variance of the estimator of the probability of exceeding a threshold computed by the LHS technique,
$\mathrm{Var}\left[{\widehat{P}}_{f,MC}^{N}\right]$ is the variance of the estimator of the probability of exceeding a threshold computed by a crude Monte Carlo method.
Confidence Interval
With the notations,
$$\begin{array}{ccc}\hfill {\mu}_{N}& =& \frac{1}{N}\sum _{i=1}^{N}{\mathbf{1}}_{\{g({\underline{x}}_{i},\underline{d})\le 0\}}\hfill \\ \hfill {\sigma}_{N}^{2}& =& \frac{1}{N}\sum _{i=1}^{N}{({\mathbf{1}}_{\{g({\underline{x}}_{i},\underline{d})\le 0\}}-{\mu}_{N})}^{2}\hfill \end{array}$$ 
the asymptotic confidence interval of order $1-\alpha $ associated with the estimator ${\widehat{P}}_{f,LHS}^{N}$ is
$$\begin{array}{c}\hfill [{\mu}_{N}-\frac{{q}_{1-\alpha /2}\,{\sigma}_{N}}{\sqrt{N}};{\mu}_{N}+\frac{{q}_{1-\alpha /2}\,{\sigma}_{N}}{\sqrt{N}}]\end{array}$$ 
where ${q}_{1-\alpha /2}$ is the quantile of order $1-\alpha /2$ of the standard Gaussian distribution $\mathcal{N}(0,1)$.
This yields an unbiased estimate of ${P}_{f}$ (recall that all input variables must be independent).
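The interval above can be computed directly from a sample of indicator values; the following sketch uses `statistics.NormalDist` for the Gaussian quantile and hypothetical Bernoulli indicators with $p=0.1$ in place of real limit state evaluations:

```python
import math
import random
from statistics import NormalDist

def confidence_interval(indicators, alpha=0.05):
    """Asymptotic (1 - alpha) confidence interval for a probability estimated
    from a sample of 0/1 indicator values, following the formulas above."""
    n = len(indicators)
    mu = sum(indicators) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in indicators) / n)
    q = NormalDist().inv_cdf(1.0 - alpha / 2.0)  # standard Gaussian quantile
    half = q * sigma / math.sqrt(n)
    return mu - half, mu + half

# Illustrative indicators: Bernoulli(0.1) draws standing in for 1_{g <= 0}
rng = random.Random(1)
ind = [1.0 if rng.random() < 0.1 else 0.0 for _ in range(10000)]
lo, hi = confidence_interval(ind)
```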
Other notations
Link with OpenTURNS methodology
This method a priori enables a better exploration of the domain of variation of the input variables. No general rule can guarantee a better efficiency of the LHS sampling than the classical Monte Carlo sampling. Nevertheless, one can show that the LHS strategy leads to a variance reduction if the model is monotonic in each input variable.
Be careful, this method is valid only if the input random variables are independent!
Moreover, for reliability problems, when the failure probability is low, the tails of the distributions usually contain the most influential regions in terms of reliability.
A fruitful link towards the global approach can be established with the files
[Monte Carlo Method to evaluate a probability to exceed a threshold] ,
[Importance Sampling to evaluate a probability to exceed a threshold] coming from the methodology.
The following provide an interesting bibliographical starting point to further study of this method:
McKay M.D., Beckman R.J., Conover W.J. (1979). "A comparison of three methods for selecting values of input variables in the analysis of output from a computer code", Technometrics, 21 (2)
Helton J.C., Davis F.J. (2002). Latin Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems, SAND 2001-0417
Santner T.J., Williams B.J., Notz W.I. (2003). The Design and Analysis of Computer Experiments, Springer-Verlag, New York
Owen A.B. (1992). "A Central Limit Theorem for Latin Hypercube Sampling", Journal of the Royal Statistical Society, Series B (Methodological), Vol. 54, No. 2, pp. 541-551
Stein M. (1987). "Large Sample Properties of Simulations Using Latin Hypercube Sampling", Technometrics, Vol. 29, No. 2, pp. 143-151
Examples
For the LHS sampling, each column and each row contains exactly one blue point. For the Monte Carlo sampling, there are some columns which contain no red point (e.g. the $(0.7,0.8)$ column), while some others contain two red points (e.g. the $(0.4,0.5)$ column). The same is true for the rows of the Monte Carlo sampling. Hence, the LHS sampling fills the sample space more evenly.
Mathematical description
Let us note ${\mathcal{D}}_{f}=\{\underline{x}\in {\mathbb{R}}^{n}\mid g(\underline{x},\underline{d})\le 0\}$. The goal is to estimate the following probability:
$$\begin{array}{ccc}\hfill {P}_{f}& =& {\int}_{{\mathcal{D}}_{f}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& {\int}_{{\mathbb{R}}^{n}}{\mathbf{1}}_{\{g(\underline{x},\underline{d})\le 0\}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}\hfill \\ & =& \mathbb{P}\left(\{\phantom{\rule{0.222222em}{0ex}}g(\underline{X},\underline{d})\le 0\}\right)\hfill \end{array}$$  (4.3) 
Quasi-Monte Carlo is a technique which approximates the integral (4.3) using low-discrepancy sequences $\{{\underline{x}}^{1},...,{\underline{x}}^{N}\}$ instead of randomly generated sequences, as follows:
$$\begin{array}{c}\hfill {P}_{f}\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}{\mathbf{1}}_{{\mathcal{D}}_{f}}\left({\underline{x}}_{i}\right)f\left({\underline{x}}_{i}\right).\end{array}$$ 
For more information on low-discrepancy sequences, refer to [Low Discrepancy Sequence] – see page 4.3.2.
Principles
In general, the integral of a function $f$ over $\Delta ={[0,1]}^{s}$ can be approximated by using a low-discrepancy sequence $\{{\underline{x}}_{1},\cdots ,{\underline{x}}_{N}\}$ as follows:
$$\begin{array}{c}\hfill {\int}_{\Delta}f\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}f\left({\underline{x}}_{i}\right).\end{array}$$ 
The low-discrepancy sequence is generated on $\Delta $ according to the Lebesgue measure, and may then be transformed to any measure $\mu $ thanks to the inverse CDF technique in order to approximate the integral:
$$\begin{array}{c}\hfill {\int}_{{\mathbb{R}}^{s}}f\left(\underline{u}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{u}\approx \frac{1}{N}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}f\left({\underline{x}}_{i}\right).\end{array}$$ 
As the sequence is not randomly generated, we cannot use the Central Limit Theorem in order to control the quality of the approximation. This quality is given by the Koksma-Hlawka inequality, recalled here:
$$\begin{array}{c}\hfill \left|\frac{1}{N}\sum _{i=1}^{N}f\left({\underline{x}}_{i}\right)-{\int}_{I}f\left(\underline{x}\right)d\underline{x}\right|\le \mathrm{Var}\left(f\right){D}^{N}({\underline{x}}_{1},...,{\underline{x}}_{N})\end{array}$$ 
where ${D}^{N}({\underline{x}}_{1},...,{\underline{x}}_{N})$ is the discrepancy of the sequence $\{{\underline{x}}_{1},\cdots ,{\underline{x}}_{N}\}$.
For Halton, Inverse Halton and Sobol sequences, we have :
$$\begin{array}{c}\hfill {D}^{N}=O\left(\frac{{log}^{s}N}{N}\right)\end{array}$$ 
Thus, asymptotically the error converges in $O\left(\frac{{log}^{s}N}{N}\right)$, instead of $O\left(\frac{1}{\sqrt{N}}\right)$ for traditional Monte Carlo. The convergence rate depends on the dimension $s$, so one must have $N\gg {e}^{s}$.
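As an illustration, a Halton sequence (bases 2 and 3 for $s=2$) can be generated in a few lines and used to approximate an integral over ${[0,1]}^{2}$ (a sketch for illustration only, not the OpenTURNS implementation):

```python
def halton(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_sequence(n, bases=(2, 3)):
    """First n points of the Halton low-discrepancy sequence in [0,1]^s."""
    return [[halton(i, b) for b in bases] for i in range(1, n + 1)]

# Quasi-Monte Carlo approximation of the integral of x*y over [0,1]^2, whose
# exact value is 1/4
pts = halton_sequence(4096)
estimate = sum(x * y for x, y in pts) / len(pts)
```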
Link with OpenTURNS methodology
The advantages of Quasi-Monte Carlo sampling tend to disappear as the dimension of the input variables increases.
It is recommended to use the Quasi-Monte Carlo technique with large sample sizes or with very low dimensionality.
This method is valid only if the input random variables are independent.
A fruitful link towards the global approach can be established with the files :
Mathematical description
Let us denote $\underline{Y}=h\left(\underline{X},\underline{d}\right)=\left({Y}^{1},...,{Y}^{{n}_{Y}}\right)$, where $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ is a random vector, and $\underline{d}$ a deterministic vector. We seek here to evaluate, using the probability distribution of the random vector $\underline{X}$, the $\alpha $quantile ${q}_{{Y}^{i}}\left(\alpha \right)$ of ${Y}^{i}$, where $\alpha \in (0,1)$:
$$\begin{array}{c}\hfill \mathbb{P}\left({Y}^{i}\le {q}_{{Y}^{i}}\left(\alpha \right)\right)=\alpha \end{array}$$ 
Principle
If we have a sample $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of $N$ independent samples of the random vector $\underline{X}$, ${q}_{{Y}^{i}}\left(\alpha \right)$ can be estimated as follows:
the sample $\left\{{\underline{x}}_{1},...,{\underline{x}}_{N}\right\}$ of the vector $\underline{X}$ is first transformed into a sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$ of the variable ${Y}^{i}$, where ${y}_{j}^{i}$ denotes the ${i}^{\mathrm{th}}$ component of $h({\underline{x}}_{j},\underline{d})$,
the sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$ is then placed in ascending order, which gives the sample $\left\{{y}^{\left(1\right)},...,{y}^{\left(N\right)}\right\}$,
the empirical estimate of the quantile is then calculated by the formula:
$$\begin{array}{c}\hfill {\widehat{q}}_{{Y}^{i}}\left(\alpha \right)={y}^{\left(\left[N\alpha \right]+1\right)}\end{array}$$ 
where $\left[N\alpha \right]$ denotes the integral part of $N\alpha $.
For example, if $N=100$ and $\alpha =0.95$, ${\widehat{q}}_{{Y}^{i}}(0.95)$ is equal to ${y}^{\left(96\right)}$, which is the ${5}^{\mathrm{th}}$ largest value of the sample $\left\{{y}_{1}^{i},...,{y}_{N}^{i}\right\}$. We note that this estimate is meaningful only if $1/N\le \alpha \le 1-1/N$. For example, if $N=100$, one can only consider values of $\alpha $ between 1% and 99%.
It is also possible to calculate an upper limit for the quantile with a confidence level $\beta $ chosen by the user; one can then be sure with a $\beta $ level of confidence that the real value of ${q}_{{Y}^{i}}\left(\alpha \right)$ is less than or equal to ${\widehat{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}$:
$$\begin{array}{c}\hfill \mathbb{P}\left({q}_{{Y}^{i}}\left(\alpha \right)\le {\widehat{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}\right)=\beta \end{array}$$ 
The most robust method for calculating this upper limit consists in taking ${\widehat{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}={y}^{\left(j\left(\alpha ,\beta ,N\right)\right)}$ where $j(\alpha ,\beta ,N)$ is an integer between 2 and $N$ found by solving the equation:
$$\begin{array}{c}\hfill \sum _{k=1}^{j(\alpha ,\beta ,N)-1}{C}_{N}^{k}{\alpha}^{k}{\left(1-\alpha \right)}^{N-k}=\beta \end{array}$$ 
A solution to this does not necessarily exist, i.e. there may be no integer value for $j(\alpha ,\beta ,N)$ satisfying this equality; one can in this case choose the smallest integer $j$ such that:
$$\begin{array}{c}\hfill \sum _{k=1}^{j(\alpha ,\beta ,N)-1}{C}_{N}^{k}{\alpha}^{k}{\left(1-\alpha \right)}^{N-k}>\beta \end{array}$$ 
which ensures that $\mathbb{P}\left({q}_{{Y}^{i}}\left(\alpha \right)\le {\widehat{q}}_{{Y}^{i}}{\left(\alpha \right)}_{sup}\right)>\beta $; in other words, the level of confidence of the quantile estimation is greater than that initially required.
This formula of the confidence interval can be used in two ways:
either directly to determine $j(\alpha ,\beta ,N)$ for the values $\alpha ,\beta ,N$ chosen by the user,
or in reverse to determine the number $N$ of simulations to be carried out for the values $\alpha ,\beta $ and $j(\alpha ,\beta ,N)$ chosen by the user; this is known as Wilks' formula.
For example, for $\alpha =\beta =95\%$, we take $j=59$ with $N=59$ simulations (that is, the maximum value out of 59 samples) or else $j=92$ with $N=93$ simulations (that is, the second largest result out of the 93 selections). For values of $N$ between 59 and 92, the upper limit is the maximum value of the sample. The following table presents the whole set of results for $N\le 1000$, still for $\alpha =\beta =95\%$.
$N$  Rank of the upper bound of the quantile  Rank of the empirical quantile 
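The ranks given in the table below can be recovered with the following sketch (here the binomial sum starts at $k=0$; the extra term ${(1-\alpha)}^{N}$ is negligible for the values of $\alpha $ considered):

```python
from math import comb

def empirical_quantile_rank(n, alpha):
    """Rank (1-based) of the empirical alpha-quantile: [N*alpha] + 1."""
    return int(n * alpha) + 1

def wilks_rank(n, alpha, beta):
    """Smallest rank j such that y^(j) upper-bounds the alpha-quantile with
    confidence at least beta (cumulative binomial probability >= beta)."""
    cumulative = 0.0
    for j in range(1, n + 1):
        # add the term k = j - 1 of the binomial sum
        cumulative += comb(n, j - 1) * alpha ** (j - 1) * (1.0 - alpha) ** (n - j + 1)
        if cumulative >= beta:
            return j
    return None  # no rank achieves the requested confidence: increase n

# Reproduces the first rows of the table below for alpha = beta = 0.95
```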
59  59  57 
93  92  89 
124  122  118 
153  150  146 
181  177  172 
208  203  198 
234  228  223 
260  253  248 
286  278  272 
311  302  296 
336  326  320 
361  350  343 
386  374  367 
410  397  390 
434  420  413 
458  443  436 
482  466  458 
506  489  481 
530  512  504 
554  535  527 
577  557  549 
601  580  571 
624  602  593 
647  624  615 
671  647  638 
694  669  660 
717  691  682 
740  713  704 
763  735  725 
786  757  747 
809  779  769 
832  801  791 
855  823  813 
877  844  834 
900  866  856 
923  888  877 
945  909  898 
968  931  920 
991  953  942 
Other notations
Link with OpenTURNS methodology
Input data:
$\underline{X}$: random vector modelling the unknown variables defined in step A and for which the joint probability density function has been defined in step B,
$\underline{d}$: vector of deterministic calculation parameters,
$\underline{Y}=h(\underline{X},\underline{d})$: output variable / variable of interest specified in step A
Parameters:
$\alpha $: quantile level ($\alpha $quantile),
$\beta $: confidence level for the quantile's upper bound,
$N$: number of simulations to be carried out (which can be computed by OpenTURNS using Wilks' formula)
Outputs:
${\widehat{q}}_{Z}\left(\alpha \right)$: quantile estimate,
${\widehat{q}}_{Z}{\left(\alpha \right)}_{sup}$: quantile upper bound with confidence $\beta $
References and theoretical basics
The following references provide an interesting bibliographical starting point for further study of the method described here:
Wilks, S.S. (1962). "Mathematical Statistics", New York, John Wiley.
Robert C.P., Casella G. (2004). Monte-Carlo Statistical Methods, Springer, ISBN 0-387-21239-6, 2nd ed.
Rubinstein R.Y. (1981). Simulation and the Monte-Carlo Methods, John Wiley & Sons