# 5 OpenTURNS' methods for Step C': ranking uncertainty sources / sensitivity analysis

Ranking methods can be used to analyse the respective importance of each uncertainty source with respect to a probabilistic criterion. OpenTURNS proposes ranking methods for two probabilistic criteria defined in the [global methodology guide] : the central dispersion criterion (expectation and variance), and the probability of exceeding a threshold / failure probability.

## 5.1 Probabilistic criteria

### 5.1.1 Central dispersion probabilistic criterion

Each propagation method available for this criterion (see step C) leads to one or several ranking methods.

## 5.2 Methods description

### 5.2.1 Step C'  – Importance Factors derived from Taylor Variance Decomposition Method

Mathematical description

Goal

The importance factors derived from a quadratic combination method are defined to discriminate the influence of the different inputs on the output variable in a central dispersion analysis.

Principles

The importance factors are derived from the following expression. A first-order Taylor expansion of the output variable $z$ (${n}_{Z}=1$) around $\underline{x}=\underline{\mu}_X$, followed by a computation of the variance, shows that:

$$\mathrm{Var}\left[Z\right] \approx \nabla h\left(\underline{\mu}_X\right) \cdot \mathrm{Cov}\left[\underline{X}\right] \cdot {}^{t}\nabla h\left(\underline{\mu}_X\right)$$

which can be rewritten as:

$$1 \approx \sum_{i=1}^{n_X} \frac{\partial h\left(\underline{\mu}_X\right)}{\partial x^i} \times \frac{\sum_{j=1}^{n_X} \frac{\partial h\left(\underline{\mu}_X\right)}{\partial x^j}\,\left(\mathrm{Cov}\left[\underline{X}\right]\right)_{ij}}{\mathrm{Var}\left[Z\right]} \approx \mathcal{F}_1 + \mathcal{F}_2 + \dots + \mathcal{F}_{n_X}$$

Vectorial definition

$$\underline{\mathcal{F}} = \nabla h\left(\underline{\mu}_X\right) \times \frac{\mathrm{Cov}\left[\underline{X}\right] \cdot {}^{t}\nabla h\left(\underline{\mu}_X\right)}{\mathrm{Var}\left[Z\right]}$$

Scalar definition

$$\mathcal{F}_i = \frac{\partial h\left(\underline{\mu}_X\right)}{\partial x^i} \times \frac{\sum_{j=1}^{n_X} \frac{\partial h\left(\underline{\mu}_X\right)}{\partial x^j}\,\left(\mathrm{Cov}\left[\underline{X}\right]\right)_{ij}}{\mathrm{Var}\left[Z\right]}$$

where:

• $\nabla h\left(\underline{x}\right)={\left(\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{i}}\right)}_{i=1,...,{n}_{X}}$ is the gradient of the model at the point $\underline{x}$,

• $\mathrm{Cov}\left[\underline{X}\right]$ is the covariance matrix,

• ${\underline{\mu }}_{\phantom{\rule{0.222222em}{0ex}}X}$ is the mean of the input random vector,

• $\mathrm{Var}\left[Z\right]$ is the variance of the output variable.

Interpretation of the importance factors

Note that this interpretation assumes that the components ${\left({X}^{i}\right)}_{i}$ are independent.

Each coefficient $\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{i}}$ is a linear estimate of the number of units of change in the variable $z=h\left(\underline{x}\right)$ resulting from a unit change in the variable ${x}^{i}$. This first term depends on the physical units of the variables and is meaningful only when these units are known. In the general case, the variables have different physical units, so the sensitivities $\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{i}}$ cannot be compared with one another. This is why the importance factors used within OpenTURNS are normalized; the normalization makes the results comparable regardless of the original units of the model inputs. The second term, $\frac{\sum_{j=1}^{n_X} \frac{\partial h\left(\underline{\mu}_X\right)}{\partial x^j}\,\left(\mathrm{Cov}\left[\underline{X}\right]\right)_{ij}}{\mathrm{Var}\left[Z\right]}$, is the normalization factor.

To summarize, the coefficients ${\left({ℱ}_{i}\right)}_{i=1,...,{n}_{X}}$ represent a linear estimate of the percentage change in the variable $z=h\left(\underline{x}\right)$ caused by one percent change in the variable ${x}^{i}$. The importance factors are independent of the original units of the model, and are comparable with each other.
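For concreteness, these factors can be reproduced with a minimal pure-Python sketch (not the OpenTURNS API). The model $h(x_1,x_2)=3x_1+x_2$, the mean point and the unit variances are hypothetical choices, and the inputs are assumed independent (diagonal covariance matrix):

```python
# Illustrative sketch (not the OpenTURNS API): Taylor-based importance
# factors for a hypothetical model h(x1, x2) = 3*x1 + x2, assuming
# independent inputs, i.e. a diagonal covariance matrix.

def gradient(h, mu, eps=1e-6):
    """Central finite-difference gradient of h at the point mu."""
    g = []
    for i in range(len(mu)):
        up = list(mu); up[i] += eps
        lo = list(mu); lo[i] -= eps
        g.append((h(up) - h(lo)) / (2 * eps))
    return g

def importance_factors(h, mu, variances):
    """F_i = (dh/dx_i)^2 * Var[X^i] / Var[Z] for independent inputs."""
    g = gradient(h, mu)
    contributions = [gi * gi * vi for gi, vi in zip(g, variances)]
    var_z = sum(contributions)          # first-order Taylor variance
    return [c / var_z for c in contributions]

h = lambda x: 3.0 * x[0] + x[1]
factors = importance_factors(h, mu=[0.0, 0.0], variances=[1.0, 1.0])
# For this linear model: F_1 = 9/10, F_2 = 1/10
```

For a linear model the first-order Taylor variance is exact, and the factors sum to one by construction.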

Other notations

Importance Factors derived from Perturbation Methods

Link with OpenTURNS methodology

These computations are part of step C' of the global methodology. They require steps A, B and C to have been carried out beforehand.

References and theoretical basics

The computation of these importance factors makes it possible to rank the influence of the input variables on the output variable. These factors are computed 'near' the mean value of the output; they should therefore not be used to evaluate the importance of the input variables in the tail of the output distribution (for example, for a high-level quantile).

Examples

### 5.2.2 Step C'  – Uncertainty ranking using Pearson's correlation

Mathematical description

Goal

This method deals with analysing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure linear relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.

Principle

Pearson's correlation coefficient ${\rho }_{{Y}^{j},{X}^{i}}$, defined in [Pearson's Coefficient] , measures the strength of a linear relation between two random variables ${Y}^{j}$ and ${X}^{i}$. If we have a sample made up of $N$ pairs $\left({y}_{1}^{j},{x}_{1}^{i}\right)$, $\left({y}_{2}^{j},{x}_{2}^{i}\right)$, ..., $\left({y}_{N}^{j},{x}_{N}^{i}\right)$, we can obtain an estimate ${\stackrel{^}{\rho }}_{{Y}^{j},{X}^{i}}$ of Pearson's coefficient. The hierarchical ordering of Pearson's coefficients is of interest when the relationship between ${Y}^{j}$ and the ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$ is close to linear:

 $\begin{array}{c}\hfill {Y}^{j}\simeq {a}_{0}+\sum _{i=1}^{{n}_{X}}{a}_{i}{X}^{i}\end{array}$

To obtain an indication of the role played by each ${X}^{i}$ in the dispersion of ${Y}^{j}$, the idea is to estimate Pearson's correlation coefficient ${\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}$ for each $i$. One can then order the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ taking absolute values of the correlation coefficients: the higher the value of $\left|{\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}\right|$ the greater the impact the variable ${X}^{i}$ has on the dispersion of ${Y}^{j}$.
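The procedure above can be sketched in a few lines of pure Python (not the OpenTURNS API); the quasi-linear model $y = 5x_1 + x_2 + \varepsilon$ is a hypothetical example:

```python
# Sketch: estimate Pearson's rho for each input from a Monte Carlo
# sample, then rank the inputs by |rho|. The model is hypothetical.
import random, math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
N = 2000
x1 = [random.gauss(0, 1) for _ in range(N)]
x2 = [random.gauss(0, 1) for _ in range(N)]
y = [5 * a + b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

rhos = {"X1": pearson(x1, y), "X2": pearson(x2, y)}
ranking = sorted(rhos, key=lambda k: abs(rhos[k]), reverse=True)
```

Here $X^1$ comes first in the ranking, since its coefficient dominates the dispersion of $Y$.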

Other notations

-

Link with OpenTURNS methodology

After a propagation of uncertainty (step C) using [Standard Monte Carlo] simulation, a hierarchy of sources of uncertainty can be obtained using Pearson's correlation coefficients. In fact, the $N$ simulations enable the pairs $\left({y}_{1}^{j},{x}_{1}^{i}\right)$, $\left({y}_{2}^{j},{x}_{2}^{i}\right)$,..., $\left({y}_{N}^{j},{x}_{N}^{i}\right)$ to be generated, where:
• $\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",

• ${Y}^{j}$ describes a variable of interest or output variable defined in the same step.

The results produced as output of this method are the estimated Pearson's correlation coefficients ${\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}$ that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.

References and theoretical basics
This method of uncertainty ranking is particularly useful:
• when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,

• when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are close to linear relationships (so that Pearson's correlation coefficient can be interpreted),

• when this linear relationship is close to ${Y}^{j}={a}_{0}+{\sum }_{i=1}^{{n}_{X}}{a}_{i}{X}^{i}$ (i.e. no product terms of the type ${X}^{i}{X}^{j}$), and when the components of vector $\underline{X}$ are statistically independent. If this is not the case, $\left|{\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}\right|$ reflects not only the influence of ${X}^{i}$ on ${Y}^{j}$ but equally the influence of other variables ${X}^{j}$ related to ${X}^{i}$ (e.g. an unimportant variable ${X}^{i}$ could have a strong coefficient for the correlation with ${Y}^{j}$ only because it is related – statistically or by a product term – to another variable ${X}^{j}$ which has enormous impact on ${Y}^{j}$).

Readers interested in other methods of uncertainty ranking that can be applied after Monte-Carlo simulation when the assumptions of linearity and/or independence are violated are also referred to [Uncertainty ranking using Spearman] , [Hierarchical Ordering using SRC] , [Uncertainty ranking with Pearson's Partial Correlation Coefficients] and [Uncertainty ranking using Spearman's Partial Correlation Coefficients] .

The following references provide an interesting bibliographic starting point to further study of the method described here:

• Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series

• J.C. Helton, F.J. Davis (2003). "Latin Hypercube sampling and the propagation of uncertainty analyses of complex systems". Reliability Engineering and System Safety 81, p.23-69

• J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1 : review and comparison of techniques". Reliability Engineering and System Safety 65, p.147-185

### 5.2.3 Step C'  – Uncertainty ranking using Spearman's correlation

Mathematical description

Goal

This method deals with analyzing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure monotonic relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.

Principle

Spearman's correlation coefficient ${\rho }_{{Y}^{j},{X}^{i}}^{S}$, defined in [Spearman's Coefficient] , measures the strength of a monotonic relation between two random variables ${Y}^{j}$ and ${X}^{i}$. If we have a sample made up of $N$ pairs $\left({y}_{1}^{j},{x}_{1}^{i}\right)$, $\left({y}_{2}^{j},{x}_{2}^{i}\right)$, ..., $\left({y}_{N}^{j},{x}_{N}^{i}\right)$, we can obtain an estimate ${\stackrel{^}{\rho }}_{{Y}^{j},{X}^{i}}^{S}$ of Spearman's coefficient.

Hierarchical ordering using Spearman's coefficients deals with the case where the variable ${Y}^{j}$ monotonically depends on the ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$. To obtain an indication of the role played by each ${X}^{i}$ in the dispersion of ${Y}^{j}$, the idea is to estimate the Spearman correlation coefficients ${\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}^{S}$ for each $i$. One can then order the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ taking absolute values of the Spearman coefficients: the higher the value of $\left|{\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}^{S}\right|$, the greater the impact the variable ${X}^{i}$ has on the dispersion of ${Y}^{j}$.
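Since Spearman's coefficient is Pearson's coefficient computed on ranks, the ranking procedure can be sketched as follows (pure Python, not the OpenTURNS API); the monotone but non-linear model $y = e^{3x_1} + 0.1\,x_2$ is a hypothetical example where rank correlation remains meaningful:

```python
# Sketch: Spearman's coefficient = Pearson's coefficient on ranks.
import random, math

def ranks(v):
    """Rank transform: smallest value gets rank 1 (no tie handling)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

random.seed(1)
N = 1000
x1 = [random.gauss(0, 1) for _ in range(N)]
x2 = [random.gauss(0, 1) for _ in range(N)]
y = [math.exp(3 * a) + 0.1 * b for a, b in zip(x1, x2)]

rho_s = {"X1": spearman(x1, y), "X2": spearman(x2, y)}
```

A plain Pearson coefficient would understate the influence of $X^1$ here because the relation is strongly non-linear; the rank correlation stays close to 1.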

Other notations

Link with OpenTURNS methodology

After a propagation of uncertainty (step C) using [Standard Monte Carlo] simulation, a hierarchy of sources of uncertainty can be obtained using Spearman's correlation coefficients. In fact, the $N$ simulations enable the pairs $\left({y}_{1}^{j},{x}_{1}^{i}\right)$, $\left({y}_{2}^{j},{x}_{2}^{i}\right)$,..., $\left({y}_{N}^{j},{x}_{N}^{i}\right)$ to be generated, where:
• $\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",

• ${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.

The results produced as output of this method are the estimated Spearman's correlation coefficients ${\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}^{S}$ that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.

References and theoretical basics
This method of hierarchical ordering is particularly useful:
• when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,

• when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are monotonic relationships (so that Spearman's correlation coefficient can be interpreted),

• when the components of vector $\underline{X}$ are statistically independent. If this is not the case, $\left|{\stackrel{^}{\rho }}_{{X}^{i},{Y}^{j}}^{S}\right|$ reflects not only the influence of ${X}^{i}$ on ${Y}^{j}$ but equally the influence of other variables ${X}^{j}$ related to ${X}^{i}$ (e.g. an unimportant variable ${X}^{i}$ could have a strong coefficient for the correlation with ${Y}^{j}$ only because it is related to another variable ${X}^{j}$ which has enormous impact on ${Y}^{j}$).

Readers interested in other methods of uncertainty ranking that can be applied after Monte-Carlo simulation when the assumptions of independence are violated are also referred to [Uncertainty ranking using SRC] , [Uncertainty ranking with Pearson's Partial Correlation Coefficients] and [Uncertainty ranking using Spearman's Partial Correlation Coefficients] .

The following references provide an interesting bibliographic starting point to further study of the method described here:

• Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series

• J.C. Helton, F.J. Davis (2003). "Latin Hypercube sampling and the propagation of uncertainty analyses of complex systems". Reliability Engineering and System Safety 81, p.23-69

• J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1 : review and comparison of techniques". Reliability Engineering and System Safety 65, p.147-185

### 5.2.4 Step C'  – Uncertainty Ranking using Standard Regression Coefficients

Mathematical description

Goal

This method deals with analysing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure linear relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.

Principle

The principle of the multiple linear regression model (see [Linear Regression] for more details) consists of attempting to find the function that links the variable ${Y}^{j}$ to the ${n}_{x}$ variables ${X}^{1}$,...,${X}^{{n}_{X}}$ by means of a linear model:

 $\begin{array}{c}\hfill {Y}^{j}={a}_{0}^{j}+\sum _{i=1}^{{n}_{X}}{a}_{i}^{j}{X}^{i}+{\epsilon }^{j}\end{array}$

where ${\epsilon }^{j}$ describes a random variable with zero mean and standard deviation ${\sigma }_{\epsilon }^{j}$ independent of the input variables ${X}^{i}$. If the random variables ${X}^{1},...,{X}^{{n}_{X}}$ are independent and with finite variance $\mathrm{Var}\left[{X}^{k}\right]={\left({\sigma }_{k}\right)}^{2}$, the variance of ${Y}^{j}$ can be written as follows:

 $\begin{array}{c}\hfill \mathrm{Var}\left[{Y}^{j}\right]=\sum _{i=1}^{n}{\left({a}_{i}^{j}\right)}^{2}\mathrm{Var}\left[{X}^{i}\right]+{\left({\sigma }_{\epsilon }^{j}\right)}^{2}\end{array}$

The estimators for the regression coefficients ${a}_{0}^{j},...,{a}_{{n}_{X}}^{j}$, and the standard deviation ${\sigma }^{j}$ are obtained from a sample of $\left({Y}^{j},{X}^{1},...,{X}^{{n}_{X}}\right)$. Uncertainty ranking by linear regression ranks the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ in terms of the estimated contribution of each ${X}^{k}$ to the variance of ${Y}^{j}$:

 $\begin{array}{c}\hfill {C}_{k}^{j}=\frac{{\left({a}_{k}^{j}\right)}^{2}\mathrm{Var}\left[{X}^{k}\right]}{\mathrm{Var}\left[{Y}^{j}\right]}\end{array}$

which is estimated by :

 $\begin{array}{c}\hfill {\stackrel{^}{C}}_{k}^{j}=\frac{{\left({\stackrel{^}{a}}_{k}^{j}\right)}^{2}{\stackrel{^}{\sigma }}_{k}^{2}}{\sum _{i=1}^{{n}_{X}}{\left({a}_{i}^{j}\right)}^{2}{\stackrel{^}{\sigma }}_{i}^{2}+{\left({\stackrel{^}{\sigma }}_{\epsilon }^{j}\right)}^{2}}\end{array}$

where ${\stackrel{^}{\sigma }}_{i}$ describes the empirical standard deviation of the sample of the input variables. This estimated contribution is by definition between 0 and 1. The closer it is to 1, the greater the impact the variable ${X}^{i}$ has on the dispersion of ${Y}^{j}$.
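As an illustration, the estimated contributions ${\stackrel{^}{C}}_{k}$ can be computed with a pure-Python sketch (not the OpenTURNS API). Because the inputs are assumed independent here, each regression coefficient is estimated marginally as $\mathrm{Cov}(X^k, Y)/\mathrm{Var}(X^k)$; the linear model $y = 2x_1 + x_2 + \varepsilon$ is a hypothetical example:

```python
# Sketch: variance contributions C_k for a linear model with
# independent inputs (marginal slopes are valid only in that case).
import random

def mean(v):
    return sum(v) / len(v)

def cov(xs, ys):
    """Sample covariance; cov(x, x) is the sample variance."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

random.seed(2)
N = 5000
x1 = [random.gauss(0, 1) for _ in range(N)]
x2 = [random.gauss(0, 1) for _ in range(N)]
y = [2 * a + b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

var_y = cov(y, y)
contrib = {}
for name, xs in (("X1", x1), ("X2", x2)):
    a_k = cov(xs, y) / cov(xs, xs)       # marginal regression slope
    contrib[name] = a_k ** 2 * cov(xs, xs) / var_y
# Analytically: C_1 = 4/5.25 ~ 0.76 and C_2 = 1/5.25 ~ 0.19
```

The contributions do not sum to 1 because the residual term $({\sigma}_{\epsilon})^2$ absorbs the remaining variance.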

Other notations
The contribution to the variance ${C}_{i}$ is sometimes described in the literature as the "importance factor", because of the similarity between this linear-regression approach and the quadratic combination (perturbation) method, which uses that term (see [Quadratic combination – Perturbation method] and [Importance Factors] ).

Link with OpenTURNS methodology

After a propagation of uncertainty (step C) using [Standard Monte Carlo] simulation, a hierarchy of sources of uncertainty can be obtained using Linear Regression. In fact, the $N$ simulations enable the pairs $\left({y}_{1}^{j},{x}_{1}^{i}\right)$, $\left({y}_{2}^{j},{x}_{2}^{i}\right)$,..., $\left({y}_{N}^{j},{x}_{N}^{i}\right)$ to be generated, where:
• $\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",

• ${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.

The results produced as output of this method are the estimated variance contributions ${\stackrel{^}{C}}_{i}$ that the user may use to order the variables ${X}^{i}$ hierarchically.

References and theoretical basics
This method of hierarchical ordering is particularly useful:
• when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,

• when the relationships between ${Y}^{j}$ and the components of $\underline{X}$ are close to linear relationships, and more generally when all the underlying assumptions of the multiple linear regression model are valid,

• when the components of vector $\underline{X}$ are independent, because if this is not the case the decomposition of the variance of ${Y}^{j}$ given here would be no longer exact,

• when the number $N$ of Monte-Carlo simulations is significantly higher than the number ${n}_{X}$ of input random variables (it is preferable to have $N/{n}_{X}$ of at least 10 so that the estimation of the ${n}_{X}$ correlation coefficients provides a reasonable picture of reality).

Readers interested in the assumptions made for multiple linear regression models and in the tests needed to validate these assumptions are referred to [Linear Regression] .

Other methods of uncertainty ranking that can be applied after Monte-Carlo simulation, either requiring a smaller number $N$ of simulations or able to deal with non-linear/non-independent cases, are described in [Uncertainty Ranking using Pearson] , [Uncertainty Ranking using Spearman] , [Uncertainty Ranking using Pearson's Partial Correlation Coefficients] and [Uncertainty Ranking using Spearman's Partial Correlation Coefficients] .

The following references provide an interesting bibliographic starting point to further study of the method described here:

• Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series

• J.C. Helton, F.J. Davis (2003). "Latin Hypercube sampling and the propagation of uncertainty analyses of complex systems". Reliability Engineering and System Safety 81, p.23-69

• J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1 : review and comparison of techniques". Reliability Engineering and System Safety 65, p.147-185

### 5.2.5 Step C'  – Uncertainty Ranking using Pearson's Partial Correlation Coefficients

Mathematical description

Goal

This method deals with analyzing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure linear relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.

Principle

The basic method of hierarchical ordering using Pearson's coefficients (see [Uncertainty Ranking using Pearson] ) deals with the case where the variable ${Y}^{j}$ depends linearly on the ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$, but it can be misleading when statistical dependencies or interactions between the variables ${X}^{i}$ (e.g. a crossed term ${X}^{i}×{X}^{j}$) exist. In such a situation, partial correlation coefficients can be more useful for ordering the uncertainty hierarchically: the partial correlation coefficient ${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$ between the variables ${Y}^{j}$ and ${X}^{i}$ attempts to measure the residual influence of ${X}^{i}$ on ${Y}^{j}$ once the influences of all the other variables ${X}^{j}$ have been eliminated.

The estimation for each partial correlation coefficient ${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$ uses a set made up of $N$ values $\left\{\left({y}_{1}^{j},{x}_{1}^{1},...,{x}_{1}^{{n}_{X}}\right),...,\left({y}_{N}^{j},{x}_{N}^{1},...,{x}_{N}^{{n}_{X}}\right)\right\}$ of the vector $\left({Y}^{j},{X}^{1},...,{X}^{{n}_{X}}\right)$. This requires the following three steps to be carried out:

1. Determine the effect of other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${Y}^{j}$ by linear regression (see [Linear Regression] ); when the values of variable $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${Y}^{j}$ is then available in the form of the equation:

 $\begin{array}{c}\hfill \stackrel{^}{{Y}^{j}}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\stackrel{^}{a}}_{k}{X}^{k}\end{array}$
2. Determine the effect of the other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${X}^{i}$ by linear regression; when the values of the variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${X}^{i}$ is then available in the form of the equation:

 $\begin{array}{c}\hfill {\stackrel{^}{X}}^{i}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\stackrel{^}{b}}_{k}{X}^{k}\end{array}$
3. ${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$ is then equal to the Pearson's correlation coefficient ${\stackrel{^}{\rho }}_{{Y}^{j}-\stackrel{^}{{Y}^{j}},{X}^{i}-{\stackrel{^}{X}}^{i}}$ estimated for the variables ${Y}^{j}-\stackrel{^}{{Y}^{j}}$ and ${X}^{i}-{\stackrel{^}{X}}^{i}$ on the $N$-sample of simulations (see [Pearson's Coefficient] ).

One can then class the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ according to the absolute value of the partial correlation coefficients: the higher the value of $\left|{\mathrm{PCC}}_{{X}^{i},{Y}^{j}}\right|$, the greater the impact the variable ${X}^{i}$ has on ${Y}^{j}$.
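The three-step procedure can be sketched in pure Python (not the OpenTURNS API) for ${n}_{X}=2$ inputs, so that each "regression on the other variables" reduces to a simple linear fit; the correlated-inputs model below is a hypothetical example:

```python
# Sketch of PCC estimation for n_X = 2: regress Y on the other
# variable, regress X^i on it too, then correlate the residuals.
import random, math

def fit(xs, ys):
    """Least-squares slope and intercept of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return b, my - b * mx

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def pcc(xi, other, y):
    # Step 1: regress Y on the other variable; step 2: regress X^i on
    # it; step 3: Pearson correlation of the two residual series.
    b1, c1 = fit(other, y)
    b2, c2 = fit(other, xi)
    res_y = [yv - (b1 * o + c1) for yv, o in zip(y, other)]
    res_x = [xv - (b2 * o + c2) for xv, o in zip(xi, other)]
    return pearson(res_y, res_x)

random.seed(3)
N = 2000
x2 = [random.gauss(0, 1) for _ in range(N)]
x1 = [0.8 * b + random.gauss(0, 0.6) for b in x2]   # X1 correlated with X2
y = [3 * a + b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

pcc_x1 = pcc(x1, x2, y)
```

Despite the strong correlation between the inputs, the PCC isolates the residual influence of $X^1$ on $Y$, which is close to 1 in this example.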

Other notations

-

Link with OpenTURNS methodology

After a propagation of uncertainty (step C) using [Standard Monte Carlo] simulation, a hierarchy of sources of uncertainty can be obtained using Pearson's partial correlation coefficients. In fact, the $N$ simulations enable the pairs $\left({y}_{1}^{j},{x}_{1}^{i}\right)$, $\left({y}_{2}^{j},{x}_{2}^{i}\right)$,..., $\left({y}_{N}^{j},{x}_{N}^{i}\right)$ to be generated, where:
• $\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",

• ${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.

The results produced as output of this method are Pearson's partial correlation coefficients ${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$, that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.

References and theoretical basics
This method of hierarchical ordering is particularly useful:
• when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,

• when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are close to linear relationships (so that Pearson's correlation coefficient can be interpreted),

• when the number $N$ of Monte-Carlo simulations is significantly higher than the number ${n}_{X}$ of input random variables (it is preferable to have $N/{n}_{X}$ of at least 10 so that the estimation of the ${n}_{X}$ partial correlation coefficients provides a reasonable picture of reality).

Readers interested in the assumptions made for multiple linear regression models and in the tests needed to validate these assumptions are referred to [Linear Regression] .

Other methods of uncertainty ranking that can be applied after Monte-Carlo simulation, either requiring a smaller number $N$ of simulations or able to treat non-linear cases, are described in [Uncertainty Ranking using Pearson] , [Uncertainty ranking using Spearman] , and [Uncertainty Ranking using Spearman's Partial Correlation Coefficients] .

The following references provide an interesting bibliographic starting point to further study of the method described here:

• Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series

• J.C. Helton, F.J. Davis (2003). "Latin Hypercube sampling and the propagation of uncertainty analyses of complex systems". Reliability Engineering and System Safety 81, p.23-69

• J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1 : review and comparison of techniques". Reliability Engineering and System Safety 65, p.147-185

### 5.2.6 Step C'  – Uncertainty Ranking using Partial Rank Correlation Coefficients

Mathematical description

Goal

This method deals with analyzing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on the random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure monotonic relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.

Principle

The basic method of hierarchical ordering using Spearman's coefficients (see [Uncertainty Ranking using Spearman] ) deals with the case where the variable ${Y}^{j}$ depends monotonically on the ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$, but it can be misleading when statistical dependencies between the variables ${X}^{i}$ exist. In such a situation, partial rank correlation coefficients can be more useful for ordering the uncertainty hierarchically: the partial rank correlation coefficient ${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$ between the variables ${Y}^{j}$ and ${X}^{i}$ attempts to measure the residual influence of ${X}^{i}$ on ${Y}^{j}$ once the influences of all the other variables ${X}^{j}$ have been eliminated.

The estimation for each partial rank correlation coefficient ${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$ uses a set made up of $N$ values $\left\{\left({y}_{1}^{j},{x}_{1}^{1},...,{x}_{1}^{{n}_{X}}\right),...,\left({y}_{N}^{j},{x}_{N}^{1},...,{x}_{N}^{{n}_{X}}\right)\right\}$ of the vector $\left({Y}^{j},{X}^{1},...,{X}^{{n}_{X}}\right)$. This requires the following three steps to be carried out:

1. Determine the effect of other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${Y}^{j}$ by linear regression (see [Linear Regression] ); when the values of variable $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${Y}^{j}$ is then available in the form of the equation:

 $\begin{array}{c}\hfill \stackrel{^}{{Y}^{j}}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\stackrel{^}{a}}_{k}{X}^{k}\end{array}$
2. Determine the effect of the other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${X}^{i}$ by linear regression; when the values of the variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${X}^{i}$ is then available in the form of the equation:

 $\begin{array}{c}\hfill {\stackrel{^}{X}}^{i}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\stackrel{^}{b}}_{k}{X}^{k}\end{array}$
3. ${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$ is then equal to the Spearman's correlation coefficient ${\stackrel{^}{\rho }}_{{Y}^{j}-\stackrel{^}{{Y}^{j}},{X}^{i}-{\stackrel{^}{X}}^{i}}^{S}$ estimated for the variables ${Y}^{j}-\stackrel{^}{{Y}^{j}}$ and ${X}^{i}-{\stackrel{^}{X}}^{i}$ on the $N$-sample of simulations (see [Spearman's Coefficient] ).

One can then class the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ according to the absolute value of the partial rank correlation coefficients: the higher the value of $\left|{\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}\right|$, the greater the impact the variable ${X}^{i}$ has on ${Y}^{j}$.
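In short, the PRCC is the PCC procedure applied to rank-transformed data. A pure-Python sketch (not the OpenTURNS API) for ${n}_{X}=2$, with a hypothetical monotone model and correlated inputs:

```python
# Sketch: PRCC = partial correlation computed on ranks (n_X = 2).
import random, math

def ranks(v):
    """Rank transform: smallest value gets rank 1 (no tie handling)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def residuals(target, regressor):
    """Residuals of a simple least-squares fit of target on regressor."""
    n = len(target)
    mx, my = sum(regressor) / n, sum(target) / n
    b = sum((x - mx) * (y - my) for x, y in zip(regressor, target)) \
        / sum((x - mx) ** 2 for x in regressor)
    return [y - (my + b * (x - mx)) for x, y in zip(regressor, target)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def prcc(xi, other, y):
    ry, rx, ro = ranks(y), ranks(xi), ranks(other)
    return pearson(residuals(ry, ro), residuals(rx, ro))

random.seed(4)
N = 1000
x2 = [random.gauss(0, 1) for _ in range(N)]
x1 = [0.5 * b + random.gauss(0, 0.8) for b in x2]   # correlated inputs
y = [math.exp(a) + 0.5 * b for a, b in zip(x1, x2)]

prcc_x1 = prcc(x1, x2, y)
```

Working on ranks makes the result robust to the non-linear (but monotone) dependence of $Y$ on $X^1$.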

Other notations
-

Link with OpenTURNS methodology

After a propagation of uncertainty (step C) using [Standard Monte Carlo] simulation, a hierarchy of sources of uncertainty can be obtained using partial rank correlation coefficients. In fact, the $N$ simulations enable the pairs $\left({y}_{1}^{j},{x}_{1}^{i}\right)$, $\left({y}_{2}^{j},{x}_{2}^{i}\right)$,..., $\left({y}_{N}^{j},{x}_{N}^{i}\right)$ to be generated, where:
• $\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",

• ${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.

The results produced as output of this method are the partial rank correlation coefficients ${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$ that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.

References and theoretical basics
This method of hierarchical ordering is particularly useful:
• when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,

• when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are monotonic relationships (so that Spearman's correlation coefficient can be interpreted),

• when the number $N$ of Monte-Carlo simulations is significantly higher than the number ${n}_{X}$ of input random variables (it is preferable to have $N/{n}_{X}$ of at least 10 so that the estimation of the ${n}_{X}$ partial rank correlation coefficients provides a reasonable picture of reality).

Readers interested in the assumptions made for multiple linear regression models and in the tests needed to validate these assumptions are referred to [Linear Regression] .

Other methods of uncertainty ranking that can be applied after Monte-Carlo simulation and require a smaller number $N$ of simulations are described in [Uncertainty Ranking using Pearson] and [Uncertainty ranking using Spearman] .

The following references provide an interesting bibliographic starting point to further study of the method described here:

• Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series

• J.C. Helton, F.J. Davis (2003). "Latin Hypercube sampling and the propagation of uncertainty analyses of complex systems". Reliability Engineering and System Safety 81, p.23-69

• J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1 : review and comparison of techniques". Reliability Engineering and System Safety 65, p.147-185

### 5.2.7 Step C'  – Sensitivity analysis using Sobol indices

Mathematical description

Goal

This method deals with analysing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{k}$ which is being studied for uncertainty. Here we attempt to evaluate the part of variance of ${Y}^{k}$ due to the different components ${X}^{i}$.

Principle

The estimators of the mean ${m}_{{Y}^{k}}$ and the standard deviation $\sigma$ of ${Y}^{k}$ can be obtained from a first sample, as Sobol indices estimation requires two samples of the input variables $\left({X}^{1},...,{X}^{{n}_{X}}\right)$, that is, two sets of $N$ vectors of dimension ${n}_{X}$: $\left({x}_{11}^{\left(1\right)},...,{x}_{1{n}_{X}}^{\left(1\right)}\right)$, ..., $\left({x}_{N1}^{\left(1\right)},...,{x}_{N{n}_{X}}^{\left(1\right)}\right)$ and $\left({x}_{11}^{\left(2\right)},...,{x}_{1{n}_{X}}^{\left(2\right)}\right)$, ..., $\left({x}_{N1}^{\left(2\right)},...,{x}_{N{n}_{X}}^{\left(2\right)}\right)$.

The estimation of the first order sensitivity indices consists in estimating the quantity

 ${V}_{i}=\mathrm{Var}\left[𝔼\left[{Y}^{k}|{X}_{i}\right]\right]=𝔼\left[𝔼{\left[{Y}^{k}|{X}_{i}\right]}^{2}\right]-𝔼{\left[𝔼\left[{Y}^{k}|{X}_{i}\right]\right]}^{2}={U}_{i}-𝔼{\left[{Y}^{k}\right]}^{2}$

Sobol proposes to estimate the quantity ${U}_{i}=𝔼\left[𝔼{\left[{Y}^{k}|{X}_{i}\right]}^{2}\right]$ by swapping all variables except ${X}_{i}$ between the two samples in the two calls of the function:

 ${\stackrel{^}{U}}_{i}=\frac{1}{N}\sum _{k=1}^{N}{Y}^{k}\left({x}_{k1}^{\left(1\right)},\cdots ,{x}_{k\left(i-1\right)}^{\left(1\right)},{x}_{ki}^{\left(1\right)},{x}_{k\left(i+1\right)}^{\left(1\right)},\cdots ,{x}_{k{n}_{X}}^{\left(1\right)}\right)×{Y}^{k}\left({x}_{k1}^{\left(2\right)},\cdots ,{x}_{k\left(i-1\right)}^{\left(2\right)},{x}_{ki}^{\left(1\right)},{x}_{k\left(i+1\right)}^{\left(2\right)},\cdots ,{x}_{k{n}_{X}}^{\left(2\right)}\right)$

Then the ${n}_{X}$ first order indices are estimated by

 ${\stackrel{^}{S}}_{i}=\frac{{\stackrel{^}{V}}_{i}}{{\stackrel{^}{\sigma }}^{2}}=\frac{{\stackrel{^}{U}}_{i}-{m}_{{Y}^{k}}^{2}}{{\stackrel{^}{\sigma }}^{2}}$

For the second order, the two variables ${X}_{i}$ and ${X}_{j}$ are not swapped to estimate ${U}_{ij}$, and so on for higher orders, assuming that order $<{n}_{X}$. Then the $\left(\genfrac{}{}{0pt}{}{{n}_{X}}{2}\right)$ second order indices are estimated by

 ${\stackrel{^}{S}}_{ij}=\frac{{\stackrel{^}{V}}_{ij}}{{\stackrel{^}{\sigma }}^{2}}=\frac{{\stackrel{^}{U}}_{ij}-{m}_{{Y}^{k}}^{2}-{\stackrel{^}{V}}_{i}-{\stackrel{^}{V}}_{j}}{{\stackrel{^}{\sigma }}^{2}}$

For the ${n}_{X}$ total order indices ${T}_{i}$, we only swap the variable ${X}_{i}$ between the two samples.

Other notations

Link with OpenTURNS methodology

The results produced as output of this method are the estimated relative (indices values belong to $\left[0;1\right]$ ) variance contributions of subsets of variables ${\stackrel{^}{S}}_{i},{\stackrel{^}{S}}_{ij},{\stackrel{^}{T}}_{i}$ that the user may use to order the variables ${X}^{i}$ hierarchically.

This method of hierarchical ordering is particularly useful :

• when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{k}$ and not with its extreme values.

• when we have no particular hypothesis on the model other than the independence of the input variables ${X}_{i}$.

• when the size $N$ of both samples is high enough to provide a 'reasonable' picture of reality (the central limit theorem ensures a convergence of order ${N}^{-\frac{1}{2}}$ for this method).

References and theoretical basics

The following references provide an interesting bibliographic starting point to further study of the method described here:
• Saltelli, A. (2002). "Making best use of model evaluations to compute sensitivity indices", Computer Physics Communications, 145, 280-297

Examples
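
The two-sample (pick-freeze) estimation scheme described above can be sketched in pure Python on a hypothetical additive model $f(x) = x^1 + 2x^2$ with independent uniform inputs, for which the exact first order indices are $S_1 = 0.2$ and $S_2 = 0.8$. This is a minimal sketch of the principle, not the OpenTURNS implementation.

```python
import random

def sobol_first_order(f, n_x, N, seed=0):
    """Pick-freeze estimation of the first order Sobol indices from two
    independent samples, as in the swapping scheme described above."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_x)] for _ in range(N)]
    B = [[rng.random() for _ in range(n_x)] for _ in range(N)]
    yA = [f(x) for x in A]
    mean = sum(yA) / N
    var = sum((y - mean) ** 2 for y in yA) / N
    indices = []
    for i in range(n_x):
        # Second call: sample B with its i-th column taken from sample A
        u = 0.0
        for k in range(N):
            mixed = list(B[k])
            mixed[i] = A[k][i]
            u += yA[k] * f(mixed)
        u /= N
        indices.append((u - mean ** 2) / var)  # (U_i - m^2) / sigma^2
    return indices

# Hypothetical additive model: variance shares 1/12 and 4/12 of 5/12
def f(x):
    return x[0] + 2.0 * x[1]

S = sobol_first_order(f, 2, 50000)
print(S)  # approximately [0.2, 0.8]
```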

### 5.2.8 Step C'  – Sensitivity analysis for models with correlated inputs

Mathematical description

Goal

The ANCOVA (ANalysis of COVAriance) method is a variance-based method generalizing the ANOVA (ANalysis Of VAriance) decomposition to models with correlated input parameters.

Principle

Let us consider a model $Y=h\left(\underline{X}\right)$ without making any hypothesis on the dependence structure of $\underline{X}=\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$, a ${n}_{X}$-dimensional random vector. The covariance decomposition requires a functional decomposition of the model. Thus the model response $Y$ is expanded as a sum of functions of increasing dimension as follows:

 $h\left(\underline{X}\right)={h}_{0}+\sum _{u\subseteq \left\{1,\cdots ,{n}_{X}\right\}}{h}_{u}\left({X}_{u}\right)$ (138)

${h}_{0}$ is the mean of $Y$. Each function ${h}_{u}$ represents, for any non empty set $u\subseteq \left\{1,\cdots ,{n}_{X}\right\}$, the combined contribution of the variables ${X}_{u}$ to $Y$.

Using the properties of the covariance, the variance of $Y$ can be decomposed into a variance part and a covariance part as follows:

 $\begin{array}{ccc}\hfill Var\left[Y\right]& =& Cov\left[{h}_{0}+\sum _{u\subseteq \left\{1,\cdots ,{n}_{X}\right\}}{h}_{u}\left({X}_{u}\right),{h}_{0}+\sum _{u\subseteq \left\{1,\cdots ,n\right\}}{h}_{u}\left({X}_{u}\right)\right]\hfill \\ & =& \sum _{u\subseteq \left\{1,\cdots ,{n}_{X}\right\}}Cov\left[{h}_{u}\left({X}_{u}\right),\sum _{u\subseteq \left\{1,\cdots ,{n}_{X}\right\}}{h}_{u}\left({X}_{u}\right)\right]\hfill \\ & =& \sum _{u\subseteq \left\{1,\cdots ,{n}_{X}\right\}}\left[Var\left[{h}_{u}\left({X}_{u}\right)\right]+Cov\left[{h}_{u}\left({X}_{u}\right),\sum _{v\subseteq \left\{1,\cdots ,{n}_{X}\right\},v\cap u=⌀}{h}_{v}\left({X}_{v}\right)\right]\right]\hfill \end{array}$

The total part of variance of $Y$ due to ${X}_{u}$ reads:

 ${S}_{u}=\frac{Cov\left[Y,{h}_{u}\left({X}_{u}\right)\right]}{Var\left[Y\right]}$

The variance formula described above makes it possible to define each sensitivity measure ${S}_{u}$ as the sum of a *physical* (or *uncorrelated*) part and a *correlated* part, such that:

 ${S}_{u}={S}_{u}^{U}+{S}_{u}^{C}$

where ${S}_{u}^{U}$ is the uncorrelated part of variance of $Y$ due to ${X}_{u}$:

 ${S}_{u}^{U}=\frac{Var\left[{h}_{u}\left({X}_{u}\right)\right]}{Var\left[Y\right]}$

and ${S}_{u}^{C}$ is the contribution of the correlation of ${X}_{u}$ with the other parameters:

 ${S}_{u}^{C}=\frac{Cov\left[{h}_{u}\left({X}_{u}\right),\sum _{v\subseteq \left\{1,\cdots ,{n}_{X}\right\},v\cap u=⌀}{h}_{v}\left({X}_{v}\right)\right]}{Var\left[Y\right]}$

As the computational cost of the indices with the numerical model $h$ can be very high, it is suggested to approximate the model response with a polynomial chaos expansion. However, for the sake of computational simplicity, the latter is constructed considering *independent* components $\left\{{X}^{1},\cdots ,{X}^{{n}_{X}}\right\}$. Thus the chaos basis is not orthogonal with respect to the correlated inputs under consideration, and it is only used as a metamodel to generate approximated evaluations of the model response and its summands in Eq. (138).

 $Y\simeq \stackrel{^}{h}=\sum _{j=0}^{P-1}{\alpha }_{j}{\Psi }_{j}\left(x\right)$

Then one may identify the component functions. For instance, for $u=\left\{1\right\}$:

 ${h}_{1}\left({X}_{1}\right)=\sum _{\alpha |{\alpha }_{1}\ne 0,{\alpha }_{i\ne 1}=0}{y}_{\alpha }{\Psi }_{\alpha }\left(\underline{X}\right)$

where $\alpha$ is a set of degrees associated to the ${n}_{X}$ univariate polynomials ${\psi }_{i}^{{\alpha }_{i}}\left({X}_{i}\right)$.

Then the model response $Y$ is evaluated using a sample $X=\left\{{x}_{k},k=1,\cdots ,N\right\}$ of the correlated joint distribution. Finally, the several indices are computed using the model response and its component functions that have been identified on the polynomial chaos.

Other notations

Link with OpenTURNS methodology

The ANCOVA method is a generalization of the well-established Sobol sensitivity indices to the case where the input parameters of the model are correlated. The Sobol indices measure the contribution of the input variables ${X}_{i}$ to the variance of the output $Y$. The ANCOVA decomposition allows one to distinguish which part of this contribution is due to the variable itself and which part is due to its correlation with the other input parameters. Thus, if a variable has a high contribution, this method makes it possible to determine whether it is due to its physical role in the model $h$ or to its correlation with other highly contributing variables.
References and theoretical basics
The following reference provides more details on the ANCOVA method:
• Caniou, Y. (2012). "Global sensitivity analysis for nested and multiscale modelling." PhD thesis. Blaise Pascal University-Clermont II, France.

Examples
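
As a minimal illustration (not the polynomial chaos route used by OpenTURNS), the ANCOVA indices can be estimated by plain Monte Carlo on a hypothetical additive model $Y = X^1 + X^2$ with standard normal inputs correlated at $\rho = 0.5$: the component functions are then simply ${h}_{1}\left({X}_{1}\right)={X}_{1}$ and ${h}_{2}\left({X}_{2}\right)={X}_{2}$, and the exact values are ${S}_{1}=1/2$, ${S}_{1}^{U}=1/3$ and ${S}_{1}^{C}=1/6$.

```python
import math
import random

def cov(a, b):
    """Sample covariance (biased, 1/N normalization)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

rng = random.Random(1)
rho, N = 0.5, 200000
# Correlated standard normal pair built from independent draws
z = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(N)]
x1 = [z1 for z1, _ in z]
x2 = [rho * z1 + math.sqrt(1 - rho ** 2) * z2 for z1, z2 in z]

# Additive model: the component functions are h1(X1) = X1, h2(X2) = X2
y = [a + b for a, b in zip(x1, x2)]
var_y = cov(y, y)
S1 = cov(y, x1) / var_y    # total part of variance due to X1
S1U = cov(x1, x1) / var_y  # uncorrelated (physical) part
S1C = cov(x1, x2) / var_y  # correlated part: Cov[h1, h2] here
print(S1, S1U, S1C)  # expected: 0.5, 1/3, 1/6 (up to Monte Carlo noise)
```

Note that ${S}_{1}={S}_{1}^{U}+{S}_{1}^{C}$ holds exactly in the sample, by bilinearity of the sample covariance.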

### 5.2.9 Step C'  – Sensitivity analysis by Fourier decomposition

Mathematical description

Goal

FAST is a sensitivity analysis method which is based upon the ANOVA decomposition of the variance of the model response $y=f\left(\underline{X}\right)$, the latter being represented by its Fourier expansion. $\underline{X}=\left\{{X}^{1},\cdots ,{X}^{{n}_{X}}\right\}$ is an input random vector of ${n}_{X}$ independent components.

Principle

OpenTURNS implements the extended FAST method consisting in computing alternately the first order and the total-effect indices of each input. This approach relies upon a Fourier decomposition of the model response. Its key idea is to recast this representation as a function of a $\mathrm{𝑠𝑐𝑎𝑙𝑎𝑟}$ parameter $s$, by defining parametric curves $s↦{x}_{i}\left(s\right)$, $i=1,\cdots ,{n}_{X}$ exploring the support of the input random vector $\underline{X}$.

For each input, the same procedure is realized in three steps:

• Sampling:

Deterministic space-filling paths with random starting points are defined, i.e. each input ${X}^{i}$ is transformed as follows:

 ${x}_{j}^{i}=\frac{1}{2}+\frac{1}{\pi }arcsin\left(sin\left({\omega }_{i}{s}_{j}+{\phi }_{i}\right)\right),\phantom{\rule{1.em}{0ex}}i=1,\cdots ,{n}_{X},\phantom{\rule{0.166667em}{0ex}}j=1,\cdots ,N$

where ${n}_{X}$ is the number of input variables. $N$ is the length of the discretization of the $s$-space, with $s$ varying in $\left(-\pi ,\pi \right)$ by steps of $2\pi /N$. ${\phi }_{i}$ is a random phase-shift chosen uniformly in $\left[0,2\pi \right]$ which allows the curves to start anywhere within the unit hypercube ${K}^{{n}_{X}}=\left(\underline{X}|0\le {x}_{i}\le 1;i=1,\cdots ,{n}_{X}\right)$. The selection of the set $\left\{{\phi }_{1},\cdots ,{\phi }_{{n}_{X}}\right\}$ induces a part of randomness in the procedure, so the procedure can be repeated ${N}_{r}$ times, the results then being averaged arithmetically over the ${N}_{r}$ estimates. This operation is called *resampling*.

$\left\{{\omega }_{i}\right\},\forall i=1,\cdots ,{n}_{X}$ is a set of integer frequencies assigned to each input ${X}^{i}$. The frequency associated with the input of interest is set to the maximum admissible frequency satisfying the Nyquist criterion (which avoids aliasing effects):

 ${\omega }_{i}=\frac{N-1}{2M}$

with $M$ the interference factor usually equal to 4 or higher. It corresponds to the truncation level of the Fourier series, i.e. the number of harmonics that are retained in the decomposition realized in the third step of the procedure.

In the paper by Saltelli et al. (1999), for high sample size, it is suggested that $16\le {\omega }_{i}/{N}_{r}\le 64$.

And the maximum frequency of the complementary set of frequencies is:

 $max\left({\omega }_{-i}\right)=\frac{{\omega }_{i}}{2M}=\frac{N-1}{4{M}^{2}}$

where the index '$-i$' means 'all but $i$'.

The other frequencies are distributed uniformly between 1 and $max\left({\omega }_{-i}\right)$. The set of frequencies is the same whatever the number of resamplings is.

Let us consider an example with eight input factors, $N=513$ and $M=4$, i.e. ${\omega }_{i}=\frac{N-1}{2M}=64$ and $max\left({\omega }_{-i}\right)=\frac{N-1}{4{M}^{2}}=8$, with $i$ the index of the input of interest.

When computing the sensitivity indices for the first input, the considered set of frequencies is : $\left\{64,1,2,3,4,5,6,8\right\}$.

When computing the sensitivity indices for the second input, the considered set of frequencies is : $\left\{1,64,2,3,4,5,6,8\right\}$.

etc.

The transformation defined above provides a uniformly distributed sample for the ${x}_{i},\forall i=1,\cdots ,{n}_{X}$, oscillating between 0 and 1. In order to take into account the actual distributions of the inputs, an isoprobabilistic transformation is applied to each ${x}_{i}$ before the next step of the procedure.

• Simulations:

The output is computed as: $y=f\left(s\right)=f\left({x}_{1}\left(s\right),\cdots ,{x}_{{n}_{X}}\left(s\right)\right)$

Then $f\left(s\right)$ is expanded onto a Fourier series:

 $f\left(s\right)=\sum _{k\in ℤ}{A}_{k}cos\left(ks\right)+{B}_{k}sin\left(ks\right)$

where ${A}_{k}$ and ${B}_{k}$ are Fourier coefficients defined as follows:

 $\begin{array}{ccc}\hfill {A}_{k}& =& \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left(s\right)cos\left(ks\right)\phantom{\rule{0.166667em}{0ex}}ds\hfill \\ \hfill {B}_{k}& =& \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left(s\right)sin\left(ks\right)\phantom{\rule{0.166667em}{0ex}}ds\hfill \end{array}$

These coefficients are estimated thanks to the following discrete formulations:

 $\begin{array}{ccc}\hfill {\stackrel{^}{A}}_{k}& =& \frac{1}{N}\sum _{j=1}^{N}f\left({x}_{j}^{1},\cdots ,{x}_{j}^{{n}_{X}}\right)cos\left(\frac{2k\pi \left(j-1\right)}{N}\right)\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}-\frac{N}{2}\le k\le \frac{N}{2}\hfill \\ \hfill {\stackrel{^}{B}}_{k}& =& \frac{1}{N}\sum _{j=1}^{N}f\left({x}_{j}^{1},\cdots ,{x}_{j}^{{n}_{X}}\right)sin\left(\frac{2k\pi \left(j-1\right)}{N}\right)\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}-\frac{N}{2}\le k\le \frac{N}{2}\hfill \end{array}$
• Estimations by frequency analysis:

The first order indices are estimated as follows:

 ${\stackrel{^}{S}}_{i}=\frac{{\stackrel{^}{D}}_{i}}{\stackrel{^}{D}}=\frac{{\sum }_{p=1}^{M}\left({\stackrel{^}{A}}_{p{\omega }_{i}}^{2}+{\stackrel{^}{B}}_{p{\omega }_{i}}^{2}\right)}{{\sum }_{n=1}^{\left(N-1\right)/2}\left({\stackrel{^}{A}}_{n}^{2}+{\stackrel{^}{B}}_{n}^{2}\right)}$

where $\stackrel{^}{D}$ is the total variance and ${\stackrel{^}{D}}_{i}$ the portion of $D$ arising from the uncertainty of the ${i}^{th}$ input. $N$ is the size of the sample used to compute the Fourier series and $M$ is the interference factor. Saltelli et al. (1999) recommended setting $M$ to a value in the range $\left[4,6\right]$.

The total order indices are estimated as follows:

 ${\stackrel{^}{T}}_{i}=1-\frac{{\stackrel{^}{D}}_{-i}}{\stackrel{^}{D}}=1-\frac{{\sum }_{k=1}^{{\omega }_{i}/2}\left({\stackrel{^}{A}}_{k}^{2}+{\stackrel{^}{B}}_{k}^{2}\right)}{{\sum }_{n=1}^{\left(N-1\right)/2}\left({\stackrel{^}{A}}_{n}^{2}+{\stackrel{^}{B}}_{n}^{2}\right)}$

where ${\stackrel{^}{D}}_{-i}$ is the part of the variance due to all the inputs except the ${i}^{th}$ input.

Other notations

Link with OpenTURNS methodology

The results produced as output of this method are the estimated relative (indices values belong to $\left[0;1\right]$ ) variance contributions of subsets of variables ${\stackrel{^}{S}}_{i},{\stackrel{^}{T}}_{i}$ that the user may use to order the variables ${X}^{i}$ hierarchically.

This method of hierarchical ordering is particularly useful :

• when the problem focuses on the central dispersion of the variable of interest ${Y}^{j}$ and not on its extreme values.

• when no particular hypothesis is made on the model other than the independence of the input variables ${X}_{i}$.

The extended FAST method is a convenient alternative to the method of Sobol', as its computational cost tends to be lower: FAST estimates both the first-order indices and the total ones with the same set of model evaluations.

References and theoretical basics
The following reference provides more details on the FAST method:
• Saltelli, A., Tarantola, S. & Chan, K. (1999). "A quantitative, model independent method for global sensitivity analysis of model output." Technometrics, 41(1), 39-56.

Examples
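
The three steps above can be sketched in pure Python for a hypothetical linear model $f(x) = x^1 + 3x^2$ with uniform inputs, whose exact first order indices are $0.1$ and $0.9$. The sketch uses a single resampling with all phase-shifts set to zero, a simplification of the procedure described above.

```python
import math

def fast_first_order(f, n_x, N=513, M=4):
    """Sketch of the FAST first order indices: one resampling, zero
    phase-shifts, uniform inputs on [0, 1]."""
    omega_big = (N - 1) // (2 * M)  # frequency of the input of interest
    s = [2.0 * math.pi * j / N for j in range(N)]
    indices = []
    for i in range(n_x):
        # The input of interest gets omega_big; the others, low frequencies
        freqs, low = [], 1
        for q in range(n_x):
            if q == i:
                freqs.append(omega_big)
            else:
                freqs.append(low)
                low += 1
        # Space-filling curve (triangle-wave transform) and model output
        y = []
        for sj in s:
            x = [0.5 + math.asin(math.sin(w * sj)) / math.pi for w in freqs]
            y.append(f(x))
        # Squared amplitude of the k-th discrete Fourier coefficient
        def power(k):
            a = sum(yj * math.cos(k * sj) for yj, sj in zip(y, s)) / N
            b = sum(yj * math.sin(k * sj) for yj, sj in zip(y, s)) / N
            return a * a + b * b
        D = sum(power(n) for n in range(1, (N - 1) // 2 + 1))
        D_i = sum(power(p * omega_big) for p in range(1, M + 1))
        indices.append(D_i / D)
    return indices

# Hypothetical linear model with variance shares 1/10 and 9/10
S = fast_first_order(lambda x: x[0] + 3.0 * x[1], 2)
print(S)  # close to [0.1, 0.9]
```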

### 5.2.10 Step C'  – Importance Factors from FORM-SORM methods

Mathematical description

Goal

Importance factors are evaluated in the following context: $\underline{X}$ denotes a random input vector representing the sources of uncertainties, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density function, $\underline{d}$ a deterministic vector representing the fixed variables, $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)=0$ its boundary (also called the limit state surface).

The probability content of the event ${𝒟}_{f}$ is ${P}_{f}$:

 $\begin{array}{c}\hfill {P}_{f}={\int }_{g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\end{array}$ (139)

In this context, the probability ${P}_{f}$ can often be efficiently estimated by FORM or SORM approximations (refer to [FORM] and [SORM] ).

The FORM importance factors offer a way to rank the importance of the input components with respect to the realization of the event. They are also often interpreted as indicators of the impact of modeling the input components as random variables rather than as fixed values. The FORM importance factors are defined as follows.

Principle

The isoprobabilistic transformation $T$ used in the FORM and SORM approximation (refer to [Iso Probabilistic Transformation] ) is a diffeomorphism from $supp\left(\underline{X}\right)$ into ${ℝ}^{n}$, such that the distribution of the random vector $\underline{U}=T\left(\underline{X}\right)$ has the following properties : $\underline{U}$ and $\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{U}$ have the same distribution for all rotations $\underline{\underline{R}}\in {𝒮𝒫}_{n}\left(ℝ\right)$.

In the standard space, the design point ${\underline{u}}^{*}$ is the point on the limit state boundary that is closest to the origin of the standard space. Its counterpart in the physical space is ${\underline{x}}^{*}={T}^{-1}\left({\underline{u}}^{*}\right)$. We denote by ${\beta }_{HL}$ the Hasofer-Lind reliability index: ${\beta }_{HL}=||{\underline{u}}^{*}||$.

When the $U$-space is normal, the literature proposes to define the importance factor ${\alpha }_{i}^{2}$ of the variable ${X}_{i}$ as the square of the $i$-th direction cosine of the design point in the $U$-space:

 ${\alpha }_{i}^{2}=\frac{{\left({u}_{i}^{*}\right)}^{2}}{{\beta }_{HL}^{2}}$ (140)

This definition guarantees the relation : ${\Sigma }_{i}{\alpha }_{i}^{2}=1$.

Note that this definition raises the following difficulties:

• What is the meaning of ${\alpha }_{i}$ when the variables ${X}_{i}$ are correlated? In that case, the isoprobabilistic transformation does not associate ${U}_{i}$ with ${X}_{i}$, but with a subset of the ${X}_{i}$.

• In the case of dependence between the variables ${X}_{i}$, the shape of the limit state function in the $U$-space depends on the isoprobabilistic transformation and, in particular, on the order of the variables ${X}_{i}$ within the random vector $\underline{X}$. Thus, changing this order has an impact on the location of the design point in the $U$-space and, consequently, on the importance factors (see [R. Lebrun, A. Dutfoy, 2008] for a comparison of the different isoprobabilistic transformations).

An alternative definition of the importance factors may be given in the elliptical space of the isoprobabilistic transformation, where the marginal distributions are all elliptical, with cumulative distribution function denoted $E$, and not yet decorrelated.

 $\begin{array}{c}\hfill {Y}^{*}=\left(\begin{array}{c}{E}^{-1}\circ {F}_{1}\left({X}_{1}^{*}\right)\\ {E}^{-1}\circ {F}_{2}\left({X}_{2}^{*}\right)\\ ⋮\\ {E}^{-1}\circ {F}_{n}\left({X}_{n}^{*}\right)\end{array}\right).\end{array}$ (141)

The importance factor ${\alpha }_{i}^{2}$ writes:

 ${\alpha }_{i}^{2}=\frac{{\left({y}_{i}^{*}\right)}^{2}}{||{\underline{y}}^{*}{||}^{2}}$ (142)

This definition still guarantees the relation : ${\Sigma }_{i}{\alpha }_{i}^{2}=1$.

Other notations

Here, the event considered is expressed directly in terms of the limit state function $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$: this is the classical structural reliability formulation.

However, if the event is a threshold exceedance, it is useful to make explicit the variable of interest $Z=\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$, evaluated from the model $\stackrel{˜}{g}\left(.\right)$. In that case, the event considered, associated with the threshold ${z}_{s}$, has the formulation: ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}Z=\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)>{z}_{s}\right\}$ and the limit state function is $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)={z}_{s}-Z={z}_{s}-\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$. ${P}_{f}$ is the threshold exceedance probability, defined as: ${P}_{f}=P\left(Z\ge {z}_{s}\right)={\int }_{g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$. Thus, the FORM importance factors offer a way to rank the importance of the input components with respect to the threshold exceedance by the quantity of interest $Z$. They can be seen as a specific sensitivity analysis technique dedicated to the quantity $Z$ around a particular threshold, rather than to its variance.

Link with OpenTURNS methodology

Within the global methodology, these importance factors are used in the step C': "Ranking sources of uncertainty" in the case of the evaluation of the probability of an event by an approximation method.

It requires to have fulfilled the following steps beforehand:

• step A: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\stackrel{˜}{g}\left(\underline{X},\underline{d}\right)$, result of the model $\stackrel{˜}{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or, equivalently, a failure event $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0$,

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,

• step C: select one of the proposed approximation methods to evaluate the event probability: FORM or SORM.

By default, OpenTURNS evaluates the importance factors according to relation (140); otherwise, it evaluates them according to relation (142).

Note that the relevance of FORM importance factors as a means to rank the importance of the sources of uncertainty depends closely on the validity of the FORM approximation (refer to [FORM] and [SORM] ).

The sensitivity factors (refer to [Sensitivity Factors] ) indicate the importance on the Hasofer-Lind reliability index (refer to [Reliability Index] ) of the value of the parameters used to define the distribution of the random vector $\underline{X}$.

References and theoretical basics

Interesting literature on the subject is:
• H.O. Madsen, "Omission Sensitivity Factors," 1988, Structural Safety, 5, 35-45.

• R. Lebrun, A. Dutfoy, 2008, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008.

Examples

Let us apply this method to the following analytical example: a cantilever beam with Young's modulus $E$, length $L$ and section modulus $I$, subjected to a concentrated bending force $F$ at its free end. The vertical displacement $y$ of the free end is equal to:
 $\begin{array}{c}\hfill y\left(E,F,L,I\right)=\frac{F{L}^{3}}{3EI}\end{array}$

The objective is to propagate the uncertainties of the variables $\left(E,F,L,I\right)$ through to $y$.

The input random vector is $\underline{X}=\left(E,F,L,I\right)$, whose probabilistic model is (units are not specified):

 $\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal\left(50,1\right)\hfill \\ F\hfill & =& Normal\left(1,1\right)\hfill \\ L\hfill & =& Normal\left(10,1\right)\hfill \\ I\hfill & =& Normal\left(5,1\right)\hfill \end{array}\right.\end{array}$

The four random variables are independent.

The event considered is the threshold exceedance : ${𝒟}_{f}=\left\{\left(E,F,L,I\right)\in {ℝ}^{4}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}y\left(E,F,L,I\right)\ge 3\right\}$.

The importance factors obtained are :

 $\begin{array}{c}\hfill \left\{\begin{array}{ccc}{\alpha }_{E}^{2}\hfill & =& 0.09456\phantom{\rule{0.166667em}{0ex}}%\hfill \\ {\alpha }_{F}^{2}\hfill & =& 69.59\phantom{\rule{0.166667em}{0ex}}%\hfill \\ {\alpha }_{L}^{2}\hfill & =& 19.48\phantom{\rule{0.166667em}{0ex}}%\hfill \\ {\alpha }_{I}^{2}\hfill & =& 10.84\phantom{\rule{0.166667em}{0ex}}%\hfill \end{array}\right.\end{array}$
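
As a complement, the mechanics of relation (140) can be checked on a case where the design point is known in closed form: a linear limit state with independent normal inputs, for which FORM is exact. The coefficients and threshold below are hypothetical, chosen only for illustration.

```python
import math

def form_importance_factors(c, t, mu, sigma):
    """Analytic FORM for the linear limit state g(x) = t - sum(c_i x_i)
    with independent Normal(mu_i, sigma_i) inputs: in the standard space
    g(u) = t - sum c_i (mu_i + sigma_i u_i), so the design point and the
    importance factors of relation (140) have closed forms."""
    a = [ci * si for ci, si in zip(c, sigma)]
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    beta = (t - sum(ci * mi for ci, mi in zip(c, mu))) / norm_a
    u_star = [beta * ai / norm_a for ai in a]   # closest point on g(u) = 0
    alpha2 = [(ui / beta) ** 2 for ui in u_star]  # relation (140)
    return beta, u_star, alpha2

# Hypothetical linear example: Z = 2*X1 + X2 exceeding the threshold t = 10
beta, u_star, alpha2 = form_importance_factors(
    c=[2.0, 1.0], t=10.0, mu=[1.0, 2.0], sigma=[1.0, 1.5])
print(beta, alpha2)  # beta = 2.4; alpha2 = [0.64, 0.36], summing to 1
```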

### 5.2.11 Step C'  – Sensitivity Factors from FORM method

Mathematical description

Goal

Sensitivity factors are evaluated in the following context: $\underline{X}$ denotes a random input vector representing the sources of uncertainties, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density function, $\underline{d}$ a deterministic vector representing the fixed variables, $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$ the limit state function of the model, ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0\right\}$ the event considered here and $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)=0$ its boundary (also called the limit state surface).

The probability content of the event ${𝒟}_{f}$ is ${P}_{f}$:

 $\begin{array}{c}\hfill {P}_{f}={\int }_{g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\end{array}$ (143)

In this context, the probability ${P}_{f}$ can often be efficiently estimated by FORM or SORM approximations (refer to [FORM] and [SORM] ).

The FORM sensitivity factors offer a way to analyse the sensitivity of the probability of the event with respect to the parameters of the probability distribution of $\underline{X}$.

Principle

A sensitivity factor is defined as the derivative of the Hasofer-Lind reliability index with respect to the parameter $\theta$, where $\theta$ is a parameter of the distribution of the random vector $\underline{X}$.

If $\underline{\theta }$ represents the vector of all the parameters of the distribution of $\underline{X}$ which appear in the definition of the isoprobabilistic transformation $T$ (refer to [Iso Probabilistic Transformation] ), ${U}_{\underline{\theta }}^{*}$ the design point associated with the event considered in the $U$-space, and if the mapping of the limit state function by $T$ is noted $G\left(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{\theta }\right)=g\left[{T}^{-1}\left(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{\theta }\right),\underline{d}\right]$, then the sensitivity factors vector is defined as:

 $\begin{array}{c}\hfill {\nabla }_{\underline{\theta }}{\beta }_{HL}=+\frac{1}{||{\nabla }_{\underline{u}}G\left({U}_{\underline{\theta }}^{*},\underline{\theta }\right)||}{\nabla }_{\underline{\theta }}G\left({U}_{\underline{\theta }}^{*},\underline{\theta }\right).\end{array}$

The sensitivity factors indicate the importance on the Hasofer-Lind reliability index (refer to [Reliability Index] ) of the value of the parameters used to define the distribution of the random vector $\underline{X}$.

Other notations
Here, the event considered is expressed directly in terms of the limit state function $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$: this is the classical structural reliability formulation.

However, if the event is a threshold exceedance, it is useful to make explicit the variable of interest $Z=\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$, evaluated from the model $\stackrel{˜}{g}\left(.\right)$. In that case, the event considered, associated with the threshold ${z}_{s}$, has the formulation: ${𝒟}_{f}=\left\{\underline{X}\in {ℝ}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}Z=\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)>{z}_{s}\right\}$ and the limit state function is $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)={z}_{s}-Z={z}_{s}-\stackrel{˜}{g}\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)$. ${P}_{f}$ is the threshold exceedance probability, defined as: ${P}_{f}=P\left(Z\ge {z}_{s}\right)={\int }_{g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$. Thus, the FORM sensitivity factors offer a way to rank the importance of the parameters of the input components with respect to the threshold exceedance by the quantity of interest $Z$. They can be seen as a specific sensitivity analysis technique dedicated to the quantity $Z$ around a particular threshold, rather than to its variance.

Link with OpenTURNS methodology

Within the global methodology, sensitivity factors are evaluated in the step C': "Ranking sources of uncertainty" in the case of the evaluation of the probability of an event by an approximation method.

It requires the following steps to have been fulfilled beforehand:

• step A: input vector $\underline{X}$, final variable of interest (result of a model), probabilistic criteria (the event considered) $g\left(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d}\right)\le 0$,

• step B: one of the proposed techniques to describe the probabilistic modelling of the input vector $\underline{X}$,

• step C: one method to evaluate the probability content of the event: the FORM or SORM approximation.

References and theoretical basics

The standard version of OpenTURNS takes into account only the sensitivity with respect to the parameters of the distribution of $\underline{X}$ which appear in the definition of the isoprobabilistic transformation $T$. It does not compute the sensitivity with respect to the other parameters, in particular the deterministic parameters $\underline{d}$ of the limit state function.

The FORM importance factors (refer to [Importance Factors] ) offer a way to rank the importance of the input components with respect to the realisation of the event. They are also often interpreted as indicators of the impact of modelling the input components as random variables rather than as fixed values.

Let us note some useful references:

• O. Ditlevsen, H.O. Madsen, 2004, "Structural Reliability Methods," Department of Mechanical Engineering, Technical University of Denmark - Maritime Engineering, internet publication.

Examples

Let us apply this method to the following analytical example: a cantilever beam of Young's modulus $E$, length $L$ and moment of inertia $I$, clamped at one end. A concentrated bending force $F$ is applied at the free end of the beam. The vertical displacement $y$ of the free end is equal to :
 $\begin{array}{c}\hfill y\left(E,F,L,I\right)=\frac{F{L}^{3}}{3EI}\end{array}$

The objective is to propagate the uncertainties of the variables $\left(E,F,L,I\right)$ through the model to $y$.
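As a quick sanity check of the formula (illustrative only, not part of the original example), the deflection at the mean point $\left(E,F,L,I\right)=\left(50,1,10,5\right)$ is $1000/750\approx 1.33$:

```python
# Cantilever deflection y = F L^3 / (3 E I)
def deflection(E, F, L, I):
    return F * L**3 / (3.0 * E * I)

y_at_mean = deflection(50.0, 1.0, 10.0, 5.0)  # 1000 / 750 = 4/3
```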

The input random vector is $\underline{X}=\left(E,F,L,I\right)$, whose probabilistic model is (units are not specified):

 $\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal\left(50,1\right)\hfill \\ F\hfill & =& Normal\left(1,1\right)\hfill \\ L\hfill & =& Normal\left(10,1\right)\hfill \\ I\hfill & =& Normal\left(5,1\right)\hfill \end{array}\right\\end{array}$

The event considered is the threshold exceedance : ${𝒟}_{f}=\left\{\left(E,F,L,I\right)\in {ℝ}^{4}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}y\left(E,F,L,I\right)\ge 3\right\}$.
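Before turning to the FORM sensitivities, the exceedance probability itself can be estimated by crude Monte Carlo. This sketch uses only Python's standard library; the sample size and seed are arbitrary choices, not part of the original example.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

def deflection(E, F, L, I):
    # Cantilever deflection y = F L^3 / (3 E I)
    return F * L**3 / (3.0 * E * I)

# Crude Monte Carlo estimate of P(y >= 3) with the Normal inputs above
N = 20000
hits = 0
for _ in range(N):
    E = random.gauss(50.0, 1.0)
    F = random.gauss(1.0, 1.0)
    L = random.gauss(10.0, 1.0)
    I = random.gauss(5.0, 1.0)
    if deflection(E, F, L, I) >= 3.0:
        hits += 1

p_mc = hits / N
```

The estimate is on the order of $0.15$; a FORM approximation of the same probability is what the sensitivities below are computed for.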

If we denote by $\mu$ the mean and by $\sigma$ the standard deviation of each input random variable, we obtain the sensitivity results gathered in the following tables.

Sensitivities of the Hasofer-Lind reliability index ${\beta }_{HL}$:

| Variable | $\mu$ | $\sigma$ |
|---|---|---|
| $E$ | 0.0307508 | -0.000954364 |
| $F$ | -0.834221 | -0.000954364 |
| $L$ | -0.441319 | -0.000954364 |
| $I$ | 0.329191 | -0.000954364 |

Sensitivities of the FORM probability ${P}_{f,FORM}$:

| Variable | $\mu$ | $\sigma$ |
|---|---|---|
| $E$ | -0.00737194 | 0.000228791 |
| $F$ | 0.199989 | 0.000228791 |
| $L$ | 0.105798 | 0.000228791 |
| $I$ | -0.0789175 | 0.000228791 |
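The $\mu$ column of the first table can be cross-checked with a small self-contained sketch: a basic HL-RF (Hasofer-Lind / Rackwitz-Fiessler) iteration locates the design point, and a central finite difference over the mean of $F$ approximates $\partial {\beta }_{HL}/\partial {\mu }_{F}\approx -0.834$. This is an illustrative reimplementation under the Normal$\left(\cdot ,1\right)$ assumptions above, not OpenTURNS' actual FORM algorithm.

```python
import math

def deflection(x):
    # x = (E, F, L, I); cantilever deflection y = F L^3 / (3 E I)
    E, F, L, I = x
    return F * L**3 / (3.0 * E * I)

def hlrf_beta(mu, z_s=3.0, tol=1e-10, h=1e-6):
    """Hasofer-Lind reliability index of g(x) = z_s - y(x) via HL-RF.

    All inputs are independent Normal(mu_i, 1), so the standard-space
    transformation is simply u = x - mu."""
    n = len(mu)
    u = [0.0] * n
    for _ in range(200):
        x = [m + ui for m, ui in zip(mu, u)]
        g = z_s - deflection(x)
        # central finite-difference gradient of g in standard space
        grad = []
        for i in range(n):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            grad.append(-(deflection(xp) - deflection(xm)) / (2.0 * h))
        norm2 = sum(gi * gi for gi in grad)
        # HL-RF update: u <- [(grad . u - g) / ||grad||^2] grad
        lam = (sum(gi * ui for gi, ui in zip(grad, u)) - g) / norm2
        u_new = [lam * gi for gi in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    return math.sqrt(sum(ui * ui for ui in u))

mu = [50.0, 1.0, 10.0, 5.0]  # means of (E, F, L, I)
beta = hlrf_beta(mu)

# Sensitivity of beta with respect to the mean of F (second component)
d = 1e-4
mu_p, mu_m = list(mu), list(mu)
mu_p[1] += d
mu_m[1] -= d
dbeta_dmuF = (hlrf_beta(mu_p) - hlrf_beta(mu_m)) / (2.0 * d)
```

The corresponding ${P}_{f,FORM}$ sensitivity follows from the chain rule, $\partial {P}_{f}/\partial {\mu }_{F}=-\varphi \left({\beta }_{HL}\right)\phantom{\rule{0.166667em}{0ex}}\partial {\beta }_{HL}/\partial {\mu }_{F}$ with $\varphi$ the standard normal density, consistent with the second table.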
