Ranking methods can be used to analyse the respective importance of each uncertainty source with respect to a probabilistic criterion. OpenTURNS proposes ranking methods for the two probabilistic criteria defined in the [global methodology guide]: the criterion on central dispersion (expectation and variance), and the probability of exceeding a threshold / failure probability.
Each propagation method available for a given criterion (see step C) leads to one or several ranking methods.
Approximation methods
[Quadratic combination's importance factors] – see page 5.2
Sampling methods
[Ranking based on Pearson correlation] – see page 5.2.1
[Ranking based on Spearman rank correlation] – see page 5.2.2
[Ranking based on Standard Regression Coefficients (SRC)] – see page 5.2.3
[Ranking based on Partial (Pearson) Correlation Coefficients (PCC)] – see page 5.2.4
[Ranking based on Partial (Spearman) Rank Correlation coefficients (PRCC)] – see page 5.2.5
[Sensitivity analysis using Sobol indices] – see page 5.2.6
[Sensitivity analysis for models with correlated inputs] – see page 5.2.7
[Sensitivity analysis by Fourier decomposition] – see page 5.2.8
Approximation methods
FORM-SORM methods
[FORM Importance Factors] – see page 5.2.9
[FORM Sensitivity Factors] – see page 5.2.10
Mathematical description
The importance factors derived from a quadratic combination method are defined to discriminate the influence of the different inputs towards the output variable for central dispersion analysis.
Principles
The importance factors are derived from the following expression: a Taylor expansion of the output variable $z$ (${n}_{Z}=1$) around $\underline{x}={\underline{\mu}}_{X}$, followed by a computation of the variance, shows that:
$$\begin{array}{c}\hfill \mathrm{Var}\left[Z\right]\approx \nabla h\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right).\mathrm{Cov}\left[\underline{X}\right]{.}^{t}\nabla h\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\end{array}$$ 
which can be rewritten as:
$$\begin{array}{c}\hfill 1\approx \sum _{i=1}^{{n}_{X}}\frac{\partial h\left({\underline{\mu}}_{X}\right)}{\partial {x}^{i}}\times \frac{\sum _{j=1}^{{n}_{X}}\frac{\partial h\left({\underline{\mu}}_{X}\right)}{\partial {x}^{j}}\,{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}}{\mathrm{Var}\left[Z\right]}\approx {\mathcal{F}}_{1}+{\mathcal{F}}_{2}+...+{\mathcal{F}}_{{n}_{X}}\end{array}$$ 
Vectorial definition
$$\begin{array}{c}\hfill \underline{\mathcal{F}}=\nabla h\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right)\times \frac{\mathrm{Cov}\left[\underline{X}\right]{.}^{t}\nabla h\left({\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}\right)}{\mathrm{Var}\left[Z\right]}\end{array}$$ 
Scalar definition
$$\begin{array}{c}\hfill {\mathcal{F}}_{i}=\frac{\partial h\left({\underline{\mu}}_{X}\right)}{\partial {x}^{i}}\times \frac{\sum _{j=1}^{{n}_{X}}\frac{\partial h\left({\underline{\mu}}_{X}\right)}{\partial {x}^{j}}\,{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}}{\mathrm{Var}\left[Z\right]}\end{array}$$ 
where:
$\nabla h\left(\underline{x}\right)={\left(\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{i}}\right)}_{i=1,...,{n}_{X}}$ is the gradient of the model at the point $\underline{x}$,
$\mathrm{Cov}\left[\underline{X}\right]$ is the covariance matrix,
${\underline{\mu}}_{\phantom{\rule{0.222222em}{0ex}}X}$ is the mean of the input random vector,
$\mathrm{Var}\left[Z\right]$ is the variance of the output variable.
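As an illustration, the vectorial definition above can be computed directly from a gradient and a covariance matrix. The following is a minimal NumPy sketch (not the OpenTURNS API); the two-input model and all numerical values are hypothetical:

```python
import numpy as np

def importance_factors(grad, cov):
    """Importance factors from the quadratic (Taylor) combination.

    grad : gradient of h at the mean point mu_X, shape (n_X,)
    cov  : covariance matrix of X, shape (n_X, n_X)
    Returns the vector F, whose components sum to (approximately) 1.
    """
    var_z = grad @ cov @ grad           # first-order approximation of Var[Z]
    return grad * (cov @ grad) / var_z  # F_i = dh/dx_i * sum_j dh/dx_j Cov_ij / Var[Z]

# Hypothetical linear model h(x) = 2*x1 + x2 with independent inputs
grad = np.array([2.0, 1.0])
cov = np.diag([1.0, 4.0])
F = importance_factors(grad, cov)       # [0.5, 0.5]: equal contributions
```

Here the larger gradient of $x^1$ is exactly compensated by the larger variance of $x^2$, so both inputs receive the same importance factor.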
Interpretation of the importance factors
Let us note that this interpretation supposes that ${\left({X}^{i}\right)}_{i}$ are independent.
Each coefficient $\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{i}}$ is a linear estimate of the change in the output variable $z=h\left(\underline{x}\right)$ resulting from a unit change in the variable ${x}^{i}$. This term depends on the physical units of the variables and is meaningful only when these units are known. In the general case the variables have different physical units, so the sensitivities $\frac{\partial h\left(\underline{x}\right)}{\partial {x}^{i}}$ cannot be compared with one another directly. This is why the importance factors used within OpenTURNS are normalized factors: they make the results comparable independently of the original units of the inputs of the model. The second term $\frac{\sum _{j=1}^{{n}_{X}}\frac{\partial h\left({\underline{\mu}}_{X}\right)}{\partial {x}^{j}}\,{\left(\mathrm{Cov}\left[\underline{X}\right]\right)}_{ij}}{\mathrm{Var}\left[Z\right]}$ provides this normalization.
To summarize, the coefficients ${\left({\mathcal{F}}_{i}\right)}_{i=1,...,{n}_{X}}$ represent a linear estimate of the percentage change in the variable $z=h\left(\underline{x}\right)$ caused by one percent change in the variable ${x}^{i}$. The importance factors are independent of the original units of the model, and are comparable with each other.
Other notations
Link with OpenTURNS methodology
References and theoretical basics
Examples
Mathematical description
This method deals with analysing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure linear relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.
Principle
Pearson's correlation coefficient ${\rho}_{{Y}^{j},{X}^{i}}$, defined in [Pearson's Coefficient] , measures the strength of a linear relation between two random variables ${Y}^{j}$ and ${X}^{i}$. If we have a sample made up of $N$ pairs $({y}_{1}^{j},{x}_{1}^{i})$, $({y}_{2}^{j},{x}_{2}^{i})$, ..., $({y}_{N}^{j},{x}_{N}^{i})$, we can obtain ${\widehat{\rho}}_{{Y}^{j},{X}^{i}}$ an estimation of Pearson's coefficient. The hierarchical ordering of Pearson's coefficients is of interest in the case where the relationship between ${Y}^{j}$ and ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$ is close to being a linear relation:
$$\begin{array}{c}\hfill {Y}^{j}\simeq {a}_{0}+\sum _{i=1}^{{n}_{X}}{a}_{i}{X}^{i}\end{array}$$ 
To obtain an indication of the role played by each ${X}^{i}$ in the dispersion of ${Y}^{j}$, the idea is to estimate Pearson's correlation coefficient ${\widehat{\rho}}_{{X}^{i},{Y}^{j}}$ for each $i$. One can then order the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ by taking the absolute values of the correlation coefficients: the higher the value of $\left|{\widehat{\rho}}_{{X}^{i},{Y}^{j}}\right|$, the greater the impact the variable ${X}^{i}$ has on the dispersion of ${Y}^{j}$.
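As a sketch of this ranking procedure, the following NumPy code (not the OpenTURNS API) estimates the Pearson coefficients on a hypothetical near-linear model and orders the inputs by decreasing absolute correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
# Hypothetical near-linear model y = 4*x1 + x2 + noise, independent inputs
X = rng.normal(size=(N, 2))
y = 4.0 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=N)

# Estimated Pearson coefficients rho_hat(X^i, Y^j) for each input
rho = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
ranking = np.argsort(-np.abs(rho))  # input indices by decreasing |rho|
```

With these coefficients, $X^1$ is ranked first, as expected from its larger regression coefficient.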
Other notations
Link with OpenTURNS methodology
$\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",
${Y}^{j}$ describes a variable of interest or output variable defined in the same step.
The results produced as output of this method are the estimated Pearson's correlation coefficients ${\widehat{\rho}}_{{X}^{i},{Y}^{j}}$ that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.
This method of hierarchical ordering is particularly useful:
when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,
when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are close to linear relationships (so that Pearson's correlation coefficient can be interpreted),
when this linear relationship is close to ${Y}^{j}={a}_{0}+{\sum}_{i=1}^{{n}_{X}}{a}_{i}{X}^{i}$ (i.e. no product terms of the type ${X}^{i}{X}^{j}$), and when the components of vector $\underline{X}$ are statistically independent. If this is not the case, $\left|{\widehat{\rho}}_{{X}^{i},{Y}^{j}}\right|$ reflects not only the influence of ${X}^{i}$ on ${Y}^{j}$ but equally the influence of other variables ${X}^{j}$ related to ${X}^{i}$ (e.g. an unimportant variable ${X}^{i}$ could have a strong coefficient for the correlation with ${Y}^{j}$ only because it is related – statistically or by a product term – to another variable ${X}^{j}$ which has enormous impact on ${Y}^{j}$).
Readers interested in other methods of uncertainty ranking that can be applied after Monte Carlo simulation when the assumptions of linearity and/or independence are violated are also referred to [Uncertainty ranking using Spearman] , [Hierarchical Ordering using SRC] , [Uncertainty ranking with Pearson's Partial Correlation Coefficients] and [Uncertainty ranking using Spearman's Partial Correlation Coefficients] .
The following references provide an interesting bibliographic starting point to further study of the method described here:
Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series
J.C. Helton, F.J. Davis (2003). "Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems". Reliability Engineering and System Safety 81, p. 23-69
J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1: review and comparison of techniques". Reliability Engineering and System Safety 65, p. 147-185
Mathematical description
This method deals with analyzing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure monotonic relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.
Principle
Spearman's correlation coefficient ${\rho}_{{Y}^{j},{X}^{i}}^{S}$, defined in [Spearman's Coefficient] , measures the strength of a monotonic relation between two random variables ${Y}^{j}$ and ${X}^{i}$. If we have a sample made up of $N$ pairs $({y}_{1}^{j},{x}_{1}^{i})$, $({y}_{2}^{j},{x}_{2}^{i})$, ..., $({y}_{N}^{j},{x}_{N}^{i})$, we can obtain ${\widehat{\rho}}_{{Y}^{j},{X}^{i}}^{S}$ an estimation of Spearman's coefficient.
Hierarchical ordering using Spearman's coefficients deals with the case where the variable ${Y}^{j}$ monotonically depends on the ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$. To obtain an indication of the role played by each ${X}^{i}$ in the dispersion of ${Y}^{j}$, the idea is to estimate the Spearman correlation coefficients ${\widehat{\rho}}_{{X}^{i},{Y}^{j}}^{S}$ for each $i$. One can then order the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ by taking the absolute values of the Spearman coefficients: the higher the value of $\left|{\widehat{\rho}}_{{X}^{i},{Y}^{j}}^{S}\right|$, the greater the impact the variable ${X}^{i}$ has on the dispersion of ${Y}^{j}$.
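A minimal NumPy/SciPy sketch of this ranking (not the OpenTURNS API), using a hypothetical monotonic but non-linear model for which Spearman ranking remains meaningful:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
N = 1000
# Hypothetical monotonic, non-linear model: y = exp(x1) + 0.1*x2
X = rng.normal(size=(N, 2))
y = np.exp(X[:, 0]) + 0.1 * X[:, 1]

# Estimated Spearman coefficients rho_hat^S(X^i, Y^j) for each input
rho_s = np.array([spearmanr(X[:, i], y)[0] for i in range(X.shape[1])])
ranking = np.argsort(-np.abs(rho_s))  # input indices by decreasing |rho^S|
```

The exponential term is strongly non-linear, yet the rank correlation still identifies $X^1$ as the dominant input.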
Other notations
Link with OpenTURNS methodology
$\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",
${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.
The results produced as output of this method are the estimated Spearman's correlation coefficients ${\widehat{\rho}}_{{X}^{i},{Y}^{j}}^{S}$ that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.
This method of hierarchical ordering is particularly useful:
when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,
when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are monotonic relationships (so that Spearman's correlation coefficient can be interpreted),
when the components of vector $\underline{X}$ are statistically independent. If this is not the case, $\left|{\widehat{\rho}}_{{X}^{i},{Y}^{j}}^{S}\right|$ reflects not only the influence of ${X}^{i}$ on ${Y}^{j}$ but equally the influence of other variables ${X}^{j}$ related to ${X}^{i}$ (e.g. an unimportant variable ${X}^{i}$ could have a strong coefficient for the correlation with ${Y}^{j}$ only because it is related to another variable ${X}^{j}$ which has enormous impact on ${Y}^{j}$).
Readers interested in other methods of uncertainty ranking that can be applied after Monte Carlo simulation when the assumptions of independence are violated are also referred to [Uncertainty ranking using SRC] , [Uncertainty ranking with Pearson's Partial Correlation Coefficients] and [Uncertainty ranking using Spearman's Partial Correlation Coefficients] .
The following references provide an interesting bibliographic starting point to further study of the method described here:
Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series
J.C. Helton, F.J. Davis (2003). "Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems". Reliability Engineering and System Safety 81, p. 23-69
J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1: review and comparison of techniques". Reliability Engineering and System Safety 65, p. 147-185
Mathematical description
This method deals with analysing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure linear relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.
Principle
The principle of the multiple linear regression model (see [Linear Regression] for more details) consists of attempting to find the function that links the variable ${Y}^{j}$ to the ${n}_{x}$ variables ${X}^{1}$,...,${X}^{{n}_{X}}$ by means of a linear model:
$$\begin{array}{c}\hfill {Y}^{j}={a}_{0}^{j}+\sum _{i=1}^{{n}_{X}}{a}_{i}^{j}{X}^{i}+{\epsilon}^{j}\end{array}$$ 
where ${\epsilon}^{j}$ describes a random variable with zero mean and standard deviation ${\sigma}_{\epsilon}^{j}$ independent of the input variables ${X}^{i}$. If the random variables ${X}^{1},...,{X}^{{n}_{X}}$ are independent and with finite variance $\mathrm{Var}\left[{X}^{k}\right]={\left({\sigma}_{k}\right)}^{2}$, the variance of ${Y}^{j}$ can be written as follows:
$$\begin{array}{c}\hfill \mathrm{Var}\left[{Y}^{j}\right]=\sum _{i=1}^{n}{\left({a}_{i}^{j}\right)}^{2}\mathrm{Var}\left[{X}^{i}\right]+{\left({\sigma}_{\epsilon}^{j}\right)}^{2}\end{array}$$ 
The estimators for the regression coefficients ${a}_{0}^{j},...,{a}_{{n}_{X}}^{j}$ and the standard deviation ${\sigma}_{\epsilon}^{j}$ are obtained from a sample of $({Y}^{j},{X}^{1},...,{X}^{{n}_{X}})$. Uncertainty ranking by linear regression ranks the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ in terms of the estimated contribution of each ${X}^{k}$ to the variance of ${Y}^{j}$:
$$\begin{array}{c}\hfill {C}_{k}^{j}=\frac{{\displaystyle {\left({a}_{k}^{j}\right)}^{2}\mathrm{Var}\left[{X}^{k}\right]}}{\mathrm{Var}\left[{Y}^{j}\right]}\end{array}$$ 
which is estimated by:
$$\begin{array}{c}\hfill {\widehat{C}}_{k}^{j}=\frac{{\left({\widehat{a}}_{k}^{j}\right)}^{2}{\widehat{\sigma}}_{k}^{2}}{\sum _{i=1}^{{n}_{X}}{\left({\widehat{a}}_{i}^{j}\right)}^{2}{\widehat{\sigma}}_{i}^{2}+{\left({\widehat{\sigma}}_{\epsilon}^{j}\right)}^{2}}\end{array}$$ 
where ${\widehat{\sigma}}_{i}$ describes the empirical standard deviation of the sample of the input variables. This estimated contribution is by definition between 0 and 1. The closer it is to 1, the greater the impact the variable ${X}^{i}$ has on the dispersion of ${Y}^{j}$.
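The estimated contributions can be sketched with plain NumPy least squares (not the OpenTURNS API); the linear model below and its coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
# Hypothetical independent inputs with different standard deviations
X = rng.normal(size=(N, 3)) * np.array([1.0, 2.0, 0.5])
# Hypothetical linear model with additive noise (sigma_eps = 1)
y = 1.0 + 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=N)

# Least-squares estimates of a_0, ..., a_{n_X}
A = np.column_stack([np.ones(N), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a = coef[1:]

# Estimated contributions C_k = a_k^2 Var[X^k] / Var[Y], each in [0, 1]
C = a**2 * X.var(axis=0) / y.var()
```

With these hypothetical values, $X^1$ contributes the most to the variance of $Y$ despite $X^2$ having the larger standard deviation, because the contribution weighs the squared coefficient by the input variance.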
Link with OpenTURNS methodology
$\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",
${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.
The results produced as output of this method are the estimated variance contributions ${\widehat{C}}_{k}^{j}$ that the user may use to order the variables ${X}^{k}$ hierarchically.
This method of hierarchical ordering is particularly useful:
when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,
when the relationships between ${Y}^{j}$ and the components of $\underline{X}$ are close to linear relationships, and more generally when all the underlying assumptions of the multiple linear regression model are valid,
when the components of vector $\underline{X}$ are independent, because if this is not the case the decomposition of the variance of ${Y}^{j}$ given here would be no longer exact,
when the number $N$ of Monte Carlo simulations is significantly higher than the number ${n}_{X}$ of input random variables (it is preferable for $N/{n}_{X}$ to be at least 10 so that the estimation of the ${n}_{X}$ regression coefficients provides a reasonable picture of reality).
Readers interested in the assumptions made for multiple linear regression models and in the tests needed to validate these assumptions are referred to [Linear Regression] .
Other methods of uncertainty ranking that can be applied after Monte Carlo simulation, which require a smaller number $N$ of simulations or can deal with non-linear/non-independent cases, are described in [Uncertainty Ranking using Pearson] , [Uncertainty Ranking using Spearman] , [Uncertainty Ranking using Pearson's Partial Correlation Coefficients] and [Uncertainty Ranking using Spearman's Partial Correlation Coefficients] .
The following references provide an interesting bibliographic starting point to further study of the method described here:
Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series
J.C. Helton, F.J. Davis (2003). "Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems". Reliability Engineering and System Safety 81, p. 23-69
J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1: review and comparison of techniques". Reliability Engineering and System Safety 65, p. 147-185
Mathematical description
This method deals with analyzing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure linear relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.
Principle
The basic method of hierarchical ordering using Pearson's coefficients (see [Uncertainty Ranking using Pearson] ) deals with the case where the variable ${Y}^{j}$ linearly depends on ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$, but this can be misleading when statistical dependencies or interactions between the variables ${X}^{i}$ (e.g. a crossed term ${X}^{i}\times {X}^{j}$) exist. In such a situation, the partial correlation coefficients can be more useful in ordering the uncertainty hierarchically: the partial correlation coefficient ${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$ between the variables ${Y}^{j}$ and ${X}^{i}$ attempts to measure the residual influence of ${X}^{i}$ on ${Y}^{j}$ once the influence of all other variables has been eliminated.
The estimation for each partial correlation coefficient ${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$ uses a set made up of $N$ values $\left\{({y}_{1}^{j},{x}_{1}^{1},...,{x}_{1}^{{n}_{X}}),...,({y}_{N}^{j},{x}_{N}^{1},...,{x}_{N}^{{n}_{X}})\right\}$ of the vector $({Y}^{j},{X}^{1},...,{X}^{{n}_{X}})$. This requires the following three steps to be carried out:
Determine the effect of other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${Y}^{j}$ by linear regression (see [Linear Regression] ); when the values of variable $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${Y}^{j}$ is then available in the form of the equation:
$$\begin{array}{c}\hfill \widehat{{Y}^{j}}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\widehat{a}}_{k}{X}^{k}\end{array}$$ 
Determine the effect of other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${X}^{i}$ by linear regression; when the values of variable $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${X}^{i}$ is then available in the form of the equation:
$$\begin{array}{c}\hfill {\widehat{X}}^{i}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\widehat{b}}_{k}{X}^{k}\end{array}$$ 
${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$ is then equal to the Pearson's correlation coefficient ${\widehat{\rho}}_{{Y}^{j}-\widehat{{Y}^{j}},\phantom{\rule{4pt}{0ex}}{X}^{i}-{\widehat{X}}^{i}}$ estimated for the variables ${Y}^{j}-\widehat{{Y}^{j}}$ and ${X}^{i}-{\widehat{X}}^{i}$ on the $N$-sample of simulations (see [Pearson's Coefficient] ).
One can then class the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ according to the absolute value of the partial correlation coefficients: the higher the value of $\left|{\mathrm{PCC}}_{{X}^{i},{Y}^{j}}\right|$, the greater the impact the variable ${X}^{i}$ has on ${Y}^{j}$.
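The three regression steps above can be sketched as a small NumPy function (not the OpenTURNS API); the correlated two-input example is hypothetical:

```python
import numpy as np

def pcc(X, y, i):
    """Partial correlation coefficient between y and column i of X,
    following the three steps: regress y and X^i on the other inputs,
    then correlate the two residuals."""
    N = X.shape[0]
    others = np.column_stack([np.ones(N), np.delete(X, i, axis=1)])
    # Step 1: residual of y after regression on the other inputs
    res_y = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
    # Step 2: residual of X^i after regression on the other inputs
    res_x = X[:, i] - others @ np.linalg.lstsq(others, X[:, i], rcond=None)[0]
    # Step 3: Pearson correlation of the two residuals
    return np.corrcoef(res_y, res_x)[0, 1]

rng = np.random.default_rng(4)
N = 2000
x1 = rng.normal(size=N)
x2 = 0.9 * x1 + 0.4 * rng.normal(size=N)  # X^2 strongly correlated with X^1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=N)         # only X^1 actually drives Y
```

In this hypothetical example the simple Pearson correlation of $X^2$ with $Y$ is large (through its correlation with $X^1$), while its PCC stays near zero, which is exactly the disambiguation the partial coefficients provide.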
Other notations
Link with OpenTURNS methodology
$\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",
${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.
The results produced as output of this method are Pearson's partial correlation coefficients ${\mathrm{PCC}}_{{X}^{i},{Y}^{j}}$, that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.
This method of hierarchical ordering is particularly useful:
when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,
when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are close to linear relationships (so that Pearson's correlation coefficient can be interpreted),
when the number $N$ of Monte Carlo simulations is significantly higher than the number ${n}_{X}$ of input random variables (it is preferable for $N/{n}_{X}$ to be at least 10 so that the estimation of the ${n}_{X}$ partial correlation coefficients provides a reasonable picture of reality).
Readers interested in the assumptions made for multiple linear regression models and in the tests needed to validate these assumptions are referred to [Linear Regression] .
Other methods of uncertainty ranking that can be applied after Monte Carlo simulation, which require a smaller number $N$ of simulations or can treat non-linear cases, are described in [Uncertainty Ranking using Pearson] , [Uncertainty ranking using Spearman] and [Uncertainty Ranking using Spearman's Partial Correlation Coefficients] .
The following references provide an interesting bibliographic starting point to further study of the method described here:
Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series
J.C. Helton, F.J. Davis (2003). "Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems". Reliability Engineering and System Safety 81, p. 23-69
J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1: review and comparison of techniques". Reliability Engineering and System Safety 65, p. 147-185
Mathematical description
This method deals with analyzing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on the random variable ${Y}^{j}$ which is being studied for uncertainty. Here we attempt to measure monotonic relationships that exist between ${Y}^{j}$ and the different components ${X}^{i}$.
Principle
The basic method of hierarchical ordering using Spearman's coefficients (see [Uncertainty Ranking using Spearman] ) deals with the case where the variable ${Y}^{j}$ monotonically depends on ${n}_{X}$ variables $\left\{{X}^{1},...,{X}^{{n}_{X}}\right\}$, but this can be misleading when statistical dependencies between the variables ${X}^{i}$ exist. In such a situation, the partial rank correlation coefficients can be more useful in ordering the uncertainty hierarchically: the partial rank correlation coefficient ${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$ between the variables ${Y}^{j}$ and ${X}^{i}$ attempts to measure the residual influence of ${X}^{i}$ on ${Y}^{j}$ once the influence of all other variables has been eliminated.
The estimation for each partial rank correlation coefficient ${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$ uses a set made up of $N$ values $\left\{({y}_{1}^{j},{x}_{1}^{1},...,{x}_{1}^{{n}_{X}}),...,({y}_{N}^{j},{x}_{N}^{1},...,{x}_{N}^{{n}_{X}})\right\}$ of the vector $({Y}^{j},{X}^{1},...,{X}^{{n}_{X}})$. This requires the following three steps to be carried out:
Determine the effect of other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${Y}^{j}$ by linear regression (see [Linear Regression] ); when the values of variable $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${Y}^{j}$ is then available in the form of the equation:
$$\begin{array}{c}\hfill \widehat{{Y}^{j}}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\widehat{a}}_{k}{X}^{k}\end{array}$$ 
Determine the effect of other variables $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ on ${X}^{i}$ by linear regression; when the values of variable $\left\{{X}^{j},\phantom{\rule{4pt}{0ex}}j\ne i\right\}$ are known, the average forecast for the value of ${X}^{i}$ is then available in the form of the equation:
$$\begin{array}{c}\hfill {\widehat{X}}^{i}=\sum _{k\ne i,\phantom{\rule{4pt}{0ex}}1\le k\le {n}_{X}}{\widehat{b}}_{k}{X}^{k}\end{array}$$ 
${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$ is then equal to the Spearman's correlation coefficient ${\widehat{\rho}}_{{Y}^{j}-\widehat{{Y}^{j}},\phantom{\rule{4pt}{0ex}}{X}^{i}-{\widehat{X}}^{i}}^{S}$ estimated for the variables ${Y}^{j}-\widehat{{Y}^{j}}$ and ${X}^{i}-{\widehat{X}}^{i}$ on the $N$-sample of simulations (see [Spearman's Coefficient] ).
One can then class the ${n}_{X}$ variables ${X}^{1},...,{X}^{{n}_{X}}$ according to the absolute value of the partial rank correlation coefficients: the higher the value of $\left|{\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}\right|$, the greater the impact the variable ${X}^{i}$ has on ${Y}^{j}$.
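The PRCC construction differs from the PCC one only by a preliminary rank transformation of all samples; a NumPy/SciPy sketch (not the OpenTURNS API), with a hypothetical monotonic model:

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y, i):
    """Partial rank correlation: replace every sample by its ranks,
    then apply the same residual-correlation construction as for PCC."""
    Xr = np.apply_along_axis(rankdata, 0, X)  # rank-transform each column
    yr = rankdata(y)
    N = len(yr)
    others = np.column_stack([np.ones(N), np.delete(Xr, i, axis=1)])
    res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
    res_x = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
    return np.corrcoef(res_y, res_x)[0, 1]

rng = np.random.default_rng(5)
N = 2000
X = rng.normal(size=(N, 2))
y = np.exp(X[:, 0]) + 0.1 * X[:, 1]  # hypothetical monotonic model
```

Because only ranks are used, the strongly non-linear (but monotonic) effect of $X^1$ is still detected as dominant.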
Link with OpenTURNS methodology
$\underline{X}=\left\{{X}^{1},...,{X}^{n}\right\}$ describes the input vector specified in step A "Specifying Criteria and the Case Study",
${Y}^{j}$ describes the final variable of interest or output variable defined in the same step.
The results produced as output of this method are the partial rank correlation coefficients ${\mathrm{PRCC}}_{{X}^{i},{Y}^{j}}$, that the user may use, taking absolute values, to order the variables ${X}^{i}$ hierarchically.
This method of hierarchical ordering is particularly useful:
when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values,
when the relationships between ${Y}^{j}$ and each of the components of $\underline{X}$ are monotonic relationships (so that Spearman's correlation coefficient can be interpreted),
when the number $N$ of Monte Carlo simulations is significantly higher than the number ${n}_{X}$ of input random variables (it is preferable for $N/{n}_{X}$ to be at least 10 so that the estimation of the ${n}_{X}$ partial rank correlation coefficients provides a reasonable picture of reality).
Readers interested in the assumptions made for multiple linear regression models and in the tests needed to validate these assumptions are referred to [Linear Regression] .
Other methods of uncertainty ranking that can be applied after Monte Carlo simulation, which require a smaller number $N$ of simulations, are described in [Uncertainty Ranking using Pearson] and [Uncertainty ranking using Spearman] .
The following references provide an interesting bibliographic starting point to further study of the method described here:
Saltelli, A., Chan, K., Scott, M. (2000). "Sensitivity Analysis", John Wiley & Sons publishers, Probability and Statistics series
J.C. Helton, F.J. Davis (2003). "Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems". Reliability Engineering and System Safety 81, p. 23-69
J.P.C. Kleijnen, J.C. Helton (1999). "Statistical analyses of scatterplots to identify factors in large-scale simulations, part 1: review and comparison of techniques". Reliability Engineering and System Safety 65, p. 147-185
Mathematical description
This method deals with analysing the influence the random vector $\underline{X}=\left({X}^{1},...,{X}^{{n}_{X}}\right)$ has on a random variable ${Y}^{k}$ which is being studied for uncertainty. Here we attempt to evaluate the part of the variance of ${Y}^{k}$ due to the different components ${X}^{i}$.
Principle
Sobol index estimation requires two independent samples of the input vector $({X}^{1},...,{X}^{{n}_{X}})$, that is, two sets of $N$ vectors of dimension ${n}_{X}$: $({x}_{k1}^{\left(1\right)},...,{x}_{k{n}_{X}}^{\left(1\right)})$ for $k=1,...,N$, and $({x}_{k1}^{\left(2\right)},...,{x}_{k{n}_{X}}^{\left(2\right)})$ for $k=1,...,N$. The estimators of the mean ${m}_{{Y}^{k}}$ and the standard deviation $\widehat{\sigma}$ of ${Y}^{k}$ are obtained from the first sample.
The estimation of the first order sensitivity indices consists in estimating the quantity
$${V}_{i}=\mathrm{Var}\left[\mathbb{E}\left[{Y}^{k}|{X}_{i}\right]\right]=\mathbb{E}\left[\mathbb{E}{\left[{Y}^{k}|{X}_{i}\right]}^{2}\right]-\mathbb{E}{\left[\mathbb{E}\left[{Y}^{k}|{X}_{i}\right]\right]}^{2}={U}_{i}-\mathbb{E}{\left[{Y}^{k}\right]}^{2}$$ 
Sobol proposes to estimate the quantity ${U}_{i}=\mathbb{E}\left[\mathbb{E}{\left[{Y}^{k}|{X}_{i}\right]}^{2}\right]$ by swapping all variables except ${X}_{i}$ between the two samples in the two calls of the function:
$${\widehat{U}}_{i}=\frac{1}{N}\sum _{k=1}^{N}{Y}^{k}\left({x}_{k1}^{\left(1\right)},\cdots ,{x}_{k(i1)}^{\left(1\right)},{x}_{ki}^{\left(1\right)},{x}_{k(i+1)}^{\left(1\right)},\cdots ,{x}_{k{n}_{X}}^{\left(1\right)}\right)\times {Y}^{k}\left({x}_{k1}^{\left(2\right)},\cdots ,{x}_{k(i1)}^{\left(2\right)},{x}_{ki}^{\left(1\right)},{x}_{k(i+1)}^{\left(2\right)},\cdots ,{x}_{k{n}_{X}}^{\left(2\right)}\right)$$ 
Then the ${n}_{X}$ first order indices are estimated by
$${\widehat{S}}_{i}=\frac{{\widehat{V}}_{i}}{{\widehat{\sigma}}^{2}}=\frac{{\widehat{U}}_{i}-{m}_{{Y}^{k}}^{2}}{{\widehat{\sigma}}^{2}}$$ 
For the second order, the two variables ${X}_{i}$ and ${X}_{j}$ are not swapped to estimate ${U}_{ij}$, and so on for higher orders, assuming that the order is smaller than ${n}_{X}$. Then the $\left(\genfrac{}{}{0pt}{}{{n}_{X}}{2}\right)$ second order indices are estimated by
$${\widehat{S}}_{ij}=\frac{{\widehat{V}}_{ij}}{{\widehat{\sigma}}^{2}}=\frac{{\widehat{U}}_{ij}-{m}_{{Y}^{k}}^{2}-{\widehat{V}}_{i}-{\widehat{V}}_{j}}{{\widehat{\sigma}}^{2}}$$ 
For the ${n}_{X}$ total order indices ${T}_{i}$, we only swap the variable ${X}_{i}$ between the two samples.
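The swapping scheme for the first order indices can be sketched as follows (plain NumPy, not the OpenTURNS API); the additive two-input model is hypothetical, with exact indices ${S}_{1}=0.8$ and ${S}_{2}=0.2$:

```python
import numpy as np

def sobol_first_order(model, X1, X2):
    """Pick-freeze estimation of first order Sobol indices: for each i,
    keep X_i from sample 1 and swap every other variable with sample 2."""
    N, n = X1.shape
    y1 = model(X1)
    m, v = y1.mean(), y1.var()
    S = np.empty(n)
    for i in range(n):
        Xmix = X2.copy()
        Xmix[:, i] = X1[:, i]          # only X_i is not swapped
        U = np.mean(y1 * model(Xmix))  # estimate of U_i = E[E[Y|X_i]^2]
        S[i] = (U - m**2) / v          # S_i = (U_i - m^2) / sigma^2
    return S

def model(X):                          # hypothetical additive model
    return X[:, 0] + 0.5 * X[:, 1]

rng = np.random.default_rng(3)
N = 20000
X1, X2 = rng.normal(size=(N, 2)), rng.normal(size=(N, 2))
S = sobol_first_order(model, X1, X2)   # close to the exact [0.8, 0.2]
```

Note that each index requires two model evaluations per sample point, which is why the total cost grows with the number of inputs.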
Link with OpenTURNS methodology
This method of hierarchical ordering is particularly useful:
when the study of uncertainty deals with the central dispersion of the variable of interest ${Y}^{j}$ and not with its extreme values.
when we have no particular hypothesis on the model other than the independence of the input variables ${X}_{i}$.
when the size $N$ of both samples is high enough to provide a 'reasonable' picture of reality (the central limit theorem ensures a convergence rate of order ${N}^{-1/2}$).
References and theoretical basics
Saltelli, A. (2002). "Making best use of model evaluations to compute sensitivity indices", Computer Physics Communications, 145, 280-297.
Examples
Mathematical description
The ANCOVA (ANalysis of COVAriance) method is a variance-based method generalizing the ANOVA (ANalysis Of VAriance) decomposition to models with correlated input parameters.
Principle
Let us consider a model $Y=h\left(\underline{X}\right)$ without making any hypothesis on the dependence structure of $\underline{X}=\{{X}^{1},...,{X}^{{n}_{X}}\}$, an ${n}_{X}$-dimensional random vector. The covariance decomposition requires a functional decomposition of the model. Thus the model response $Y$ is expanded as a sum of functions of increasing dimension as follows:
$$h\left(\underline{X}\right)={h}_{0}+\sum _{u\subseteq \{1,\cdots ,{n}_{X}\}}{h}_{u}\left({X}_{u}\right)$$  (138) 
${h}_{0}$ is the mean of $Y$. Each function ${h}_{u}$ represents, for any non empty set $u\subseteq \{1,\cdots ,{n}_{X}\}$, the combined contribution of the variables ${X}_{u}$ to $Y$.
Using the properties of the covariance, the variance of $Y$ can be decomposed into a variance part and a covariance part as follows:
$$\begin{array}{ccc}\hfill Var\left[Y\right]& =& Cov\left[{h}_{0}+\sum _{u\subseteq \{1,\cdots ,{n}_{X}\}}{h}_{u}\left({X}_{u}\right),\;{h}_{0}+\sum _{v\subseteq \{1,\cdots ,{n}_{X}\}}{h}_{v}\left({X}_{v}\right)\right]\hfill \\ & =& \sum _{u\subseteq \{1,\cdots ,{n}_{X}\}}Cov\left[{h}_{u}\left({X}_{u}\right),\;\sum _{v\subseteq \{1,\cdots ,{n}_{X}\}}{h}_{v}\left({X}_{v}\right)\right]\hfill \\ & =& \sum _{u\subseteq \{1,\cdots ,{n}_{X}\}}\left[Var\left[{h}_{u}\left({X}_{u}\right)\right]+Cov\left[{h}_{u}\left({X}_{u}\right),\;\sum _{v\subseteq \{1,\cdots ,{n}_{X}\},\,v\cap u=\varnothing}{h}_{v}\left({X}_{v}\right)\right]\right]\hfill \end{array}$$ 
The total part of variance of $Y$ due to ${X}_{u}$ reads:
$${S}_{u}=\frac{Cov[Y,{h}_{u}\left({X}_{u}\right)]}{Var\left[Y\right]}$$ 
The variance formula above makes it possible to define each sensitivity measure ${S}_{u}$ as the sum of a $\mathit{physical}$ (or $\mathit{uncorrelated}$) part and a $\mathit{correlated}$ part such that:
$${S}_{u}={S}_{u}^{U}+{S}_{u}^{C}$$ 
where ${S}_{u}^{U}$ is the uncorrelated part of variance of $Y$ due to ${X}_{u}$:
$${S}_{u}^{U}=\frac{Var\left[{h}_{u}\left({X}_{u}\right)\right]}{Var\left[Y\right]}$$ 
and ${S}_{u}^{C}$ is the contribution of the correlation of ${X}_{u}$ with the other parameters:
$${S}_{u}^{C}=\frac{{\displaystyle Cov\left[{h}_{u}\left({X}_{u}\right),\;\sum _{v\subseteq \{1,\cdots ,{n}_{X}\},\,v\cap u=\varnothing}{h}_{v}\left({X}_{v}\right)\right]}}{Var\left[Y\right]}$$ 
As the computational cost of the indices with the numerical model $h$ can be very high, it is suggested to approximate the model response with a polynomial chaos expansion. However, for the sake of computational simplicity, the latter is constructed considering $\mathit{independent}$ components $\{{X}^{1},\cdots ,{X}^{{n}_{X}}\}$. Thus the chaos basis is not orthogonal with respect to the correlated inputs under consideration; it is only used as a metamodel to generate approximate evaluations of the model response and of its summands in Eq. (138).
$$Y\simeq \widehat{h}=\sum _{j=0}^{P-1}{\alpha}_{j}{\Psi}_{j}\left(\underline{x}\right)$$ 
Then one may identify the component functions. For instance, for $u=\left\{1\right\}$:
$${h}_{1}\left({X}_{1}\right)=\sum _{\alpha \,\mid\, {\alpha}_{1}\ne 0,\ {\alpha}_{i\ne 1}=0}{y}_{\alpha}{\Psi}_{\alpha}\left(\underline{X}\right)$$ 
where $\alpha $ is a set of degrees associated with the ${n}_{X}$ univariate polynomials ${\psi}_{i}^{{\alpha}_{i}}\left({X}_{i}\right)$.
Then the model response $Y$ is evaluated using a sample $X=\{{x}_{k},k=1,\cdots ,N\}$ of the correlated joint distribution. Finally, the several indices are computed using the model response and its component functions that have been identified on the polynomial chaos.
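To make the covariance decomposition concrete, here is a hedged NumPy sketch on a model whose summands are known analytically, so no chaos expansion is needed: $Y={X}_{1}+{X}_{2}$ with standard normal inputs correlated at $\rho =0.5$ (an illustrative assumption), for which ${h}_{1}={X}_{1}$, ${h}_{2}={X}_{2}$, and the exact values are ${S}_{1}=0.5$, ${S}_{1}^{U}=1/3$, ${S}_{1}^{C}=1/6$.

```python
import numpy as np

# Illustrative model (assumption): Y = X1 + X2 with Corr(X1, X2) = 0.5.
# Its functional decomposition is exactly h0 = 0, h1 = X1, h2 = X2.
rho, n = 0.5, 200_000
rng = np.random.default_rng(1)
cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.standard_normal((n, 2)) @ np.linalg.cholesky(cov).T  # correlated sample
h1, h2 = x[:, 0], x[:, 1]
y = h1 + h2

var_y = y.var()
S1 = np.cov(y, h1)[0, 1] / var_y      # total part of variance due to X1
S1U = h1.var() / var_y                # uncorrelated ("physical") part
S1C = np.cov(h1, h2)[0, 1] / var_y    # part due to correlation with X2
print(S1, S1U, S1C)  # close to 0.5, 1/3, 1/6, with S1 = S1U + S1C
```

In the general case the component functions ${h}_{u}$ would be identified from the polynomial chaos expansion rather than read off the model.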
Link with OpenTURNS methodology
References and theoretical basics
Caniou, Y. (2012). "Global sensitivity analysis for nested and multiscale modelling." PhD thesis. Blaise Pascal University - Clermont II, France.
Examples
Mathematical description
FAST is a sensitivity analysis method which is based upon the ANOVA decomposition of the variance of the model response $y=f\left(\underline{X}\right)$, the latter being represented by its Fourier expansion. $\underline{X}=\{{X}^{1},\cdots ,{X}^{{n}_{X}}\}$ is an input random vector of ${n}_{X}$ independent components.
Principle
OpenTURNS implements the extended FAST method, which computes alternately the first-order and the total-effect indices of each input. This approach relies upon a Fourier decomposition of the model response. Its key idea is to recast this representation as a function of a $\mathit{scalar}$ parameter $s$, by defining parametric curves $s\mapsto {x}_{i}\left(s\right)$, $i=1,\cdots ,{n}_{X}$ exploring the support of the input random vector $\underline{X}$.
For each input, the same procedure is realized in three steps:
Sampling:
Deterministic space-filling paths with random starting points are defined, i.e. each input ${X}^{i}$ is transformed as follows:
$${x}_{j}^{i}=\frac{1}{2}+\frac{1}{\pi}arcsin(sin({\omega}_{i}{s}_{j}+{\phi}_{i})),\phantom{\rule{1.em}{0ex}}i=1,\cdots ,{n}_{X},\phantom{\rule{0.166667em}{0ex}}j=1,\cdots ,N$$ 
where ${n}_{X}$ is the number of input variables and $N$ is the length of the discretization of the $s$-space, with $s$ varying in $(-\pi ,\pi )$ by steps of $2\pi /N$. ${\phi}_{i}$ is a random phase-shift chosen uniformly in $[0,2\pi ]$ which makes the curves start anywhere within the unit hypercube ${K}^{{n}_{X}}=\{\underline{x}\mid 0\le {x}_{i}\le 1;\ i=1,\cdots ,{n}_{X}\}$. The selection of the set $\{{\phi}_{1},\cdots ,{\phi}_{{n}_{X}}\}$ induces a part of randomness in the procedure, so the procedure can be repeated ${N}_{r}$ times and the arithmetic means of the results computed over the ${N}_{r}$ estimates. This operation is called $\mathit{resampling}$.
$\left\{{\omega}_{i}\right\}$, $i=1,\cdots ,{n}_{X}$ is a set of integer frequencies assigned to each input ${X}^{i}$. The frequency associated with the input of interest is set to the maximum admissible frequency satisfying the Nyquist criterion (which avoids aliasing effects):
$${\omega}_{i}=\frac{N-1}{2M}$$ 
with $M$ the interference factor usually equal to 4 or higher. It corresponds to the truncation level of the Fourier series, i.e. the number of harmonics that are retained in the decomposition realized in the third step of the procedure.
In the paper by Saltelli et al. (1999), for high sample size, it is suggested that $16\le {\omega}_{i}/{N}_{r}\le 64$.
And the maximum frequency of the complementary set of frequencies is:
$$max\left({\omega}_{-i}\right)=\frac{{\omega}_{i}}{2M}=\frac{N-1}{4{M}^{2}}$$ 
where the index '$-i$' means 'all but $i$'.
The other frequencies are distributed uniformly between 1 and $max\left({\omega}_{-i}\right)$. The set of frequencies is the same whatever the number of resamplings.
Let us consider an example with eight input factors, $N=513$ and $M=4$, i.e. ${\omega}_{i}=\frac{N-1}{2M}=64$ and $max\left({\omega}_{-i}\right)=\frac{N-1}{4{M}^{2}}=8$, with $i$ the index of the input of interest.
When computing the sensitivity indices for the first input, the considered set of frequencies is : $\{64,1,2,3,4,5,6,8\}$.
When computing the sensitivity indices for the second input, the considered set of frequencies is : $\{1,64,2,3,4,5,6,8\}$.
etc.
The transformation defined above provides a uniformly distributed sample for each ${x}_{i}$, $i=1,\cdots ,{n}_{X}$, oscillating between 0 and 1. In order to take into account the true distributions of the inputs, an isoprobabilistic transformation is applied to each ${x}_{i}$ before the next step of the procedure.
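The sampling step can be sketched as follows (the frequency set and phase seed are illustrative assumptions); each curve is a triangle wave in $s$ that visits $[0,1]$ uniformly, so the empirical mean should be close to $1/2$ and the variance close to $1/12$.

```python
import numpy as np

n_x, N = 3, 513
omegas = np.array([64, 1, 2])                  # assumed frequencies, input of interest first
s = -np.pi + 2.0 * np.pi * np.arange(N) / N    # discretization of (-pi, pi) by steps of 2*pi/N
rng = np.random.default_rng(2)
phi = rng.uniform(0.0, 2.0 * np.pi, size=n_x)  # random phase-shifts (resampling randomness)

# x[i, j] = 1/2 + arcsin(sin(omega_i * s_j + phi_i)) / pi: space-filling curves in [0, 1]
x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s) + phi[:, None])) / np.pi
print(x.mean(axis=1), x.var(axis=1))  # means near 0.5, variances near 1/12
```

In the actual procedure these uniform coordinates would then be mapped through the isoprobabilistic transformation to follow the true input distributions.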
Simulations:
The output is computed as: $y=f\left(s\right)=f({x}_{1}\left(s\right),\cdots ,{x}_{{n}_{X}}\left(s\right))$
Then $f\left(s\right)$ is expanded onto a Fourier series:
$$f\left(s\right)=\sum _{k\in {\mathbb{Z}}^{N}}{A}_{k}cos\left(ks\right)+{B}_{k}sin\left(ks\right)$$ 
where ${A}_{k}$ and ${B}_{k}$ are Fourier coefficients defined as follows:
$$\begin{array}{ccc}\hfill {A}_{k}& =& \frac{1}{2\pi}{\int}_{-\pi}^{\pi}f\left(s\right)cos\left(ks\right)\phantom{\rule{0.166667em}{0ex}}ds\hfill \\ \hfill {B}_{k}& =& \frac{1}{2\pi}{\int}_{-\pi}^{\pi}f\left(s\right)sin\left(ks\right)\phantom{\rule{0.166667em}{0ex}}ds\hfill \end{array}$$ 
These coefficients are estimated thanks to the following discrete formulations:
$$\begin{array}{ccc}\hfill {\widehat{A}}_{k}& =& \frac{1}{N}\sum _{j=1}^{N}f({x}_{j}^{1},\cdots ,{x}_{j}^{{n}_{X}})cos\left(\frac{2k\pi (j-1)}{N}\right)\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}-\frac{N}{2}\le k\le \frac{N}{2}\hfill \\ \hfill {\widehat{B}}_{k}& =& \frac{1}{N}\sum _{j=1}^{N}f({x}_{j}^{1},\cdots ,{x}_{j}^{{n}_{X}})sin\left(\frac{2k\pi (j-1)}{N}\right)\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}-\frac{N}{2}\le k\le \frac{N}{2}\hfill \end{array}$$ 
Estimations by frequency analysis:
The first order indices are estimated as follows:
$${\widehat{S}}_{i}=\frac{{\widehat{D}}_{i}}{\widehat{D}}=\frac{{\sum}_{p=1}^{M}\left({\widehat{A}}_{p{\omega}_{i}}^{2}+{\widehat{B}}_{p{\omega}_{i}}^{2}\right)}{{\sum}_{n=1}^{(N-1)/2}\left({\widehat{A}}_{n}^{2}+{\widehat{B}}_{n}^{2}\right)}$$ 
where $\widehat{D}$ is the total variance and ${\widehat{D}}_{i}$ the portion of $D$ arising from the uncertainty of the ${i}^{th}$ input, $N$ is the size of the sample used to compute the Fourier series and $M$ is the interference factor. Saltelli et al. (1999) recommended setting $M$ to a value in the range $[4,6]$.
The total order indices are estimated as follows:
$${\widehat{T}}_{i}=1-\frac{{\widehat{D}}_{-i}}{\widehat{D}}=1-\frac{{\sum}_{k=1}^{{\omega}_{i}/2}\left({\widehat{A}}_{k}^{2}+{\widehat{B}}_{k}^{2}\right)}{{\sum}_{n=1}^{(N-1)/2}\left({\widehat{A}}_{n}^{2}+{\widehat{B}}_{n}^{2}\right)}$$ 
where ${\widehat{D}}_{-i}$ is the part of the variance due to all the inputs except the ${i}^{th}$ input.
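Putting the three steps together, here is a hedged NumPy sketch of the extended FAST estimates on a toy additive model $y={x}_{1}+0.5\,{x}_{2}$ with uniform inputs on $[0,1]$ (the model, frequency choices and zero phase-shifts are illustrative assumptions); the exact indices are ${S}_{1}={T}_{1}=1/1.25=0.8$.

```python
import numpy as np

# Toy additive model (assumption): y = x1 + 0.5*x2, x1, x2 uniform on [0, 1].
# Exact indices: S1 = T1 = Var(x1)/Var(y) = (1/12)/(1.25/12) = 0.8.
N, M = 513, 4
omega1 = (N - 1) // (2 * M)          # 64: frequency of the input of interest
omegas = np.array([omega1, 3])       # complementary frequency in 1..max(omega_{-i})
s = -np.pi + 2.0 * np.pi * np.arange(N) / N

# Sampling: space-filling curves (phase-shifts set to 0 for reproducibility)
x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
y = x[0] + 0.5 * x[1]                # simulations

# Discrete Fourier coefficients for k = 1 .. (N-1)/2
k = np.arange(1, (N - 1) // 2 + 1)
ang = 2.0 * np.pi * np.outer(k, np.arange(N)) / N
A = (y * np.cos(ang)).sum(axis=1) / N
B = (y * np.sin(ang)).sum(axis=1) / N
spectrum = A**2 + B**2               # spectrum[k-1] is the power at frequency k

D = spectrum.sum()                                               # total variance part
D1 = spectrum[[p * omega1 - 1 for p in range(1, M + 1)]].sum()   # harmonics of omega1
D_minus_1 = spectrum[: omega1 // 2].sum()                        # frequencies <= omega1/2

S1 = D1 / D                 # first-order index, close to 0.8
T1 = 1.0 - D_minus_1 / D    # total-effect index, close to 0.8
print(S1, T1)
```

Both indices come from the same set of $N$ model evaluations, which is the practical advantage of the extended FAST method noted below.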
Other notations
Link with OpenTURNS methodology
This method of hierarchical ordering is particularly useful :
when the problem focuses on the central dispersion of the variable of interest ${Y}^{j}$ and not on its extreme values.
when no particular hypothesis is made on the model other than the independence of the input variables ${X}_{i}$.
The extended FAST method is a convenient alternative to the method of Sobol'. Moreover, its computational cost tends to be lower than that of the method of Sobol', since FAST estimates both the first-order and the total-effect indices with the same set of model evaluations.
Saltelli, A., Tarantola, S. & Chan, K. (1999). "A quantitative, model independent method for global sensitivity analysis of model output." Technometrics, 41(1), 39-56.
Examples
Mathematical description
Importance factors are evaluated in the following context: $\underline{X}$ denotes a random input vector representing the sources of uncertainties, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density, $\underline{d}$ a deterministic vector representing the fixed variables, $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$ the limit state function of the model, ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\}$ the event considered here and $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})=0$ its boundary (also called the limit state surface).
The probability content of the event ${\mathcal{D}}_{f}$ is ${P}_{f}$:
$$\begin{array}{c}\hfill {P}_{f}={\int}_{g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\end{array}$$  (139) 
In this context, the probability ${P}_{f}$ can often be efficiently estimated by FORM or SORM approximations (refer to [FORM] and [SORM] ).
The FORM importance factors offer a way to rank the importance of the input components with respect to the realization of the event. They are also often interpreted as indicators of the impact of modeling the input components as random variables rather than fixed values. The FORM importance factors are defined as follows.
Principle
The isoprobabilistic transformation $T$ used in the FORM and SORM approximation (refer to [Iso Probabilistic Transformation] ) is a diffeomorphism from $supp\left(\underline{X}\right)$ into ${\mathbb{R}}^{n}$, such that the distribution of the random vector $\underline{U}=T\left(\underline{X}\right)$ has the following properties : $\underline{U}$ and $\underline{\underline{R}}\phantom{\rule{0.166667em}{0ex}}\underline{U}$ have the same distribution for all rotations $\underline{\underline{R}}\in {\mathcal{S}\mathcal{P}}_{n}\left(\mathbb{R}\right)$.
In the standard space, the design point ${\underline{u}}^{*}$ is the point of the limit state boundary closest to the origin of the standard space. The design point in the physical space is ${\underline{x}}^{*}={T}^{-1}\left({\underline{u}}^{*}\right)$. We denote by ${\beta}_{HL}$ the Hasofer-Lind reliability index: ${\beta}_{HL}=\|{\underline{u}}^{*}\|$.
When the $U$-space is normal, the literature proposes to calculate the importance factor ${\alpha}_{i}^{2}$ of the variable ${X}_{i}$ as the square of the ${i}^{th}$ direction cosine of the design point in the $U$-space:
$${\alpha}_{i}^{2}=\frac{{\left({u}_{i}^{*}\right)}^{2}}{{\beta}_{HL}^{2}}$$  (140) 
This definition guarantees the relation : ${\Sigma}_{i}{\alpha}_{i}^{2}=1$.
Note that this definition raises the following difficulties:
What is the meaning of ${\alpha}_{i}$ when the variables ${X}_{i}$ are correlated? In that case, the isoprobabilistic transformation does not associate ${U}_{i}$ with ${X}_{i}$ but with a set of the ${X}_{i}$.
In the case of dependence of the variables ${X}_{i}$, the shape of the limit state function in the $U$-space depends on the isoprobabilistic transformation and in particular on the order of the variables ${X}_{i}$ within the random vector $\underline{X}$. Thus, changing this order has an impact on the localisation of the design point in the $U$-space and, consequently, on the importance factors (see [R. Lebrun, A. Dutfoy, 2008] for a comparison of the different isoprobabilistic transformations).
Another definition of the importance factors may be given in the elliptical space of the isoprobabilistic transformation, where the marginal distributions are all elliptical, with cumulative distribution function denoted $E$, but not yet decorrelated:
$$\begin{array}{c}\hfill {Y}^{*}=\left(\begin{array}{c}{E}^{1}\circ {F}_{1}\left({X}_{1}^{*}\right)\\ {E}^{1}\circ {F}_{2}\left({X}_{2}^{*}\right)\\ \vdots \\ {E}^{1}\circ {F}_{n}\left({X}_{n}^{*}\right)\end{array}\right).\end{array}$$  (141) 
The importance factor ${\alpha}_{i}^{2}$ then reads:
$${\alpha}_{i}^{2}=\frac{{\left({y}_{i}^{*}\right)}^{2}}{{\|{\underline{y}}^{*}\|}^{2}}$$  (142) 
This definition still guarantees the relation : ${\Sigma}_{i}{\alpha}_{i}^{2}=1$.
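For a linear limit state with standard normal inputs, the design point and relation (140) can be evaluated in closed form. A minimal sketch, assuming the illustrative limit state $g(\underline{u})=3-{u}_{1}-2{u}_{2}$ in the standard space (not an example from the text):

```python
import numpy as np

# Assumed linear limit state in the standard space: g(u) = z_s - a.u,
# with a = (1, 2) and z_s = 3; failure corresponds to g(u) <= 0.
a = np.array([1.0, 2.0])
z_s = 3.0

# The design point is the point of the hyperplane a.u = z_s closest to the
# origin, i.e. the orthogonal projection of the origin onto that hyperplane.
u_star = z_s * a / (a @ a)
beta_hl = np.linalg.norm(u_star)      # Hasofer-Lind reliability index = 3/sqrt(5)

alpha2 = (u_star / beta_hl) ** 2      # relation (140): (u_i*)^2 / beta_HL^2
print(beta_hl, alpha2)  # alpha^2 = [0.2, 0.8], summing to 1
```

The factors sum to 1 by construction, illustrating the relation ${\Sigma}_{i}{\alpha}_{i}^{2}=1$ stated above.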
Other notations
However, if the event is a threshold exceedance, it is useful to make explicit the variable of interest $Z=\tilde{g}(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$, evaluated from the model $\tilde{g}(.)$. In that case, the event considered, associated with the threshold ${z}_{s}$, has the formulation: ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}Z=\tilde{g}(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})>{z}_{s}\}$ and the limit state function is $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})={z}_{s}-Z={z}_{s}-\tilde{g}(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$. ${P}_{f}$ is the threshold exceedance probability, defined as ${P}_{f}=P(Z\ge {z}_{s})={\int}_{g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$. Thus, the FORM importance factors offer a way to rank the importance of the input components with respect to the threshold exceedance by the quantity of interest $Z$. They can be seen as a specific sensitivity analysis technique dedicated to the quantity $Z$ around a particular threshold rather than to its variance.
Link with OpenTURNS methodology
It requires to have fulfilled the following steps beforehand:
step A: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $Z=\tilde{g}(\underline{X},\underline{d})$, result of the model $\tilde{g}\left(\right)$; identify a probabilistic criterion such as a threshold exceedance $Z>{z}_{s}$ or equivalently a failure event $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0$,
step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$,
step C: select one of the proposed approximation methods to evaluate the event probability: FORM or SORM.
When not specified, OpenTURNS evaluates the importance factors according to relation (140). Otherwise, OpenTURNS evaluates them according to (142).
Note that the relevance of FORM importance factors as a means to rank the importance of the sources of uncertainty is closely dependent on the validity of the FORM approximation (refer to [FORM] and [SORM] ).
The sensitivity factors (refer to [Sensitivity Factors] ) indicate the importance on the HasoferLind reliability index (refer to [Reliability Index] ) of the value of the parameters used to define the distribution of the random vector $\underline{X}$.
References and theoretical basics
H.O. Madsen, "Omission Sensitivity Factors," 1988, Structural Safety, 5, 3545.
R. Lebrun, A. Dutfoy, 2008, "Do Rosenblatt and Nataf isoprobabilistic transformations really differ?", submitted to Probabilistic Engineering Mechanics in August 2008, tentatively accepted.
Examples
$$\begin{array}{c}\hfill {\displaystyle y(E,F,L,I)=\frac{F{L}^{3}}{3EI}}\end{array}$$ 
The objective is to propagate the uncertainties of the variables $(E,F,L,I)$ to $y$.
The input random vector is $\underline{X}=(E,F,L,I)$, whose probabilistic model is (units are not specified):
$$\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal(50,1)\hfill \\ F\hfill & =& Normal(1,1)\hfill \\ L\hfill & =& Normal(10,1)\hfill \\ I\hfill & =& Normal(5,1)\hfill \end{array}\right.\end{array}$$ 
The four random variables are independent.
The event considered is the threshold exceedance : ${\mathcal{D}}_{f}=\{(E,F,L,I)\in {\mathbb{R}}^{4}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}y(E,F,L,I)\ge 3\}$.
The importance factors obtained are :
$$\begin{array}{c}\hfill \left\{\begin{array}{ccc}{\alpha}_{E}^{2}\hfill & =& 9.456\,{e}^{-2}\phantom{\rule{0.166667em}{0ex}}\%\hfill \\ {\alpha}_{F}^{2}\hfill & =& 6.959\,{e}^{+1}\phantom{\rule{0.166667em}{0ex}}\%\hfill \\ {\alpha}_{L}^{2}\hfill & =& 1.948\,{e}^{+1}\phantom{\rule{0.166667em}{0ex}}\%\hfill \\ {\alpha}_{I}^{2}\hfill & =& 1.084\,{e}^{+1}\phantom{\rule{0.166667em}{0ex}}\%\hfill \end{array}\right.\end{array}$$ 
Mathematical description
Sensitivity factors are evaluated in the following context: $\underline{X}$ denotes a random input vector representing the sources of uncertainties, ${f}_{\underline{X}}\left(\underline{x}\right)$ its joint probability density, $\underline{d}$ a deterministic vector representing the fixed variables, $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$ the limit state function of the model, ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0\}$ the event considered here and $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})=0$ its boundary (also called the limit state surface).
The probability content of the event ${\mathcal{D}}_{f}$ is ${P}_{f}$:
$$\begin{array}{c}\hfill {P}_{f}={\int}_{g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}.\end{array}$$  (143) 
In this context, the probability ${P}_{f}$ can often be efficiently estimated by FORM or SORM approximations (refer to [FORM] and [SORM] ).
The FORM sensitivity factors offer a way to analyse the sensitivity of the event probability with respect to the parameters of the probability distribution of $\underline{X}$.
Principle
A sensitivity factor is defined as the derivative of the Hasofer-Lind reliability index with respect to a parameter $\theta $ of the distribution of the random vector $\underline{X}$.
If $\underline{\theta}$ represents the vector of all the parameters of the distribution of $\underline{X}$ which appear in the definition of the isoprobabilistic transformation $T$ (refer to [IsoProbabiliticFunction] ), and ${U}_{\underline{\theta}}^{*}$ the design point associated with the event considered in the $U$-space, and if the mapping of the limit state function by $T$ is noted $G(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{\theta})=g[{T}^{-1}(\underline{U}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{\theta}),\underline{d}]$, then the sensitivity factors vector is defined as:
$$\begin{array}{c}\hfill {\displaystyle {\nabla}_{\underline{\theta}}{\beta}_{HL}=\frac{1}{\|{\nabla}_{\underline{u}}G({U}_{\underline{\theta}}^{*},\underline{\theta})\|}\,{\nabla}_{\underline{\theta}}G({U}_{\underline{\theta}}^{*},\underline{\theta}).}\end{array}$$ 
The sensitivity factors indicate the importance on the HasoferLind reliability index (refer to [Reliability Index] ) of the value of the parameters used to define the distribution of the random vector $\underline{X}$.
However, if the event is a threshold exceedance, it is useful to make explicit the variable of interest $Z=\tilde{g}(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$, evaluated from the model $\tilde{g}(.)$. In that case, the event considered, associated with the threshold ${z}_{s}$, has the formulation: ${\mathcal{D}}_{f}=\{\underline{X}\in {\mathbb{R}}^{n}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}Z=\tilde{g}(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})>{z}_{s}\}$ and the limit state function is $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})={z}_{s}-Z={z}_{s}-\tilde{g}(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})$. ${P}_{f}$ is the threshold exceedance probability, defined as ${P}_{f}=P(Z\ge {z}_{s})={\int}_{g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}d\underline{x}$. Thus, the FORM sensitivity factors offer a way to rank the importance of the parameters of the input components with respect to the threshold exceedance by the quantity of interest $Z$. They can be seen as a specific sensitivity analysis technique dedicated to the quantity $Z$ around a particular threshold rather than to its variance.
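The sensitivity factors can be checked on a minimal scalar case where everything is analytic: $Z=X$ with $X\sim Normal(\mu ,\sigma )$ and the event $Z\ge {z}_{s}$, so that ${\beta}_{HL}=({z}_{s}-\mu )/\sigma $. The sketch below compares the gradient formula ${\nabla}_{\theta}{\beta}_{HL}={\nabla}_{\theta}G/\|{\nabla}_{u}G\|$ at the design point with finite differences of ${\beta}_{HL}$ (the numerical values are illustrative assumptions):

```python
# Scalar case (assumption): Z = X ~ Normal(mu, sigma), event Z >= z_s.
# Isoprobabilistic transformation: u = (x - mu) / sigma, hence
# G(u, theta) = z_s - (mu + sigma * u) and beta_HL = (z_s - mu) / sigma.
mu, sigma, z_s = 1.0, 2.0, 5.0

def beta_hl(m, s):
    return (z_s - m) / s

u_star = beta_hl(mu, sigma)          # design point in the standard space

# Sensitivity factors: grad_theta(beta) = grad_theta(G) / ||grad_u(G)|| at u*
grad_u_G = -sigma                    # dG/du
grad_theta_G = (-1.0, -u_star)       # (dG/dmu, dG/dsigma) at the design point
sens_mu, sens_sigma = (g / abs(grad_u_G) for g in grad_theta_G)

# Finite-difference check of d(beta_HL)/d(mu) and d(beta_HL)/d(sigma)
h = 1e-6
fd_mu = (beta_hl(mu + h, sigma) - beta_hl(mu - h, sigma)) / (2 * h)
fd_sigma = (beta_hl(mu, sigma + h) - beta_hl(mu, sigma - h)) / (2 * h)
print(sens_mu, fd_mu, sens_sigma, fd_sigma)
```

Both derivatives are negative here: increasing the mean or the spread of $X$ brings the threshold closer, which lowers the reliability index.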
Link with OpenTURNS methodology
It requires to have fulfilled the following steps beforehand:
step A: input vector $\underline{X}$, final variable of interest (result of a model), probabilistic criterion (the event considered) $g(\underline{X}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\underline{d})\le 0$,
step B: one of the proposed techniques to describe the probabilistic model of the input vector $\underline{X}$,
step C: one method to evaluate the probability content of the event: the FORM or SORM approximation.
References and theoretical basics
The FORM importance factors (refer to [Importance Factors] ) offer a way to rank the importance of the input components with respect to the realization of the event. They are also often interpreted as indicators of the impact of modeling the input components as random variables rather than fixed values.
Let us note some useful references:
O. Ditlevsen, H.O. Madsen, 2004, "Structural Reliability Methods," Department of Mechanical Engineering, Technical University of Denmark, Maritime Engineering, internet publication.
Examples
$$\begin{array}{c}\hfill {\displaystyle y(E,F,L,I)=\frac{F{L}^{3}}{3EI}}\end{array}$$ 
The objective is to propagate the uncertainties of the variables $(E,F,L,I)$ to $y$.
The input random vector is $\underline{X}=(E,F,L,I)$, whose probabilistic model is (units are not specified):
$$\begin{array}{c}\hfill \left\{\begin{array}{ccc}E\hfill & =& Normal(50,1)\hfill \\ F\hfill & =& Normal(1,1)\hfill \\ L\hfill & =& Normal(10,1)\hfill \\ I\hfill & =& Normal(5,1)\hfill \end{array}\right.\end{array}$$ 
The event considered is the threshold exceedance : ${\mathcal{D}}_{f}=\{(E,F,L,I)\in {\mathbb{R}}^{4}\phantom{\rule{0.166667em}{0ex}}/\phantom{\rule{0.166667em}{0ex}}y(E,F,L,I)\ge 3\}$.
If we denote by $\mu $ the mean and by $\sigma $ the standard deviation of a random variable, we obtain the following results, gathered in the tables below.

