6 OpenTURNS' methods for the construction of response surfaces

The current section is dedicated to building an analytical approximation of the response of a given model. Such an approximation is commonly referred to as a response surface in the literature. This methodology is relevant if each model evaluation is time-consuming: once a response surface has been built, the various methods in Step C and Step C' may be applied at negligible cost. A special focus will be given to polynomial response surfaces.

6.1 Deterministic response surfaces

First, methods for approximating a function of deterministic variables are considered.

6.1.1 Polynomial response surfaces

• Local approximation at the vicinity of a given set of input parameters:

• Global approximation over the whole domain of definition of the input parameters:

6.2 Stochastic response surfaces

In this section, the approximation of a model depending on random variables is of interest.

6.3 Methods description

6.3.1 Step Resp. Surf.  – Linear and Quadratic Taylor Expansions

Mathematical description

Goal

The approximation of the model response $\underline{y}=h\left(\underline{x}\right)$ around a specific set ${\underline{x}}_{0}=\left({x}_{0,1},\cdots ,{x}_{0,{n}_{X}}\right)$ of input parameters may be of interest. One may then substitute the Taylor expansion of $h$ at point ${\underline{x}}_{0}$ for $h$ itself. Hence $h$ is replaced with a first- or second-order polynomial $\stackrel{^}{h}$ whose evaluation is inexpensive, allowing the analyst to apply the uncertainty analysis methods of Step C.

Principles

We consider the first and second order Taylor expansions around $\underline{x}={\underline{x}}_{0}$.

First order Taylor expansion

 $\begin{array}{c}\hfill \underline{y}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left({\underline{x}}_{0}\right)\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}\frac{\partial h}{\partial {x}_{i}}\left({\underline{x}}_{0}\right).\left({x}_{i}-{x}_{0,i}\right)\end{array}$

Introducing a vector notation, the previous equation may be rewritten as:

 $\begin{array}{c}\hfill \underline{y}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{y}}_{0}\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\underline{\underline{L}}\phantom{\rule{0.222222em}{0ex}}\left(\underline{x}-{\underline{x}}_{0}\right)\end{array}$

where:

• $\underline{{y}_{0}}={\left({y}_{0,1},\cdots ,{y}_{0,{n}_{Y}}\right)}^{𝖳}=h\left({\underline{x}}_{0}\right)$ is the vector model response evaluated at ${\underline{x}}_{0}$;

• $\underline{x}$ is the current set of input parameters;

• $\underline{\underline{L}}=\left(\frac{\partial {y}_{0,j}}{\partial {x}_{i}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}i=1,...,{n}_{X}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}j=1,...,{n}_{Y}\right)$ is the transposed Jacobian matrix evaluated at ${\underline{x}}_{0}$.

Second order Taylor expansion

 $\begin{array}{c}\hfill \underline{y}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left({\underline{x}}_{0}\right)\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}\frac{\partial h}{\partial {x}_{i}}\left({\underline{x}}_{0}\right).\left({x}_{i}-{x}_{0,i}\right)\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\frac{1}{2}\phantom{\rule{0.277778em}{0ex}}\sum _{i,j=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}\frac{{\partial }^{2}h}{\partial {x}_{i}\partial {x}_{j}}\left({\underline{x}}_{0}\right).\left({x}_{i}-{x}_{0,i}\right).\left({x}_{j}-{x}_{0,j}\right)\end{array}$

Introducing a vector notation, the previous equation may be rewritten as:

 $\begin{array}{c}\hfill \underline{y}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{y}}_{0}\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\underline{\underline{L}}\phantom{\rule{0.222222em}{0ex}}\left(\underline{x}-{\underline{x}}_{0}\right)\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\frac{1}{2}\phantom{\rule{0.277778em}{0ex}}〈〈\underline{\underline{\underline{Q}}}\phantom{\rule{0.222222em}{0ex}},\underline{x}-{\underline{x}}_{0}〉,\phantom{\rule{0.222222em}{0ex}}\underline{x}-{\underline{x}}_{0}〉\end{array}$

where $\underline{\underline{\underline{Q}}}=\left\{\frac{{\partial }^{2}{y}_{0,k}}{\partial {x}_{i}\partial {x}_{j}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}i,j=1,...,{n}_{X}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}k=1,...,{n}_{Y}\right\}$ is the transposed Hessian tensor evaluated at ${\underline{x}}_{0}$.
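The two expansions above can be illustrated with a short NumPy sketch (illustrative only, not the OpenTURNS API). The model $h$, the expansion point and the finite-difference step are hypothetical choices; the gradient and Hessian are estimated by central finite differences:

```python
import numpy as np

# Hypothetical smooth scalar model with n_X = 2 inputs (any smooth h would do).
def h(x):
    return np.sin(x[0]) + x[0] * x[1] ** 2

x0 = np.array([0.5, 1.0])   # expansion point
n = x0.size
eps = 1e-4

# Gradient and Hessian of h at x0, estimated by central finite differences.
grad = np.zeros(n)
hess = np.zeros((n, n))
I = np.eye(n) * eps
for i in range(n):
    grad[i] = (h(x0 + I[i]) - h(x0 - I[i])) / (2 * eps)
    for j in range(n):
        hess[i, j] = (h(x0 + I[i] + I[j]) - h(x0 + I[i] - I[j])
                      - h(x0 - I[i] + I[j]) + h(x0 - I[i] - I[j])) / (4 * eps ** 2)

def h_hat1(x):
    """First-order Taylor surrogate."""
    return h(x0) + grad @ (x - x0)

def h_hat2(x):
    """Second-order Taylor surrogate."""
    d = x - x0
    return h_hat1(x) + 0.5 * d @ hess @ d

x = np.array([0.55, 1.05])
print(h(x), h_hat1(x), h_hat2(x))
```

Near ${\underline{x}}_{0}$ the second-order surrogate is markedly closer to the true model than the first-order one, as expected.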

Other notations

The described method is similar to the perturbation method presented in:

This method is aimed at building a response surface prior to tackling Step C “Uncertainty Propagation”. Then the various uncertainty propagation techniques may be applied at a negligible computational cost. The Taylor expansion allows the analytical derivation of:
• statistical moments of the response (Step C), see:

• local sensitivity indices named importance factors (Step C'), see:

References and theoretical basics

A Taylor expansion-based response surface is well suited when a local approximation of the model under consideration is sufficient. Note, however, that this may be a strong assumption for a large class of models, possibly leading to erratic results. In particular, when the estimation of the probability of exceeding a threshold is of interest, the quality of the Taylor response surface has to be carefully justified. Nonetheless, this method often allows the analyst to study the central trend of the model response fairly well at a low computational cost.

6.3.2 Step Resp. Surf.  – Numerical methods to solve least squares problems

Mathematical description

Goal
This section presents numerical methods that can be used to solve least squares problems, which arise when constructing a response surface (i.e. a meta-model) or when performing a statistical regression.

Least squares problem

Given a matrix $\underline{\underline{\Psi }}\phantom{\rule{3.33333pt}{0ex}}\in \phantom{\rule{3.33333pt}{0ex}}{ℝ}^{N×P}$, $N>P$, and a vector $\underline{y}\phantom{\rule{3.33333pt}{0ex}}\in \phantom{\rule{3.33333pt}{0ex}}{ℝ}^{N}$, we want to find a vector $\underline{a}\phantom{\rule{3.33333pt}{0ex}}\in {ℝ}^{P}$ such that $\underline{\underline{\Psi }}\phantom{\rule{0.222222em}{0ex}}\underline{a}$ is the best approximation to $\underline{y}$ in the least squares sense. Mathematically speaking, we want to solve the following minimization problem:

 $\begin{array}{c}\hfill \underset{\underline{a}}{min}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{∥\phantom{\rule{0.277778em}{0ex}}\underline{\underline{\Psi }}\phantom{\rule{0.166667em}{0ex}}\underline{a}\phantom{\rule{0.166667em}{0ex}}-\phantom{\rule{0.166667em}{0ex}}\underline{y}\phantom{\rule{0.277778em}{0ex}}∥}_{2}\end{array}$

In the following, it is assumed that the rank of matrix $\underline{\underline{\Psi }}$ is equal to $P$.

Several algorithms can be applied to compute the least squares solution, as shown in the sequel.

Numerical solving schemes

Method of normal equations

It is shown that the solution of the least squares problem satisfies the so-called normal equations, which read, in matrix notation:

 $\begin{array}{c}\hfill {\underline{\underline{\Psi }}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{\underline{\Psi }}\phantom{\rule{0.277778em}{0ex}}\underline{a}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{\underline{\Psi }}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{y}\end{array}$

The matrix $\underline{\underline{C}}\equiv {\underline{\underline{\Psi }}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{\underline{\Psi }}$ is symmetric and positive definite. The system can be solved using the following Cholesky factorization:

 $\begin{array}{c}\hfill \underline{\underline{C}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{\underline{R}}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{\underline{R}}\end{array}$

where $\underline{\underline{R}}$ is an upper triangular matrix with positive diagonal entries. Solving the normal equations is then equivalent to solving the two following triangular systems, the first by forward substitution and the second by backward substitution:

 $\begin{array}{c}\hfill {\underline{\underline{R}}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{z}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{\underline{\Psi }}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{y}\phantom{\rule{2.em}{0ex}},\phantom{\rule{2.em}{0ex}}\underline{\underline{R}}\phantom{\rule{0.277778em}{0ex}}\underline{a}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\underline{z}\end{array}$

It has to be noted that this theoretical approach is seldom used in practice. Indeed, the resulting least squares solution is quite sensitive to small changes in the data (i.e. in $\underline{y}$ and $\underline{\underline{\Psi }}$). More precisely, the normal equations are always worse conditioned than the original overdetermined system, since their condition number is the square of that of the original problem:

 $\begin{array}{c}\hfill \kappa \left({\underline{\underline{\Psi }}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{\underline{\Psi }}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\kappa {\left(\underline{\underline{\Psi }}\right)}^{2}\end{array}$

As a consequence more robust numerical methods should be adopted.
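The scheme above, and the conditioning penalty it incurs, can be checked on an arbitrary random system. The following NumPy sketch (illustrative, not an OpenTURNS routine) solves the normal equations via the Cholesky factorization and compares the condition numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 20, 4
Psi = rng.standard_normal((N, P))   # arbitrary full-rank N x P matrix
y = rng.standard_normal(N)

# Normal equations Psi^T Psi a = Psi^T y, solved via Cholesky C = R^T R.
C = Psi.T @ Psi
L = np.linalg.cholesky(C)           # lower triangular factor, L = R^T
z = np.linalg.solve(L, Psi.T @ y)   # forward substitution: R^T z = Psi^T y
a = np.linalg.solve(L.T, z)         # backward substitution: R a = z

# The result matches a reference least squares solver...
a_ref, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print(np.allclose(a, a_ref))

# ...but the system actually solved is much worse conditioned.
print(np.linalg.cond(C), np.linalg.cond(Psi) ** 2)
```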

Method based on QR factorization

It is shown that the matrix $\underline{\underline{\Psi }}$ can be factorized as follows:

 $\begin{array}{c}\hfill \underline{\underline{\Psi }}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\underline{\underline{Q}}\phantom{\rule{0.277778em}{0ex}}\underline{\underline{R}}\end{array}$

where $\underline{\underline{Q}}$ is an $N$-by-$P$ matrix with orthonormal columns and $\underline{\underline{R}}$ is a $P$-by-$P$ upper triangular matrix. Such a QR decomposition may be constructed using several schemes, such as Gram-Schmidt orthogonalization, Householder reflections or Givens rotations.

In this setup the least squares problem is equivalent to solving:

 $\begin{array}{c}\hfill \underline{\underline{R}}\phantom{\rule{0.277778em}{0ex}}\underline{a}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{\underline{Q}}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{y}\end{array}$

This upper triangular system can be solved by backward substitution.
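The QR-based scheme can be sketched as follows (illustrative NumPy code on a random system, with the back substitution written out explicitly):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 50, 6
Psi = rng.standard_normal((N, P))
y = rng.standard_normal(N)

# Thin QR factorization: Psi = Q R, Q (N x P) with orthonormal columns,
# R (P x P) upper triangular.
Q, R = np.linalg.qr(Psi, mode='reduced')

# Back substitution on R a = Q^T y.
b = Q.T @ y
a = np.zeros(P)
for i in range(P - 1, -1, -1):
    a[i] = (b[i] - R[i, i + 1:] @ a[i + 1:]) / R[i, i]

# Agrees with a reference least squares solver.
a_ref, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print(np.allclose(a, a_ref))
```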

The solving scheme based on Householder QR factorization leads to a relative error that is proportional to:

 $\begin{array}{c}\hfill \kappa \left(\underline{\underline{\Psi }}\right)+{\parallel \underline{r}\parallel }_{2}\kappa {\left(\underline{\underline{\Psi }}\right)}^{2}\end{array}$

where $\underline{r}=\underline{\underline{\Psi }}\phantom{\rule{0.166667em}{0ex}}\underline{a}\phantom{\rule{0.166667em}{0ex}}-\phantom{\rule{0.166667em}{0ex}}\underline{y}$. Thus this error is expected to be much smaller than the one associated with the normal equations provided that the residual $\underline{r}$ is “small”.

Method based on singular value decomposition

The so-called singular value decomposition (SVD) of matrix $\underline{\underline{\Psi }}$ reads:

 $\begin{array}{c}\hfill \underline{\underline{\Psi }}\phantom{\rule{1.em}{0ex}}=\phantom{\rule{1.em}{0ex}}\underline{\underline{U}}\phantom{\rule{0.277778em}{0ex}}\underline{\underline{S}}\phantom{\rule{0.277778em}{0ex}}{\underline{\underline{V}}}^{\mathsf{\text{T}}}\end{array}$

where $\underline{\underline{U}}\phantom{\rule{3.33333pt}{0ex}}\in {ℝ}^{N×N}$ and $\underline{\underline{V}}\phantom{\rule{3.33333pt}{0ex}}\in {ℝ}^{P×P}$ are orthogonal matrices, and $\underline{\underline{S}}\phantom{\rule{3.33333pt}{0ex}}\in {ℝ}^{N×P}$ can be cast as:

 $\begin{array}{c}\hfill \underline{\underline{S}}\phantom{\rule{1.em}{0ex}}=\phantom{\rule{1.em}{0ex}}\left(\begin{array}{c}{\underline{\underline{S}}}_{1}\\ \underline{\underline{0}}\end{array}\right)\end{array}$

In the previous equation, ${\underline{\underline{S}}}_{1}\phantom{\rule{3.33333pt}{0ex}}\in {ℝ}^{P×P}$ is a diagonal matrix containing the singular values ${\sigma }_{1}\ge {\sigma }_{2}\ge \cdots \ge {\sigma }_{P}>0$ of $\underline{\underline{\Psi }}$.

It can be shown that the least squares solution is equal to:

 $\begin{array}{c}\hfill \stackrel{^}{\underline{a}}\phantom{\rule{1.em}{0ex}}=\phantom{\rule{1.em}{0ex}}\underline{\underline{V}}\phantom{\rule{0.277778em}{0ex}}\left(\begin{array}{c}{\underline{\underline{S}}}_{1}^{-1}\\ \underline{\underline{0}}\end{array}\right)\phantom{\rule{0.277778em}{0ex}}{\underline{\underline{U}}}^{\mathsf{\text{T}}}\phantom{\rule{0.277778em}{0ex}}\underline{y}\end{array}$

In practice it is not common to compute a “full” SVD as shown above. Instead, it is often sufficient and more economical in terms of time and memory to compute a reduced version of SVD. The latter reads:

 $\begin{array}{c}\hfill \underline{\underline{\Psi }}\phantom{\rule{1.em}{0ex}}=\phantom{\rule{1.em}{0ex}}{\underline{\underline{U}}}_{1}\phantom{\rule{0.277778em}{0ex}}{\underline{\underline{S}}}_{1}\phantom{\rule{0.277778em}{0ex}}{\underline{\underline{V}}}^{\mathsf{\text{T}}}\end{array}$

where ${\underline{\underline{U}}}_{1}$ is obtained by extracting the $P$ first columns of $\underline{\underline{U}}$.

Note that it is also possible to perform SVD in case of a rank-deficient matrix $\underline{\underline{\Psi }}$. In this case the resulting vector $\stackrel{^}{\underline{a}}$ corresponds to the minimum norm least squares solution.

The computational cost of the method is proportional to $\left(N{P}^{2}+{P}^{3}\right)$ with a factor ranging from 4 to 10, depending on the numerical scheme used to compute the SVD. This cost is higher than that of the normal equations and of the QR factorization. However, SVD is relevant insofar as it provides very valuable information, namely the singular values of matrix $\underline{\underline{\Psi }}$.
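The reduced SVD solution can be sketched in NumPy as follows (illustrative code on an arbitrary random system):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 40, 5
Psi = rng.standard_normal((N, P))
y = rng.standard_normal(N)

# Reduced SVD: Psi = U1 S1 V^T with U1 (N x P) and S1 = diag(s) (P x P).
U1, s, Vt = np.linalg.svd(Psi, full_matrices=False)

# Least squares solution a = V S1^{-1} U1^T y.
a = Vt.T @ ((U1.T @ y) / s)

# Agrees with a reference least squares solver; the singular values s
# come as a by-product.
a_ref, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print(np.allclose(a, a_ref))
```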

Comparison of the methods

Several conclusions may be drawn concerning the various methods considered so far:

• If $N\approx P$, the normal equations and the Householder QR factorization require about the same computational work. If $N\gg P$, the QR approach requires about twice as much work as the normal equations.

• However, QR factorization is more accurate than the normal equations, so it should almost always be preferred in practice.

• SVD is also robust, but it proves to be the most computationally expensive scheme. Nonetheless the singular values are obtained as a by-product, which may be particularly useful for analytical and computational purposes.

Other notations

-

Numerical methods to solve least squares problems can be used at several stages of the global methodology, such as:

References and theoretical basics

Details related to the least squares methods may be found in:
• Å. Björck, 1996, “Numerical methods for least squares problems”, SIAM Press, Philadelphia, PA.

6.3.3 Step Resp. Surf.  – Polynomial response surface based on least squares

Mathematical description

Goal

Instead of replacing the model response $h\left(\underline{x}\right)$ with a local approximation around a given set ${\underline{x}}_{0}$ of input parameters as in  [Taylor approximations] , one may seek a global approximation of $h\left(\underline{x}\right)$ over its whole domain of definition. A common choice for this purpose is global polynomial approximation. For the sake of simplicity, a scalar model response $y=h\left(\underline{x}\right)$ will be considered from now on. Nonetheless, the following derivations hold for a vector-valued response.

Principles

In the sequel, one considers global approximations of the model response using:

• a linear function, i.e. a polynomial of degree one;

• a quadratic function, i.e. a polynomial of degree two.

Linear approximation

 $\begin{array}{c}\hfill y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{a}_{0}\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}{a}_{i}\phantom{\rule{0.277778em}{0ex}}{x}_{i}\end{array}$

where $\left({a}_{j}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}j=0,\cdots ,{n}_{X}\right)$ is a set of unknown coefficients.

Quadratic approximation

 $\begin{array}{c}\hfill y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{a}_{0}\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}{a}_{i}\phantom{\rule{0.277778em}{0ex}}{x}_{i}\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}\sum _{j=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}{a}_{i,j}\phantom{\rule{0.277778em}{0ex}}{x}_{i}\phantom{\rule{0.277778em}{0ex}}{x}_{j}\end{array}$

where the ${a}_{i,j}$'s are additional unknown coefficients.

General expression

The two previous equations may be recast using a unique formalism as follows:

 $\begin{array}{c}\hfill y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{j=0}^{P-1}\phantom{\rule{0.277778em}{0ex}}{a}_{j}\phantom{\rule{0.277778em}{0ex}}{\psi }_{j}\left(\underline{x}\right)\end{array}$

where $P$ denotes the number of terms, which is equal to $\left({n}_{X}+1\right)$ (resp. $\left(1+2{n}_{X}+{n}_{X}\left({n}_{X}-1\right)/2\right)$) when using a linear (resp. quadratic) approximation, and the family $\left({\psi }_{j},j=0,\cdots ,P-1\right)$ gathers the constant monomial 1, the monomials of degree one ${x}_{i}$ and possibly the cross-terms ${x}_{i}{x}_{j}$ as well as the monomials of degree two ${x}_{i}^{2}$. Using the vector notation $\underline{a}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}{\left({a}_{0},\cdots ,{a}_{P-1}\right)}^{𝖳}$ and $\underline{\psi }\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}{\left({\psi }_{0}\left(\underline{x}\right),\cdots ,{\psi }_{P-1}\left(\underline{x}\right)\right)}^{𝖳}$, this may be rewritten as:

 $\begin{array}{c}\hfill y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{a}}^{𝖳}\phantom{\rule{0.277778em}{0ex}}\underline{\psi }\left(\underline{x}\right)\end{array}$

Computation of the coefficients

A global approximation of the model response over its whole domain of definition is sought. To this end, the coefficients ${a}_{j}$ may be computed using a least squares regression approach. In this context, an experimental design $\underline{𝒳}=\left({x}^{\left(1\right)},\cdots ,{x}^{\left(N\right)}\right)$, i.e. a set of realizations of the input parameters, is required, as well as the corresponding model evaluations $\underline{𝒴}=\left({y}^{\left(1\right)},\cdots ,{y}^{\left(N\right)}\right)$. Note that the experimental design $\underline{𝒳}$ may be built using the various techniques introduced in  [Min-Max Approach using Design of Experiments] .

The following minimization problem has to be solved:

 $\begin{array}{c}\hfill \text{Find}\phantom{\rule{1.em}{0ex}}\stackrel{^}{\underline{a}}\phantom{\rule{1.em}{0ex}}\text{that}\phantom{\rule{4.pt}{0ex}}\text{minimizes}\phantom{\rule{1.em}{0ex}}𝒥\left(\underline{a}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}\phantom{\rule{0.277778em}{0ex}}{\left({y}^{\left(i\right)}\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}{\underline{a}}^{𝖳}\underline{\psi }\left({\underline{x}}^{\left(i\right)}\right)\right)}^{2}\end{array}$

The solution is given by:

 $\begin{array}{c}\hfill \stackrel{^}{\underline{a}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\left({\underline{\underline{\Psi }}}^{𝖳}\underline{\underline{\Psi }}\right)}^{-1}\phantom{\rule{0.277778em}{0ex}}{\underline{\underline{\Psi }}}^{𝖳}\phantom{\rule{0.277778em}{0ex}}\underline{𝒴}\end{array}$

where:

 $\begin{array}{c}\hfill \underline{\underline{\Psi }}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left({\psi }_{j}\left({\underline{x}}^{\left(i\right)}\right)\phantom{\rule{0.277778em}{0ex}},\phantom{\rule{0.277778em}{0ex}}i=1,\cdots ,N\phantom{\rule{0.277778em}{0ex}},\phantom{\rule{0.277778em}{0ex}}j=0,\cdots ,P-1\right)\end{array}$

It is clear that the above equation is only valid for a full-rank information matrix. A necessary condition is that the size $N$ of the experimental design be no less than the number $P$ of coefficients to estimate. In practice, it is not recommended to invert ${\underline{\underline{\Psi }}}^{𝖳}\underline{\underline{\Psi }}$ directly, since the solution may be particularly sensitive to ill-conditioning of this matrix. The least squares problem is instead solved using more robust numerical methods such as the singular value decomposition (SVD) or the QR decomposition (see [Numerical methods to solve least squares problems] for more details).
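As an end-to-end illustration (a NumPy sketch with a hypothetical model $h$ and a simple random design, not the OpenTURNS interface), one may assemble the information matrix for a full quadratic basis in ${n}_{X}=2$ and solve the least squares problem with an SVD-based routine:

```python
import numpy as np

# Hypothetical model to be approximated (n_X = 2 inputs, scalar response).
def h(x):
    return np.exp(-x[0]) * np.cos(x[1]) + x[0] * x[1]

# Experimental design (here plain uniform sampling) and model evaluations.
rng = np.random.default_rng(3)
N = 30
X = rng.uniform(-1.0, 1.0, size=(N, 2))
Y = np.array([h(x) for x in X])

def psi(x):
    # Full quadratic basis: 1, x1, x2, x1*x2, x1^2, x2^2 (P = 6 terms).
    return np.array([1.0, x[0], x[1], x[0] * x[1], x[0] ** 2, x[1] ** 2])

Psi = np.array([psi(x) for x in X])           # N x P information matrix
a, *_ = np.linalg.lstsq(Psi, Y, rcond=None)   # SVD-based solve, no explicit inverse

def h_hat(x):
    """Quadratic polynomial response surface."""
    return a @ psi(x)

x_new = np.array([0.2, -0.3])
print(h(x_new), h_hat(x_new))
```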

Other notations

This method is aimed at building a response surface prior to tackling Step C “Uncertainty Propagation”. It requires an experimental design together with the corresponding model evaluations. Then the various uncertainty propagation techniques may be applied at a negligible computational cost.

References and theoretical basics

Provided that the model under consideration is sufficiently smooth, linear or quadratic polynomials may be accurate approximations. This is especially true for studying the model response not too far from its mean value, i.e. for a central trend analysis. Nonetheless, one should be careful when the estimation of the probability of exceeding a threshold is of interest, since the polynomial approximation may prove poor in the tails of the response distribution.

Numerical details related to the least squares method may be found in:

• Å. Björck, 1996, “Numerical methods for least squares problems”, SIAM Press, Philadelphia, PA.

6.3.4 Step Resp. Surf.  – Polynomial response surface based on sparse least squares

Mathematical description

Goal

The response of the model under consideration may be globally approximated by a multivariate polynomial. In this setup, the response is characterized by a finite number of coefficients which have to be estimated. This may be achieved by least squares (see [Polynomial response surface based on least squares] ). However, this method cannot be applied if the number of unknown coefficients is of similar size to the number of model evaluations, or even possibly greater. Indeed, the resulting underdetermined least squares problem would be ill-posed.

To overcome this difficulty, sparse least squares schemes may be employed (they are also known as variable selection techniques in statistics). These methods are aimed at identifying a small set of significant coefficients in the model response approximation. Eventually, a sparse polynomial response surface, i.e. a polynomial which contains only a small number of non-zero coefficients, is obtained from a reduced number of possibly costly model evaluations. In the sequel, we focus on sparse regression schemes based on ${L}_{1}$-penalization.

In fact, the proposed approaches do not provide a single approximation but a collection of less and less sparse approximations, from which an optimal one may eventually be retained with respect to a given criterion.

Principles

Context

Consider the mathematical model $h$ of a physical system depending on ${n}_{X}$ input parameters $\underline{x}={\left({x}_{1},\cdots ,{x}_{{n}_{X}}\right)}^{𝖳}$. Note that these input variables are assumed to be deterministic in this section. The model response may be approximated by a polynomial as follows:

 $h\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{j=0}^{P-1}\phantom{\rule{0.277778em}{0ex}}{a}_{j}\phantom{\rule{0.277778em}{0ex}}{\psi }_{j}\left(\underline{x}\right)$ (144)

Let us consider a set of values taken by the input vector (i.e. an experimental design) $\underline{\underline{𝒳}}={\left({\underline{x}}^{\left(1\right)},\cdots ,{\underline{x}}^{\left(N\right)}\right)}^{𝖳}$ as well as the vector $\underline{𝒴}={\left(h\left({\underline{x}}^{\left(1\right)}\right),\cdots ,h\left({\underline{x}}^{\left(N\right)}\right)\right)}^{𝖳}={\left({y}^{\left(1\right)},\cdots ,{y}^{\left(N\right)}\right)}^{𝖳}$ of the corresponding model evaluations. It is assumed that the number of terms $P$ in the polynomial basis is of similar size to $N$, or even possibly significantly larger than $N$. In such a situation it is not possible to compute the polynomial coefficients by ordinary least squares, since the corresponding system is ill-posed. Methods that may be employed as an alternative are outlined in the sequel.

LASSO

The so-called LASSO (for Least absolute shrinkage and selection operator) method is based on a ${ℒ}^{1}$-penalized regression as follows:

 $\mathrm{Minimize}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\sum _{i=1}^{N}\phantom{\rule{0.277778em}{0ex}}{\left(h\left({\underline{x}}^{\left(i\right)}\right)\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}{\underline{a}}^{𝖳}\underline{\psi }\left({\underline{x}}^{\left(i\right)}\right)\right)}^{2}\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}C\phantom{\rule{0.277778em}{0ex}}{\parallel \underline{a}\parallel }_{1}$ (145)

where ${\parallel \underline{a}\parallel }_{1}={\sum }_{j=0}^{P-1}|{a}_{j}|$ and $C$ is a non-negative constant. A nice feature of LASSO is that it provides a sparse metamodel, i.e. it discards the insignificant terms from the set of predictors. The larger the value of the tuning parameter $C$, the sparser the resulting response surface.

For a given $C\ge 0$, the solving procedure may be implemented via quadratic programming. However, obtaining the whole set of coefficient estimates for $C$ varying from 0 to a maximum value may be computationally expensive, since it requires solving the optimization problem over a fine grid of values of $C$.
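For illustration, a single LASSO problem with penalty $C{\parallel \underline{a}\parallel }_{1}$ can also be solved by a simple proximal-gradient (soft-thresholding) iteration rather than quadratic programming; the sketch below, including the random model matrix and the sparse ground truth, is purely hypothetical:

```python
import numpy as np

def lasso_ista(Psi, y, C, n_iter=5000):
    """Minimize ||Psi a - y||_2^2 + C ||a||_1 by proximal gradient (ISTA)."""
    step = 1.0 / (2.0 * np.linalg.norm(Psi, 2) ** 2)  # 1 / Lipschitz constant
    a = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Psi.T @ (Psi @ a - y)
        u = a - step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty.
        a = np.sign(u) * np.maximum(np.abs(u) - C * step, 0.0)
    return a

# Hypothetical data with a sparse underlying coefficient vector.
rng = np.random.default_rng(4)
N, P = 40, 10
Psi = rng.standard_normal((N, P))
a_true = np.zeros(P)
a_true[[0, 3]] = [2.0, -1.5]
y = Psi @ a_true + 0.01 * rng.standard_normal(N)

a_hat = lasso_ista(Psi, y, C=1.0)
print(np.nonzero(np.abs(a_hat) > 1e-6)[0])   # indices of the retained terms
```

The insignificant coefficients are set exactly to zero by the thresholding step, which is the sparsity mechanism described above.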

Forward stagewise regression

Another procedure, known as forward stagewise regression, appears to be different from LASSO, but turns out to provide similar results. The procedure first picks the predictor that is most correlated with the vector of observations. However, the value of the corresponding coefficient is only increased by a small amount. Then the predictor with the largest correlation with the current residual (possibly the same one as in the previous step) is picked, and so on. Let us introduce the vector notation:

 ${\underline{\psi }}_{j}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\left({\psi }_{j}\left({\underline{x}}^{\left(1\right)}\right),\cdots ,{\psi }_{j}\left({\underline{x}}^{\left(N\right)}\right)\right)}^{𝖳}$ (146)

The forward stagewise algorithm is outlined below:

1. Start with $\underline{R}=𝒴$ and ${a}_{0}=\cdots ={a}_{P-1}=0$.

2. Find the predictor ${\underline{\psi }}_{j}$ that is most correlated with $\underline{R}$.

3. Update ${\stackrel{^}{a}}_{j}={\stackrel{^}{a}}_{j}+{\delta }_{j}$, where ${\delta }_{j}=\epsilon ·\text{sign}\left({\underline{\psi }}_{j}^{𝖳}\underline{R}\right)$.

4. Update $\underline{R}=\underline{R}-{\delta }_{j}{\underline{\psi }}_{j}$ and repeat Steps 2 and 3 until no predictor has any correlation with $\underline{R}$.

Note that parameter $\epsilon$ has to be set equal to a small value in practice, say $\epsilon =0.01$. This procedure is known to be more stable than traditional stepwise regression.
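The four steps above can be sketched directly in NumPy (illustrative code on a synthetic noiseless system; the design and coefficients are hypothetical):

```python
import numpy as np

def forward_stagewise(Psi, y, eps=0.01, n_steps=2000):
    """Forward stagewise regression on the columns of Psi (the predictors)."""
    P = Psi.shape[1]
    a = np.zeros(P)
    r = y.copy()                           # Step 1: R = Y, all coefficients 0
    for _ in range(n_steps):
        corr = Psi.T @ r
        j = np.argmax(np.abs(corr))        # Step 2: most correlated predictor
        if np.abs(corr[j]) < 1e-10:        # no remaining correlation
            break
        delta = eps * np.sign(corr[j])
        a[j] += delta                      # Step 3: small move of one coefficient
        r -= delta * Psi[:, j]             # Step 4: update the residual
    return a

rng = np.random.default_rng(5)
N, P = 50, 8
Psi = rng.standard_normal((N, P))
a_true = np.zeros(P)
a_true[[1, 4]] = [1.0, -0.8]
y = Psi @ a_true                           # noiseless observations

a_hat = forward_stagewise(Psi, y)
print(a_hat.round(2))
```

With a small $\epsilon$ the coefficients approach the least squares solution with granularity $\epsilon$, at the price of many tiny steps.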

Least Angle Regression

Least Angle Regression (LAR) may be viewed as a version of forward stagewise that uses mathematical derivations to speed up the computations. Indeed, instead of taking many small steps with the basis term most correlated with the current residual $\underline{R}$, the related coefficient is directly increased up to the point where some other basis predictor has as much correlation with $\underline{R}$. Then the new predictor is entered, and the procedure continues. The LAR algorithm is detailed below:

1. Initialize the coefficients to ${a}_{0},\cdots ,{a}_{P-1}=0$. Set the initial residual equal to the vector of observations $𝒴$.

2. Find the vector ${\underline{\psi }}_{j}$ which is most correlated with the current residual.

3. Move ${a}_{j}$ from 0 toward the least-square coefficient of the current residual on ${\underline{\psi }}_{j}$, until some other predictor ${\underline{\psi }}_{k}$ has as much correlation with the current residual as does ${\underline{\psi }}_{j}$.

4. Move jointly ${\left({a}_{j},{a}_{k}\right)}^{𝖳}$ in the direction defined by their joint least-square coefficient of the current residual on $\left({\underline{\psi }}_{j},{\underline{\psi }}_{k}\right)$, until some other predictor ${\underline{\psi }}_{l}$ has as much correlation with the current residual.

5. Continue this way until $m=min\left(P,N-1\right)$ predictors have been entered.

Steps 3 and 4 correspond to a “move” of the active coefficients toward their least-square value. It corresponds to an updating of the form ${\stackrel{^}{\underline{a}}}^{\left(k+1\right)}={\stackrel{^}{\underline{a}}}^{\left(k\right)}+{\gamma }^{\left(k\right)}{\stackrel{˜}{\underline{w}}}^{\left(k\right)}$. Vector ${\stackrel{˜}{\underline{w}}}^{\left(k\right)}$ and coefficient ${\gamma }^{\left(k\right)}$ are referred to as the LAR descent direction and step, respectively. Both quantities may be derived algebraically.

Note that if $N\ge P$, then the last step of LAR provides the ordinary least-square solution. It is shown that LAR is noticeably efficient since it only requires the computational cost of ordinary least-square regression on $P$ predictors for producing a collection of $m$ metamodels.

LASSO and Forward Stagewise as variants of LAR

It has been shown that with only one modification, the LAR procedure provides in one shot the entire paths of LASSO solution coefficients as the tuning parameter $C$ in Eq.(145) is increased from 0 up to a maximum value. The modified algorithm is as follows:

• Run the LAR procedure from Steps 1 to 4.

• If a non zero coefficient hits zero, discard it from the current metamodel and recompute the current joint least-square direction.

• Continue this way until $m=min\left(P,N-1\right)$ predictors have been entered.

Note that the LAR-based LASSO procedure may take more than $m$ steps since the predictors are allowed to be discarded and introduced later again into the metamodel. In a similar fashion, a limiting version of the forward stagewise method when $\epsilon \to 0$ may be obtained by slightly modifying the original LAR algorithm. In the literature, one commonly uses the label LARS when referring to all these LAR-based algorithms (with S referring to Stagewise and LASSO).

Selection of the optimal LAR solution

The LAR-based approaches (i.e. original LAR, LASSO and forward stagewise) provide a collection of less and less sparse PC approximations. The accuracy of each PC metamodel may be assessed by cross validation [Error estimation based on cross-validation] . Eventually the PC representation associated with the lowest error estimate is retained.

Other notations
Within the global methodology, the method is used to build up a deterministic polynomial approximation of the model response prior to tackling the step C: “Uncertainty propagation”. It requires an experimental design together with the corresponding model evaluations. Then the various uncertainty propagation techniques may be applied at a negligible computational cost.
References and theoretical basics
Provided that the model under consideration is sufficiently smooth, linear or quadratic polynomials may be accurate approximations. This is especially true for studying the model response not too far from its mean value, i.e. for a central trend analysis. Nonetheless, one should be careful when the estimation of the probability of exceeding a threshold is of interest, since the polynomial approximation in the tails of the response distribution may prove to be poor.

The principles of the original LAR procedure are detailed in:

• B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, 2004, “Least angle regression”, Annals of Statistics 32, 407–499.

Furthermore, the modifications of LAR allowing one to solve the LASSO and the forward stagewise problems are presented in:

• T. Hastie, J. Taylor, R. Tibshirani, and G. Walther, 2007, “Forward stagewise regression and the monotone Lasso”, Electronic J. Stat. 1, 1–29.

6.3.5 Step Resp. Surf.  – Kriging

Mathematical description

Goal

Kriging (also known as Gaussian process regression) is a Bayesian technique that aims at approximating functions, most often in order to surrogate them because they are expensive to evaluate. In the following it is assumed that the aim is to surrogate a scalar-valued model $ℳ:\underline{x}↦y$. Note that the OpenTURNS implementation of Kriging can deal with vector-valued functions ($ℳ:\underline{x}↦\underline{y}$), but it simply loops over each output. It is also assumed that the model was run over a design of experiments so as to produce a set of observations gathered in the following dataset: $\left(\left({\underline{x}}^{\left(i\right)},{y}^{\left(i\right)}\right),i=1,...,m\right)$. Ultimately Kriging aims at producing a predictor (also known as a response surface or metamodel) denoted $\stackrel{˜}{ℳ}$.

Mathematical framework

We put the following Gaussian process prior on the model $ℳ$:

 $Y\left(\underline{x}\right)={\underline{f}\left(\underline{x}\right)}^{t}\underline{\beta }+Z\left(\underline{x}\right)$ (147)

where:

• ${\underline{f}\left(\underline{x}\right)}^{t}\underline{\beta }$ is a generalized linear model based upon a functional basis $\underline{f}=\left({f}_{j},j=1,...,p\right)$ and a vector of coefficients $\underline{\beta }=\left({\beta }_{j},j=1,...,p\right)$,

• $Z$ is a zero-mean stationary Gaussian process whose covariance function reads:

 $𝔼\left[Z\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}Z\left(\underline{{x}^{\text{'}}}\right)\right]={\sigma }^{2}R\left(\underline{x}-\underline{{x}^{\text{'}}},\underline{\theta }\right)$ (148)

where ${\sigma }^{2}>0$ is the variance and $R$ is the correlation function, which solely depends on the difference $\underline{x}-\underline{{x}^{\text{'}}}$ between the input points and on a vector of parameters $\underline{\theta }\in {ℝ}^{{n}_{\theta }}$.

Under the Gaussian process prior assumption, the observations $\underline{Y}=\left({Y}_{i},i=1,...,m\right)$ and a prediction $Y\left(\underline{x}\right)$ at some unobserved input $\underline{x}$ are jointly normally distributed:

 $\left(\begin{array}{c}\underline{Y}\\ Y\left(\underline{x}\right)\end{array}\right)\sim {𝒩}_{m+1}\left(\left(\begin{array}{c}\underline{\underline{F}}\underline{\beta }\\ {\underline{f}\left(\underline{x}\right)}^{t}\underline{\beta }\end{array}\right),\phantom{\rule{0.166667em}{0ex}}{\sigma }^{2}\left(\begin{array}{cc}\underline{\underline{R}}& \underline{r}\left(\underline{x}\right)\\ \underline{r}{\left(\underline{x}\right)}^{t}& 1\end{array}\right)\right)$ (149)

where:

 $\underline{\underline{F}}=\left[{f}_{j}\left({\underline{x}}^{\left(i\right)}\right),i=1,...,m,j=1,...,p\right]$ (150)

is the regression matrix,

 $\underline{\underline{R}}=\left[R\left({\underline{x}}^{\left(i\right)}-{\underline{x}}^{\left(j\right)},\underline{\theta }\right),i,\phantom{\rule{0.166667em}{0ex}}j=1,...,m\right]$ (151)

is the observations' correlation matrix, and:

 $\underline{r}\left(\underline{x}\right)={\left(R\left(\underline{x}-{\underline{x}}^{\left(i\right)},\underline{\theta }\right),i=1,...,m\right)}^{t}$ (152)

is the vector of cross-correlations between the prediction and the observations.

As such, the Kriging predictor is defined as the following conditional distribution:

 $\stackrel{˜}{Y}\left(\underline{x}\right)=\left[Y\left(\underline{x}\right)\mid \underline{Y}=\underline{y},\underline{\theta }={\underline{\theta }}^{*},{\sigma }^{2}={\left({\sigma }^{2}\right)}^{*}\right]$ (153)

where ${\underline{\theta }}^{*}$ and ${\left({\sigma }^{2}\right)}^{*}$ are the maximum likelihood estimates of the correlation parameters $\underline{\theta }$ and variance ${\sigma }^{2}$ (see references).

It can be shown (see references) that the predictor $\stackrel{˜}{Y}\left(\underline{x}\right)$ is also Gaussian:

 $\stackrel{˜}{Y}\left(\underline{x}\right)\sim {𝒩}_{1}\left({\mu }_{\stackrel{˜}{Y}}\left(\underline{x}\right),{\sigma }_{\stackrel{˜}{Y}}^{2}\left(\underline{x}\right)\right)$ (154)
• with mean:

 ${\mu }_{\stackrel{˜}{Y}}\left(\underline{x}\right)={\underline{f}\left(\underline{x}\right)}^{t}\stackrel{˜}{\underline{\beta }}+{\underline{r}\left(\underline{x}\right)}^{t}\underline{\gamma }$ (155)

where $\underline{\stackrel{˜}{\beta }}$ is the generalized least squares solution of the underlying linear regression problem:

 $\stackrel{˜}{\underline{\beta }}={\left({\underline{\underline{F}}}^{t}{\underline{\underline{R}}}^{-1}\underline{\underline{F}}\right)}^{-1}{\underline{\underline{F}}}^{t}{\underline{\underline{R}}}^{-1}\underline{y}$ (156)

and

 $\underline{\gamma }={\underline{\underline{R}}}^{-1}\left(\underline{y}-\underline{\underline{F}}\stackrel{˜}{\underline{\beta }}\right)$ (157)
• and variance:

 ${\sigma }_{\stackrel{˜}{Y}}^{2}\left(\underline{x}\right)={\sigma }^{2}\left[1-{\underline{r}\left(\underline{x}\right)}^{t}{\underline{\underline{R}}}^{-1}\underline{r}\left(\underline{x}\right)+{\underline{u}\left(\underline{x}\right)}^{t}{\left({\underline{\underline{F}}}^{t}{\underline{\underline{R}}}^{-1}\underline{\underline{F}}\right)}^{-1}\underline{u}\left(\underline{x}\right)\right]$ (158)

where:

 $\underline{u}\left(\underline{x}\right)={\underline{\underline{F}}}^{t}{\underline{\underline{R}}}^{-1}\underline{r}\left(\underline{x}\right)-\underline{f}\left(\underline{x}\right)$ (159)

For now only the Kriging mean is returned as the metamodel $\stackrel{˜}{ℳ}\equiv {\mu }_{\stackrel{˜}{Y}}$.
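Equations (155) to (159) can be checked on a toy example. The sketch below (plain numpy, not the OpenTURNS implementation) assumes a 1-D input, a constant trend $f\left(x\right)=1$, a squared-exponential correlation function (one common choice for $R$) and fixed hyperparameters $\underline{\theta }$ and ${\sigma }^{2}$, whereas in practice these are estimated by maximum likelihood. It illustrates in particular that the Kriging mean interpolates the observations, with zero predictive variance there:

```python
import numpy as np

def corr(x1, x2, theta):
    # Squared-exponential correlation, a common choice for R(x - x', theta)
    return np.exp(-((x1 - x2) / theta) ** 2)

# Hypothetical 1-D training data
x_train = np.array([0.0, 0.4, 0.7, 1.0])
y_train = np.sin(4.0 * x_train)
m = len(x_train)
theta, sigma2 = 0.3, 1.0          # assumed fixed (normally fitted by maximum likelihood)

F = np.ones((m, 1))               # constant trend: f(x) = 1
R = corr(x_train[:, None], x_train[None, :], theta)
R_inv = np.linalg.inv(R)

# Generalized least-squares trend coefficient, Eq. (156), and gamma, Eq. (157)
beta = np.linalg.solve(F.T @ R_inv @ F, F.T @ R_inv @ y_train)
gamma = R_inv @ (y_train - F @ beta)

def kriging(x):
    r = corr(x, x_train, theta)   # cross-correlations, Eq. (152)
    mu = beta[0] + r @ gamma      # predictive mean, Eq. (155)
    u = F.T @ R_inv @ r - 1.0     # Eq. (159) with f(x) = 1
    var = sigma2 * (1.0 - r @ R_inv @ r + u @ np.linalg.solve(F.T @ R_inv @ F, u))
    return mu, var

# At a training point the predictor interpolates: mean = observation, variance = 0
mu0, var0 = kriging(0.4)
```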

Other notations

Kriging may also be referred to as Gaussian process regression.

References and theoretical basics
More resources in Kriging may be found in:
• V. Dubourg, 2011, “Adaptative surrogate models for reliability and reliability-based design optimization”, University Blaise Pascal - Clermont II. http://bruno.sudret.free.fr/dubourg.html

• S. Lophaven, H. Nielsen and J. Sondergaard, 2002, “DACE, A Matlab kriging toolbox”, Technical University of Denmark. http://www2.imm.dtu.dk/ hbni/dace/

• T. Santner, B. Williams and W. Notz, 2003. “The design and analysis of computer experiments”, Springer, New York.

• C. Rasmussen and C. Williams, 2006, T. Dietterich (Ed.), “Gaussian processes for machine learning”, MIT Press.

6.3.6 Step Resp. Surf.  – Assessment of the polynomial approximations by cross validation

Mathematical description

Goal

Once a polynomial response surface $\stackrel{^}{h}$ has been built up, it is crucial to estimate the approximation error, i.e. the discrepancy between the response surface and the true model response in terms of a suitable norm such as the ${L}_{2}$-norm:

 $Err\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{∥h\left(\underline{x}\right)\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)∥}_{{L}_{2}}^{2}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\int }_{𝒟}\phantom{\rule{0.277778em}{0ex}}{\left(h\left(\underline{x}\right)\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}\stackrel{^}{h}\left(\underline{x}\right)\right)}^{2}\phantom{\rule{0.277778em}{0ex}}d\underline{x}$ (160)

where $𝒟$ denotes the domain of variation of the input parameters. As any model evaluation may be time-consuming, error estimates that are only based on already performed computer experiments are of interest. The so-called cross validation techniques may be used in this context.

Principles

Any cross-validation scheme consists in dividing the data sample (i.e. the experimental design) into two subsamples. A metamodel $\stackrel{^}{h}$ is built from one subsample, i.e. the training set, and its performance is assessed by comparing its predictions to the other subset, i.e. the test set. A refinement of the method is the $\nu$-fold cross-validation, in which the observations are randomly assigned to one of $\nu$ partitions of nearly equal size. The learning set contains in turn all but one of the partitions which is considered as the test set. The generalization error is estimated for each of the $\nu$ sets and then averaged over $\nu$.
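As an illustration, a minimal $\nu$-fold cross-validation loop may be sketched as follows (plain numpy, hypothetical data; the metamodel is an ordinary least-squares polynomial fit):

```python
import numpy as np

def k_fold_error(Psi, y, nu=5, seed=0):
    """nu-fold cross-validation estimate of the generalization error
    of a least-squares metamodel with design matrix Psi."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))           # random assignment to partitions
    errs = []
    for test in np.array_split(idx, nu):
        train = np.setdiff1d(idx, test)     # learning set = all but one partition
        coef, *_ = np.linalg.lstsq(Psi[train], y[train], rcond=None)
        resid = y[test] - Psi[test] @ coef  # prediction errors on the test set
        errs.append(np.mean(resid ** 2))
    return np.mean(errs)                    # average over the nu sets

# Quadratic data fitted with a quadratic basis: the CV error stays near
# the noise level (variance 0.05**2 = 0.0025 here).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * x - x ** 2 + 0.05 * rng.standard_normal(200)
Psi = np.column_stack([np.ones_like(x), x, x ** 2])
err = k_fold_error(Psi, y)
```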

Classical leave-one-out error estimate

The leave-one-out (LOO) cross-validation corresponds to the special case of $\nu$-fold cross-validation with $\nu =N$. It is recalled that a $P$-term polynomial approximation of the model response reads:

 $\stackrel{^}{h}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{j=0}^{P-1}\phantom{\rule{0.277778em}{0ex}}{\stackrel{^}{a}}_{j}\phantom{\rule{0.277778em}{0ex}}{\psi }_{j}\left(\underline{x}\right)$ (161)

where the ${\stackrel{^}{a}}_{j}$'s are estimates of the coefficients obtained by a specific method, e.g. least squares.

Let us denote by ${\stackrel{^}{h}}^{\left(-i\right)}$ the approximation that has been built from the experimental design $𝒳\setminus \left\{{\underline{x}}^{\left(i\right)}\right\}$, i.e. when removing the $i$-th observation. The predicted residual is defined as the difference between the model evaluation at ${\underline{x}}^{\left(i\right)}$ and its prediction based on ${\stackrel{^}{h}}^{\left(-i\right)}$:

 ${\Delta }^{\left(i\right)}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left({\underline{x}}^{\left(i\right)}\right)-{\stackrel{^}{h}}^{\left(-i\right)}\left({\underline{x}}^{\left(i\right)}\right)$ (162)

The expected risk is then estimated by the following LOO error:

 $Er{r}_{LOO}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\frac{1}{N}\sum _{i=1}^{N}{{\Delta }^{\left(i\right)}}^{2}$ (163)

A nice feature of the LOO method is that the quantity $Er{r}_{LOO}$ may be computed without performing further regression calculations when the PC coefficients have been estimated by regression. Indeed, the predicted residuals are then given in closed form by:

 ${\Delta }^{\left(i\right)}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\frac{h\left({\underline{x}}^{\left(i\right)}\right)-\stackrel{^}{h}\left({\underline{x}}^{\left(i\right)}\right)}{1-{h}_{i}}$ (164)

where ${h}_{i}$ is the $i$-th diagonal term of the matrix $\underline{\Psi }{\left({\underline{\Psi }}^{𝖳}\underline{\Psi }\right)}^{-1}{\underline{\Psi }}^{𝖳}$, using the notation:

 ${\underline{\Psi }}_{ij}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{{\psi }_{j}\left({\underline{x}}^{\left(i\right)}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},i=1,\cdots ,N\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}j=0,\cdots ,P-1\right\}$ (165)

In practice, one often computes the following normalized LOO error:

 ${\epsilon }_{LOO}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\frac{Er{r}_{LOO}}{\stackrel{^}{\mathrm{Cov}\left[𝒴\right]}}$ (166)

where $\stackrel{^}{\mathrm{Cov}\left[𝒴\right]}$ denotes the empirical covariance of the response sample $𝒴$:

 $\stackrel{^}{\mathrm{Cov}\left[𝒴\right]}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\frac{1}{N-1}\phantom{\rule{0.277778em}{0ex}}\sum _{i=1}^{N}\phantom{\rule{0.277778em}{0ex}}{\left({y}^{\left(i\right)}\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}\overline{𝒴}\right)}^{2}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\overline{𝒴}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\frac{1}{N}\phantom{\rule{0.277778em}{0ex}}\sum _{i=1}^{N}\phantom{\rule{0.277778em}{0ex}}{y}^{\left(i\right)}$ (167)
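The shortcut of Eq. (164) can be verified numerically. The sketch below (plain numpy, hypothetical data) computes the predicted residuals from a single least-squares fit via the diagonal of the hat matrix, and checks them against a brute-force leave-one-out loop:

```python
import numpy as np

# Hypothetical experimental design and noisy model observations
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 50)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(50)
Psi = np.column_stack([x ** j for j in range(4)])   # P = 4 polynomial basis
N, P = Psi.shape

coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
y_hat = Psi @ coef

# Diagonal of the hat matrix Psi (Psi^T Psi)^{-1} Psi^T, as in Eq. (164)-(165)
H_diag = np.einsum('ij,ji->i', Psi, np.linalg.solve(Psi.T @ Psi, Psi.T))
delta = (y - y_hat) / (1.0 - H_diag)                # predicted residuals, Eq. (164)
err_loo = np.mean(delta ** 2)                       # LOO error, Eq. (163)
eps_loo = err_loo / np.var(y, ddof=1)               # normalized error, Eq. (166)

# Brute-force check: actually refit with each observation left out
delta_brute = np.empty(N)
for i in range(N):
    mask = np.arange(N) != i
    c, *_ = np.linalg.lstsq(Psi[mask], y[mask], rcond=None)
    delta_brute[i] = y[i] - Psi[i] @ c
```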

Corrected leave-one-out error estimate

A penalized variant of ${\epsilon }_{LOO}$ may be used in order to increase its robustness with respect to overfitting, i.e. to penalize a large number of terms in the PC expansion compared to the size of the experimental design:

 ${\epsilon }_{LOO}^{*}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\epsilon }_{LOO}\phantom{\rule{0.166667em}{0ex}}T\left(P,N\right)$ (168)

The penalty factor is defined by:

 $T\left(P,N\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\frac{N}{N-P}\phantom{\rule{0.277778em}{0ex}}\left(1\phantom{\rule{0.277778em}{0ex}}+\phantom{\rule{0.277778em}{0ex}}\frac{\text{tr}\left({\underline{\underline{C}}}_{emp}^{-1}\right)}{N}\right)$ (169)

where:

 ${\underline{\underline{C}}}_{emp}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\frac{1}{N}{\underline{\underline{\Psi }}}^{𝖳}\phantom{\rule{0.277778em}{0ex}}\underline{\underline{\Psi }}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\underline{\underline{\Psi }}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{{\psi }_{j}\left({\underline{x}}^{\left(i\right)}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}i=1,\cdots ,N\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}j=0,\cdots ,P-1\right\}$ (170)
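For instance, the penalty factor may be computed as follows (a plain numpy sketch). Note that for an orthonormal design, where ${\underline{\underline{C}}}_{emp}$ is the identity, $T\left(P,N\right)$ reduces to $\frac{N}{N-P}\left(1+\frac{P}{N}\right)$, which is close to 1 when $N\gg P$:

```python
import numpy as np

def correction_factor(Psi):
    """Penalty T(P, N) of Eq. (169) for a design matrix Psi of shape (N, P)."""
    N, P = Psi.shape
    C_emp = (Psi.T @ Psi) / N                      # empirical matrix, Eq. (170)
    return N / (N - P) * (1.0 + np.trace(np.linalg.inv(C_emp)) / N)

# Orthonormal design with N = 100, P = 5: C_emp = I, so
# T = 100/95 * (1 + 5/100)
Psi = np.linalg.qr(np.random.default_rng(3).standard_normal((100, 5)))[0] * np.sqrt(100)
T = correction_factor(Psi)
```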
Other notations
Leave-one-out cross validation is also known as jackknife in statistics.

Within the global methodology, the method is used to assess the accuracy of a polynomial response surface of a model output prior to tackling the step C: “Uncertainty propagation”.
References and theoretical basics
The use of leave-one-out error goes back to:
• D. Allen, 1971, “The prediction sum of squares as a criterion for selecting prediction variables”, Technical Report 23, Dept. of Statistics, University of Kentucky.

The definition of the penalized variant has been inspired from:

• O. Chapelle, V. Vapnik, and Y. Bengio, 2002, “Model selection for small sample regression”, Machine Learning 48 (1), 9–23.

6.3.7 Step Resp. Surf.  – Functional Chaos Expansion

Mathematical description

Goal

Accounting for the joint probability density function (PDF) ${f}_{\underline{X}}\left(\underline{x}\right)$ of the input random vector $\underline{X}$, one seeks the joint PDF of the model response $\underline{Y}=h\left(\underline{X}\right)$. This may be achieved using Monte Carlo (MC) simulation ( [Estimating the mean and variance using the Monte Carlo Method] ), i.e. by evaluating the model $h$ at a large number of realizations ${\underline{x}}^{\left(i\right)}$ of $\underline{X}$ and then by estimating the empirical distribution of the corresponding sample of model output $h\left({\underline{x}}^{\left(i\right)}\right)$. However it is well-known that the MC method requires a large number of model evaluations, i.e. a great computational cost, in order to obtain accurate results.

In fact, when using MC simulation, each model run is performed independently. Thus, whereas it is expected that $h\left({\underline{x}}^{\left(i\right)}\right)\approx h\left({\underline{x}}^{\left(j\right)}\right)$ if ${\underline{x}}^{\left(i\right)}\approx {\underline{x}}^{\left(j\right)}$, the model is evaluated twice without accounting for this information. In other words, the functional dependence between $\underline{X}$ and $\underline{Y}$ is lost.

A possible solution to overcome this problem and thereby to reduce the computational cost of MC simulation is to represent the random response $\underline{Y}$ in a suitable functional space, such as the Hilbert space ${L}^{2}$ of square-integrable functions with respect to the PDF ${f}_{\underline{X}}\left(\underline{x}\right)$. Precisely, an expansion of the model response onto an orthonormal basis of ${L}^{2}$ is of interest.

Mathematical framework

The principles of the construction of a (countably infinite) basis of this space, i.e. the PC basis, are described in the sequel.

Principle of the functional chaos expansion

Consider a model $h$ depending on a set of random variables $\underline{X}={\left({X}_{1},\cdots ,{X}_{{n}_{X}}\right)}^{𝖳}$. Functional chaos expansion refers to the class of spectral methods that gathers all types of response surfaces obtained by projecting the physical model onto an orthonormal basis. This class of methods relies on the Hilbert space ${L}^{2}$ of square-integrable functions to construct the response surface.

Assuming that the physical model has a finite second-order moment (i.e. $E\left(\parallel h\left(\underline{X}\right){\parallel }^{2}\right)<+\infty$), it may be uniquely represented as a convergent series expanded onto an orthonormal basis as follows:

 $\begin{array}{c}\hfill h\left(\underline{x}\right)=\sum _{i=0}^{+\infty }{\underline{y}}_{i}{\Phi }_{i}\left(\underline{x}\right).\end{array}$

where the ${\underline{y}}_{i}={\left({y}_{i,1},\cdots ,{y}_{i,{n}_{Y}}\right)}^{𝖳}$'s are deterministic vectors that fully characterize the random vector $\underline{Y}$, and the ${\Phi }_{i}$'s are given basis functions (e.g. orthonormal polynomials, wavelets).

The orthonormality property of the functional chaos basis reads:

 $\begin{array}{c}\hfill 〈{\Phi }_{i},{\Phi }_{j}〉={\int }_{D}{\Phi }_{i}\left(\underline{x}\right){\Phi }_{j}\left(\underline{x}\right)\phantom{\rule{3.33333pt}{0ex}}{f}_{\underline{X}}\left(\underline{x}\right)d\underline{x}={\delta }_{i,j}.\end{array}$

where ${\delta }_{i,j}=1$ if $i=j$ and 0 otherwise. The metamodel $\stackrel{^}{h}\left(\underline{x}\right)$ is represented by a finite subset of coefficients $\left\{{y}_{i},i\in 𝒜\subset ℕ\right\}$ in a truncated basis $\left\{{\Phi }_{i},i\in 𝒜\subset ℕ\right\}$ as follows:

 $\begin{array}{c}\hfill \stackrel{^}{h}\left(\underline{x}\right)=\sum _{i\in 𝒜\subset ℕ}{y}_{i}{\Phi }_{i}\left(\underline{x}\right)\end{array}$

As examples of this type of expansion, one can mention expansions onto wavelet bases, polynomial chaos expansions, etc.
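As a concrete illustration, consider a standard normal input $X$ and the toy model $h\left(x\right)={x}^{3}$, whose exact expansion onto the first orthonormal Hermite polynomials is $3{\Phi }_{1}+\sqrt{6}{\Phi }_{3}$. The sketch below (plain numpy) estimates the coefficients by Monte Carlo projection, ${y}_{i}=E\left[h\left(X\right){\Phi }_{i}\left(X\right)\right]$:

```python
import numpy as np

# First four orthonormal Hermite polynomials (weight: standard normal PDF)
phi = [
    lambda x: np.ones_like(x),
    lambda x: x,
    lambda x: (x ** 2 - 1) / np.sqrt(2),
    lambda x: (x ** 3 - 3 * x) / np.sqrt(6),
]

h = lambda x: x ** 3      # toy model; exact expansion: 3*phi[1] + sqrt(6)*phi[3]

# Coefficients by projection, y_i = E[h(X) phi_i(X)], estimated by Monte Carlo
rng = np.random.default_rng(4)
X = rng.standard_normal(10 ** 6)
y_coef = np.array([np.mean(h(X) * p(X)) for p in phi])
```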

Other notations
-

References and theoretical basics
The chaos representation is also known as orthogonal series decomposition. An elegant mathematical framework of chaos representations may be found in:
• C. Soize and R. Ghanem, 2004, “Physical systems with random uncertainties: chaos representations with arbitrary probability measure”, SIAM J. Sci. Comput. 26(2), 395-410.

6.3.8 Step Resp. Surf.  – Orthogonal polynomials

Mathematical description

Goal

This section provides some mathematical details on sequences of orthogonal polynomials. Some of these sequences will be used to construct the basis of the so-called polynomial chaos expansion.

Mathematical framework

Orthogonal polynomials are associated with an inner product, defined as follows:

Given an interval of orthogonality $\left[\alpha ,\beta \right]$ ($\alpha \in ℝ\cup \left\{-\infty \right\}$, $\beta \in ℝ\cup \left\{\infty \right\}$, $\alpha <\beta$) and a weight function $w\left(x\right)>0$, two polynomials $P$ and $Q$ are orthogonal if and only if:

 $\begin{array}{c}\hfill 〈P,Q〉={\int }_{\alpha }^{\beta }P\left(x\right)Q\left(x\right)\phantom{\rule{3.33333pt}{0ex}}w\left(x\right)dx=0\end{array}$

Therefore, a sequence of orthogonal polynomials ${\left({P}_{n}\right)}_{n\ge 0}$ (${P}_{n}$: polynomial of degree $n$) verifies:

 $\begin{array}{c}\hfill 〈{P}_{m},{P}_{n}〉=0\phantom{\rule{4.pt}{0ex}}\phantom{\rule{4.pt}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}\text{every}\phantom{\rule{4.pt}{0ex}}\phantom{\rule{4.pt}{0ex}}m\ne n\end{array}$

The chosen inner product induces a norm on polynomials in the usual way:

 $\begin{array}{c}\hfill \parallel {P}_{n}\parallel ={〈{P}_{n},{P}_{n}〉}^{1/2}\end{array}$

In the following, we consider weight functions $w\left(x\right)$ corresponding to probability density functions, which satisfy:

 $\begin{array}{c}\hfill {\int }_{\alpha }^{\beta }\phantom{\rule{0.277778em}{0ex}}w\left(x\right)\phantom{\rule{0.277778em}{0ex}}dx\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}1\end{array}$

Moreover, we consider families of orthonormal polynomials ${\left({P}_{n}\right)}_{n\ge 0}$, that is polynomials with a unit norm:

 $\begin{array}{c}\hfill \parallel {P}_{n}\parallel \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}1\end{array}$

Recurrence: Any sequence of orthogonal polynomials has a recurrence formula relating any three consecutive polynomials as follows:

 $\begin{array}{c}\hfill {P}_{n+1}\phantom{\rule{4pt}{0ex}}=\phantom{\rule{4pt}{0ex}}\left({a}_{n}x+{b}_{n}\right)\phantom{\rule{4pt}{0ex}}{P}_{n}\phantom{\rule{4pt}{0ex}}+\phantom{\rule{4pt}{0ex}}{c}_{n}\phantom{\rule{4pt}{0ex}}{P}_{n-1}\end{array}$
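For instance, this three-term recurrence can be used to evaluate the orthonormal Hermite family, using the coefficients ${a}_{n}=1/\sqrt{n+1}$, ${b}_{n}=0$, ${c}_{n}=-\sqrt{n/\left(n+1\right)}$ given in the table that follows (a plain numpy sketch):

```python
import numpy as np

def hermite_orthonormal(n_max, x):
    """Evaluate the orthonormal Hermite polynomials He_0..He_{n_max} at x
    via the three-term recurrence P_{n+1} = (a_n x + b_n) P_n + c_n P_{n-1}."""
    P = [np.ones_like(x), x]          # He_0 = 1, He_1 = a_0 * x = x
    for n in range(1, n_max):
        a_n = 1.0 / np.sqrt(n + 1)
        c_n = -np.sqrt(n / (n + 1))
        P.append(a_n * x * P[n] + c_n * P[n - 1])
    return P

# The recurrence reproduces the closed forms He_2 = (x^2 - 1)/sqrt(2)
# and He_3 = (x^3 - 3x)/sqrt(6)
x = np.linspace(-3.0, 3.0, 7)
P = hermite_orthonormal(3, x)
```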

Orthonormal polynomials with respect to usual probability distributions

The table below shows the particular (normalized) orthogonal polynomials associated with classical continuous weight functions.

Note that the orthonormal polynomials determined by OpenTURNS are orthonormal with respect to the standard representative distribution of the given distribution. Refer to [Standard Parametric Models] to get information on the parameters and the moments of the standard representative of each distribution.

For each family below, the recurrence coefficients $\left({a}_{n},{b}_{n},{c}_{n}\right)$ refer to the three-term recurrence stated above.

• Hermite ${He}_{n}\left(x\right)$: weight $w\left(x\right)=\frac{1}{\sqrt{2\pi }}{e}^{-\frac{{x}^{2}}{2}}$; recurrence coefficients ${a}_{n}=\frac{1}{\sqrt{n+1}}$, ${b}_{n}=0$, ${c}_{n}=-\sqrt{\frac{n}{n+1}}$.

• Legendre ${Le}_{n}\left(x\right)$: weight $w\left(x\right)=\frac{1}{2}{𝕀}_{\left[-1,1\right]}\left(x\right)$; recurrence coefficients ${a}_{n}=\frac{\sqrt{\left(2n+1\right)\left(2n+3\right)}}{n+1}$, ${b}_{n}=0$, ${c}_{n}=-\frac{n\sqrt{2n+3}}{\left(n+1\right)\sqrt{2n-1}}$.

• Generalized Laguerre ${L}_{n}^{\left(\alpha \right)}\left(x\right)$, $\alpha >-1$: weight $w\left(x\right)=\frac{{x}^{k-1}}{\Gamma \left(k\right)}\phantom{\rule{3.33333pt}{0ex}}{e}^{-x}{𝕀}_{\left[0,+\infty \left[}\left(x\right)$; recurrence coefficients ${a}_{n}={\omega }_{n}$, ${b}_{n}=-\left(2n+k+1\right)\phantom{\rule{3.33333pt}{0ex}}{\omega }_{n}$, ${c}_{n}=-\sqrt{\left(n+k\right)n}\phantom{\rule{3.33333pt}{0ex}}{\omega }_{n}$, where ${\omega }_{n}={\left(\left(n+1\right)\left(n+k+1\right)\right)}^{-1/2}$.

• Jacobi ${J}_{n}^{\left(\alpha ,\beta \right)}\left(x\right)$, $\alpha ,\beta >-1$: weight $w\left(x\right)=\frac{{\left(1-x\right)}^{\alpha }{\left(1+x\right)}^{\beta }}{{2}^{\alpha +\beta +1}B\left(\beta +1,\alpha +1\right)}{𝕀}_{\left[-1,1\right]}\left(x\right)$; recurrence coefficients ${a}_{n}={K}_{2,n}\left(2n+\alpha +\beta +2\right)$, ${b}_{n}={K}_{2,n}\frac{\left(\alpha -\beta \right)\left(\alpha +\beta \right)}{2n+\alpha +\beta }$, ${c}_{n}=-\frac{2n+\alpha +\beta +2}{2n+\alpha +\beta }{\left[\left(n+\alpha \right)\left(n+\beta \right)\left(n+\alpha +\beta \right)n\frac{{K}_{1,n}}{2n+\alpha +\beta -1}\right]}^{1/2}$, where ${K}_{1,n}=\frac{2n+\alpha +\beta +3}{\left(n+1\right)\left(n+\alpha +1\right)\left(n+\beta +1\right)\left(n+\alpha +\beta +1\right)}$ and ${K}_{2,n}=\frac{1}{2}\sqrt{\left(2n+\alpha +\beta +1\right){K}_{1,n}}$.

Furthermore, two families of orthonormal polynomials with respect to discrete probability distributions are available in OpenTURNS, namely the so-called Charlier and Krawtchouk polynomials:

• Charlier $C{h}_{n}^{\left(\lambda \right)}\left(x\right)$, $\lambda >0$: probability mass function $\frac{{\lambda }^{k}}{k!}\phantom{\rule{3.33333pt}{0ex}}{e}^{-\lambda }$, $k=0,1,2,\cdots$; recurrence coefficients ${a}_{n}=-\frac{1}{\sqrt{\lambda \left(n+1\right)}}$, ${b}_{n}=\frac{n+\lambda }{\sqrt{\lambda \left(n+1\right)}}$, ${c}_{n}=-\sqrt{1-\frac{1}{n+1}}$.

• Krawtchouk${}^{†}$ $K{r}_{n}^{\left(m,p\right)}\left(x\right)$, $m\in ℕ$, $p\in \left[0,1\right]$: probability mass function $\left(\genfrac{}{}{0pt}{}{m}{k}\right){p}^{k}{\left(1-p\right)}^{m-k}$, $k=0,1,2,\cdots$; recurrence coefficients ${a}_{n}=-\frac{1}{\sqrt{\left(n+1\right)\left(m-n\right)p\left(1-p\right)}}$, ${b}_{n}=\frac{p\left(m-n\right)+n\left(1-p\right)}{\sqrt{\left(n+1\right)\left(m-n\right)p\left(1-p\right)}}$, ${c}_{n}=-\sqrt{\left(1-\frac{1}{n+1}\right)\left(1+\frac{1}{m-n}\right)}$.

${}^{†}$ The Krawtchouk polynomials are only defined up to a degree $n$ equal to $m-1$. Indeed, for $n=m$, some factors of the denominators of the recurrence coefficients would be equal to zero.

To sum up, the distribution types handled by OpenTURNS are reported in the table below together with the associated families of orthonormal polynomials.

 Distribution   Support   Polynomial family
 Normal $𝒩\left(0,1\right)$   $ℝ$   Hermite
 Uniform $𝒰\left(-1,1\right)$   $\left[-1,1\right]$   Legendre
 Gamma $\Gamma \left(k,1,0\right)$   $\left(0,+\infty \right)$   Laguerre
 Beta $B\left(\alpha ,\beta ,-1,1\right)$   $\left(-1,1\right)$   Jacobi
 Poisson $𝒫\left(\lambda \right)$   $ℕ$   Charlier
 Binomial $ℬ\left(m,p\right)$   $\left\{0,\cdots ,m\right\}$   Krawtchouk${}^{†}$

${}^{†}$ It is recalled that the Krawtchouk polynomials are only defined up to a degree $n$ equal to $m-1$.

Orthogonal polynomials with respect to arbitrary probability distributions

It is also possible to generate a family of orthonormal polynomials with respect to an arbitrary probability distribution $w\left(x\right)$. The well-known Gram-Schmidt algorithm can be used to this end. Note that this algorithm gives a constructive proof of the existence of orthonormal bases.

However, it is known to be numerically unstable, so alternative procedures are often used in practice. The two algorithms available in OpenTURNS so far are the so-called modified Gram-Schmidt algorithm and the modified Chebyshev algorithm. Another procedure based on Cholesky decomposition should also be available soon.
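As an illustration of the Gram-Schmidt approach (in its modified form), the sketch below orthonormalizes the monomials with respect to the uniform density on $\left[-1,1\right]$, discretized by a Gauss-Legendre quadrature rule; the resulting node values coincide with the normalized Legendre polynomials (plain numpy, not the OpenTURNS implementation):

```python
import numpy as np

# Quadrature discretization of the uniform density on [-1, 1]
t, w = np.polynomial.legendre.leggauss(20)
pw = w / 2.0                                  # probability weights (sum to 1)

def dot(p, q):
    # Discretized inner product <p, q> = integral of p*q against the density
    return np.sum(pw * p * q)

# Modified Gram-Schmidt on the monomials 1, x, x^2, x^3 (node values)
V = np.column_stack([t ** j for j in range(4)])
Q = V.copy()
for j in range(4):
    for i in range(j):
        Q[:, j] -= dot(Q[:, i], Q[:, j]) * Q[:, i]   # remove component along Q_i
    Q[:, j] /= np.sqrt(dot(Q[:, j], Q[:, j]))        # normalize to unit norm

# Q[:, n] now holds the values of the degree-n orthonormal polynomial at the
# quadrature nodes; for the uniform weight, Q[:, 1] equals sqrt(3) * x.
```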

Other notations
-

Within the global methodology, the orthogonal polynomial families are generated to construct the basis of the polynomial chaos expansion. The latter corresponds to a specific approach in order to tackle step C: “Uncertainty propagation”. Then the various uncertainty propagation techniques may be applied at a negligible computational cost. The method requires to have fulfilled the two following steps:
• step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $\underline{Y}=h\left(\underline{X}\right)$;

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$.

References and theoretical basics

-

6.3.9 Step Resp. Surf.  – Polynomial chaos basis

Mathematical description

Goal

The current section is focused on a specific kind of functional chaos representation (see [Functional Chaos Expansion] ) that has been implemented in OpenTURNS, namely polynomial chaos expansions.

Mathematical framework

Throughout this section, the model response is assumed to be a scalar random variable $Y=h\left(\underline{X}\right)$. However, the following derivations hold in case of a vector-valued response.

Let us suppose that:

• $Y=h\left(\underline{X}\right)$ has a finite variance, i.e. $\mathrm{Var}\left[h\left(\underline{X}\right)\right]<\infty$;

• $\underline{X}$ has independent components.

Then it is shown that $Y$ may be expanded onto the PC basis as follows:

 $Y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{j=0}^{\infty }\phantom{\rule{0.277778em}{0ex}}{a}_{j}\phantom{\rule{0.277778em}{0ex}}{\psi }_{j}\left(\underline{X}\right)$ (171)

where the ${\psi }_{j}$'s are multivariate polynomials that are orthonormal with respect to the joint PDF ${f}_{\underline{X}}\left(\underline{x}\right)$, that is:

 $〈{\psi }_{j}\left(\underline{X}\right)\phantom{\rule{0.277778em}{0ex}},\phantom{\rule{0.277778em}{0ex}}{\psi }_{k}\left(\underline{X}\right)〉\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}𝔼\left[{\psi }_{j}\left(\underline{X}\right)\phantom{\rule{0.277778em}{0ex}}{\psi }_{k}\left(\underline{X}\right)\right]\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\delta }_{j,k}$ (172)

where ${\delta }_{j,k}=1$ if $j=k$ and 0 otherwise, and the ${a}_{j}$'s are deterministic coefficients that fully characterize the response $Y$.

Building of the PC basis – independent random variables

We first consider the case of independent input random variables. In practice, the components ${X}_{i}$ of random vector $\underline{X}$ are rescaled using a specific mapping ${T}_{i}$, usually referred to as an isoprobabilistic transformation (see  [Isoprobabilistic transformation] ). The set of scaled random variables reads:

 ${U}_{i}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{T}_{i}\left({X}_{i}\right)\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}\phantom{\rule{0.166667em}{0ex}}i=1,\cdots ,n$ (173)

Common choices for ${U}_{i}$ are standard distributions such as a standard normal distribution or a uniform distribution over $\left[-1,1\right]$. For simplicity, it is assumed from now on that the components of the original input random vector $\underline{X}$ have already been scaled, i.e. ${X}_{i}={U}_{i}$.
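The mappings to the two standard cases just mentioned are elementary affine transformations; the sketch below (hypothetical helper names) maps a uniform variable on $\left[a,b\right]$ to a uniform variable on $\left[-1,1\right]$, and a Gaussian variable to the standard normal.

```python
def scale_uniform(x, a, b):
    # X ~ U(a, b)  ->  U = T(X) = (2X - a - b) / (b - a) ~ U(-1, 1)
    return (2.0 * x - a - b) / (b - a)

def scale_normal(x, mu, sigma):
    # X ~ N(mu, sigma)  ->  U = T(X) = (X - mu) / sigma ~ N(0, 1)
    return (x - mu) / sigma
```

For non-standard marginals, the general isoprobabilistic transformation composes the marginal CDF with the inverse CDF of the target distribution, as discussed in [Isoprobabilistic transformation].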

Let us first notice that due to the independence of the input random variables, the input joint PDF may be cast as:

 ${f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\prod _{i=1}^{n}{f}_{{X}_{i}}\left({x}_{i}\right)$ (174)

where ${f}_{{X}_{i}}\left({x}_{i}\right)$ is the marginal PDF of ${X}_{i}$. Let us consider a family $\left\{{\pi }_{j}^{\left(i\right)},j\in ℕ\right\}$ of orthonormal polynomials with respect to ${f}_{{X}_{i}}$, i.e. :

 $〈{\pi }_{j}^{\left(i\right)}\left({X}_{i}\right)\phantom{\rule{0.277778em}{0ex}},\phantom{\rule{0.277778em}{0ex}}{\pi }_{k}^{\left(i\right)}\left({X}_{i}\right)〉\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}𝔼\left[{\pi }_{j}^{\left(i\right)}\left({X}_{i}\right)\phantom{\rule{0.277778em}{0ex}}{\pi }_{k}^{\left(i\right)}\left({X}_{i}\right)\right]\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\delta }_{j,k}$ (175)

The reader is referred to  [Orthogonal polynomials] for details on the selection of suitable families of orthogonal polynomials. It is assumed that the degree of ${\pi }_{j}^{\left(i\right)}$ is $j$ for $j>0$ and ${\pi }_{0}^{\left(i\right)}\equiv 1$ ($i=1,\cdots ,n$). Upon tensorizing the $n$ resulting families of univariate polynomials, one gets a set of orthonormal multivariate polynomials $\left\{{\psi }_{\underline{\alpha }},\underline{\alpha }\in {ℕ}^{n}\right\}$ defined by:

 ${\psi }_{\underline{\alpha }}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\pi }_{{\alpha }_{1}}^{\left(1\right)}\left({x}_{1}\right)×\cdots ×{\pi }_{{\alpha }_{n}}^{\left(n\right)}\left({x}_{n}\right)$ (176)

where the multi-index notation $\underline{\alpha }\equiv \left\{{\alpha }_{1},\cdots ,{\alpha }_{n}\right\}$ has been introduced.

Building of the PC basis – dependent random variables

In case of dependent variables, it is possible to build up an orthonormal basis as follows:

 ${\psi }_{\underline{\alpha }}\left(\underline{x}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}K\left(\underline{x}\right)\phantom{\rule{0.277778em}{0ex}}\prod _{i=1}^{n}{\pi }_{{\alpha }_{i}}^{\left(i\right)}\left({x}_{i}\right)$ (177)

where $K\left(\underline{x}\right)$ is a function of the copula of $\underline{X}$. Note that such a basis is no longer polynomial. When dealing with independent random variables, one gets $K\left(\underline{x}\right)=1$ and each basis element may be recast as in Eq.(176). Determining $K\left(\underline{x}\right)$ is usually computationally expensive though, hence an alternative strategy is used for specific types of input random vectors.

If $\underline{X}$ has an elliptical copula instead of an independent one, it may be recast as a random vector $\underline{U}$ with independent components using a suitable mapping $T:\underline{X}↦\underline{U}$ such as the Nataf transformation [Generalized Nataf Transformation] . The so-called Rosenblatt transformation may also be applied in case of a Gaussian copula [Rosenblatt Transformation] . Thus the model response $Y$ may be regarded as a function of $\underline{U}$ and expanded onto a polynomial chaos expansion with basis elements cast as in Eq.(176).

Link with classical deterministic polynomial approximation

In a deterministic setting (i.e. when the input parameters are considered to be deterministic), it is common practice to substitute the model function $h$ with a polynomial approximation over its whole domain of definition, as shown in [Polynomial response surface based on least squares] . Actually this approach is strictly equivalent to:

1. Regarding the input parameters as uniform random variables;

2. Expanding any quantity of interest provided by the model onto a PC expansion made of Legendre polynomials.

Other notations

Within the global methodology, the polynomial chaos expansion of the model under consideration is built up prior to tackling the step C: “Uncertainty propagation”. Then the various uncertainty propagation techniques may be applied at a negligible computational cost. The method requires the two following steps to have been fulfilled:
• step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $\underline{Y}=h\left(\underline{X}\right)$;

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$.

References and theoretical basics

The method is sometimes also known as orthogonal series decomposition. The reader is referred to the following pioneering work in mechanical engineering:
• R. Ghanem and P. Spanos, 1991, “Stochastic finite elements – A spectral approach”, Springer Verlag. (Reedited by Dover Publications, 2003).

Examples

Let us consider the so-called (scalar valued) Ishigami function defined by:
 $Y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}sin{X}_{1}+7{sin}^{2}{X}_{2}+0.1{X}_{3}^{4}sin{X}_{1}$ (178)

where the ${X}_{i}$'s ($i=1,...,3$) are independent random variables that are uniformly distributed over $\left[-\pi ,\pi \right]$.

The input random variables are first transformed into uniform variables over $\left[-1,1\right]$ by applying Eq.(173):

 ${U}_{i}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{T}_{i}\left({X}_{i}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\frac{{X}_{i}}{\pi }\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}\phantom{\rule{0.166667em}{0ex}}i=1,\cdots ,3$ (179)

Then the model response $Y$ is expanded onto the following PC basis made of normalized Legendre polynomials:

 $Y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{\left({\alpha }_{1},{\alpha }_{2},{\alpha }_{3}\right)\in {ℕ}^{3}}\phantom{\rule{0.277778em}{0ex}}{a}_{{\alpha }_{1},{\alpha }_{2},{\alpha }_{3}}\phantom{\rule{0.277778em}{0ex}}{\psi }_{{\alpha }_{1},{\alpha }_{2},{\alpha }_{3}}\left({U}_{1},{U}_{2},{U}_{3}\right)$ (180)

where:

 ${\psi }_{{\alpha }_{1},{\alpha }_{2},{\alpha }_{3}}\left({U}_{1},{U}_{2},{U}_{3}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}L{e}_{{\alpha }_{1}}\left({U}_{1}\right)×L{e}_{{\alpha }_{2}}\left({U}_{2}\right)×L{e}_{{\alpha }_{3}}\left({U}_{3}\right)$ (181)

denoting by $L{e}_{j}$ the (univariate) normalized Legendre polynomial of degree $j$.

The basis polynomials of total degree not greater than 2 in the above representation are reported in the following table.

 Total degree ${\sum }_{i=1}^{3}{\alpha }_{i}$ | Multi-index $\underline{\alpha }$ | Polynomial ${\psi }_{\underline{\alpha }}\left(\underline{U}\right)$
 0 | $\left\{0,0,0\right\}$ | $1$
 1 | $\left\{1,0,0\right\}$ | $\sqrt{3}\phantom{\rule{3.33333pt}{0ex}}{U}_{1}$
 1 | $\left\{0,1,0\right\}$ | $\sqrt{3}\phantom{\rule{3.33333pt}{0ex}}{U}_{2}$
 1 | $\left\{0,0,1\right\}$ | $\sqrt{3}\phantom{\rule{3.33333pt}{0ex}}{U}_{3}$
 2 | $\left\{2,0,0\right\}$ | $\left(3{U}_{1}^{2}\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}1\right)\sqrt{5}/2$
 2 | $\left\{0,2,0\right\}$ | $\left(3{U}_{2}^{2}\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}1\right)\sqrt{5}/2$
 2 | $\left\{0,0,2\right\}$ | $\left(3{U}_{3}^{2}\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}1\right)\sqrt{5}/2$
 2 | $\left\{1,1,0\right\}$ | $3\phantom{\rule{3.33333pt}{0ex}}{U}_{1}{U}_{2}$
 2 | $\left\{1,0,1\right\}$ | $3\phantom{\rule{3.33333pt}{0ex}}{U}_{1}{U}_{3}$
 2 | $\left\{0,1,1\right\}$ | $3\phantom{\rule{3.33333pt}{0ex}}{U}_{2}{U}_{3}$
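The entries of this table can be reproduced numerically. The sketch below (illustrative function names) evaluates the normalized Legendre polynomials through the classical three-term recurrence and tensorizes them as in Eq.(181).

```python
import math

def legendre_norm(n, x):
    # normalized Legendre polynomial Le_n(x) = sqrt(2n + 1) * P_n(x),
    # orthonormal w.r.t. the uniform density 1/2 over [-1, 1]
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        # classical recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return math.sqrt(2 * n + 1) * p

def psi(alpha, u):
    # tensor-product basis element of Eq. (181)
    out = 1.0
    for n, u_i in zip(alpha, u):
        out *= legendre_norm(n, u_i)
    return out

# psi((1, 1, 0), u) equals 3 * u[0] * u[1], as in the last rows of the table
```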

6.3.10 Step Resp. Surf.  – Strategies for enumerating the polynomial chaos basis

Mathematical description

Goal

The polynomial chaos (PC) expansion allows one to obtain an explicit representation of the random response $\underline{Y}$ of the model under consideration, see  [PC basis] . More precisely, the response is cast as a converging series featuring orthonormal polynomials. For computational purpose, it is necessary though to retain a finite number of terms by truncating the expansion. First of all, a specific strategy for enumerating the infinite PC series has to be defined. This is the scope of the current section.

Enumerating strategies

Given an input random vector $\underline{X}$ with prescribed probability density function (PDF) ${f}_{\underline{X}}\left(\underline{x}\right)$, it is possible to build up a polynomial chaos (PC) basis $\left\{{\psi }_{\underline{\alpha }},\underline{\alpha }\in {ℕ}^{{n}_{X}}\right\}$ [Polynomial chaos basis] . Of interest is the definition of enumeration strategies for exploring this basis, i.e. of suitable enumeration functions $\tau$ from $ℕ$ to ${ℕ}^{{n}_{X}}$, which create a one-to-one mapping between an integer $j$ and a multi-index $\underline{\alpha }$.

Linear enumeration strategy

Let us first define the total degree of any multi-index $\underline{\alpha }$ in ${ℕ}^{{n}_{X}}$ by ${\sum }_{i=1}^{{n}_{X}}{\alpha }_{i}$. A natural choice to sort the PC basis (i.e. the multi-indices $\underline{\alpha }$) is the lexicographical order with a constraint of increasing total degree. Mathematically speaking, a bijective enumeration function $\tau$ is defined by:

 $\begin{array}{cccc}\tau \phantom{\rule{0.166667em}{0ex}}:\hfill & ℕ\hfill & ⟶& {ℕ}^{{n}_{X}}\hfill \\ & j\hfill & ↦& \left\{{\alpha }_{1},\cdots ,{\alpha }_{{n}_{X}}\right\}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\left\{{\tau }_{1}\left(j\right),\cdots ,{\tau }_{{n}_{X}}\left(j\right)\right\}\hfill \end{array}$ (182)

such that:

 $\tau \left(0\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{0,\cdots ,0\right\}$ (183)

and

 $\forall \phantom{\rule{0.277778em}{0ex}}1\le j<k\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{1.em}{0ex}}\sum _{i=1}^{{n}_{X}}{\tau }_{i}\left(j\right)\phantom{\rule{0.166667em}{0ex}}<\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{{n}_{X}}{\tau }_{i}\left(k\right)\phantom{\rule{1.em}{0ex}}\text{or}\phantom{\rule{1.em}{0ex}}\left(\sum _{i=1}^{{n}_{X}}{\tau }_{i}\left(j\right)\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{{n}_{X}}{\tau }_{i}\left(k\right)\phantom{\rule{4.pt}{0ex}}\text{and}\phantom{\rule{4.pt}{0ex}}\tau \left(j\right)\phantom{\rule{4.pt}{0ex}}\text{precedes}\phantom{\rule{4.pt}{0ex}}\tau \left(k\right)\phantom{\rule{4.pt}{0ex}}\text{in lexicographic order}\right)$ (184)

Such an enumeration strategy is illustrated in a two-dimensional case (i.e. ${n}_{X}=2$) in the figure below:

This corresponds to the following enumeration of the multi-indices:

 $j$ | $\underline{\alpha }\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\left\{{\alpha }_{1},{\alpha }_{2}\right\}$
 0 | $\left\{0,0\right\}$
 1 | $\left\{1,0\right\}$
 2 | $\left\{0,1\right\}$
 3 | $\left\{2,0\right\}$
 4 | $\left\{1,1\right\}$
 5 | $\left\{0,2\right\}$
 6 | $\left\{3,0\right\}$
 7 | $\left\{2,1\right\}$
 8 | $\left\{1,2\right\}$
 9 | $\left\{0,3\right\}$
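One possible realization of such a bijection can be sketched as follows: multi-indices are grouped by increasing total degree, and the ordering within a given degree is a convention (decreasing lexicographic order is used here). The names are illustrative and the brute-force generation via `product` is only meant for small dimensions and degrees.

```python
from itertools import product

def linear_enumerate(dim, max_degree):
    # enumerate multi-indices by increasing total degree; within a given
    # degree, this sketch sorts in decreasing lexicographic order
    indices = []
    for d in range(max_degree + 1):
        layer = [a for a in product(range(d + 1), repeat=dim) if sum(a) == d]
        layer.sort(reverse=True)
        indices.extend(layer)
    return indices
```

For `dim=2` and `max_degree=3`, the function returns the ten multi-indices (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), (3,0), (2,1), (1,2), (0,3).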

Hyperbolic enumeration strategy

The hyperbolic truncation strategy is inspired by the so-called sparsity-of-effects principle, which states that most models are principally governed by main effects and low-order interactions. Accordingly, one wishes to define an enumeration strategy which first selects those multi-indices related to main effects, i.e. with a reasonably small number of nonzero components, prior to selecting those associated with higher-order interactions.

For any real number $q$ in $\left(0,1\right]$, one defines the $q$-hyperbolic norm (or $q$-norm for short) of a multi-index $\underline{\alpha }$ by:

 $\parallel \underline{\alpha }{\parallel }_{q}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\left(\sum _{i=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}{\alpha }_{i}^{q}\right)}^{1/q}$ (185)

Strictly speaking, ${\parallel ·\parallel }_{q}$ is not a proper norm but rather a quasi-norm, since it does not satisfy the triangle inequality. However this abuse of language will be used in the following. Note that the case $q=1$ corresponds to the definition of the total degree.

Let $\lambda$ be a real positive number. One defines the set of multi-indices with $q$-norm not greater than $\lambda$ as follows:

 ${𝒜}_{\lambda }\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{\underline{\alpha }\in {ℕ}^{{n}_{X}}\phantom{\rule{0.166667em}{0ex}}:\phantom{\rule{0.166667em}{0ex}}\parallel \underline{\alpha }{\parallel }_{q}\phantom{\rule{0.166667em}{0ex}}\le \lambda \right\}$ (186)

Moreover, one defines the front of ${𝒜}_{\lambda }$ by:

 $\partial {𝒜}_{\lambda }\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{\underline{\alpha }\in {𝒜}_{\lambda }\phantom{\rule{0.166667em}{0ex}}:\phantom{\rule{0.166667em}{0ex}}\exists \phantom{\rule{0.277778em}{0ex}}i\phantom{\rule{0.277778em}{0ex}}\in \phantom{\rule{0.277778em}{0ex}}\left\{1,\cdots ,{n}_{X}\right\}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\underline{\alpha }\phantom{\rule{0.166667em}{0ex}}+\phantom{\rule{0.166667em}{0ex}}\underline{{e}_{i}}\phantom{\rule{0.166667em}{0ex}}\notin \phantom{\rule{0.166667em}{0ex}}{𝒜}_{\lambda }\right\}$ (187)

where $\underline{{e}_{i}}$ is a multi-index with a unit $i$-entry and zero $k$-entries, $k\ne i$.

The idea consists in exploring the space ${ℕ}^{{n}_{X}}$ by progressively increasing the $q$-norm of its elements. For this purpose, one constructs an enumeration function that relies upon (1) the bijection $\tau$ defined in the previous paragraph and (2) an appropriate increasing sequence ${\left({\lambda }_{n}\right)}_{ℕ}$ that tends to infinity. Such a sequence can be used to define a specific partition of ${ℕ}^{{n}_{X}}$ into strata ${\left({\Delta }_{n}\right)}_{ℕ}$. Precisely, the enumeration of the multi-indices is achieved by sorting the elements of ${\Delta }_{n}$ in ascending order of the $q$-norm, and then by sorting the elements having the same $q$-norm using the bijection $\tau$. Several examples of partition are given in the sequel.

Partition based on the total degree. We can simply define the sequence ${\left({\lambda }_{n}\right)}_{ℕ}$ as the set of natural integers $ℕ$. Thus we build up a sequence ${\left({\Delta }_{n}\right)}_{ℕ}$ of strata as follows:

 $\left\{\begin{array}{c}{\Delta }_{0}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{\underline{0}\right\}\hfill \\ \forall \phantom{\rule{0.277778em}{0ex}}n\ge 1\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\Delta }_{n}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{𝒜}_{n}\phantom{\rule{0.277778em}{0ex}}\setminus \phantom{\rule{0.277778em}{0ex}}{𝒜}_{n-1}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{\underline{\alpha }\in {ℕ}^{{n}_{X}}\phantom{\rule{0.166667em}{0ex}}:\phantom{\rule{0.166667em}{0ex}}n-1\phantom{\rule{0.166667em}{0ex}}<\phantom{\rule{0.166667em}{0ex}}\parallel \underline{\alpha }{\parallel }_{q}\phantom{\rule{0.166667em}{0ex}}\le n\right\}\hfill \end{array}\right\$ (188)

The progressive exploration of ${ℕ}^{{n}_{X}}$ is depicted in the two-dimensional case for various values of the parameter $q$:

As expected, the hyperbolic norms penalize the indices associated with high-order interactions all the more as $q$ is low. Note that setting $q$ equal to 1 corresponds to the usual linear enumeration strategy; the retained polynomials are then located under a straight line, hence the label linear enumeration strategy. In contrast, when $q<1$, the retained basis polynomials are located under a hyperbola, hence the name hyperbolic enumeration strategy.
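This selection effect is easy to observe numerically. The sketch below (illustrative names) builds the set ${𝒜}_{\lambda }$ of Eq.(186) by brute force; the search range is bounded because $\parallel \underline{\alpha }{\parallel }_{q}\le \lambda$ implies that every component of $\underline{\alpha }$ is at most $\lambda$.

```python
from itertools import product

def q_norm(alpha, q):
    # hyperbolic quasi-norm of Eq. (185)
    return sum(a**q for a in alpha) ** (1.0 / q)

def hyperbolic_set(dim, q, lam):
    # A_lambda of Eq. (186), enumerated by brute force over a bounded grid;
    # the small tolerance absorbs floating-point round-off on the boundary
    return [a for a in product(range(int(lam) + 1), repeat=dim)
            if q_norm(a, q) <= lam + 1e-9]

# in dimension 2 with lambda = 2: q = 1 keeps the interaction index (1, 1),
# while q = 0.5 discards it and keeps only the main-effect indices
```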

Partition based on disjoint fronts. Instead of considering the sequence of natural integers, we define the sequence ${\left({\lambda }_{n}\right)}_{ℕ}$ recursively by:

 $\left\{\begin{array}{c}{\lambda }_{0}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}0\hfill \\ \forall \phantom{\rule{0.277778em}{0ex}}n\ge 1\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\lambda }_{n}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\underset{\lambda \in {ℝ}^{+}}{inf}\phantom{\rule{0.277778em}{0ex}}\left\{\lambda \ge {\lambda }_{n-1}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{4.pt}{0ex}}\text{and}\phantom{\rule{4.pt}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\partial {𝒜}_{\lambda }\phantom{\rule{0.166667em}{0ex}}\cap \phantom{\rule{0.166667em}{0ex}}\partial {𝒜}_{{\lambda }_{n-1}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\varnothing \right\}\hfill \end{array}\right\$ (189)

In other words, ${\lambda }_{n}$ is the infimum of the real numbers $\lambda$ for which the new front contains only elements which do not belong to the former one. Hence the sequence of strata:

 $\left\{\begin{array}{c}{\Delta }_{0}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{\underline{0}\right\}\hfill \\ \forall \phantom{\rule{0.277778em}{0ex}}n\ge 1\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}},\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\Delta }_{n}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{𝒜}_{{\lambda }_{n}}\phantom{\rule{0.277778em}{0ex}}\setminus \phantom{\rule{0.277778em}{0ex}}{𝒜}_{{\lambda }_{n-1}}\hfill \end{array}\right\$ (190)

Note that this partition of ${ℕ}^{{n}_{X}}$ is finer than the one based on total degrees, since the cardinality of the strata is smaller.

Anisotropic hyperbolic enumeration strategy

One might also consider enumeration strategies based on an anisotropic hyperbolic norm defined by:

 $\parallel \underline{\alpha }{\parallel }_{\underline{w},q}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\left(\sum _{i=1}^{{n}_{X}}\phantom{\rule{0.277778em}{0ex}}{w}_{i}\phantom{\rule{0.277778em}{0ex}}{\alpha }_{i}^{q}\right)}^{1/q}$ (191)

where the ${w}_{i}$'s are real positive numbers. This would lead to first select the basis polynomials depending on a specific subset of input variables.

In this setup, it is also possible to explore the space ${ℕ}^{{n}_{X}}$ of the multi-indices by partitioning it according to one of the two schemes outlined in the previous paragraph (it is only necessary to replace the isotropic $q$-norm in Eq.(186) with the $\left(\underline{w},q\right)$-anisotropic one).

We may also build up an alternative partition related to the partial degree of the most important variable, i.e. the one associated with the smallest weight ${w}_{i}$. Then the sequence ${\left({\lambda }_{n}\right)}_{ℕ}$ is equal to $ℕ$ and the sets ${𝒜}_{\lambda }$ are defined by:

 ${𝒜}_{\lambda }\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\left\{\underline{\alpha }\in {ℕ}^{{n}_{X}}\phantom{\rule{0.166667em}{0ex}}:\phantom{\rule{0.166667em}{0ex}}{\alpha }_{{i}^{*}}\phantom{\rule{0.166667em}{0ex}}\le \lambda \right\}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}},\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}{i}^{*}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\text{arg}min\left\{{w}_{i}\phantom{\rule{0.277778em}{0ex}},\phantom{\rule{0.277778em}{0ex}}1\le i\le {n}_{X}\right\}$ (192)

If strata with larger cardinalities are of interest, one may rather consider the partial degree of the least significant variable, i.e. the one associated with the greatest weight ${w}_{i}$. To this end, the index ${i}^{*}$ in the previous formula has to be defined by:

 ${i}^{*}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\text{arg}max\left\{{w}_{i}\phantom{\rule{0.277778em}{0ex}},\phantom{\rule{0.277778em}{0ex}}1\le i\le {n}_{X}\right\}$ (193)
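The two anisotropic ingredients above can be sketched in a few lines (illustrative names; weights and multi-indices are plain Python sequences):

```python
def aniso_q_norm(alpha, weights, q):
    # anisotropic hyperbolic quasi-norm of Eq. (191)
    return sum(w * a**q for w, a in zip(weights, alpha)) ** (1.0 / q)

def most_important_variable(weights):
    # i* = argmin of the weights, Eq. (192); using max instead of min
    # yields the least significant variable of Eq. (193)
    return min(range(len(weights)), key=lambda i: weights[i])
```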

Other notations

Within the global methodology, the polynomial chaos expansion of the model under consideration is built up prior to tackling the step C: “Uncertainty propagation”. Then the various uncertainty propagation techniques may be applied at a negligible computational cost. The method requires the two following steps to have been fulfilled:
• step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $\underline{Y}=h\left(\underline{X}\right)$;

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$.

References and theoretical basics

The hyperbolic enumeration strategy has been inspired by the following work:
• G. Blatman, 2009, “Adaptive sparse polynomial chaos expansions for uncertainty propagation and sensitivity analysis”, PhD thesis, Clermont Université.

6.3.11 Step Resp. Surf.  – Truncation schemes for the polynomial chaos expansion

Mathematical description

Goal

The polynomial chaos (PC) expansion allows one to obtain an explicit representation of the random response $\underline{Y}$ of the model under consideration, see  [PC basis] . More precisely, the response $Y$ is cast as a converging series featuring orthonormal polynomials. For computational purpose, it is necessary though to retain a finite number of terms by truncating the expansion. Strategies for enumerating the infinite PC basis have been introduced [Strategies for enumerating the polynomial chaos basis] . Then it is possible to devise several strategies in order to truncate the representation, as shown in the sequel.

Principle

As shown in  [PC basis] , the random model response $\underline{Y}$ may be expanded onto the PC basis as follows:

 $\underline{Y}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{\underline{\alpha }\in {ℕ}^{{n}_{X}}}\phantom{\rule{0.277778em}{0ex}}{\underline{a}}_{\underline{\alpha }}\phantom{\rule{0.277778em}{0ex}}{\psi }_{\underline{\alpha }}\left(\underline{X}\right)$ (194)

where ${n}_{X}$ denotes the dimension of vector $\underline{X}$. For practical implementation, finite-dimensional polynomial chaoses have to be built. In other words, a suitable finite subset $𝒜$ of ${ℕ}^{{n}_{X}}$ has to be selected. Using a specific enumeration strategy, a one-to-one mapping between natural integers $j$ and the multi-indices $\underline{\alpha }$ in ${ℕ}^{{n}_{X}}$ is created and the PC series rewrites:

 $\underline{Y}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{j=0}^{\infty }\phantom{\rule{0.277778em}{0ex}}{\underline{a}}_{j}\phantom{\rule{0.277778em}{0ex}}{\psi }_{j}\left(\underline{X}\right)$ (195)

Fixed strategy

The so-called fixed strategy simply consists in retaining the first $P$ elements of the PC basis, the latter being ordered according to a given enumeration scheme (hyperbolic or not). The retained set is built in a single pass. The truncated PC expansion is given by:

 $\stackrel{^}{h}\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{j=0}^{P-1}\phantom{\rule{0.277778em}{0ex}}{\underline{a}}_{j}\phantom{\rule{0.277778em}{0ex}}{\psi }_{j}\left(\underline{X}\right)$ (196)

In case of a linear enumeration strategy, a usual choice is to set $P$ equal to $\left(\genfrac{}{}{0pt}{}{{n}_{X}+p}{p}\right)=\frac{\left({n}_{X}+p\right)!}{{n}_{X}!p!}$, for a given natural integer $p$. This way the set of retained basis functions $\left\{{\psi }_{j},j=0,\cdots ,P-1\right\}$ gathers all the polynomials with total degree not greater than $p$. The number of terms $P$ grows polynomially both in ${n}_{X}$ and $p$ though, which may lead to difficulties in terms of computational efficiency and memory requirements when dealing with high-dimensional problems.
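The size of such a full basis follows directly from the binomial coefficient above:

```python
from math import comb

def fixed_basis_size(n_x, p):
    # P = C(n_x + p, p): number of multivariate polynomials of total
    # degree not greater than p in n_x variables
    return comb(n_x + p, p)
```

For instance, `fixed_basis_size(3, 2)` returns 10, which matches the ten basis polynomials of total degree not greater than 2 in three variables tabulated earlier, while `fixed_basis_size(10, 5)` already returns 3003, illustrating the growth mentioned above.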

Sequential strategy

The sequential strategy consists in constructing the basis of the truncated PC iteratively. Precisely, one begins with the first term ${\psi }_{0}$, that is ${𝒜}_{0}=\left\{0\right\}$, and one complements the current index set as follows: ${𝒜}_{k+1}={𝒜}_{k}\cup \left\{k+1\right\}$. The construction process is stopped when a given accuracy criterion is reached, or when $P$ is equal to a prescribed maximum basis size ${P}_{max}$.

Cleaning strategy

The cleaning strategy is aimed at building a PC expansion containing at most $P$ significant coefficients, i.e. at most $P$ significant basis functions. It proceeds as follows:

• Generate an initial PC basis made of the $P$ first polynomials (according to the adopted enumeration strategy), or equivalently an initial set of indices $𝒜=\left\{0,\cdots ,P-1\right\}$

• Discard from the basis all those polynomials ${\psi }_{j}$ associated with insignificant coefficients, i.e. the coefficients that satisfy:

 $|{a}_{j}|\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\le \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\epsilon \phantom{\rule{0.277778em}{0ex}}·\phantom{\rule{0.277778em}{0ex}}\underset{k\in 𝒜}{max}|{a}_{k}|$ (197)

where $\epsilon$ is a given tolerance, e.g. $\epsilon ={10}^{-4}$.

• Add the next basis term ${\psi }_{P}$ to the current basis $𝒜$

• Reiterate the procedure until either $P$ terms have been retained or a given number ${P}_{max}$ of candidate terms has been considered
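The steps above can be sketched as a selection loop. The snippet below is a schematic sketch, not the OpenTURNS implementation: it assumes a hypothetical user-supplied routine `fit(indices)` that returns the estimated coefficients for a candidate index set (e.g. obtained by least squares).

```python
def cleaning_strategy(fit, p, p_max, eps=1e-4):
    # fit(indices) -> {j: a_j} is assumed to be supplied by the caller
    active = list(range(p))                 # initial index set {0, ..., P-1}
    next_term = p
    while True:
        coeffs = fit(active)
        a_max = max(abs(coeffs[j]) for j in active)
        # discard insignificant terms, Eq. (197)
        active = [j for j in active if abs(coeffs[j]) > eps * a_max]
        if len(active) >= p or next_term >= p_max:
            return active
        active.append(next_term)            # consider the next basis term
        next_term += 1
```

Since the candidate set never exceeds $P$ terms, the loop exits either with $P$ significant terms retained or after ${P}_{max}$ candidates have been examined.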

Other notations

Within the global methodology, the polynomial chaos expansion of the model under consideration is built up prior to tackling the step C: “Uncertainty propagation”. Then the various uncertainty propagation techniques may be applied at a negligible computational cost. The method requires the two following steps to have been fulfilled:
• step A1: identify an input vector $\underline{X}$ of sources of uncertainties and an output variable of interest $\underline{Y}=h\left(\underline{X}\right)$;

• step B: identify one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$.

References and theoretical basics

6.3.12 Step Resp. Surf.  – Computation of the polynomial chaos coefficients

Mathematical description

Goal

The polynomial chaos (PC) expansion allows one to obtain an explicit representation of the random response $\underline{Y}=h\left(\underline{X}\right)$ of the model under consideration, see  [Polynomial chaos basis] . More precisely, the response $\underline{Y}$ is cast as a converging series featuring orthonormal polynomials. For computational purpose, it is necessary though to retain a finite number of terms by truncating the expansion as shown in  [Truncation schemes for the polynomial chaos expansion] . Then it is necessary to estimate the PC coefficients in order to characterize $\underline{Y}$. This may be achieved using a projection strategy as shown in the sequel.

Projection strategy

The model response is assumed to be scalar for the sake of simplicity, i.e. $\underline{Y}=Y$. However the following derivations hold componentwise in case of a vector-valued random response. Let us consider the following truncated PC representation of the model response:

 $Y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{j\in 𝒜}\phantom{\rule{0.277778em}{0ex}}{a}_{j}\phantom{\rule{0.277778em}{0ex}}{\psi }_{j}\left(\underline{X}\right)$ (198)

where $𝒜$ is a non empty finite set of indices, whose cardinality is denoted by $P$. Using the matrix notation $\underline{a}=\left\{{a}_{0},\cdots ,{a}_{P-1}\right\}$ and $\underline{\psi }\left(\underline{X}\right)=\phantom{\rule{4pt}{0ex}}{\left\{{\psi }_{0}\left(\underline{X}\right),\cdots ,{\psi }_{P-1}\left(\underline{X}\right)\right\}}^{𝖳}$, the PC representation in Eq.(198) rewrites:

 $Y\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\equiv \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}h\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\underline{a}}^{𝖳}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\underline{\psi }\left(\underline{X}\right)$ (199)

The coefficients may be computed by a ${L}^{2}$-projection onto the PC basis as follows:

 $\underline{a}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\underset{\underline{b}\in {ℝ}^{P}}{argmin}𝔼\left[{\left(h\left(\underline{X}\right)\phantom{\rule{0.166667em}{0ex}}-\phantom{\rule{0.166667em}{0ex}}{\underline{b}}^{𝖳}\phantom{\rule{0.166667em}{0ex}}\underline{\psi }\left(\underline{X}\right)\right)}^{2}\right]$ (200)

Due to the orthonormality of the PC basis, it may be shown that the previous equation reduces to:

 $\underline{a}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}𝔼\left[h\left(\underline{X}\right)\phantom{\rule{0.277778em}{0ex}}\underline{\psi }\left(\underline{X}\right)\right]$ (201)

Two kinds of estimates can be derived, depending on whether Eq.(200) or Eq.(201) is discretized.

Least squares strategy

The PC coefficients can be estimated by solving a discretized version of Eq.(200). An experimental design $\underline{\underline{𝒳}}=\left\{{x}^{\left(1\right)},\cdots ,{x}^{\left(N\right)}\right\}$, i.e. a set of realizations of the input parameters, is required, as well as the corresponding model evaluations $\underline{𝒴}=\left\{{y}^{\left(1\right)},\cdots ,{y}^{\left(N\right)}\right\}$. Note that the experimental design $\underline{\underline{𝒳}}$ may be built using the various techniques introduced in  [Min-Max Approach using Design of Experiments] .

The following least squares problem has to be solved:

 $\stackrel{^}{\underline{a}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\underset{\underline{b}\in {ℝ}^{P}}{argmin}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}\phantom{\rule{0.277778em}{0ex}}{\left({y}^{\left(i\right)}\phantom{\rule{0.277778em}{0ex}}-\phantom{\rule{0.277778em}{0ex}}{\underline{b}}^{𝖳}\phantom{\rule{0.277778em}{0ex}}\underline{\psi }\left({\underline{x}}^{\left(i\right)}\right)\right)}^{2}$ (202)

A closed-form solution may be obtained as shown in [Polynomial response surface based on least squares] . The numerical solving schemes for the least squares problem are studied in  [Numerical methods to solve least squares problems] .
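The least squares problem of Eq.(202) can be illustrated in a few lines of plain Python. This is a minimal sketch, not the OpenTURNS implementation: the one-dimensional toy model $h(x)=x^2$, the orthonormal Hermite basis of degree 2 (for a standard normal input) and all function names are assumptions chosen for the example. The normal equations are solved by hand to keep the sketch self-contained.

```python
import math
import random

# Orthonormal Hermite polynomials with respect to the standard normal
# density: psi_0(x) = 1, psi_1(x) = x, psi_2(x) = (x^2 - 1) / sqrt(2).
def psi(x):
    return [1.0, x, (x * x - 1.0) / math.sqrt(2.0)]

def h(x):
    # Toy model; its exact PC coefficients are a = (1, 0, sqrt(2)),
    # since x^2 = psi_0(x) + sqrt(2) * psi_2(x).
    return x * x

def fit_least_squares(xs, ys):
    """Solve the normal equations (Psi^T Psi) a = Psi^T y of Eq.(202)."""
    P = 3
    A = [[0.0] * P for _ in range(P)]
    b = [0.0] * P
    for x, y in zip(xs, ys):
        row = psi(x)
        for j in range(P):
            b[j] += row[j] * y
            for k in range(P):
                A[j][k] += row[j] * row[k]
    # Gaussian elimination with partial pivoting.
    for col in range(P):
        piv = max(range(col, P), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, P):
            f = A[r][col] / A[col][col]
            for k in range(col, P):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    # Back substitution.
    a = [0.0] * P
    for r in reversed(range(P)):
        a[r] = (b[r] - sum(A[r][k] * a[k] for k in range(r + 1, P))) / A[r][r]
    return a

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200)]   # experimental design
ys = [h(x) for x in xs]                             # model evaluations
a_hat = fit_least_squares(xs, ys)
```

Because the toy model lies exactly in the span of the basis, the least squares estimate recovers the exact PC coefficients $(1, 0, \sqrt{2})$ up to round-off, whatever the design.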

Sparse least squares strategy

If the size $P$ of the PC basis is comparable to the sample size $N$, or even significantly larger than $N$, then the ordinary least squares problem in Eq.(202) is ill-posed. The sparse least squares approaches outlined in [Polynomial response surface based on sparse least squares] , namely LAR, LASSO and Forward Stagewise, may be employed instead. Eventually a sparse PC representation is obtained, that is, an approximation which only contains a small number of active basis functions.
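To illustrate the idea (not the actual LAR/LASSO code in OpenTURNS), the simplest of the three approaches, incremental forward stagewise regression, can be sketched as follows. The toy model, the deliberately enlarged degree-3 Hermite dictionary, the step size and the iteration budget are assumptions chosen for the example.

```python
import math
import random

# A small dictionary of orthonormal Hermite polynomials, deliberately
# larger than needed so that a sparse solution can be selected.
def psi(x):
    return [1.0,
            x,
            (x * x - 1.0) / math.sqrt(2.0),
            (x ** 3 - 3.0 * x) / math.sqrt(6.0)]

def forward_stagewise(xs, ys, eps=0.01, n_iter=2000):
    """At each step, nudge by +/- eps the coefficient of the basis
    function most correlated with the current residual."""
    P = len(psi(0.0))
    design = [psi(x) for x in xs]
    a = [0.0] * P
    resid = list(ys)
    for _ in range(n_iter):
        corr = [sum(r * row[j] for r, row in zip(resid, design))
                for j in range(P)]
        j = max(range(P), key=lambda k: abs(corr[k]))
        step = eps if corr[j] > 0 else -eps
        a[j] += step
        resid = [r - step * row[j] for r, row in zip(resid, design)]
    return a

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(300)]
ys = [x * x for x in xs]   # exact expansion: a = (1, 0, sqrt(2), 0)
a_hat = forward_stagewise(xs, ys)
```

In practice one would stop when the largest correlation falls below a threshold; here a fixed budget suffices, and the estimate concentrates on the two truly active basis functions, leaving the others near zero.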

Integration strategy

As an alternative, the PC coefficients may be estimated by discretizing the “simplified” expression in Eq.(201). Although this approach has not been implemented in OpenTURNS yet, we outline the basic principles of the solving scheme.

By definition of the mathematical expectation operator $𝔼\left[·\right]$, this corresponds to calculating the following multi-dimensional integral:

 $\underline{a}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}𝔼\left[h\left(\underline{X}\right)\phantom{\rule{0.277778em}{0ex}}\underline{\psi }\left(\underline{X}\right)\right]\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}{\int }_{𝒟}\phantom{\rule{0.277778em}{0ex}}h\left(\underline{x}\right)\phantom{\rule{0.277778em}{0ex}}\underline{\psi }\left(\underline{x}\right)\phantom{\rule{0.277778em}{0ex}}{f}_{\underline{X}}\left(\underline{x}\right)\phantom{\rule{0.277778em}{0ex}}d\underline{x}$ (203)

where $𝒟$ and ${f}_{\underline{X}}\left(\underline{x}\right)$ are respectively the support and the probability density function of the random vector $\underline{X}$. The integral can be approximated by numerical quadrature as follows:

 $\stackrel{^}{\underline{a}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\sum _{i=1}^{N}\phantom{\rule{0.277778em}{0ex}}{w}^{\left(i\right)}\phantom{\rule{0.277778em}{0ex}}h\left({\underline{x}}^{\left(i\right)}\right)\phantom{\rule{0.277778em}{0ex}}\underline{\psi }\left({\underline{x}}^{\left(i\right)}\right)$ (204)

where the ${w}^{\left(i\right)}$'s and the ${\underline{x}}^{\left(i\right)}$'s are referred to as the quadrature weights and nodes, respectively. Both quantities may be selected according to well-known quadrature rules, such as Gauss quadrature or Clenshaw-Curtis quadrature. Otherwise, it is commonplace to generate independent realizations ${\underline{x}}^{\left(i\right)}$ of $\underline{X}$ and to set all the ${w}^{\left(i\right)}$'s equal to $1/N$. Such a scheme is known as Monte Carlo simulation (see   [Standard Monte Carlo simulation] ).
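The Monte Carlo variant of Eq.(204), with all weights set to $1/N$, can be sketched as follows. As above, the one-dimensional toy model and the degree-2 Hermite basis are illustrative assumptions, not part of OpenTURNS.

```python
import math
import random

def psi(x):
    # Orthonormal Hermite basis for a standard normal input variable.
    return [1.0, x, (x * x - 1.0) / math.sqrt(2.0)]

def h(x):
    # Toy model; exact PC coefficients are a = (1, 0, sqrt(2)).
    return x * x

def mc_coefficients(n):
    """Monte Carlo discretization of Eq.(204): quadrature nodes are
    independent realizations of X, all weights equal 1/N."""
    a = [0.0, 0.0, 0.0]
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        y = h(x)
        for j, p in enumerate(psi(x)):
            a[j] += y * p / n
    return a

random.seed(42)
a_hat = mc_coefficients(100000)
```

The estimate converges to the exact coefficients at the usual Monte Carlo rate $O(1/\sqrt{N})$; a Gauss quadrature rule would reach the same accuracy with far fewer nodes in low dimension.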

Other notations

Within the global methodology, the method is used to build up a PC approximation of the model response prior to tackling Step C: “Uncertainty propagation”. It requires either a set of model output values computed at selected input values, or the input sample together with the initial model. Then the various uncertainty propagation techniques may be applied at a negligible computational cost. The method requires that the two following steps have been fulfilled:
• step A1: identification of an input vector $\underline{X}$ of sources of uncertainties and of an output variable of interest $\underline{Y}=h\left(\underline{X}\right)$;

• step B: selection of one of the proposed techniques to estimate a probabilistic model of the input vector $\underline{X}$.

References and theoretical basics

The computation of the polynomial chaos coefficients by ordinary least squares was proposed in:
• M. Berveiller, 2005, “Eléments finis stochastiques : approches intrusive et non intrusive pour des analyses de fiabilité.”, PhD thesis, Clermont Université.

Besides, the use of sparse least squares approaches for the same purpose was advocated in:

• G. Blatman, 2009, “Adaptive sparse polynomial chaos expansions for uncertainty propagation and sensitivity analysis”, PhD thesis, Clermont Université.