# 4 Construction of a response surface

The objective of this section is to build a response surface from a function. This response surface may be built from :

• the linear or quadratic Taylor approximations of the function at a particular point (see UC.4.1.1),

• or a linear approximation by least squares method from a sample of the input vector and the function (see UC.4.2.1),

• or a linear approximation by least squares method from a sample of the input vector and one of the output variable (see UC.4.2.2).

## 4.1 Taylor approximations

### 4.1.1 UC : Linear and Quadratic Taylor approximations

This section details the first method to construct a response surface : from the linear or quadratic Taylor approximations of the function at a particular point.

Details on response surface approximations may be found in the Reference Guide (see the files Reference Guide – Step Res. Surf. – Polynomial Response Surfaces : principles and Reference Guide – Step Res. Surf. – Taylor Expansion).
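As a conceptual illustration of what the Taylor response surfaces compute, here is a pure-Python sketch for a scalar function, using central finite differences in place of the exact derivatives (the names `taylor_approx`, `linear` and `quadratic` are illustrative, not OpenTURNS API; the library relies on the function's own gradient and Hessian implementations):

```python
def taylor_approx(f, x0, h=1.0e-5):
    """Build the linear and quadratic Taylor approximations of a scalar
    function f at the point x0, using central finite differences for the
    first and second derivatives (illustrative sketch only)."""
    f0 = f(x0)
    d1 = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    d2 = (f(x0 + h) - 2.0 * f0 + f(x0 - h)) / h ** 2

    def linear(x):
        return f0 + d1 * (x - x0)

    def quadratic(x):
        return linear(x) + 0.5 * d2 * (x - x0) ** 2

    return linear, quadratic
```

For f(x) = x² at x0 = 1, the quadratic approximation reproduces f (up to finite-difference error), while the linear one only matches the value and slope at the expansion point.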

Requirements

• a function : myFunc (type: NumericalMathFunction)

Results

• the linear Taylor approximation : myLinearTaylor (type: LinearTaylor)

• the quadratic Taylor approximation : myQuadraticTaylor (type: QuadraticTaylor)

Python script for this Use Case :

```python
# Taylor approximations at point 'center'
center = NumericalPoint(myFunc.getInputNumericalPointDimension())
for i in range(center.getDimension()):
    center[i] = 1.0 + i

# Create the linear Taylor approximation
myLinearTaylor = LinearTaylor(center, myFunc)

# Create the quadratic Taylor approximation
myQuadraticTaylor = QuadraticTaylor(center, myFunc)

# Perform the approximations
# linear Taylor approximation
myLinearTaylor.run()
print "my linear Taylor =", myLinearTaylor

# quadratic Taylor approximation
myQuadraticTaylor.run()
print "my quadratic Taylor =", myQuadraticTaylor

# Stream out the result
# linear Taylor approximation
linearResponseSurface = myLinearTaylor.getResponseSurface()
print "responseSurface =", linearResponseSurface

# quadratic Taylor approximation
quadraticResponseSurface = myQuadraticTaylor.getResponseSurface()
print "quadraticResponseSurface =", quadraticResponseSurface

# Compare the approximations and the function at a particular point
# point 'center'
print "myFunc(", center, ")=", myFunc(center)
print "linearResponseSurface(", center, ")=", linearResponseSurface(center)
print "quadraticResponseSurface(", center, ")=", quadraticResponseSurface(center)
```

## 4.2 Least Squares approximation

This section details another method to construct a response surface : the least squares method, applied to a sample of the input vector and the associated sample of the output variable of interest.

If one is interested in building a probabilistic model for the residuals, it is no longer a least squares approximation problem but a regression problem (see UC : "Building and validating a linear model from two samples").
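The underlying computation can be sketched in one dimension: the least-squares coefficients a (the slope, playing the role of $\underline{\underline{A}}$) and b (the constant $\underline{B}$) have a closed form. This is a pure-Python illustration, not the OpenTURNS LinearLeastSquares class:

```python
def linear_fit(xs, ys):
    """Closed-form 1-D least-squares fit y ~ a*x + b (normal equations)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope a = covariance(x, y) / variance(x)
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    # intercept b makes the fit pass through the mean point
    b = mean_y - a * mean_x
    return a, b
```

For points lying exactly on a line, the fit recovers the slope and intercept exactly.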

### 4.2.1 UC : Linear Least Squares approximation from a sample of the input vector and the real function

This Use Case details the second method to construct a response surface : by least squares method from a sample of the input vector and the real function. The output sample is obtained by evaluating the function on the input sample.

Details on response surface approximations may be found in the Reference Guide (see the files Reference Guide – Step Res. Surf. – Polynomial Response Surfaces : principles and Reference Guide – Step Res. Surf. – Least Square Method).

Requirements

• the limit state function : myFunc (type: NumericalMathFunction)

• a sample of the input vector : inputSample (type: NumericalSample)

Results

• the linear approximation by least squares method : responseSurface (type: NumericalMathFunction)

• the coefficients of the linear approximation $myFunc\left(\underline{X}\right)=\underline{\underline{A}}\underline{X}+\underline{B}$ (types: Matrix for $\underline{\underline{A}}$, NumericalPoint for $\underline{B}$)

Python script for this Use Case :

```python
# Create the LinearLeastSquares algorithm
myLeastSquares = LinearLeastSquares(inputSample, myFunc)

# Perform the algorithm
myLeastSquares.run()
print "myLeastSquares=", myLeastSquares

# Stream out the results
# Get the matrix A
linear = myLeastSquares.getLinear()
print "A = ", linear

# Get the constant term B
constant = myLeastSquares.getConstant()
print "B = ", constant

# Get the linear response surface
responseSurface = myLeastSquares.getResponseSurface()
print "responseSurface=", responseSurface

# Compare the two models at a particular point 'inPoint'
print "myFunc(", inPoint, ")=", myFunc(inPoint)
print "responseSurface(", inPoint, ")=", responseSurface(inPoint)
```

### 4.2.2 UC : Linear Least Squares approximation from a sample of the input vector and a sample of the output vector

In this Use Case, the sample of the output variable associated to the sample of the input vector is given.

Details on response surface approximations may be found in the Reference Guide (see the files Reference Guide – Step Res. Surf. – Polynomial Response Surfaces : principles and Reference Guide – Step Res. Surf. – Least Square Method).

Requirements

• a sample of the input vector : inputSample (type: NumericalSample)

• the associated sample of the output vector : outputSample (type: NumericalSample)

Results

• the linear approximation by least squares method : responseSurface (type: NumericalMathFunction)

• the coefficients of the linear approximation $\underline{\underline{A}}\underline{X}+\underline{B}$ (types: Matrix for $\underline{\underline{A}}$, NumericalPoint for $\underline{B}$)

Python script for this Use Case :

```python
# Create the LinearLeastSquares algorithm
myLeastSquares = LinearLeastSquares(inputSample, outputSample)

# Perform the algorithm
myLeastSquares.run()
print "myLeastSquares=", myLeastSquares

# Stream out the results
# Get the matrix A
linear = myLeastSquares.getLinear()
print "A = ", linear

# Get the constant term B
constant = myLeastSquares.getConstant()
print "B = ", constant

# Get the linear response surface
responseSurface = myLeastSquares.getResponseSurface()
print "responseSurface=", responseSurface
```

## 4.3 Polynomial chaos expansion

The polynomial chaos expansion makes it possible to approximate the output random variable of interest $\underline{Y}=g\left(\underline{X}\right)\in {ℝ}^{p}$, where $\underline{X}$ is the input random vector of dimension $n$, by the response surface :

 $\begin{array}{c}\hfill \stackrel{˜}{\underline{Y}}=\sum _{k\in K}{\underline{\alpha }}_{k}{\Psi }_{k}\circ T\left(\underline{X}\right)\end{array}$

where ${\underline{\alpha }}_{k}\in {ℝ}^{p}$, $T$ is an isoprobabilistic transformation which maps the multivariate distribution of $\underline{X}$ into the multivariate distribution $\mu ={\prod }_{i=1}^{n}{\mu }_{i}$, and ${\left({\Psi }_{k}\right)}_{k\in ℕ}$ is a multivariate polynomial basis of ${ℒ}_{\mu }^{2}\left({ℝ}^{n},ℝ\right)$ which is orthonormal with respect to the distribution $\mu$. $K$ is a finite subset of $ℕ$.

The distribution $\mu$ is assumed to have an independent copula, and $Y$ is assumed to have a finite second moment.

### 4.3.1 UC : Creation of a polynomial chaos algorithm

The objective of this Use Case is to create a polynomial chaos algorithm in order to eventually evaluate the probabilistic indicators of the output variable of interest at negligible CPU cost.

Details on response surface approximations may be found in the Reference Guide (see the files Reference Guide – Step Res. Surf. – Functional Chaos Expansion and Reference Guide – Step Res. Surf. – Polynomial Chaos Expansion).

When $\underline{Y}$ is of dimension $p>1$, OpenTURNS operates marginal by marginal, using the same multivariate orthonormal basis ${\left({\Psi }_{k}\left(\underline{x}\right)\right)}_{k\in ℕ}$ for all the marginals. Then, we obtain the following response surfaces for each marginal ${Y}_{i}$ :

 $\begin{array}{c}\hfill {\stackrel{˜}{Y}}_{i}=\sum _{k\in K}{\alpha }_{k}^{i}{\Psi }_{k}\circ T\left(\underline{X}\right)\end{array}$

The final multivariate response surface is presented as follows :

 $\begin{array}{c}\hfill \stackrel{˜}{\underline{Y}}=\sum _{k\in K}{\underline{\alpha }}_{k}{\Psi }_{k}\circ T\left(\underline{X}\right)\end{array}$

where ${\underline{\alpha }}_{k}=\left({\alpha }_{k}^{1},\cdots ,{\alpha }_{k}^{p}\right)$.

We detail here all the steps required in order to create a polynomial chaos algorithm when the output random variable $Y$ is scalar.

Step 1 : Construction of the multivariate orthonormal basis : OpenTURNS builds the multivariate orthonormal basis ${\left({\Psi }_{k}\left(\underline{x}\right)\right)}_{k\in ℕ}$ as the cartesian product of orthonormal univariate polynomial families ${\left({\Psi }_{l}^{i}\left({z}_{i}\right)\right)}_{l\in ℕ}$ :

 $\begin{array}{c}\hfill {\Psi }_{k}\left(\underline{z}\right)={\Psi }_{{k}_{1}}^{1}\left({z}_{1}\right)*{\Psi }_{{k}_{2}}^{2}\left({z}_{2}\right)*\cdots *{\Psi }_{{k}_{n}}^{n}\left({z}_{n}\right)\end{array}$

The possible univariate polynomial families associated to continuous measures are :

• Hermite, which is orthonormal with respect to the $Normal\left(0,1\right)$ distribution,

• Jacobi($\alpha$, $\beta$, param), which is orthonormal with respect to the $Beta\left(\beta +1,\alpha +\beta +2,-1,1\right)$ distribution if $param=0$ (default value) or to the $Beta\left(\alpha ,\beta ,-1,1\right)$ distribution if $param=1$,

• Laguerre($k$), which is orthonormal with respect to the $Gamma\left(k+1,1,0\right)$ distribution,

• Legendre, which is orthonormal with respect to the $Uniform\left(-1,1\right)$ distribution.
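These orthonormality statements can be checked numerically. The sketch below verifies, with a midpoint quadrature rule, that the first two normalized Legendre polynomials are orthonormal with respect to the Uniform(-1,1) distribution (pure Python; the normalization constants √3 and √5 are standard results, not taken from the OpenTURNS API):

```python
import math

def uniform_inner(f, g, n=20000):
    """Approximate E_U[f(Z) g(Z)] for Z ~ Uniform(-1, 1) with a midpoint
    rule; the factor 0.5 is the uniform density on [-1, 1]."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        z = -1.0 + (i + 0.5) * h
        total += f(z) * g(z)
    return 0.5 * h * total

# Normalized Legendre polynomials of degree 1 and 2
def P1(z):
    return math.sqrt(3.0) * z

def P2(z):
    return math.sqrt(5.0) * 0.5 * (3.0 * z * z - 1.0)
```

The inner products ⟨P1, P1⟩ and ⟨P2, P2⟩ come out close to 1 and ⟨P1, P2⟩ close to 0, as orthonormality requires.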

The possible univariate polynomial families associated to discrete measures are :

• Charlier, parameterized by $\lambda \in {ℝ}^{+*}$, which is orthonormal with respect to the $Poisson\left(\lambda \right)$ distribution,

• Krawtchouk, parameterized by $\left(N,p\right)\in \left({ℕ}^{*},\left[0,1\right]\right)$, which is orthonormal with respect to the $Binomial\left(N,p\right)$ distribution.

• Meixner, parameterized by $\left(r,p\right)\in \left({ℝ}^{+*},\right]0,1\left[\right)$, which is orthonormal with respect to the $NegativeBinomial\left(r,p\right)$ distribution.

Furthermore, the numbering of the multivariate orthonormal basis ${\left({\Psi }_{k}\left(\underline{z}\right)\right)}_{k}$ is given by an enumerate function, which defines a regular way to generate the collection of degrees used for the univariate polynomials : an enumerate function represents a bijection from $ℕ$ into ${ℕ}^{dim}$. The possible enumerate functions are :

• Linear enumeration, through LinearEnumerateFunction,

• Hyperbolic and anisotropic enumeration, thanks to HyperbolicAnisotropicEnumerateFunction.

For the linear enumeration, the bijection enumerates the set of multi-indices by increasing total degree. It begins with the multi-index $\left[0,0,...,0\right]$ and continues as shown below in dimension 4 :

LinearEnumerateFunction(4)(0) = [0,0,0,0]
LinearEnumerateFunction(4)(1) = [1,0,0,0]
LinearEnumerateFunction(4)(2) = [0,1,0,0]
LinearEnumerateFunction(4)(3) = [0,0,1,0]
LinearEnumerateFunction(4)(4) = [0,0,0,1]
LinearEnumerateFunction(4)(5) = [2,0,0,0]
LinearEnumerateFunction(4)(6) = [1,1,0,0]
LinearEnumerateFunction(4)(7) = [1,0,1,0]
LinearEnumerateFunction(4)(8) = [1,0,0,1]
LinearEnumerateFunction(4)(9) = [0,2,0,0]
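The ordering above can be reproduced in a few lines of pure Python: enumerate the multi-indices by increasing total degree and, within a degree, in decreasing lexicographic order (the within-degree order is inferred from the listing above; `linear_enumerate` is an illustrative re-implementation, not the OpenTURNS class):

```python
from itertools import product

def linear_enumerate(dim, count):
    """First `count` multi-indices in graded order: increasing total degree,
    and decreasing lexicographic order inside each degree stratum."""
    result = []
    degree = 0
    while len(result) < count:
        stratum = sorted(
            (idx for idx in product(range(degree + 1), repeat=dim)
             if sum(idx) == degree),
            reverse=True)
        result.extend(stratum)
        degree += 1
    return result[:count]
```

The first ten entries for dim = 4 reproduce the LinearEnumerateFunction(4) listing above.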

For the hyperbolic anisotropic enumeration function, the bijection is based on an increasing q-quasi norm, and continues as shown below in dimension 4 for a value of $q=0.75$ :

HyperbolicAnisotropicEnumerateFunction(4)(0.75)(0) = [0,0,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(1) = [1,0,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(2) = [0,1,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(3) = [0,0,1,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(4) = [0,0,0,1]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(5) = [2,0,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(6) = [0,2,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(7) = [0,0,2,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(8) = [0,0,0,2]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(9) = [1,1,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(10) = [1,0,1,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(11) = [1,0,0,1]

In order to know the degree of the $k$-th polynomial of the multivariate basis, it is enough to sum all the integers given in the list.

It is possible to ask the enumerate function how many polynomials of the multivariate basis are contained in the $k$-th stratum, thanks to the method getStrataCardinal(k), and how many polynomials are contained in the first $k$ strata, thanks to the method getStrataCumulatedCardinal(k).

Note that for the LinearEnumerateFunction the $k$-th stratum corresponds to the polynomials of total degree $k$.
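The sizes of these strata follow from a standard counting argument: there are $C_{k+dim-1}^{k}$ multi-indices of total degree exactly $k$ and $C_{k+dim}^{k}$ of total degree at most $k$. A pure-Python check (the function names simply mirror getStrataCardinal and getStrataCumulatedCardinal; they are not the OpenTURNS methods):

```python
from math import comb

def strata_cardinal(dim, k):
    """Number of multivariate polynomials of total degree exactly k."""
    return comb(k + dim - 1, dim - 1)

def strata_cumulated_cardinal(dim, k):
    """Number of multivariate polynomials of total degree at most k."""
    return comb(k + dim, dim)
```

For dim = 4 this gives strata of sizes 1, 4, 10, ... and a cumulated cardinal of 15 for degree ≤ 2, consistent with the value p = 15 used by the FixedStrategy.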

It is also possible to give weights to the HyperbolicAnisotropicEnumerateFunction in order to specify which univariate polynomials should be favoured in terms of the degree used.

Step 2 : Truncation strategy of the multivariate orthonormal basis (Adaptive strategy) : a strategy must be chosen for the selection of the different terms of the multivariate basis. The selected terms are regrouped in the subset $K$.

There are three different strategies in OpenTURNS :

• FixedStrategy: this strategy is parameterized by an integer $p$. The first $p$ polynomials of the multivariate basis are selected, according to the order defined by the EnumerateFunction. If we want to construct the complete basis up to a maximal degree $d=2$, then we need $p={C}_{d+dim}^{d}$, which leads to $p=15$ for $dim=4$.

• SequentialStrategy: this strategy takes the polynomials of the multivariate basis one by one, in the order defined by the EnumerateFunction, until a convergence criterion is satisfied. The criterion is defined in Step 3 and the residual value is set in the FunctionalChaosAlgorithm through the method setMaximumResidual.

• CleaningStrategy: this strategy considers the $p$ first polynomials of the multivariate basis, in the order defined by the EnumerateFunction, and selects the $M$ ones associated with the most significant coefficients, that is, those greater than a given value (which defines the significance factor criterion). It proceeds as follows :

• It generates an initial basis with a fixed number of terms ($M$), equal to the maximum size of the basis;

• Insignificant terms are removed (with respect to the significance factor criterion);

• A new term is then generated and the basis is updated with respect to the new coefficient;

• The procedure then iterates, removing the insignificant contributions and finally keeping no more than the $M$ most significant contributions.
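The selection rule of the CleaningStrategy can be sketched as follows, assuming the significance criterion compares each coefficient to the largest one in magnitude (an assumption made for illustration; `clean_coefficients` is not an OpenTURNS function):

```python
def clean_coefficients(coeffs, most_significant, significance_factor):
    """Keep at most `most_significant` coefficients whose magnitude is at
    least `significance_factor` times the largest magnitude (assumed rule).
    `coeffs` maps basis indices to estimated coefficients."""
    largest = max(abs(c) for c in coeffs.values())
    # drop insignificant terms
    kept = {k: c for k, c in coeffs.items()
            if abs(c) >= significance_factor * largest}
    # keep only the most significant contributions
    ranked = sorted(kept.items(), key=lambda kv: -abs(kv[1]))
    return dict(ranked[:most_significant])
```

With mostSignificant = 2 and significanceFactor = 1e-4, a coefficient of 1e-6 is dropped by the threshold and only the two largest survivors are retained.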

Step 3 : Evaluation strategy of the approximation coefficients (Projection strategy): The vector $\underline{\alpha }={\left({\alpha }_{k}\right)}_{k\in K}$ is equivalently defined by :

 $\underline{\alpha }=\mathrm{argmin}_{\underline{\alpha }\in {ℝ}^{K}}\,{E}_{\mu }\left[{\left(g\circ {T}^{-1}\left(\underline{Z}\right)-\sum _{k\in K}{\alpha }_{k}{\Psi }_{k}\left(\underline{Z}\right)\right)}^{2}\right]$ (21)

or

 $\underline{\alpha }={\left({E}_{\mu }\left[g\circ {T}^{-1}\left(\underline{Z}\right){\Psi }_{k}\left(\underline{Z}\right)\right]\right)}_{k}$ (22)

where $\underline{Z}=T\left(\underline{X}\right)$.

It corresponds to two points of view :

• Relation (21) means that the coefficients ${\left({\alpha }_{k}\right)}_{k\in K}$ minimize the quadratic error between the model and the polynomial approximation. OpenTURNS implements it through the LeastSquaresStrategy.

• Relation (22) means that ${\alpha }_{k}$ is the scalar product of the model with the $k-th$ element of the orthonormal basis ${\left({\Psi }_{k}\right)}_{k\in K}$. OpenTURNS implements it through the IntegrationStrategy.

In both cases, the expectation ${E}_{\mu }$ is approximated by a linear quadrature formula :

 ${E}_{\mu }\left[f\left(\underline{Z}\right)\right]\simeq \sum _{i\in I}{\omega }_{i}f\left({\Xi }_{i}\right)$ (23)

where $f$ is a function of ${L}^{1}\left(\mu \right)$ defined as :

• In relation (21) :

 $f\left(\underline{Z}\right)={\left(g\circ {T}^{-1}\left(\underline{Z}\right)-\sum _{k\in K}{\alpha }_{k}{\Psi }_{k}\left(\underline{Z}\right)\right)}^{2}$ (24)
• In relation (22) :

 $f\left(\underline{Z}\right)=g\circ {T}^{-1}\left(\underline{Z}\right){\Psi }_{k}\left(\underline{Z}\right)$ (25)

In the approximation (23), the set $I$, the points ${\left({\Xi }_{i}\right)}_{i\in I}$ and the weights ${\left({\omega }_{i}\right)}_{i\in I}$ are obtained from the different methods implemented in OpenTURNS as WeightedExperiment objects, which can be :

• random: the couples ${\left({\Xi }_{i},{\omega }_{i}\right)}_{i\in I}$ are generated such that :

 $\lim _{cardI\to \infty }\sum _{i\in I}{\omega }_{i}{\delta }_{{\Xi }_{i}}=\mu \phantom{\rule{0.166667em}{0ex}}\phantom{\rule{0.166667em}{0ex}}a.s.$ (26)

where ${\delta }_{{\Xi }_{i}}$ is the Dirac random distribution centered on ${\Xi }_{i}$.

We find :

• the MonteCarloExperiment and the LHSExperiment, which generate the points ${\left({\Xi }_{i}\right)}_{i\in I}$ according to the distribution $\mu$ (with or without the LHS technique) and take ${\omega }_{i}=\frac{1}{cardI}$;

• the ImportanceSamplingExperiment, which generates the points ${\left({\Xi }_{i}\right)}_{i\in I}$ according to a given distribution with density $h$ and takes ${\omega }_{i}=\frac{1}{cardI}\frac{\mu \left({\Xi }_{i}\right)}{h\left({\Xi }_{i}\right)}$.

• or deterministic: the couples ${\left({\xi }_{i},{\omega }_{i}\right)}_{i\in I}$ are such that :

 $\lim _{cardI\to \infty }\sum _{i\in I}{\omega }_{i}{\delta }_{{\xi }_{i}}=\mu .$ (27)

where ${\delta }_{{\xi }_{i}}$ is the Dirac distribution centered on ${\xi }_{i}$.

We find :

• the FixedExperiment, where the points ${\left({\xi }_{i}\right)}_{i\in I}$ are given and the weights are either all equal to ${\omega }_{i}=\frac{1}{cardI}$ or given by the User so as to verify (27);

• the LowDiscrepancyExperiment where the points follow a low discrepancy sequence (see the different sequences implemented in OpenTURNS) and the weights are all equal to ${\omega }_{i}=\frac{1}{cardI}$;

• the GaussProductExperiment, where the points ${\left({\xi }_{i}\right)}_{i\in I}$ are the Gauss quadrature points with their associated weights.
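Relation (23) with Monte Carlo weights ${\omega }_{i}=\frac{1}{cardI}$ can be sketched in pure Python. The example below estimates the coefficient ${\alpha }_{2}={E}_{\mu }\left[g\circ {T}^{-1}\left(\underline{Z}\right){\Psi }_{2}\left(\underline{Z}\right)\right]$ of relation (22) for $\mu$ the standard normal distribution, the model $g(z)=z^2$ and the orthonormal Hermite polynomial ${\Psi }_{2}(z)=(z^2-1)/\sqrt{2}$, whose exact value is $\sqrt{2}$ (the function name `expectation_mc` is illustrative; it is not the MonteCarloExperiment class):

```python
import math
import random

def expectation_mc(f, sample_size, seed=0):
    """Monte Carlo approximation of E_mu[f(Z)] for mu = N(0, 1):
    sample the points from mu and use equal weights 1/sample_size."""
    rng = random.Random(seed)
    return sum(f(rng.gauss(0.0, 1.0)) for _ in range(sample_size)) / sample_size

def integrand(z):
    # g(z) * Psi_2(z) with g(z) = z**2 and Psi_2(z) = (z**2 - 1) / sqrt(2)
    return z * z * (z * z - 1.0) / math.sqrt(2.0)
```

With a sample of size 200000 the estimate lies close to the exact value √2, with the usual O(1/√N) Monte Carlo error.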

The convergence criterion used to evaluate the coefficients is based on the residual value defined in the FunctionalChaosAlgorithm through the method setMaximumResidual.

Step 4 : Creation of the Functional Chaos Algorithm: a FunctionalChaosAlgorithm needs the following information to be created :

• the model $g$, which is the limit state function,

• the multivariate distribution of the input random vector $\underline{X}$,

• the truncature strategy of the multivariate basis, which contains also the information to build the complete multivariate basis,

• the evaluation strategy of the approximation coefficients.

OpenTURNS offers a second way to define a FunctionalChaosAlgorithm when the User does not have the model but only some realizations of it : an input sample, the associated output sample and the weights, as follows :

• the input sample ${\left({\underline{X}}_{i}\right)}_{i=1,\cdots ,n}$,

• the weights ${\omega }_{i}$ of each realization ${\underline{X}}_{i}$,

• the output sample ${\left({\underline{Y}}_{i}\right)}_{i=1,\cdots ,n}$,

• the multivariate distribution ${p}_{\underline{X}}$ of the input random vector $\underline{X}$ : the previous weights are determined such that ${\sum }_{i=1}^{n}{\omega }_{i}{\delta }_{{\underline{X}}_{i}}\simeq {p}_{\underline{X}}$ ,

• the truncature strategy of the multivariate basis, which contains also the information to build the complete multivariate basis,

• the evaluation strategy of the approximation coefficients.

This second usage makes it possible to disconnect the evaluations of the model from the analysis performed on the samples. It is possible to change the input distribution of the input sample by changing the weights (and the declaration of ${p}_{\underline{X}}$).

Example illustrated in the Use Case: the input random vector $\underline{X}$ is of dimension 4 and follows a multivariate distribution denoted Xdist.

The output variable of interest is scalar, denoted outputVariable, and is the image of $\underline{X}$ through the model myModel:

 $\begin{array}{c}\hfill outputVariable=myModel\left(\underline{X}\right)\end{array}$

To build the multivariate orthonormal basis, we select the following univariate polynomial families :

• for ${Z}_{1}$ : the Hermite family, associated to ${\mu }_{1}$ which is a Standard normal distribution : $𝒩\left(0,1\right)$

• for ${Z}_{2}$ : the Legendre family, associated to ${\mu }_{2}$ which is a Uniform distribution : $𝒰\left(-1,1\right)$

• for ${Z}_{3}$ : the $Laguerre\left(1.75\right)$ family, associated to ${\mu }_{3}$ which is a Gamma distribution : $\Gamma \left(2.75,1,0\right)$

• for ${Z}_{4}$ : the $Jacobi\left(2.5,3.5,Jacobi.PROBABILITY\right)$ family, associated to ${\mu }_{4}$ which is a Beta distribution : $B\left(2.5,3.5,-1,1\right)$

Be careful to build a collection of univariate polynomial families whose dimension is that of $\underline{X}$.

Thus, the reduced random vector $\underline{Z}=\left({Z}_{1},{Z}_{2},{Z}_{3},{Z}_{4}\right)=T\left(\underline{X}\right)$ follows the distribution $\mu ={\prod }_{i=1}^{4}{\mu }_{i}$.
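Because the copula is independent, $T$ acts marginal by marginal through CDF composition : ${Z}_{i}={F}_{{\mu }_{i}}^{-1}\left({F}_{{X}_{i}}\left({x}_{i}\right)\right)$. As a pure-Python illustration (with the assumption, made only for this sketch, that the input marginal is standard normal and the target measure is the Uniform(-1,1) of the Legendre family):

```python
import math

def std_normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_to_uniform(x):
    """Map X ~ N(0, 1) to Z ~ Uniform(-1, 1) by composing the CDFs:
    Z = F_U^{-1}(Phi(x)) = 2 * Phi(x) - 1."""
    return 2.0 * std_normal_cdf(x) - 1.0
```

The transform is monotone and maps the median 0 of the normal marginal to the median 0 of the uniform one.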

In order to build the multivariate orthonormal basis, we can select any combination of these four orthonormal polynomial families to characterize the general term of the basis. In this example, we show how to define each selection strategy :

• the FixedStrategy where we want to select all the polynomials of degree $\le 2$, which leads to $p={C}_{6}^{2}=15$,

• the SequentialStrategy where we want to select, among the $maximumCardinalBasis=100$ first polynomials of the multivariate basis, those verifying the convergence criterion,

• the CleaningStrategy where we want to select, among the $maximumConsideredTerms=500$ first polynomials of the multivariate basis, those which have the $mostSignificant=50$ most significant contributions (greatest coefficients), with respect to the significance criterion $\epsilon ={10}^{-4}$.

The example illustrates the ProjectionStrategy of type LeastSquaresStrategy defined from a MonteCarloExperiment using a Monte Carlo sampling of size $sampleSize=100$.

The FunctionalChaosAlgorithm is parameterized with all the information previously defined. The example illustrates how to redefine the value of the convergence criterion, which is used to :

• determine the truncated basis with the sequential strategy,

• evaluate the coefficients of the polynomial expansion approximation with the least squares strategy.

The new value is $newResidual=1.0e-4$.

Requirements

• the limit state function : myModel (type: NumericalMathFunction)

• the distribution of the input random vector : Xdist (type: Distribution)

Results

• the polynomial chaos algorithm : polynomialChaosAlgorithm (type: FunctionalChaosAlgorithm)

Python script for this Use Case :

script_docUC_RespSurface_PolyChaosExpansion.py

```python
#############################################################
# STEP 1 : Construction of the multivariate orthonormal basis
#############################################################

# Dimension of the input random vector
dim = 4

# Create the univariate polynomial family collection
# which regroups the polynomial families for each direction
polyColl = PolynomialFamilyCollection(dim)
# For information, with the Krawtchouk and Charlier families :
polyColl[0] = KrawtchoukFactory()
polyColl[1] = CharlierFactory()
# For information, with the automatic selection :
for i in range(Xdist.getDimension()):
    polyColl[i] = StandardDistributionPolynomialFactory(Xdist.getMarginal(i))
# We overload these factories as they are dedicated to discrete distributions
polyColl[0] = HermiteFactory()
polyColl[1] = LegendreFactory()
polyColl[2] = LaguerreFactory(2.75)
# Parameter for the Jacobi factory : 'Probability' encoded with 1
polyColl[3] = JacobiFactory(2.5, 3.5, 1)

# Create the enumeration function
# LinearEnumerateFunction
enumerateFunction = LinearEnumerateFunction(dim)

# HyperbolicAnisotropicEnumerateFunction
q = 0.4
enumerateFunction_1 = HyperbolicAnisotropicEnumerateFunction(dim, q)

# Create the multivariate orthonormal basis,
# which is the cartesian product of the univariate bases
multivariateBasis = OrthogonalProductPolynomialFactory(
    polyColl, enumerateFunction)

# Ask how many polynomials have degree equal to k = 5
k = 5
print(enumerateFunction.getStrataCardinal(k))

# Ask how many polynomials have degree lower than or equal to k = 5
print(enumerateFunction.getStrataCumulatedCardinal(k))

# Give the k-th term of the multivariate basis
# To calculate its degree, add the integers
k = 5
print(enumerateFunction(k))

# Build a term of the basis as a NumericalMathFunction
# Generally, we do not need to construct any term manually:
# all terms are constructed automatically by the strategy used to
# build the basis
i = 5
Psi_i = multivariateBasis.build(i)

# Get the measure mu associated to the multivariate basis
distributionMu = multivariateBasis.getMeasure()

####################################################################
# STEP 2 : Truncation strategy of the multivariate orthonormal basis
####################################################################

# FixedStrategy :
# all the polynomials of degree <= 2,
# which corresponds to the 15 first ones
p = 15
truncatureBasisStrategy = FixedStrategy(multivariateBasis, p)

# SequentialStrategy :
# among the maximumCardinalBasis = 100 first polynomials
# of the multivariate basis, those verifying the convergence criterion
maximumCardinalBasis = 100
truncatureBasisStrategy_1 = SequentialStrategy(
    multivariateBasis, maximumCardinalBasis)

# CleaningStrategy :
# among the maximumConsideredTerms = 500 first polynomials,
# those which have the mostSignificant = 50 most significant contributions
# with significance criterion significanceFactor = 10^(-4)
# The True boolean indicates if we are interested
# in the online monitoring of the current basis update
# (removed or added coefficients)
maximumConsideredTerms = 500
mostSignificant = 50
significanceFactor = 1.0e-4
truncatureBasisStrategy_2 = CleaningStrategy(
    multivariateBasis, maximumConsideredTerms, mostSignificant,
    significanceFactor, True)

################################################################
# STEP 3 : Evaluation strategy of the approximation coefficients
################################################################

# The technique illustrated is the Least Squares technique,
# where the points come from a design of experiments
# Here : the Monte Carlo sampling technique
sampleSize = 100
evaluationCoeffStrategy = LeastSquaresStrategy(
    MonteCarloExperiment(sampleSize))

# You can specify the approximation algorithm
# This is the algorithm that generates a sequence of bases using Least
# Angle Regression
basisSequenceFactory = LAR()

# This algorithm estimates the empirical error on each sub-basis using
# the Leave One Out strategy
fittingAlgorithm = CorrectedLeaveOneOut()
# Finally the metamodel selection algorithm embedded in LeastSquaresStrategy
approximationAlgorithm = LeastSquaresMetaModelSelectionFactory(
    basisSequenceFactory, fittingAlgorithm)
evaluationCoeffStrategy_2 = LeastSquaresStrategy(
    MonteCarloExperiment(sampleSize), approximationAlgorithm)

# Try integration
marginalDegrees = [2] * dim
evaluationCoeffStrategy_3 = IntegrationStrategy(
    GaussProductExperiment(distributionMu, marginalDegrees))

#####################################################
# STEP 4 : Creation of the Functional Chaos Algorithm
#####################################################

# FunctionalChaosAlgorithm :
# combination of the model : myModel,
# the distribution of the input random vector : Xdist,
# the truncation strategy of the multivariate basis
# and the evaluation strategy of the coefficients
polynomialChaosAlgorithm = FunctionalChaosAlgorithm(
    myModel, Xdist, truncatureBasisStrategy, evaluationCoeffStrategy)
```

### 4.3.2 UC : Run and results exploitation of a polynomial chaos algorithm : coefficients, polynomial model, multivariate basis, truncated multivariate basis, ...

The objective of this Use Case is to launch the polynomial chaos algorithm and exploit all the associated results.

We note $g:{ℝ}^{n}⟶{ℝ}^{p}$, $g\left(\underline{X}\right)=\underline{Y}$ the physical model and $T:{ℝ}^{n}⟶{ℝ}^{n}$, $T\left(\underline{X}\right)=\underline{Z}$ the iso-probabilistic transformation.

Once the algorithm has run, it is possible to ask for the following results :

• the composed model: $h:\underline{Z}⟶\underline{Y}=g\circ {T}^{-1}\left(\underline{Z}\right)$, which is the model of the reduced variables $\underline{Z}$. We have $h=\sum _{k\in ℕ}{\underline{\alpha }}_{k}{\Psi }_{k}$,

• the coefficients of the polynomial approximation : ${\left({\underline{\alpha }}_{k}\right)}_{k\in K}$,

• the composed meta model: $\stackrel{^}{h}$, which is the model of the reduced variables reduced to the truncated multivariate basis ${\left({\Psi }_{k}\right)}_{k\in K}$. We have $\stackrel{^}{h}=\sum _{k\in K}{\underline{\alpha }}_{k}{\Psi }_{k}$,

• the meta model: $\stackrel{^}{g}:\underline{X}⟶Y=\stackrel{^}{h}\circ T\left(\underline{X}\right)$ which is the polynomial chaos approximation as a NumericalMathFunction. We have $\stackrel{^}{g}=\sum _{k\in K}{\underline{\alpha }}_{k}{\Psi }_{k}\circ T$,

• the truncated multivariate basis : ${\left({\Psi }_{k}\right)}_{k\in K}$,

• the indices $K$,

• the composition of each polynomial of the truncated multivariate basis ${\Psi }_{k}$,

• the distribution $\mu$ of the transformed variables $\underline{Z}$.

Details on response surface approximations may be found in the Reference Guide (see the files Reference Guide – Step Res. Surf. – Functional Chaos Expansion and Reference Guide – Step Res. Surf. – Polynomial Chaos Expansion).

Once the approximated model has been determined, it is possible to apply the Uncertainty Methodology with the new output variable of interest $\stackrel{˜}{Y}$ instead of the real one $Y$. The Use Case (2.4.1) illustrates how to create an output variable of interest from the result of the PolynomialChaosAlgorithm : PolynomialChaosResult.

Requirements

• the polynomial chaos algorithm : polynomialChaosAlgorithm (type: FunctionalChaosAlgorithm)

Results

• the result structure : result (type: FunctionalChaosResult)

• the coefficients : coefficients (type: NumericalSample)

• the indices $K$ : subsetK (type: Indices)

• the composed model (of $\underline{Z}$) : composedModel (type: NumericalMathFunction)

• the composed meta model : composedMetaModel (type: NumericalMathFunction)

• the meta model : metaModel (type: NumericalMathFunction)

• the truncated multivariate basis : multivariateBasisCollection (type: NumericalMathFunctionCollection)

• the distribution of the transformed variables : mu (type: Distribution)

• the projection strategy : myProjStat (type: ProjectionStrategy)

Python script for this Use Case :

script_docUC_RespSurface_PolyChaosExploitation.py

#####################################################
# Perform the simulation
#####################################################
polynomialChaosAlgorithm.run()

# Stream out the result
result = polynomialChaosAlgorithm.getResult()

# Get the polynomial chaos coefficients
# All the coefficients
coefficients = result.getCoefficients()

# The coefficients of marginal i
i = 1
coefficientsMarginal_i = result.getCoefficients()[i]

# Get the indices of the selected polynomials : K
subsetK = result.getIndices()

# Get the composition of the polynomials
# of the truncated multivariate basis
# (dim is the dimension of the input random vector)
for i in range(subsetK.getSize()):
    print("Polynomial number ", i, " in truncated basis <-> polynomial number ",
          subsetK[i], " = ", LinearEnumerateFunction(dim)(subsetK[i]), " in complete basis")

# Get the multivariate basis
# as a collection of NumericalMathFunction
functionCollection = result.getReducedBasis()

# Get the orthogonal basis
orthgBasis = result.getOrthogonalBasis()

# Get the distribution of the variables Z
mu = orthgBasis.getMeasure()

# Get the composed model, which is the model of the reduced variables Z
composedModel = result.getComposedModel()

# Get the composed meta model, which is the model of the reduced variables Z
# within the reduced polynomial basis
composedMetaModel = result.getComposedMetaModel()

# Get the meta model, which is the composed meta model combined with the
# isoprobabilistic transformation
metaModel = result.getMetaModel()

# Get the projection strategy
myProjStat = polynomialChaosAlgorithm.getProjectionStrategy()

### 4.3.3 UC : Draw some useful graphs associated to a polynomial chaos algorithm : polynomial graphs, comparison graph between numerical samples from the model and the meta model, ...

The objective of this UC is to provide some useful graphs to be drawn after the launch of a polynomial chaos algorithm, for example :

• Graph 1 : the drawings of some members of the 1D polynomial family,

• Graph 2 : the cloud of points comparing the model values with the meta model ones : if the agreement were perfect, the points would lie on the first diagonal.
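Beyond the visual check of Graph 2, the agreement between the model and the meta model can be summarized by a scalar predictivity coefficient $Q_2 = 1 - \sum_i (y_i - \stackrel{˜}{y}_i)^2 / \sum_i (y_i - \overline{y})^2$ computed from the same two samples. The helper below is not part of the library; it is a minimal stdlib-only sketch of that formula:

```python
def predictivity_coefficient(model_values, metamodel_values):
    """Q2 = 1 - sum (y - y_hat)^2 / sum (y - y_bar)^2 (illustrative helper)."""
    n = len(model_values)
    mean_y = sum(model_values) / n
    # Residual sum of squares between model and meta model evaluations
    ss_res = sum((y - yh) ** 2 for y, yh in zip(model_values, metamodel_values))
    # Total sum of squares of the model sample around its mean
    ss_tot = sum((y - mean_y) ** 2 for y in model_values)
    return 1.0 - ss_res / ss_tot

# A perfect meta model gives Q2 = 1; any discrepancy lowers it
y = [1.0, 2.0, 3.0, 4.0]
print(predictivity_coefficient(y, y))  # 1.0
```

A $Q_2$ close to 1 corresponds to a cloud concentrated on the first diagonal.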

Details on response surface approximations may be found in the Reference Guide ( see files Reference Guide - Step Res. Surf. – Functional Chaos Expansion and – Polynomial Chaos Expansion ).

The Use Case takes the example of the Jacobi family.

 Requirements $•$ the distribution of the input random vector : inputDistribution type: a Distribution $•$ the limit state function : limitStateFunction type: a NumericalMathFunction $•$ the meta model : metaModel type: a NumericalMathFunction Results $•$ graph described above : graphJacobi type: Graph

Python script for this Use Case :

script_docUC_RespSurface_PolyChaosDrawings1.py

#################################################
# GRAPH 1 : drawings of the first 5 members of
# the Jacobi family
#################################################

# Create the Jacobi polynomial family using the default Jacobi.ANALYSIS
# parameter set
alpha = 0.5
beta = 1.5
jacobiFamily = JacobiFactory(alpha, beta)

# Fix the maximum degree of the polynomials to be drawn
degreeMax = 5

# Load the default palette of colors
colorList = Drawable.BuildDefaultPalette(degreeMax)

# Create the title
titleJacobi = "Jacobi(" + str(alpha) + ", " + str(beta) + ") polynomials"

# Create an empty graph which will be filled with the curves
graphJacobi = Graph(titleJacobi, "z", "polynomial values", True, "topright")

# Fix the number of points of each curve
pointNumber = 101

# Bounds of the graph
xMinJacobi = -1
xMaxJacobi = 1

# Get the curves
for i in range(degreeMax):
    graphJacobi_temp = jacobiFamily.build(i).draw(
        xMinJacobi, xMaxJacobi, pointNumber)
    graphJacobi_temp_draw = graphJacobi_temp.getDrawable(0)
    legend = "degree " + str(i)
    graphJacobi_temp_draw.setLegend(legend)
    graphJacobi_temp_draw.setColor(colorList[i])
    graphJacobi.add(graphJacobi_temp_draw)

# In order to see the graph without creating the .EPS, .PNG and .FIG files
# Show(graphJacobi)

script_docUC_RespSurface_PolyChaosDrawings2.py

#####################################################
# GRAPH 2 : cloud of points between model and meta model
#####################################################

# Generate a NumericalSample of the input random vector
sizeX = 500
Xsample = inputDistribution.getSample(sizeX)

# Evaluate the model on the sample
modelSample = limitStateFunction(Xsample)

# Evaluate the meta model on the sample
metaModelSample = metaModel(Xsample)

# Create the numerical sample of the points (Y, Y_tilde)
sampleMixed = NumericalSample(sizeX, 2)
for i in range(sizeX):
    sampleMixed[i, 0] = modelSample[i, 0]
    sampleMixed[i, 1] = metaModelSample[i, 0]

# Create the legend
legend = str(sizeX) + " realizations"

# Create the cloud
comparisonCloud = Cloud(sampleMixed, "blue", "fsquare", legend)

# Put it within a graph structure
graphCloud = Graph(
    "Polynomial chaos expansion", "model", "meta model", True, "topleft")

# Fill the graph with the cloud
graphCloud.add(comparisonCloud)

# In order to see the graph
# View(graphCloud).show()

The example illustrated below is the beam example described in Eq. (), where :

• $E$ follows the Beta($r=0.9$, $t=3.2$, $a=2.8e7$, $b=4.9e7$) distribution,

• $F$ follows the Gamma($k=3e5$, $\lambda =9e3$, $\gamma =1.5e4$) distribution,

• $L$ follows the Uniform($a=250$, $b=260$) distribution,

• $I$ follows the Beta($r=2.5$, $t=4.0$, $a=3.1e2$, $b=4.5e2$) distribution,

• the four components are independent.

We took the following 1D polynomial families, whose parameters have been chosen to match the marginal distributions of the input random vector :

• $E$ : Jacobi($\alpha =1.3$, $\beta =-0.1$, param=Jacobi.ANALYSIS),

• $F$ : Laguerre($k=1.78$, Laguerre.ANALYSIS),

• $L$ : Legendre,

• $I$ : Jacobi($\alpha =0.5$, $\beta =1.5$).

The truncation strategy of the multivariate orthonormal basis is the Cleaning Strategy : among the first 500 terms of the multivariate basis, at most the 50 most significant ones are kept, retaining only those whose contribution is significant (that is, greater than ${10}^{-4}$).

The evaluation strategy of the approximation coefficients is the least squares strategy, based on a design of experiments obtained by Monte Carlo sampling of size 100.
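The principle behind the least squares strategy can be illustrated outside the library : the coefficients minimize the squared residual between the model evaluations on the design of experiments and the truncated expansion. The sketch below uses the hypothetical 1D basis {1, z} and a hand-rolled 2x2 normal-equation solve, stdlib only, purely for illustration:

```python
import random

def least_squares_2basis(xs, ys):
    """Fit y ~ a0*1 + a1*z by solving the 2x2 normal equations A^T A a = A^T y."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    b0 = sum(ys)
    b1 = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a0 = (sxx * b0 - sx * b1) / det
    a1 = (n * b1 - sx * b0) / det
    return a0, a1

random.seed(0)
# Monte Carlo design of experiments of size 100 on [-1, 1]
xs = [random.uniform(-1, 1) for _ in range(100)]
# A model that lies exactly in the span of the basis, so the fit is exact
ys = [2.0 + 3.0 * x for x in xs]
a0, a1 = least_squares_2basis(xs, ys)  # recovers (2.0, 3.0)
```

With a richer multivariate basis the same normal equations simply become larger, which is what the library's least squares strategy solves internally.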

Figures (49) to (51) show the graphs mentioned above.

 The first 5 polynomials of the Jacobi family associated to the variable E. The first 5 polynomials of the Laguerre family associated to the variable F.
Figure 49
 The first 5 polynomials of the Legendre family associated to the variable L. The first 5 polynomials of the Jacobi family associated to the variable I.
Figure 50
 Comparison of values from the model and the polynomial chaos meta model.
Figure 51

Figure (52) shows the first 5 polynomials of the Krawtchouk and Charlier families, respectively associated to the $Binomial\left(N=5,p=0.6\right)$ and $Poisson\left(0.6\right)$ distributions.

 The first 5 polynomials of the Krawtchouk family associated to the $Binomial\left(N=5,p=0.6\right)$ measure. The first 5 polynomials of the Charlier family associated to the $Poisson\left(0.6\right)$ measure.
Figure 52

### 4.3.4 UC : Polynomial chaos approximation from a design of experiments

This Use Case details the method to construct a response surface by polynomial chaos from a design of experiments.

You will need the distribution of the input parameters. If it is not known, statistical inference can be used to select a possible candidate, and fitting tests can validate such a hypothesis.

 Requirements $•$ a sample of the input vector: X type: NumericalSample $•$ a sample of the output vector: Y type: NumericalSample $•$ the distribution of input parameters: distribution type: Distribution Results $•$ the polynomial chaos algorithm: algo type: a FunctionalChaosAlgorithm $•$ the meta-model function: metamodel type: a NumericalMathFunction

Python script for this Use Case :

script_docUC_RespSurface_PolyChaos_database.py

dimension = X.getDimension()

# Build the orthogonal basis
coll = []
for i in range(dimension):
    coll.append(
        StandardDistributionPolynomialFactory(distribution.getMarginal(i)))
enumerateFunction = LinearEnumerateFunction(dimension)
productBasis = OrthogonalProductPolynomialFactory(coll, enumerateFunction)

# Create the algorithm
degree = 6
adaptiveStrategy = FixedStrategy(
    productBasis, enumerateFunction.getStrataCumulatedCardinal(degree))
projectionStrategy = LeastSquaresStrategy()
algo = FunctionalChaosAlgorithm(
    X, Y, distribution, adaptiveStrategy, projectionStrategy)
algo.run()

# Get the meta model function
result = algo.getResult()
metamodel = result.getMetaModel()
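For the LinearEnumerateFunction, `getStrataCumulatedCardinal(degree)` counts the multivariate polynomials of total degree at most `degree`, i.e. $\binom{d+p}{d}$ for dimension $d$ and degree $p$ (this is standard combinatorics; the exact library semantics should be checked against its documentation). A quick stdlib check of that count:

```python
from math import comb

def total_degree_basis_size(dimension, degree):
    """Number of multi-indices alpha in N^d with |alpha| <= degree."""
    return comb(dimension + degree, dimension)

# For the script above : dimension 4 (beam example), degree 6
print(total_degree_basis_size(4, 6))  # 210
```

This shows why the basis size, and hence the required design-of-experiments size, grows quickly with the dimension and the degree.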

## 4.4 Kriging

This section details another method to construct a response surface: kriging, evaluated on a sample of the input vector and the associated sample of the output variable of interest.

### 4.4.1 UC : Kriging metamodel approximation from a design of experiments

This Use Case details the method to build a response surface from a design of experiments by Kriging.

 Requirements $•$ a sample of the input vector: X type: NumericalSample $•$ a sample of the output vector: Y type: NumericalSample Results $•$ the Kriging algorithm: algo type: a KrigingAlgorithm $•$ the meta-model function: metamodel type: a NumericalMathFunction

Python script for this Use Case :

script_docUC_RespSurface_Kriging.py

dimension = X.getDimension()

# STEP 1: Construction of the autocorrelation function
covarianceModel = SquaredExponential(dimension, 0.1)

# STEP 2: Construction of the regression basis
basis = ConstantBasisFactory(dimension).build()

# STEP 3: Construction of the algorithm
algo = KrigingAlgorithm(X, Y, basis, covarianceModel)
algo.run()

# STEP 4: Get the meta model function
result = algo.getResult()
metamodel = result.getMetaModel()
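The SquaredExponential model of STEP 1 is a stationary covariance of the form $C(x, x') = \sigma^2 \exp\left(-\frac{1}{2}\left\|\frac{x - x'}{\theta}\right\|^2\right)$; the exact parameterization depends on the library version, so the scaling below is an assumption to be checked against its documentation. A stdlib-only sketch of this kernel:

```python
from math import exp

def squared_exponential(x, y, theta=0.1, sigma2=1.0):
    """Squared exponential kernel (parameterization assumed, see library doc)."""
    # Squared scaled distance between the two points
    d2 = sum(((xi - yi) / theta) ** 2 for xi, yi in zip(x, y))
    return sigma2 * exp(-0.5 * d2)

p = [0.2, 0.4]
print(squared_exponential(p, p))  # 1.0 at zero distance
```

The correlation equals $\sigma^2$ at zero distance and decays smoothly as points move apart, which is what makes the Kriging predictor interpolate the design points with a smooth surface.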
