4 Construction of a response surface

The objective of this section is to build a response surface from a function. This response surface may be built from Taylor approximations (Section 4.1), the least squares method (Section 4.2), a polynomial chaos expansion (Section 4.3) or kriging (Section 4.4).

4.1 Taylor approximations

4.1.1 UC : Linear and Quadratic Taylor approximations

This section details the first method to construct a response surface : from the linear or quadratic Taylor approximations of the function at a particular point.

Details on response surface approximations may be found in the Reference Guide ( see files Reference Guide - Step Res. Surf. – Polynomial Response Surfaces : principles and – Taylor Expansion ).
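For reference, the linear and quadratic response surfaces correspond to the standard first and second order Taylor expansions of the function (denoted h here) around the point \(\underline{x}_0\), the center used in the script below :

\[\tilde{h}_{lin}(\underline{x}) = h(\underline{x}_0) + \sum_{i=1}^{n} \frac{\partial h}{\partial x_i}(\underline{x}_0)\,(x_i - x_{0,i})\]

\[\tilde{h}_{quad}(\underline{x}) = \tilde{h}_{lin}(\underline{x}) + \frac{1}{2} \sum_{i,j=1}^{n} \frac{\partial^2 h}{\partial x_i \partial x_j}(\underline{x}_0)\,(x_i - x_{0,i})\,(x_j - x_{0,j})\]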

Requirements

  • a function : myFunc
    type: NumericalMathFunction

Results

  • the linear Taylor approximation : myLinearTaylor
    type: LinearTaylor

  • the quadratic Taylor approximation : myQuadraticTaylor
    type: QuadraticTaylor

Python script for this Use Case :

# Taylor approximations at point 'center'
center = NumericalPoint(myFunc.getInputNumericalPointDimension())
for i in range(center.getDimension()):
    center[i] = 1.0 + i

# Create the linear Taylor approximation
myLinearTaylor = LinearTaylor(center, myFunc)

# Create the quadratic Taylor approximation
myQuadraticTaylor = QuadraticTaylor(center, myFunc)

# Perform the approximations
# linear Taylor approximation
myLinearTaylor.run()
print("my linear Taylor =", myLinearTaylor)

# quadratic Taylor approximation
myQuadraticTaylor.run()
print("my quadratic Taylor =", myQuadraticTaylor)

# Stream out the result
# linear Taylor approximation
linearResponseSurface = myLinearTaylor.getResponseSurface()
print("responseSurface =", linearResponseSurface)

# quadratic Taylor approximation
quadraticResponseSurface = myQuadraticTaylor.getResponseSurface()
print("quadraticResponseSurface =", quadraticResponseSurface)

# Compare the approximations and the function at a particular point
# point 'center'
print("myFunc(", center, ")=", myFunc(center))
print("linearResponseSurface(", center, ")=", linearResponseSurface(center))
print("quadraticResponseSurface(", center, ")=", quadraticResponseSurface(center))


4.2 Least Squares approximation

This section details another method to construct a response surface : with the least squares method, evaluated on a sample of the input vector and the associated sample of the output variable of interest.

If one is interested in building a probabilistic model for the residuals, it is no longer a least squares approximation problem but rather a regression problem (see UC : "Building and validating a linear model from two samples").
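As a reminder, the linear approximation takes the form \(\underline{\underline{A}}\,\underline{X} + \underline{B}\), where the coefficients minimize the sum of the squared residuals over the N sample points :

\[(\underline{\underline{A}}, \underline{B}) = \mathop{\mathrm{argmin}} \sum_{i=1}^{N} \left\| \underline{y}_i - \underline{\underline{A}}\,\underline{x}_i - \underline{B} \right\|^2\]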

4.2.1 UC : Linear Least Squares approximation from a sample of the input vector and the real function

This Use Case details the second method to construct a response surface : by the least squares method, from a sample of the input vector and the real function. The output sample is obtained by evaluating the function on the input sample.

Details on response surface approximations may be found in the Reference Guide ( see files Reference Guide - Step Res. Surf. – Polynomial Response Surfaces : principles and – Least Square Method ).

Requirements

  • the limit state function : myFunc
    type: NumericalMathFunction

  • a sample of the input vector : inputSample
    type: NumericalSample

Results

  • the linear approximation by least squares method : responseSurface
    type: NumericalMathFunction

  • the coefficients of the linear approximation \(\underline{\underline{A}}\,\underline{X} + \underline{B}\) of myFunc
    type: Matrix (\(\underline{\underline{A}}\)), NumericalPoint (\(\underline{B}\))

Python script for this Use Case :

# Create the LinearLeastSquares algorithm
myLeastSquares = LinearLeastSquares(inputSample, myFunc)

# Perform the algorithm
myLeastSquares.run()
print("myLeastSquares=", myLeastSquares)

# Stream out the results :
# Get the matrix A :
linear = myLeastSquares.getLinear()
print("A = ", linear)

# Get the constant term B :
constant = myLeastSquares.getConstant()
print("B = ", constant)

# Get the linear response surface
responseSurface = myLeastSquares.getResponseSurface()
print("responseSurface=", responseSurface)

# Compare the two models at a particular point 'inPoint'
print("myFunc(", inPoint, ")=", myFunc(inPoint))
print("responseSurface(", inPoint, ")=", responseSurface(inPoint))


4.2.2 UC : Linear Least Squares approximation from a sample of the input vector and a sample of the output vector

In this Use Case, the sample of the output variable associated to the sample of the input vector is given.

Details on response surface approximations may be found in the Reference Guide ( see files Reference Guide - Step Res. Surf. – Polynomial Response Surfaces : principles and – Least Square Method ).

Requirements

  • a sample of the input vector : inputSample
    type: NumericalSample

  • the associated sample of the output vector : outputSample
    type: NumericalSample

Results

  • the linear approximation by least squares method : responseSurface
    type: NumericalMathFunction

  • the coefficients of the linear approximation \(\underline{\underline{A}}\,\underline{X} + \underline{B}\)
    type: Matrix (\(\underline{\underline{A}}\)), NumericalPoint (\(\underline{B}\))
 

Python script for this Use Case :

# Create the LinearLeastSquares algorithm
myLeastSquares = LinearLeastSquares(inputSample, outputSample)

# Perform the algorithm
myLeastSquares.run()
print("myLeastSquares=", myLeastSquares)

# Stream out the results :
# Get the matrix A :
linear = myLeastSquares.getLinear()
print("A = ", linear)

# Get the constant term B :
constant = myLeastSquares.getConstant()
print("B = ", constant)

# Get the linear response surface
responseSurface = myLeastSquares.getResponseSurface()
print("responseSurface=", responseSurface)


4.3 Polynomial chaos expansion

The polynomial chaos expansion makes it possible to approximate the output random variable of interest \(\underline{Y} = g(\underline{X}) \in \mathbb{R}^p\), where \(\underline{X}\) is the input random vector of dimension n, by the response surface :

\[\tilde{\underline{Y}} = \sum_{k \in K} \underline{\alpha}_k \Psi_k \circ T(\underline{X})\]

where \(\underline{\alpha}_k \in \mathbb{R}^p\), \(T\) is an isoprobabilistic transformation which maps the multivariate distribution of \(\underline{X}\) into the multivariate distribution \(\mu = \prod_{i=1}^{n} \mu_i\), and \((\Psi_k)_{k \in \mathbb{N}}\) a multivariate polynomial basis of \(L^2_\mu(\mathbb{R}^n, \mathbb{R})\) which is orthonormal with respect to the distribution \(\mu\). K is a finite subset of \(\mathbb{N}\).

The distribution \(\mu\) is supposed to have an independent copula and Y to have a finite second moment.

4.3.1 UC : Creation of a polynomial chaos algorithm

The objective of this Use Case is to create a polynomial chaos algorithm in order to evaluate, ultimately, the probabilistic indicators of the output variable of interest without any CPU time problem.

Details on response surface approximations may be found in the Reference Guide ( see files Reference Guide - Step Res. Surf. – Functional Chaos Expansion and – Polynomial Chaos Expansion ).

When \(\underline{Y}\) is of dimension p > 1, OpenTURNS operates marginal by marginal, using the same multivariate orthonormal basis \((\Psi_k(\underline{x}))_{k \in \mathbb{N}}\) for all the marginals. Then, we obtain the following response surface for each marginal \(Y_i\) :

\[\tilde{Y}_i = \sum_{k \in K} \alpha_k^i \Psi_k \circ T(\underline{X})\]

The final multivariate response surface is presented as follows :

\[\tilde{\underline{Y}} = \sum_{k \in K} \underline{\alpha}_k \Psi_k \circ T(\underline{X})\]

where \(\underline{\alpha}_k = (\alpha_k^1, \dots, \alpha_k^p)\).

We detail here all the steps required in order to create a polynomial chaos algorithm when the output random variable Y is scalar.

Step 1 : Construction of the multivariate orthonormal basis : OpenTURNS proposes to build the multivariate orthonormal basis \((\Psi_k(\underline{x}))_{k \in \mathbb{N}}\) as the cartesian product of orthonormal univariate polynomial families \((\Psi_l^i(z_i))_{l \in \mathbb{N}}\) :

\[\Psi_k(\underline{z}) = \Psi_{k_1}^1(z_1) \times \Psi_{k_2}^2(z_2) \times \dots \times \Psi_{k_n}^n(z_n)\]

The possible univariate polynomial families associated to continuous measures are :

  • Hermite, which is orthonormal with respect to the Normal(0,1) distribution,

  • Jacobi(α, β, param), which is orthonormal with respect to the Beta(β+1,α+β+2,-1,1) distribution if param=0 (default value) or to the Beta(α,β,-1,1) distribution if param=1,

  • Laguerre(k), which is orthonormal with respect to the Gamma(k+1,1,0) distribution,

  • Legendre, which is orthonormal with respect to the Uniform(-1,1) distribution.

The possible univariate polynomial families associated to discrete measures are :

  • Charlier, parameterized by \(\lambda \in \mathbb{R}^{+*}\), which is orthonormal with respect to the Poisson(λ) distribution,

  • Krawtchouk, parameterized by \((N, p) \in \mathbb{N}^* \times [0, 1]\), which is orthonormal with respect to the Binomial(N,p) distribution,

  • Meixner, parameterized by \((r, p) \in \mathbb{R}^{+*} \times \,]0, 1[\), which is orthonormal with respect to the NegativeBinomial(r,p) distribution.
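As an illustration, here is a minimal sketch which builds the first members of one of these families and retrieves its orthonormality measure (assuming the OpenTURNS classes are in scope, as in the scripts of this guide) :

# Minimal sketch : inspect a univariate orthonormal family
# (assumes the OpenTURNS classes are in scope)
hermiteFamily = HermiteFactory()

# The measure the family is orthonormal with respect to :
# here the Normal(0,1) distribution
print(hermiteFamily.getMeasure())

# Build and print the polynomials of degree 0 to 3
for degree in range(4):
    print(hermiteFamily.build(degree))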

Furthermore, the numbering of the multivariate orthonormal basis \((\Psi_k(\underline{z}))_{k \in \mathbb{N}}\) is given by an enumerate function, which defines a regular way to generate the collection of degrees used for the univariate polynomials : an enumerate function represents a bijection from \(\mathbb{N}\) into \(\mathbb{N}^{dim}\). The possible enumerate functions are :

  • Linear enumeration, through LinearEnumerateFunction,

  • Hyperbolic and anisotropic enumeration, thanks to HyperbolicAnisotropicEnumerateFunction.

In the case of the linear enumeration, the bijection is based on a particular procedure of enumerating the set of multi-indices with increasing total degree. It begins with the multi-index [0,0,...,0] and continues as written below in dimension 4 :

LinearEnumerateFunction(4)(0) = [0,0,0,0]
LinearEnumerateFunction(4)(1) = [1,0,0,0]
LinearEnumerateFunction(4)(2) = [0,1,0,0]
LinearEnumerateFunction(4)(3) = [0,0,1,0]
LinearEnumerateFunction(4)(4) = [0,0,0,1]
LinearEnumerateFunction(4)(5) = [2,0,0,0]
LinearEnumerateFunction(4)(6) = [1,1,0,0]
LinearEnumerateFunction(4)(7) = [1,0,1,0]
LinearEnumerateFunction(4)(8) = [1,0,0,1]
LinearEnumerateFunction(4)(9) = [0,2,0,0]

In the case of the hyperbolic anisotropic enumeration function, the bijection is based on an increasing q-quasi-norm, and proceeds as written below in dimension 4 for a value of q=0.75 :

HyperbolicAnisotropicEnumerateFunction(4)(0.75)(0) = [0,0,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(1) = [1,0,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(2) = [0,1,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(3) = [0,0,1,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(4) = [0,0,0,1]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(5) = [2,0,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(6) = [0,2,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(7) = [0,0,2,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(8) = [0,0,0,2]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(9) = [1,1,0,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(10) = [1,0,1,0]
HyperbolicAnisotropicEnumerateFunction(4)(0.75)(11) = [1,0,0,1]

In order to know the degree of the k-th polynomial of the multivariate basis, it is enough to sum all the integers given in the list.

It is possible to ask the enumerate function how many polynomials of the multivariate basis are contained in the k-th stratum, thanks to the method getStrataCardinal(k), and how many polynomials are contained in all the first k strata, thanks to the method getStrataCumulatedCardinal(k).

Note that for the LinearEnumerateFunction, the k-th stratum corresponds to the polynomials of total degree k.

It is also possible to give the HyperbolicAnisotropicEnumerateFunction weights in order to specify which univariate polynomials should be preponderant in terms of the degrees used.
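The following minimal sketch (assuming the OpenTURNS classes are in scope) reproduces the first multi-indices listed above and queries the strata :

# Minimal sketch : enumerate functions in dimension 4
dim = 4
linearEnumerate = LinearEnumerateFunction(dim)

# Reproduce the first multi-indices listed above
for k in range(10):
    print(k, linearEnumerate(k))

# Number of polynomials of total degree exactly 2, then at most 2
print(linearEnumerate.getStrataCardinal(2))           # expected : 10
print(linearEnumerate.getStrataCumulatedCardinal(2))  # expected : 15

# Hyperbolic anisotropic variant with q = 0.75
hyperbolicEnumerate = HyperbolicAnisotropicEnumerateFunction(dim, 0.75)
print(hyperbolicEnumerate(9))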

Step 2 : Truncation strategy of the multivariate orthonormal basis (Adaptive strategy) : a strategy must be chosen to select the different terms of the multivariate basis. The selected terms are gathered in the subset K.

There are three different strategies in OpenTURNS :

  • FixedStrategy: this strategy is parameterized with an integer p. The first p polynomials of the multivariate basis are selected, according to the order defined by the EnumerateFunction. If we want to construct the complete basis up to a maximal total degree d=2, then it is necessary to have \(p = \binom{d+dim}{d}\), which leads to p=15 for dim=4.

  • SequentialStrategy: this strategy takes the polynomials of the multivariate basis one by one, in the order defined by the EnumerateFunction, until a convergence criterion is satisfied. The criterion is defined in Step 3 and the residual value is defined in the FunctionalChaosAlgorithm through the method setMaximumResidual.

  • CleaningStrategy: this strategy considers the first p polynomials of the multivariate basis, in the order defined by the EnumerateFunction, and selects the M ones associated with the most significant coefficients, which means greater than a given value (which defines the significance factor criterion). It proceeds as follows :

    • it generates an initial basis with a fixed number of terms (M), which is equal to the maximum size of the basis;

    • insignificant terms are removed (with respect to the significance factor criterion);

    • a new term is then generated and the basis is updated with respect to the new coefficient;

    • the procedure then iterates, removing the insignificant contributions and finally keeping no more than the M most significant contributions.
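As a sanity check of the FixedStrategy size given above, the number of multivariate polynomials of total degree at most d in dimension dim can be computed directly; a minimal sketch in pure Python (3.8+) :

# Check p = C(d+dim, d) for the FixedStrategy
from math import comb

d = 2
dim = 4
p = comb(d + dim, d)
print(p)  # 15, as stated above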

Step 3 : Evaluation strategy of the approximation coefficients (Projection strategy) : the vector \(\underline{\alpha} = (\alpha_k)_{k \in K}\) is equivalently defined by :

\[\underline{\alpha} = \mathop{\mathrm{argmin}}_{\underline{\alpha} \in \mathbb{R}^K} E_\mu\left[ \left( g \circ T^{-1}(\underline{Z}) - \sum_{k \in K} \alpha_k \Psi_k(\underline{Z}) \right)^2 \right] \quad (21)\]

or

\[\underline{\alpha} = \left( E_\mu\left[ g \circ T^{-1}(\underline{Z})\, \Psi_k(\underline{Z}) \right] \right)_{k \in K} \quad (22)\]

where \(\underline{Z} = T(\underline{X})\).

It corresponds to two points of view :

  • Relation (21) means that the coefficients \((\alpha_k)_{k \in K}\) minimize the quadratic error between the model and the polynomial approximation. OpenTURNS implements it through the LeastSquaresStrategy.

  • Relation (22) means that \(\alpha_k\) is the scalar product of the model with the k-th element of the orthonormal basis \((\Psi_k)_{k \in K}\). OpenTURNS implements it through the IntegrationStrategy.

In both cases, the expectation \(E_\mu\) is approximated by a linear quadrature formula :

\[E_\mu\left[f(\underline{Z})\right] \simeq \sum_{i \in I} \omega_i f(\Xi_i) \quad (23)\]

where \(f\) is a function of \(L^1(\mu)\) defined as :

  • in relation (21) :

    \[f(\underline{Z}) = \left( g \circ T^{-1}(\underline{Z}) - \sum_{k \in K} \alpha_k \Psi_k(\underline{Z}) \right)^2 \quad (24)\]

  • in relation (22) :

    \[f(\underline{Z}) = g \circ T^{-1}(\underline{Z})\, \Psi_k(\underline{Z}) \quad (25)\]

In the approximation (23), the set I, the points \((\Xi_i)_{i \in I}\) and the weights \((\omega_i)_{i \in I}\) are evaluated from different methods implemented in OpenTURNS in the WeightedExperiment, which can be :

  • random : the couples \((\Xi_i, \omega_i)_{i \in I}\) are generated such that :

    \[\lim_{card I \to \infty} \sum_{i \in I} \omega_i \delta_{\Xi_i} = \mu \quad a.s. \quad (26)\]

    where \(\delta_{\Xi_i}\) is the Dirac random distribution centered on \(\Xi_i\).

    We find :

    • the MonteCarloExperiment and LHSExperiment, which generate the points \((\Xi_i)_{i \in I}\) according to the distribution \(\mu\), with or without the LHS technique, and take \(\omega_i = \frac{1}{card I}\);

    • the ImportanceSamplingExperiment, which generates the points \((\Xi_i)_{i \in I}\) according to a given distribution with density h and takes \(\omega_i = \frac{1}{card I} \frac{\mu(\Xi_i)}{h(\Xi_i)}\).

  • or deterministic : the couples \((\xi_i, \omega_i)_{i \in I}\) are such that :

    \[\lim_{card I \to \infty} \sum_{i \in I} \omega_i \delta_{\xi_i} = \mu \quad (27)\]

    where \(\delta_{\xi_i}\) is the Dirac distribution centered on \(\xi_i\).

    We find :

    • the FixedExperiment, where the points \((\xi_i)_{i \in I}\) are given and the weights are all equal to \(\omega_i = \frac{1}{card I}\), or given by the User in order to verify (27);

    • the LowDiscrepancyExperiment, where the points follow a low discrepancy sequence (see the different sequences implemented in OpenTURNS) and the weights are all equal to \(\omega_i = \frac{1}{card I}\);

    • the GaussProductExperiment, where the points \((\xi_i)_{i \in I}\) are the Gauss quadrature points with their associated weights.
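For illustration, the following minimal sketch builds one random and one deterministic experiment from the list above (assuming the OpenTURNS classes are in scope; the generateWithWeights accessor is the one available in recent OpenTURNS versions) :

# Minimal sketch : two WeightedExperiment implementations
mu = ComposedDistribution([Normal(0.0, 1.0), Uniform(-1.0, 1.0)])

# Random : Monte Carlo sampling, equal weights 1 / cardI
mcExperiment = MonteCarloExperiment(mu, 100)
mcNodes, mcWeights = mcExperiment.generateWithWeights()
print(mcWeights[0])  # 0.01

# Deterministic : Gauss product rule with 3 nodes per dimension
gaussExperiment = GaussProductExperiment(mu, [3, 3])
gaussNodes, gaussWeights = gaussExperiment.generateWithWeights()
print(gaussNodes.getSize())  # 9 quadrature points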

The convergence criterion used to evaluate the coefficients is based on the residual value defined in the FunctionalChaosAlgorithm through the method setMaximumResidual.

Step 4 : Creation of the Functional Chaos Algorithm: a FunctionalChaosAlgorithm needs the following information to be created :

  • the model g, which is the limit state function,

  • the multivariate distribution of the input random vector X ̲,

  • the truncation strategy of the multivariate basis, which also contains the information needed to build the complete multivariate basis,

  • the evaluation strategy of the approximation coefficients.

OpenTURNS offers a second way to define a FunctionalChaosAlgorithm, when the User does not have the model but only some realizations of it : an input sample, the associated output sample and weights, as follows :

  • the input sample \((\underline{X}_i)_{i=1,\dots,n}\),

  • the weights \(\omega_i\) of each realization \(\underline{X}_i\),

  • the output sample \((\underline{Y}_i)_{i=1,\dots,n}\),

  • the multivariate distribution \(p_{\underline{X}}\) of the input random vector \(\underline{X}\) : the previous weights are determined such that \(\sum_{i=1}^{n} \omega_i \delta_{\underline{X}_i} \to p_{\underline{X}}\),

  • the truncation strategy of the multivariate basis, which also contains the information needed to build the complete multivariate basis,

  • the evaluation strategy of the approximation coefficients.

This second usage makes it possible to disconnect the evaluations of the model from the analysis performed on the samples. It is possible to change the input distribution of the input sample by changing the weights (and the declaration of \(p_{\underline{X}}\)).
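A minimal sketch of this second usage, assuming uniform weights and the strategies built in the steps below; the sample-based constructor signature is the one of recent OpenTURNS versions :

# Minimal sketch : FunctionalChaosAlgorithm built from samples
# (assumes inputSample, outputSample, Xdist and the strategies
# defined in Steps 2 and 3; uniform weights are an assumption)
size = inputSample.getSize()
weights = [1.0 / size] * size
polynomialChaosAlgorithmFromSamples = FunctionalChaosAlgorithm(
    inputSample, weights, outputSample,
    Xdist, truncatureBasisStrategy, evaluationCoeffStrategy)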

Example illustrated in the Use Case : the input random vector \(\underline{X}\) is of dimension 4 and follows a multivariate distribution denoted Xdist.

The output variable of interest is scalar, denoted outputVariable, and is the image of \(\underline{X}\) through the model myModel :

outputVariable = myModel(\(\underline{X}\))

To build the multivariate orthonormal basis, we select the following univariate polynomial families :

  • for \(Z_1\) : the Hermite family, associated to \(\mu_1\), which is a standard normal distribution : \(\mathcal{N}(0,1)\),

  • for \(Z_2\) : the Legendre family, associated to \(\mu_2\), which is a uniform distribution : \(\mathcal{U}(-1,1)\),

  • for \(Z_3\) : the Laguerre(1.75) family, associated to \(\mu_3\), which is a Gamma distribution : \(\Gamma(2.75, 1, 0)\),

  • for \(Z_4\) : the Jacobi(2.5, 3.5, Jacobi.PROBABILITY) family, associated to \(\mu_4\), which is a Beta distribution : \(B(2.5, 3.5, -1, 1)\).

Be careful to build a collection of univariate polynomial families whose dimension is that of \(\underline{X}\).

Thus, the reduced random vector \(\underline{Z} = (Z_1, Z_2, Z_3, Z_4) = T(\underline{X})\) follows the distribution \(\mu = \prod_{i=1}^{4} \mu_i\).

In order to build the multivariate orthonormal basis, we can select any combination of these four orthonormal polynomial families to characterize the general term of the basis. In this example, we show how to define each selection strategy :

  • the FixedStrategy, where we want to select all the polynomials of total degree at most 2, which leads to \(p = \binom{6}{2} = 15\),

  • the SequentialStrategy, where we want to select, among the maximumCardinalBasis = 100 first polynomials of the multivariate basis, those verifying the convergence criterion,

  • the CleaningStrategy, where we want to select, among the maximumConsideredTerms = 500 first polynomials of the multivariate basis, those which have the mostSignificant = 50 most significant contributions (greatest coefficients), with respect to the significance criterion \(\varepsilon = 10^{-4}\).

The example illustrates the ProjectionStrategy of type LeastSquaresStrategy defined from a MonteCarloExperiment using a Monte Carlo sampling of size sampleSize=100.

The FunctionalChaosAlgorithm is parameterized with all the information previously defined. The example illustrates how to redefine the value of the convergence criterion, which is used to :

  • determine the truncated basis with the sequential strategy,

  • evaluate the coefficients of the polynomial expansion approximation with the least squares strategy.

The new value is newResidual=1.0e-4.

Requirements

  • the limit state function : myModel
    type: a NumericalMathFunction

  • the distribution of the input random vector : Xdist
    type: a Distribution

Results

  • the polynomial chaos algorithm : polynomialChaosAlgorithm
    type: a FunctionalChaosAlgorithm

Python script for this Use Case :

script_docUC_RespSurface_PolyChaosExpansion.py

#############################################################
# STEP 1 : Construction of the multivariate orthonormal basis
#############################################################

# Dimension of the input random vector
dim = 4

# Create the univariate polynomial family collection
# which regroups the polynomial families for each direction
polyColl = PolynomialFamilyCollection(dim)
# For information, with the Krawtchouk and Charlier families :
polyColl[0] = KrawtchoukFactory()
polyColl[1] = CharlierFactory()
# For information, with the automatic selection
for i in range(Xdist.getDimension()):
    polyColl[i] = StandardDistributionPolynomialFactory(Xdist.getMarginal(i))
# We overload these factories as they are dedicated to discrete distributions
polyColl[0] = HermiteFactory()
polyColl[1] = LegendreFactory()
polyColl[2] = LaguerreFactory(2.75)
# Parameter for the Jacobi factory : 'Probability' encoded with 1
polyColl[3] = JacobiFactory(2.5, 3.5, 1)

# Create the enumeration function
# LinearEnumerateFunction
enumerateFunction = LinearEnumerateFunction(dim)

# HyperbolicAnisotropicEnumerateFunction
q = 0.4
enumerateFunction_1 = HyperbolicAnisotropicEnumerateFunction(dim, q)

# Create the multivariate orthonormal basis
# which is the cartesian product of the univariate basis
multivariateBasis = OrthogonalProductPolynomialFactory(
    polyColl, enumerateFunction)

# Ask how many polynomials have degrees equal to k = 5
k = 5
print(enumerateFunction.getStrataCardinal(k))

# Ask how many polynomials have degrees inferior or equal to k = 5
print(enumerateFunction.getStrataCumulatedCardinal(k))

# Give the k-th term of the multivariate basis
# To calculate its degree, add the integers
k = 5
print(enumerateFunction(k))

# Build a term of the basis as a NumericalMathFunction
# Generally, we do not need to construct manually any term,
# all terms are constructed automatically by a strategy of construction of
# the basis
i = 5
Psi_i = multivariateBasis.build(i)

# Get the measure mu associated to the multivariate basis
distributionMu = multivariateBasis.getMeasure()

####################################################################
# STEP 2 : Truncation strategy of the multivariate orthonormal basis
####################################################################

# FixedStrategy :
# all the polynomials of degree <= 2,
# which corresponds to the 15 first ones
p = 15
truncatureBasisStrategy = FixedStrategy(multivariateBasis, p)

# SequentialStrategy :
# among the maximumCardinalBasis = 100 first polynomials
# of the multivariate basis, those verifying the convergence criterion
maximumCardinalBasis = 100
truncatureBasisStrategy_1 = SequentialStrategy(
    multivariateBasis, maximumCardinalBasis)

# CleaningStrategy :
# among the maximumConsideredTerms = 500 first polynomials,
# those which have the mostSignificant = 50 most significant contributions
# with significance criterion significanceFactor = 10^(-4)
# The True boolean indicates if we are interested
# in the online monitoring of the current basis update
# (removed or added coefficients)
maximumConsideredTerms = 500
mostSignificant = 50
significanceFactor = 1.0e-4
truncatureBasisStrategy_2 = CleaningStrategy(
    multivariateBasis, maximumConsideredTerms, mostSignificant,
    significanceFactor, True)

################################################################
# STEP 3 : Evaluation strategy of the approximation coefficients
################################################################

# The technique illustrated is the Least Squares technique,
# where the points come from a design of experiments
# Here : the Monte Carlo sampling technique
sampleSize = 100
evaluationCoeffStrategy = LeastSquaresStrategy(
    MonteCarloExperiment(sampleSize))

# You can specify the approximation algorithm
# This is the algorithm that generates a sequence of basis using Least
# Angle Regression
basisSequenceFactory = LAR()

# This algorithm estimates the empirical error on each sub-basis using
# the Leave One Out strategy
fittingAlgorithm = CorrectedLeaveOneOut()
# Finally the metamodel selection algorithm embedded in LeastSquaresStrategy
approximationAlgorithm = LeastSquaresMetaModelSelectionFactory(
    basisSequenceFactory, fittingAlgorithm)
evaluationCoeffStrategy_2 = LeastSquaresStrategy(
    MonteCarloExperiment(sampleSize), approximationAlgorithm)

# Try integration
marginalDegrees = [2] * dim
evaluationCoeffStrategy_3 = IntegrationStrategy(
    GaussProductExperiment(distributionMu, marginalDegrees))

#####################################################
# STEP 4 : Creation of the Functional Chaos Algorithm
#####################################################

# FunctionalChaosAlgorithm :
# combination of the model : myModel
# the distribution of the input random vector : Xdist
# the truncation strategy of the multivariate basis
# and the evaluation strategy of the coefficients
polynomialChaosAlgorithm = FunctionalChaosAlgorithm(
    myModel, Xdist, truncatureBasisStrategy, evaluationCoeffStrategy)


4.3.2 UC : Run and results exploitation of a polynomial chaos algorithm : coefficients, polynomial model, multivariate basis, truncated multivariate basis, ...

The objective of this Use Case is to launch the polynomial chaos algorithm and exploit all the associated results.

We note \(g : \mathbb{R}^n \to \mathbb{R}^p\), \(g(\underline{X}) = \underline{Y}\), the physical model, and \(T : \mathbb{R}^n \to \mathbb{R}^n\), \(T(\underline{X}) = \underline{Z}\), the iso-probabilistic transformation.

Once the algorithm has been run, it is possible to ask for the following results :

  • the composed model \(h : \underline{Z} \mapsto \underline{Y} = g \circ T^{-1}(\underline{Z})\), which is the model of the reduced variables \(\underline{Z}\); we have \(h = \sum_{k \in \mathbb{N}} \underline{\alpha}_k \Psi_k\),

  • the coefficients of the polynomial approximation : \((\underline{\alpha}_k)_{k \in K}\),

  • the composed meta model \(\hat{h}\), which is the model of the reduced variables restricted to the truncated multivariate basis \((\Psi_k)_{k \in K}\); we have \(\hat{h} = \sum_{k \in K} \underline{\alpha}_k \Psi_k\),

  • the meta model \(\hat{g} : \underline{X} \mapsto Y = \hat{h} \circ T(\underline{X})\), which is the polynomial chaos approximation as a NumericalMathFunction; we have \(\hat{g} = \sum_{k \in K} \underline{\alpha}_k \Psi_k \circ T\),

  • the truncated multivariate basis \((\Psi_k)_{k \in K}\),

  • the indices K,

  • the composition of each polynomial \(\Psi_k\) of the truncated multivariate basis,

  • the distribution \(\mu\) of the transformed variables \(\underline{Z}\).

Details on response surface approximations may be found in the Reference Guide ( see files Reference Guide - Step Res. Surf. – Functional Chaos Expansion and – Polynomial Chaos Expansion ).

Once the approximated model has been determined, it is possible to apply the Uncertainty Methodology to the new output variable of interest \(\tilde{Y}\), instead of the real one Y. The Use Case (2.4.1) illustrates how to create an output variable of interest from the result of the polynomial chaos algorithm : the FunctionalChaosResult.
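For instance, the following minimal sketch wraps the result structure into a random vector in order to get analytical moments of \(\tilde{Y}\) (FunctionalChaosRandomVector is assumed available, as in recent OpenTURNS versions) :

# Minimal sketch : exploit the chaos result as a random vector
# (assumes 'result' is the FunctionalChaosResult obtained below)
chaosRandomVector = FunctionalChaosRandomVector(result)

# Mean and covariance of the approximated output, computed
# from the chaos coefficients
print(chaosRandomVector.getMean())
print(chaosRandomVector.getCovariance())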

Requirements

  • the polynomial chaos algorithm : polynomialChaosAlgorithm
    type: a FunctionalChaosAlgorithm

Results

  • the result structure : result
    type: a FunctionalChaosResult

  • the coefficients : coefficients
    type: a NumericalSample

  • the indices K : subsetK
    type: an Indices

  • the composed model (of \(\underline{Z}\)) : composedModel
    type: a NumericalMathFunction

  • the composed meta model : composedMetaModel
    type: a NumericalMathFunction

  • the meta model : metaModel
    type: a NumericalMathFunction

  • the truncated multivariate basis : multivariateBasisCollection
    type: a NumericalMathFunctionCollection

  • the distribution of the transformed variables : mu
    type: a Distribution

  • the projection strategy : myProjStat
    type: a ProjectionStrategy
 

Python script for this Use Case :

script_docUC_RespSurface_PolyChaosExploitation.py

#####################################################
# Perform the simulation
#####################################################
polynomialChaosAlgorithm.run()

# Stream out the result
result = polynomialChaosAlgorithm.getResult()

# Get the polynomial chaos coefficients
# All the coefficients
coefficients = result.getCoefficients()

# The coefficients of marginal i
i = 1
coefficientsMarginal_i = result.getCoefficients()[i]

# Get the indices of the selected polynomials : K
subsetK = result.getIndices()

# Get the composition of the polynomials
# of the truncated multivariate basis
for i in range(subsetK.getSize()):
    print("Polynomial number ", i, " in truncated basis <-> polynomial number ",
          subsetK[i], " = ", LinearEnumerateFunction(dim)(subsetK[i]),
          " in complete basis")

# Get the multivariate basis
# as a collection of NumericalMathFunction
functionCollection = result.getReducedBasis()

# Get the orthogonal basis
orthgBasis = result.getOrthogonalBasis()

# Get the distribution of the variables Z
mu = orthgBasis.getMeasure()

# Get the composed model, which is the model of the reduced variables Z
composedModel = result.getComposedModel()

# Get the composed meta model, which is the model of the reduced variables Z
# within the reduced polynomial basis
composedMetaModel = result.getComposedMetaModel()

# Get the meta model, which is the composed meta model combined with the
# iso-probabilistic transformation
metaModel = result.getMetaModel()

# Get the projection strategy
myProjStat = polynomialChaosAlgorithm.getProjectionStrategy()


4.3.3 UC : Draw some useful graphs associated to a polynomial chaos algorithm : polynomial graphs, comparison graph between numerical samples from the model and the meta-model, ...

The objective of this UC is to provide some graphs useful after the launch of a polynomial chaos algorithm, for example :

  • Graph 1 : the drawings of some members of the 1D polynomial family,

  • Graph 2 : the cloud of points comparing the model values and the meta model ones : if the fit is perfect, the points must lie on the first diagonal.

Details on response surface approximations may be found in the Reference Guide ( see files Reference Guide - Step Res. Surf. – Functional Chaos Expansion and – Polynomial Chaos Expansion ).

The Use Case takes the example of the Jacobi family.

Requirements

  • the distribution of the input random vector : inputDistribution
    type: a Distribution

  • the limit state function : limitStateFunction
    type: a NumericalMathFunction

  • the meta model : metaModel
    type: a NumericalMathFunction

Results

  • the graph described above : graphJacobi
    type: Graph
 

Python script for this Use Case :

script_docUC_RespSurface_PolyChaosDrawings1.py

#################################################
# GRAPH 1 : drawings of the first 5 members of
# the Jacobi family
#################################################

# Create the Jacobi polynomials family using the default Jacobi.ANALYSIS
# parameter set
alpha = 0.5
beta = 1.5
jacobiFamily = JacobiFactory(alpha, beta)

# Fix the maximal degree of the polynomials
# which will be drawn
degreeMax = 5

# Load a palette of colors
colorList = Drawable.BuildDefaultPalette(degreeMax)

# Create the title
titleJacobi = "Jacobi(" + str(alpha) + ", " + str(beta) + ") polynomials"

# Create an empty graph which will be filled
# with curves
graphJacobi = Graph(titleJacobi, "z", "polynomial values", True, "topright")

# Fix the number of points for the graph
pointNumber = 101

# Bounds of the graph
xMinJacobi = -1
xMaxJacobi = 1

# Get the curves
for i in range(degreeMax):
    graphJacobi_temp = jacobiFamily.build(i).draw(
        xMinJacobi, xMaxJacobi, pointNumber)
    graphJacobi_temp_draw = graphJacobi_temp.getDrawable(0)
    legend = "degree " + str(i)
    graphJacobi_temp_draw.setLegend(legend)
    graphJacobi_temp_draw.setColor(colorList[i])
    graphJacobi.add(graphJacobi_temp_draw)

# In order to see the graph without creating the .EPS, .PNG and .FIG files
# Show(graphJacobi)

script_docUC_RespSurface_PolyChaosDrawings2.py

#####################################################
# GRAPH 2 : points cloud between model and meta model
#####################################################

# Generate a NumericalSample of the input random vector
sizeX = 500
Xsample = Xdist.getSample(sizeX)

# Evaluate the model on the sample
modelSample = myModel(Xsample)

# Evaluate the meta model on the sample
metaModelSample = metaModel(Xsample)

# Create the numerical sample of the points (Y, Y_tilde)
sampleMixed = NumericalSample(sizeX, 2)
for i in range(sizeX):
    sampleMixed[i, 0] = modelSample[i, 0]
    sampleMixed[i, 1] = metaModelSample[i, 0]

# Create the legend
legend = str(sizeX) + " realizations"

# Create the cloud
comparisonCloud = Cloud(sampleMixed, "blue", "fsquare", legend)

# Put it within a graph structure
graphCloud = Graph(
    "Polynomial chaos expansion", "model", "meta model", True, "topleft")

# Fill the graph with the cloud
graphCloud.add(comparisonCloud)

# In order to see the graph
# View(graphCloud).show()

The example illustrated below is the beam example described in Eq. (), where :

  • E follows the Beta(r=0.9, t=3.2, a=2.8e7, b=4.9e7) distribution,

  • F follows the Gamma(k=3e5, λ=9e3, γ=1.5e4) distribution,

  • L follows the Uniform(a=250, b=260) distribution,

  • I follows the Beta(r=2.5, t=4.0, a=3.1e2, b=4.5e2) distribution,

  • the four components are independent.

We took the following 1D polynomial families, whose parameters have been determined so as to be adapted to the marginal distributions of the input random vector :

  • E : Jacobi(α=1.3, β=-0.1, param=Jacobi.ANALYSIS),

  • F : Laguerre(k=1.78, Laguerre.ANALYSIS),

  • L : Legendre,

  • I : Jacobi(α=0.5, β=1.5).

The truncation strategy of the multivariate orthonormal basis is the CleaningStrategy : among the first 500 terms of the multivariate basis, we kept at most the 50 most significant ones, those whose contribution was significant (that is, greater than \(10^{-4}\)).

The evaluation strategy of the approximation coefficients is the least squares strategy, based on a design of experiments obtained with the Monte Carlo sampling technique, of size 100.

Figures (49) to (51) show the graphs mentioned above.

Figure 49 : the first five polynomials of the Jacobi family associated to the variable E, and the first five polynomials of the Laguerre family associated to the variable F.

Figure 50 : the first five polynomials of the Legendre family associated to the variable L, and the first five polynomials of the Jacobi family associated to the variable I.

Figure 51 : comparison of values from the model and the polynomial chaos meta model.

Figure 52 shows the first five polynomials of the Krawtchouk and Charlier families, respectively associated to the Binomial(N=5, p=0.6) and the Poisson(0.6) measures.

4.3.4 UC : Polynomial chaos approximation from a design of experiments

This Use Case details the method to construct a response surface by polynomial chaos from a design of experiments.

You will need the distribution of the input parameters. If it is not known, statistical inference can be used to select a possible candidate, and fitting tests can validate such a hypothesis, as sketched below.
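A minimal sketch of such an inference step, assuming independent marginals; the candidate list is an illustrative choice, and the BIC comparison below ignores the penalty for the number of estimated parameters :

# Minimal sketch : infer each marginal distribution from the data
# (assumes independent marginals; candidate families are illustrative)
candidates = [NormalFactory(), UniformFactory(), GammaFactory(), BetaFactory()]
marginals = []
for i in range(X.getDimension()):
    marginalSample = X.getMarginal(i)
    bestDistribution = None
    bestBIC = float('inf')
    for factory in candidates:
        fitted = factory.build(marginalSample)
        # simplified criterion : BIC without the parameter-count penalty
        bic = FittingTest.BIC(marginalSample, fitted)
        if bic < bestBIC:
            bestDistribution, bestBIC = fitted, bic
    marginals.append(bestDistribution)
distribution = ComposedDistribution(marginals)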

Requirements

  • a sample of the input vector : X
    type: NumericalSample

  • a sample of the output vector : Y
    type: NumericalSample

  • the distribution of the input parameters : distribution
    type: Distribution

Results

  • the polynomial chaos algorithm : algo
    type: a FunctionalChaosAlgorithm

  • the meta-model function : metamodel
    type: a NumericalMathFunction
 

Python script for this Use Case :

script_docUC_RespSurface_PolyChaos_database.py

dimension = X.getDimension()

# Build the orthogonal basis
coll = []
for i in range(dimension):
    coll.append(
        StandardDistributionPolynomialFactory(distribution.getMarginal(i)))
enumerateFunction = LinearEnumerateFunction(dimension)
productBasis = OrthogonalProductPolynomialFactory(coll, enumerateFunction)

# Create the algorithm
degree = 6
adaptiveStrategy = FixedStrategy(
    productBasis, enumerateFunction.getStrataCumulatedCardinal(degree))
projectionStrategy = LeastSquaresStrategy()
algo = FunctionalChaosAlgorithm(
    X, Y, distribution, adaptiveStrategy, projectionStrategy)
algo.run()

# Get the metamodel function
result = algo.getResult()
metamodel = result.getMetaModel()


4.4 Kriging

This section details another method to construct a response surface: kriging, evaluated on a sample of the input vector and the associated sample of the output variable of interest.

4.4.1 UC : Kriging metamodel approximation from a design of experiments

This Use Case details the method to build a response surface from a design of experiments by kriging.

Requirements

  • a sample of the input vector : X
    type: NumericalSample

  • a sample of the output vector : Y
    type: NumericalSample

Results

  • the kriging algorithm : algo
    type: a KrigingAlgorithm

  • the meta-model function : metamodel
    type: a NumericalMathFunction
 

Python script for this Use Case :

script_docUC_RespSurface_Kriging.py

dimension = X.getDimension()

# STEP 1 : Construction of the autocorrelation function
covarianceModel = SquaredExponential(dimension, 0.1)

# STEP 2 : Construction of the regression basis
basis = ConstantBasisFactory(dimension).build()

# STEP 3 : Construction of the algorithm
algo = KrigingAlgorithm(X, Y, basis, covarianceModel)
algo.run()

# STEP 4 : Get the metamodel function
result = algo.getResult()
metamodel = result.getMetaModel()
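A quick check of the metamodel, as a minimal sketch (assuming the objects built above; note that the comparison on the design points is optimistic, since kriging interpolates them) :

# Minimal sketch : evaluate the kriging metamodel
# (assumes X, Y and metamodel from the script above)
# On the design points, kriging interpolates the output sample Y
predictions = metamodel(X)
for i in range(min(5, X.getSize())):
    print(Y[i], predictions[i])

# Evaluate at a new point (illustrative choice : the design mean)
newPoint = X.computeMean()
print(metamodel(newPoint))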

