2 Creation of the limit state function and the output variable of interest

The objective of this section is to specify the limit state function and the output variable of interest defined from it.

It corresponds to the step 'Step A : Specify the output variable of interest' of the global methodology ( see Reference Guide - Global Methodology for an uncertainty study - Part A : specification of the case-study ).

2.1 Creation of the limit state function

2.1.1 UC : From an analytical formula declared inline

The objective of this Use Case is to specify the limit state function, defined through an analytical formula declared inline.

OpenTURNS automatically evaluates the analytical expressions of the gradient and the hessian, except if the analytical expression of the function is not differentiable everywhere. In that case, OpenTURNS implements a finite difference method :

  • the gradient evaluation method is the centered finite difference method, with the differential increment h = 1e-5 for each direction,

  • the hessian evaluation method is the centered finite difference method, with the differential increment h = 1e-4 for each direction.

The example here is the analytical function myAnalyticalFunction defined by the formula :

myAnalyticalFunction : R^2 --> R, (x1, x2) |--> y = -(6 - x1 + x2^2)

In the case of functions scalarF : R --> R, a simplified constructor exists that only requires the description of the scalar input and the formula. We give the example where scalarF(x) = x^2.

Requirements  

none

 
Results  

some functions : myAnalyticalFunction, scalarF

type:

NumericalMathFunction

 

Python script for this Use Case :

script_docUC_LSF_Analytical.py

####################
# Case where f : R^n --> R^p
####################

# Describe the input vector of dimension 2
inputFunc = Description(("x1", "x2"))

# Describe the output vector of dimension 1
outputFunc = Description(("Output 1",))

# Give the formulas
formulas = Description(outputFunc.getSize())
formulas[0] = "-(6 - x1 + x2^2)"
print("formulas=", formulas)

# Create the analytical function
myAnalyticalFunction = NumericalMathFunction(inputFunc, outputFunc, formulas)

# or directly
myAnalyticalFunction_2 = NumericalMathFunction(
    ("x1", "x2"), ("Output 1",), ("-(6 - x1 + x2^2)",))

####################
# Case where f : R --> R
####################

# For example : x --> x^2
scalarF = NumericalMathFunction('x', 'x^2')


2.1.2 UC : From a function defined in a Python script

The objective of this Use Case is to create the limit state function from a function defined in a Python script. By default, OpenTURNS evaluates the gradient and the hessian of such a function with a finite difference method.

It is possible to change the evaluation method for the gradient or the hessian. The following Use Case shows how to proceed.

In order to be able to use the function with the openturns library, it is necessary to define a class which derives from OpenTURNSPythonFunction, as indicated below. The example here is the functions modelePYTHON and modelePYTHON2:

modelePYTHON : R^4 --> R, (E, F, L, I) |--> -FL^3 / (3EI)
modelePYTHON2 : R^3 --> R^2, (a, b, c) |--> (-a^2, abc)
Requirements  

none

 
Results  

the python function associated to the model: modelePYTHON

type:

an OpenTURNSPythonFunction

the limit state function : modeleOpenTURNS

type:

a NumericalMathFunction

 

Python script for this Use Case : script_docUC_LSF_PythonScript.py

################################
# CASE 1 : function : R^4 --> R
################################

# Create here the python lines to define the implementation of the function

# In order to be able to use that function with the openturns library,
# it is necessary to define a class which derives from OpenTURNSPythonFunction

class modelePYTHON(OpenTURNSPythonFunction):

    # the following method defines the input size (4) and the output size (1)
    def __init__(self):
        OpenTURNSPythonFunction.__init__(self, 4, 1)

    # the following method gives the implementation of modelePYTHON
    def _exec(self, x):
        E = x[0]
        F = x[1]
        L = x[2]
        I = x[3]
        return [-(F * L * L * L) / (3. * E * I)]

# Use that function defined in the python script with the openturns library
# Create a NumericalMathFunction from modelePYTHON
model1 = NumericalMathFunction(modelePYTHON())

################################
# CASE 2 : function : R^3 --> R^2
################################

# Create here the python lines to define the implementation of the function

# In order to be able to use that function with the openturns library,
# we can use the constructor PythonFunction from a regular python function.

def f(x):
    a = x[0]
    b = x[1]
    c = x[2]
    y = [-a * a, a * b * c]
    return y

model2 = PythonFunction(3, 2, f)


2.1.3 UC : Some particular functions : linear combination, aggregation, composition

The objective of this Use Case is to create some particular functions :

  • the scalar linear combination linComb of the vectorial functions vectFctColl = (f_1, ..., f_N), where f_i : R^n --> R^p for 1 <= i <= N, with specific scalar weights scalWeight = (c_1, ..., c_N) in R^N :

    linComb : R^n --> R^p, X |--> sum_{i=1}^{N} c_i f_i(X)

  • the vectorial linear combination vectLinComb of the scalar functions scalFctColl = (f_1, ..., f_N), where f_i : R^n --> R for 1 <= i <= N, with specific vectorial weights vectWeight = (c_1, ..., c_N), where c_i in R^p for 1 <= i <= N :

    vectLinComb : R^n --> R^p, X |--> sum_{i=1}^{N} c_i f_i(X)

  • the aggregated function agregFct of a set of functions (f_1, ..., f_N), where f_i : R^n --> R^{p_i} :

    agregFct : R^n --> R^p, X |--> (f_1(X), ..., f_N(X))^t, with p = sum_{i=1}^{N} p_i

  • the indicator function l of an event defined by a function f, a comparison operator and a threshold s. For example, if the comparison operator is >, then l = 1_{f > s}.
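As a quick illustration of the indicator function, here is a pure-Python sketch of 1_{f > s} (the model f and the threshold below are made-up examples, not taken from the use case):

```python
# Hypothetical scalar model f and threshold s (illustrative only):
def f(x):
    return x[0] ** 2 + x[1]

s = 3.0

# Indicator of the event {f > s}: 1.0 inside the event, 0.0 outside,
# mirroring what NumericalMathFunction(function, Greater(), threshold) builds
def indicator(x):
    return 1.0 if f(x) > s else 0.0

print(indicator([2.0, 0.0]))  # f = 4.0 > 3.0 -> 1.0
print(indicator([1.0, 1.0]))  # f = 2.0 <= 3.0 -> 0.0
```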

OpenTURNS automatically evaluates the analytical expressions of the gradient and the hessian, except if the analytical expression of the function is not differentiable everywhere. In that case, OpenTURNS implements a finite difference method :

  • the gradient evaluation method is the centered finite difference method, with the differential increment h=1e-5 for each direction,

  • the hessian evaluation method is the centered finite difference method, with the differential increment h=1e-4 for each direction.
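The centered scheme quoted above can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not the OpenTURNS implementation (which also handles vector-valued outputs):

```python
# Centered finite differences with the default gradient increment h = 1e-5
def centered_gradient(f, x, h=1.0e-5):
    """Approximate the gradient of a scalar function f at point x."""
    grad = []
    for j in range(len(x)):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

# Example: f(x1, x2) = -(6 - x1 + x2^2), whose gradient is (1, -2*x2)
f = lambda x: -(6.0 - x[0] + x[1] ** 2)
print(centered_gradient(f, [1.0, 2.0]))  # close to [1.0, -4.0]
```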

Requirements  

some collections of scalar and vectorial functions : scalFctColl, vectFctColl

type:

some NumericalMathFunctionCollection

a list of scalar weights : scalWeight

type:

a NumericalPoint

a list of vectorial weights : vectWeight

type:

a NumericalSample

a function : function

type:

a NumericalMathFunction

 
Results  

some particular functions : linComb, vectLinComb, agregFct, indFactor

type:

some NumericalMathFunction

 

Python script for this UseCase :

script_docUC_LSF_SomeParticularFunctions.py

# Create the scalar linear combination of vectorial functions
linComb = NumericalMathFunction(vectFctColl, scalWeight)

# Create the vectorial linear combination of scalar functions
vectLinComb = NumericalMathFunction(scalFctColl, vectWeight)

# Create the aggregated function
agregFct = NumericalMathFunction(vectFctColl)

# Create the indicator function
# define a threshold and a comparison operator
threshold = 3.0
comparisonOperator = Greater()
indFactor = NumericalMathFunction(function, comparisonOperator, threshold)


2.1.4 UC : Introducing some deterministic variables, optimizing memory and CPU time

The objective of this Use Case is to restrict a model which has been previously declared to some of its variables, in an optimized way.

Let us keep the same context as in the previous Use Case. The idea here is to avoid introducing the potentially huge matrix A as well as the gradient matrix and hessian tensor of the functions increase and poutre. For that last point, it is sufficient to assign to the final function poutreReduced a gradient matrix and a hessian tensor obtained from a finite difference technique.

The function increase is defined as follows :

increase : R^{n_prob} --> R^n,
X_prob = (inputProb1, ..., inputProbNprob) |--> (inputProb1, ..., inputProbNprob, valDet1, ..., valDetNdet)

where (valDet1, ..., valDetNdet) are the n_det values of the deterministic components of X.

The same example is re-written in the following script.
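The restriction-by-composition idea can be checked in plain Python, independently of OpenTURNS. Here L = 10 and I = 5 are the deterministic values used in the script, while the numeric values of E and F are illustrative only:

```python
# The full limit state function on (E, F, L, I): the beam deviation -FL^3/(3EI)
def poutre(x):
    E, F, L, I = x
    return -(F * L ** 3) / (3.0 * E * I)

# 'increase' completes the probabilistic variables (E, F) with the
# deterministic values L = 10.0 and I = 5.0 used in the script
def increase(x):
    E, F = x
    return [E, F, 10.0, 5.0]

# poutreReduced = poutre o increase: a function of (E, F) only
def poutre_reduced(x):
    return poutre(increase(x))

# Illustrative values (not from the study): E = 200.0, F = 3.0
print(poutre_reduced([200.0, 3.0]))  # -(3*1000)/(3*200*5) = -1.0
```

The composition never builds the restriction explicitly; it just evaluates the full model on the completed input, which is exactly what NumericalMathFunction(poutre, increase) does.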

Requirements  

the initial limit state function : poutre

type:

LinearNumericalMathFunction (R^4 --> R)

 
Results  

the increase function

type:

NumericalMathFunction (R^2 --> R^4)

the new limit state function : poutreReduced = poutre o increase

type:

NumericalMathFunction (R^2 --> R)

 

Python script for this UseCase :

script_docUC_LSF_DeterministicVar2.py

# Dimension of the random input vector
stochasticDimension = 2

# Dimension of the deterministic input vector
deterministicDimension = 2

# Dimension of the input vector of the limit state function 'poutre'
inputDim = poutre.getInputDimension()

# Fix deterministic values for the two last variables
# of the input vector (E, F, L, I)
# L
X2 = 10.0
# I
X3 = 5.0

# Create the analytical function 'increase'
increase = NumericalMathFunction(
    ["E", "F"], ["E", "F", "L", "I"], ["E", "F", str(X2), str(X3)])

# Create the new limit state function :
# 'poutreReduced = poutre o increase'
poutreReduced = NumericalMathFunction(poutre, increase)

# Give directly to the 'poutreReduced' function a gradient evaluation method
# thanks to the finite difference technique
# For example, gradient technique : non centered finite difference method
myGradient = NonCenteredFiniteDifferenceGradient(
    NumericalPoint(2, 1.0e-7), poutreReduced.getEvaluation())
print("myGradient = ", myGradient)

# Substitute the gradient
poutreReduced.setGradient(myGradient)

# Give directly to the 'poutreReduced' function a hessian evaluation method
# thanks to the finite difference technique
# type : centered finite difference method
myHessian = CenteredFiniteDifferenceHessian(
    NumericalPoint(2, 1.0e-7), poutreReduced.getEvaluation())
print("myHessian = ", myHessian)

# Substitute the hessian
poutreReduced.setHessian(myHessian)


2.1.5 UC : Defining a piecewise function according to a classifier

The objective of this Use Case is to define a piecewise function according to a classifier:

f(x) = f_1(x) if x in Class 1
     ...
     = f_k(x) if x in Class k
     ...
     = f_N(x) if x in Class N

where the N classes are defined by the classifier.

The classifier is a MixtureClassifier based on a Mixture distribution defined as:

p(x) = sum_{i=1}^{N} w_i p_i(x)

The rule to assign a point to a class is defined as follows: x is assigned to the class i = argmax_k log(w_k p_k(x)).
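The grade and assignment rule can be illustrated in plain Python with the two correlated bivariate normal components of the example below. The density is written out by hand; this is a sketch of the rule, not the MixtureClassifier implementation:

```python
import math

def binormal_pdf(x, mu, rho):
    """Density of a bivariate normal with unit marginal variances
    and correlation rho."""
    u = x[0] - mu[0]
    v = x[1] - mu[1]
    det = 1.0 - rho * rho
    q = (u * u - 2.0 * rho * u * v + v * v) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

# The two components of the example mixture, with equal weights 1/2
weights = [0.5, 0.5]
components = [((-1.0, 1.0), -0.99), ((1.0, 1.0), 0.99)]

# Grade of x with respect to class k: log(w_k * p_k(x))
def grade(x, k):
    mu, rho = components[k]
    return math.log(weights[k] * binormal_pdf(x, mu, rho))

# Assignment rule: argmax of the grades
def classify(x):
    return max(range(len(components)), key=lambda k: grade(x, k))

print(classify((-1.0, 1.0)))  # near the first mode -> class 0
print(classify((1.0, 1.0)))   # near the second mode -> class 1
```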

The grade of x with respect to the class k is log(w_k p_k(x)).

The example here is a bivariate classifier that assigns points to 2 classes, using the mixture distribution defined by:

p(x) = (1/2) (phi_1(x) + phi_2(x))

with phi_i the probability density function of the Normal(mu_i, sigma, R_i) distribution, where mu_1 = (-1, 1)^t, mu_2 = (1, 1)^t, sigma = (1, 1)^t, R_1[0, 1] = -0.99 and R_2[0, 1] = 0.99. The function f is defined by:

f(x) = -x if x in Class 1
     = +x if x in Class 2
Requirements   -  
Results  

the moe: model of expert mixture

type:

ExpertMixture

 

Python script for this Use Case :

script_docUC_LSF_ExpertMixture.py

# Create the distribution of the MixtureClassifier
aCollection = DistributionCollection(0)
R = CorrelationMatrix(2)
R[0, 1] = -0.99
aCollection.add(Normal([-1.0, 1.0], [1.0, 1.0], R))
R[0, 1] = 0.99
aCollection.add(Normal([1.0, 1.0], [1.0, 1.0], R))

distClassifier = Mixture(aCollection)

# Create a mixture classifier
classifier = MixtureClassifier(distClassifier)

# Create local experts
experts = Basis()
experts.add(NumericalMathFunction("x", "-x"))
experts.add(NumericalMathFunction("x", "x"))

# Create a mixture of experts
moe = ExpertMixture(experts, classifier)
moeNMF = NumericalMathFunction(moe)

print("Mixture of experts=", moe)

# Evaluate the mixture of experts on some points
for i in range(5):
    p = NumericalPoint(1, -0.3 + 0.8 * i / 4.0)
    print("moe   ( %.6g )=" % p[0], moe(p))
    print("moeNMF( %.6g )=" % p[0], moeNMF(p))


2.1.6 UC : Manipulation of a NumericalMathFunction

The objective of this Use Case is to describe the main functionalities of OpenTURNS to manipulate a numerical function f : R^n --> R^p.

OpenTURNS enables :

  • to ask the dimension of its input and output vectors, with the methods getInputDimension, getOutputDimension,

  • to evaluate itself, its gradient and hessian, with the methods gradient, hessian. The evaluation of the function is a vector of dimension p, the gradient is a matrix with p rows and n columns, the hessian is a tensor of order 3 with p rows, n columns and n sheets,

  • to set a finite difference method for the evaluation of the gradient or the hessian, with the methods setGradientImplementation, setHessianImplementation,

  • to evaluate the number of times the function or its gradient or its hessian have been evaluated since the beginning of the python session, with the methods getEvaluationCallsNumber, getGradientCallsNumber, getHessianCallsNumber,

  • to disable or enable the history mechanism, with the methods disableHistory, enableHistory,

  • to get all the values evaluated by the function and the associated inputs, with the methods getHistoryInput, getHistoryOutput,

  • to clear the history, with the method clearHistory,

  • to ask the description of its input and output vectors, with the methods getInputDescription, getOutputDescription,

  • to extract its components if p > 1, which are the functions f_i : R^n --> R, with the method getMarginal,

  • to ask for its parameters with the method getParameters,

  • to define its parameters, with the method setParameters,

  • to compose two functions,

  • to ask for the valid operators in OpenTURNS, the valid constants and functions, with the methods GetValidOperators, GetValidConstants, GetValidFunctions.

Furthermore, OpenTURNS implements a history mechanism for all the NumericalMathFunction types. It is deactivated by default; when activated with the method enableHistory, it stores all the input and output values of the function.

It is also possible to draw function graphs, for any function f : R^n --> R^p, where we note x = (x_1, ..., x_n) and f(x) = (f_1(x), ..., f_p(x)), with n >= 1 and p >= 1. OpenTURNS allows to draw :

  • the graph of any marginal f_k : R^n --> R with respect to the variation of x_j in the interval [x_j^min, x_j^max], when all the other components of x are fixed to the corresponding components of a central point noted CP. OpenTURNS then draws the graph of t in [x_j^min, x_j^max] |--> f_k(CP_1, ..., CP_{j-1}, t, CP_{j+1}, ..., CP_n). The method is draw(arguments) with the appropriate arguments.

  • the iso-curves of the marginal f_k with respect to the variation of (x_i, x_j) in the interval [x_i^min, x_i^max] x [x_j^min, x_j^max], when all the other components of x are fixed to the corresponding components of the central point noted CP. OpenTURNS then draws the graph of (t, u) in [x_i^min, x_i^max] x [x_j^min, x_j^max] |--> f_k(CP_1, ..., CP_{i-1}, t, CP_{i+1}, ..., CP_{j-1}, u, CP_{j+1}, ..., CP_n).

The number of points to draw the curve can be specified.

There is a simplified call to the method when n = p = 1 or when n = 2 and p = 1.
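What draw computes for the first kind of graph can be sketched in plain Python, using the second output marginal of func3 from the example below, with the same interval and point number as Graph 1 of the script:

```python
import math

# Second output marginal of func3 from the example below:
# func3_2(x, y, z) = 2 + sin(x + 2y) - z^4 * sin(z)
def func3_2(x, y, z):
    return 2.0 + math.sin(x + 2.0 * y) - z ** 4 * math.sin(z)

# What draw(inputMarg, outputMarg, centralPt, zMin, zMax, ptNb) samples:
# z -> func3_2(1.0, 2.0, z) on [-1.5, 1.5] with 101 points,
# the other input components being frozen at the central point
zMin, zMax, ptNb = -1.5, 1.5, 101
central = (1.0, 2.0)
zs = [zMin + (zMax - zMin) * i / (ptNb - 1) for i in range(ptNb)]
values = [func3_2(central[0], central[1], z) for z in zs]

print(len(values))  # 101 sampled points
print(values[50])   # value at z = 0, i.e. 2 + sin(5)
```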

Requirements  

some functions R^n --> R^p : myFunction, func1, func2, func3, g (for more details, see below)

type:

NumericalMathFunction

 
Results  

none

 

Functions used in the script have the following features :

  • myFunction : R^2 --> R^2,

  • func1 : R --> R,

  • func2 : R^2 --> R,

  • func3 : R^3 --> R^2,

  • g : R^2 --> R^2.

Python script for this Use Case :

script_docUC_LSF_NMFManipulation.py

# Activate the history mechanism
myFunction.enableHistory()

# Evaluate the function at a particular point
# For example the null vector
point = NumericalPoint(myFunction.getInputDimension(), 1)
imagePoint = myFunction(point)

# Get the input history
myInputHistory = myFunction.getHistoryInput()
# Then get the sample which has been stored
storedSample = myInputHistory.getSample()
print('stored sample = ', storedSample)

# Deactivate the history mechanism
myFunction.disableHistory()

# Ask for the dimension of the input and output vectors
print(myFunction.getInputDimension())
print(myFunction.getOutputDimension())

# Evaluate the gradient of the function at a particular point
gradientMatrix = myFunction.gradient(point)

# Evaluate the hessian of the function at a particular point
hessianMatrix = myFunction.hessian(point)

# Change the gradient evaluation method
# Type : non centered finite difference method
incrGrad = NumericalPoint(myFunction.getInputDimension(), 1.e-7)
myGradient = NonCenteredFiniteDifferenceGradient(
    incrGrad, myFunction.getEvaluation())
print("myGradient = ", myGradient)
# Substitute the gradient
myFunction.setGradient(myGradient)

# Change the hessian evaluation method
# type : centered finite difference method with constant step
myStep = NumericalPoint(myFunction.getInputDimension(), 1.e-7)
myHessian = CenteredFiniteDifferenceHessian(myStep, myFunction.getEvaluation())
print("myHessian = ", myHessian)
# Substitute the hessian
myFunction.setHessian(myHessian)

# Get the number of times the function has been evaluated
callsNumber = myFunction.getEvaluationCallsNumber()

# Get the number of times the gradient has been evaluated
callsNumberGrad = myFunction.getGradientCallsNumber()

# Get the number of times the hessian has been evaluated
callsNumberHes = myFunction.getHessianCallsNumber()

# Get the component i
# Careful : numbering begins at 0
i = 1
component = myFunction.getMarginal(i)

# Compose two NumericalMathFunction : h = myFunction o g
h = NumericalMathFunction(myFunction, g)

# Get the valid operators in OpenTURNS
print(NumericalMathFunction.GetValidOperators())

# Get the valid functions in OpenTURNS
print(NumericalMathFunction.GetValidFunctions())

# Get the valid constants in OpenTURNS
print(NumericalMathFunction.GetValidConstants())

# Clear the cache of all the values previously stored
myFunction.clearCache()

#######
# Graph
#######

# General Case : function R^n --> R^p
# for example R^3 --> R^2
# (x,y,z) --> f(x,y,z) = (f_1(x,y,z), f_2(x,y,z))

#################################
# Graph 1 : z --> f_2(x_0, y_0, z)
# for z in [-1.5, 1.5], with x_0 = 1.0 and y_0 = 2.0
# Specify the input component that varies
# Careful : numbering begins at 0
inputMarg = 2
# Give its variation interval
zMin = -1.5
zMax = 1.5
# Give the coordinates of the fixed input components
centralPt = [1.0, 2.0, 2.5]
# Specify the output marginal function
# Careful : numbering begins at 0
outputMarg = 1
# Specify the point number of the final curve
ptNb = 101
# Draw the curve!
myGraph1 = func3.draw(inputMarg, outputMarg, centralPt, zMin, zMax, ptNb)

###################################
# Graph 2 : (x,z) --> f_2(x, y_0, z)
# for x in [-1.5, 1.5], z in [-2.5, 2.5] and y_0 = 2.0
# Specify the input components that vary
# Careful : numbering begins at 0
firstInputMarg = 0
secondInputMarg = 2
# Give their variation intervals
inputMin2 = [-1.5, -2.5]
inputMax2 = [1.5, 2.5]
# Give the coordinates of the fixed input components
centralPt = [0.0, 2., 2.5]
# Specify the output marginal function
# Careful : numbering begins at 0
outputMarg = 1
# Specify the point number of the final curve
ptNb = [101, 101]
# Draw the curve!
myGraph2 = func3.draw(firstInputMarg, secondInputMarg,
                      outputMarg, centralPt, inputMin2, inputMax2, ptNb)

#################################################
# Graph 3 : simplified method for x --> func1(x)
# for x in [-1.5, 1.5]
# Give the variation interval
xMin3 = -1.5
xMax3 = 1.5
# Specify the point number of the final curve
ptNb = 101
# Draw the curve!
myGraph3 = func1.draw(xMin3, xMax3, ptNb)

#################################
# Graph 4 : (x,y) --> func2(x,y)
# for x in [-1.5, 1.5], y in [-2.5, 2.5]
# Give their variation intervals
inputMin4 = [-1.5, -2.5]
inputMax4 = [1.5, 2.5]
# Specify the point number of the final curve
ptNb = [101, 101]
# Draw the curve!
myGraph4 = func2.draw(inputMin4, inputMax4, ptNb)

Here are the graphs drawn by the script for the functions :

  • func1 : R --> R such that func1(x) = x^2,

  • func2 : R^2 --> R such that func2(x, y) = xy,

  • func3 : R^3 --> R^2 such that func3(x, y, z) = (1 + 2x + y + z^3, 2 + sin(x + 2y) - z^4 sin(z)).

Graph of : z |--> func3_2(x_0, y_0, z), with (x_0, y_0) = (1.0, 2.0) and z in [-1.5, 1.5].
Iso-curves of : (x, z) |--> func3_2(x, y_0, z), with y_0 = 2.0, x in [-1.5, 1.5] and z in [-2.5, 2.5].
Figure 28
Graph of : x |--> func1(x), with x in [-1.5, 1.5].
Iso-curves of : (x, y) |--> func2(x, y), with x in [-1.5, 1.5] and y in [-2.5, 2.5].
Figure 29

2.2 Creation of the output variable of interest from the limit state function and the input random vector

The objective of the section is to determine the output variable of interest directly from a limit state function and a random input vector declared previously.

2.2.1 UC : Creation of the output random vector

The objective of this Use Case is to create the output random variable of interest, defined as the image through the limit state function of the input random vector.

Details on the definition of random mixture variables may be found in the Reference Guide ( see files Reference Guide - Step B – Random Mixture : affine combination of independent univariate distributions ).

Requirements  

the limit state function : myFunction

type:

NumericalMathFunction

the random input vector : inputRV

type:

RandomVector whose implementation is a UsualRandomVector

 
Results  

the output variable of interest outputRV = myFunction(inputRV)

type:

RandomVector whose implementation is a CompositeRandomVector

 

Python script for this UseCase :

script_docUC_OVI_FromLSF.py

# Create the output variable of interest 'output = myFunction(input)'
outputRV = RandomVector(myFunction, inputRV)


2.2.2 UC : Extraction of a random subvector from a random vector

The objective of this Use Case is to extract a subvector from a random vector which has been defined either as a UsualRandomVector (i.e. from a distribution, see UC. 1.0.1) or as a CompositeRandomVector (i.e. as the image through a limit state function of an input random vector, see UC. 2.2.1).

Let us note Y = (Y_1, ..., Y_n) a random vector and I ⊂ [1, n] a set of indices :

  • in the first case, the subvector is defined by Ytilde = (Y_i)_{i in I},

  • in the second case, where Y = f(X) with f = (f_1, ..., f_n), the f_i being scalar functions, the subvector is Ytilde = (f_i(X))_{i in I}.

Requirements  

the random vector : myRandomVector

type:

RandomVector whose implementation is a UsualRandomVector or a CompositeRandomVector

 
Results  

the extracted random vector : myExtractedRandomVector

type:

RandomVector whose implementation is a UsualRandomVector or a CompositeRandomVector

 

Python script for this UseCase :

# CASE 1 : Get the marginal of the random vector
# corresponding to the component i
# Careful : numbering begins at 0
myExtractedRandomVector = myRandomVector.getMarginal(i)

# CASE 2 : Get the marginals of the random vector
# corresponding to several components
# described in the myIndices table
# For example, components 0, 1, and 5
myExtractedRandomVector = myRandomVector.getMarginal((0, 1, 5))


2.3 Creation of the output variable of interest defined as an affine combination of input random variables

2.3.1 UC : Creation of a Random Mixture

The objective of this Use Case is to define a random vector Y as a RandomMixture, that is, an affine transform of input random variables :

Y = y_0 + M X

where y_0 in R^d is a deterministic vector with d in {1, 2, 3}, M in M_{d,n}(R) is a deterministic matrix and (X_k)_{1<=k<=n} are independent univariate random variables.

Be careful! This notion is different from the Mixture notion, where the combination applies to the probability density functions and not to the univariate random variables :

  • Random Mixture (in dimension 1) : Y = a_0 + sum_{i=1}^{n} a_i X_i,

  • Mixture : p_Y = sum_{i=1}^{n} a_i p_{X_i}, where p_{X_i} is the probability density function of X_i and sum_{i=1}^{n} a_i = 1.
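The moments of a Random Mixture follow from the affine transform: for the example Y = 2 + 5 X1 + X2 treated below, E[Y] = 2 + 5/1.5 + 4 and Var[Y] = 25/1.5^2 + 1. A plain-Python Monte Carlo check, independent of OpenTURNS:

```python
import random

random.seed(0)

# Monte Carlo check of the affine-combination moments for the example
# Y = 2 + 5*X1 + X2 with X1 ~ Exponential(1.5), X2 ~ Normal(4, 1):
# E[Y] = 2 + 5/1.5 + 4 and Var[Y] = 25/1.5^2 + 1
n = 200000
samples = [2.0 + 5.0 * random.expovariate(1.5) + random.gauss(4.0, 1.0)
           for _ in range(n)]
mean = sum(samples) / n
var = sum((y - mean) ** 2 for y in samples) / (n - 1)

print(mean)  # close to 9.333...
print(var)   # close to 12.111...
```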

When not specified, the coefficient y_0 is taken equal to 0.

OpenTURNS evaluates the probability density function and the cumulative distribution function of the random variable Y. So it is possible to send Y any request compatible with a Distribution: moments, quantiles (in dimension 1 only), PDF and CDF evaluations, ...

It is important to note that the evaluation of the distribution of Y requires the evaluation of the characteristic functions of the univariate X_i. OpenTURNS provides an implementation for all its univariate distributions, continuous or discrete. But only some of them have a specific algorithm that evaluates the characteristic function: this is the case for all the discrete distributions and most (but not all) of the continuous ones. For those, the evaluation is efficient. For the remaining distributions, the generic implementation may be time consuming for large arguments.
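The role of the characteristic functions can be illustrated on the example below: for an affine combination, phi_Y(t) = exp(i*a0*t) * prod_i phi_Xi(ai*t), with the standard closed forms for the Exponential and Normal distributions. A plain-Python sketch, cross-checked by Monte Carlo:

```python
import cmath
import random

# Characteristic function of Y = 2 + 5*X1 + X2 (the example below), using
# the standard closed forms phi_Exp(lambda)(s) = lambda / (lambda - i*s)
# and phi_N(mu, sigma)(s) = exp(i*mu*s - sigma^2 * s^2 / 2)
def phi_Y(t):
    phi_x1 = 1.5 / (1.5 - 1j * (5.0 * t))
    phi_x2 = cmath.exp(1j * 4.0 * t - 0.5 * t * t)
    return cmath.exp(1j * 2.0 * t) * phi_x1 * phi_x2

# Monte Carlo cross-check: phi_Y(t) = E[exp(i*t*Y)]
random.seed(1)
t = 0.3
n = 100000
mc = sum(cmath.exp(1j * t * (2.0 + 5.0 * random.expovariate(1.5)
                             + random.gauss(4.0, 1.0)))
         for _ in range(n)) / n

print(abs(phi_Y(t) - mc))  # small: the closed form and the sample agree
```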

Furthermore, note that Y is not a CompositeRandomVector, so it cannot be used by a FORM/SORM algorithm, a QuadraticCumul algorithm or even a Simulation algorithm. In order to use such algorithms, it is necessary to transform the RandomMixture into a CompositeRandomVector; in dimension 1, this is done by using the identity function f : R --> R, quickly defined (see the following python script).

The example here is an output variable of interest defined as the following combination :

Y = 2 + 5 X1 + X2

where :

  • X1 follows an Exponential(λ = 1.5) distribution,

  • X2 follows a Normal(μ = 4, Variance = 1) distribution.

The UC asks Y for its mean, variance, probability density graph, quantile of order 90% and its probability to exceed 3.

Requirements  

none

 
Results  

the random mixture Y : myRandomMixtureY

type:

RandomMixture

 

Python script for this UseCase :

script_docUC_OVI_RandomMixture.py

# Create the univariate distributions

# X1 : Exponential(1.5)
X1 = Exponential(1.5)
# X2 : Normal(4,1)
X2 = Normal(4, 1)

# Put them in a DistributionCollection
distribList = [X1, X2]

# Create the list of the distribution weights
# coefficients a1, a2
weight = NumericalPoint([5., 1.])

# Create the constant coefficient a0
a0 = 2.0

# Create the Random Mixture Y = a0 + Sum(ai Xi)
myRandomMixtureY = RandomMixture(distribList, weight, a0)

# Or create the Random Mixture where a0 = 0 : Y = Sum(ai Xi)
myRandomMixtureY = RandomMixture(distribList, weight)

# Or create the Random Mixture where all the weights (a1, a2) are equal to 1
myRandomMixtureY = RandomMixture(distribList, a0)

# Ask myRandomMixtureY its mean, variance, quantile of order 0.9 and its
# probability to exceed 3
mean = myRandomMixtureY.getMean()[0]
variance = myRandomMixtureY.getCovariance()[0, 0]
quantile90 = myRandomMixtureY.computeQuantile(0.90)[0]
proba = myRandomMixtureY.computeComplementaryCDF(3.0)

# Ask myRandomMixtureY to draw its pdf
pdfY = myRandomMixtureY.drawPDF()

# Visualize the graph without saving it
# View(pdfY).show()

# Transform the RandomMixture into a RandomVector
myRandomVectorY = RandomVector(myRandomMixtureY)

Probability density function of a Random Mixture.
Cumulative distribution function of a Random Mixture.
Figure 30

2.4 Creation of the output variable of interest from the result of a polynomial chaos expansion

2.4.1 UC : Creation of the output variable of interest from the result of a polynomial chaos expansion

The objective of this Use Case is to define the output variable of interest as the result of a polynomial chaos algorithm, which defines a particular response surface (refer to 4.3).

Details on the polynomial chaos expansion may be found in the Reference Guide ( see files Reference Guide - Step Resp. Surf. – Polynomial Chaos Expansion ).

Requirements  

the result structure of a polynomial chaos algorithm : polynomialChaosResult

type:

a FunctionalChaosResult

 
Results  

the new output variable of interest : newOuputVariableOfInterest

type:

a RandomVector

 

Python script for this UseCase :

# Create the new output variable of interest
# based on the meta model
# evaluated from the polynomial chaos algorithm
newOuputVariableOfInterest = RandomVector(polynomialChaosResult)


2.4.2 UC : Creation of a specialized random vector for the global sensitivity analysis using a polynomial chaos expansion

The objective of this Use Case is to define the output variable of interest as a specialized random vector that allows the User to compute the mean and covariance from the coefficients of the decomposition on the polynomial hilbertian basis, as well as the Sobol indices and total indices.

Details on the Sobol indices may be found in the Reference Guide ( see files Reference Guide - Step C' – Sensitivity analysis using Sobol indices ).

If g : R^n --> R is a model and Y = g(X) the random output variable, with X a random vector, we define the Sobol index associated to i = (i_1, ..., i_k), assuming g and X have the required properties, as:

IS(i) = Var[ E[Y | X_{i_1}, ..., X_{i_k}] ] / Var[Y]

The total Sobol index associated to (i_1, ..., i_k) is defined as:

TIS(i) = sum_{j in I(i)} IS(j)

where I(i) = { (j_1, ..., j_p), p in [k, n], such that {i_1, ..., i_k} ⊂ {j_1, ..., j_p} }.
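The definitions can be illustrated on a made-up additive model Y = 3 X1 + X2 with independent standard normal inputs, for which IS(1) = 9/10 analytically (and the total indices coincide, since the model is additive). The pick-freeze Monte Carlo estimator below is one standard way to estimate a first-order index; FunctionalChaosRandomVector instead reads the indices off the chaos coefficients:

```python
import random

random.seed(2)

# Illustrative model (not from the use case): Y = g(X1, X2) = 3*X1 + X2
# with independent standard normal inputs, so IS(1) = 9/10 analytically
def g(x1, x2):
    return 3.0 * x1 + x2

# Pick-freeze estimator of the first-order index IS(1):
# freeze X1, resample X2, and correlate the two outputs
n = 100000
a = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
b = [(x1, random.gauss(0, 1)) for (x1, _) in a]
ya = [g(x1, x2) for (x1, x2) in a]
yb = [g(x1, x2) for (x1, x2) in b]

m = sum(ya) / n
var = sum((y - m) ** 2 for y in ya) / (n - 1)
cov = sum((ya[i] - m) * (yb[i] - m) for i in range(n)) / (n - 1)
print(cov / var)  # close to 0.9
```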

Requirements  

the result structure of a polynomial chaos algorithm : polynomialChaosResult

type:

a FunctionalChaosResult

 
Results  

the new output variable of interest : newOuputVariableOfInterest

type:

a FunctionalChaosRandomVector

 

Python script for this UseCase :

# Create the new output variable of interest
# based on the meta model
# evaluated from the polynomial chaos algorithm
# in a way that allows to compute Sobol indices
# and total indices
newOuputVariableOfInterest = FunctionalChaosRandomVector(polynomialChaosResult)
print("Sobol index 0=", newOuputVariableOfInterest.getSobolIndex(0))
indices = Indices(2)
indices[0] = 0
indices[1] = 1
print("Sobol index [0, 1]=", newOuputVariableOfInterest.getSobolIndex(indices))
print("Sobol total index 0=", newOuputVariableOfInterest.getSobolTotalIndex(0))
print("Sobol total index [0, 1]=", newOuputVariableOfInterest.getSobolTotalIndex(indices))

