# 2 Creation of the limit state function and the output variable of interest

The objective of this section is to specify the limit state function and the output variable of interest, which is defined from the limit state function.

It corresponds to the step 'Step A : Specify the output variable of interest' of the global methodology ( see Reference Guide - Global Methodology for an uncertainty study - Part A : specification of the case-study ).

## 2.1 Creation of the limit state function

### 2.1.1 UC : From an analytical formula declared inline

The objective of this Use Case is to specify the limit state function, defined through an analytical formula declared inline.

OpenTURNS automatically evaluates the analytical expressions of the gradient and the hessian, unless the analytical expression of the function is not differentiable everywhere. In that case, OpenTURNS falls back on a finite difference method :

• the gradient evaluation method is the centered finite difference method, with the differential increment $h=1e-5$ for each direction,

• the hessian evaluation method is the centered finite difference method, with the differential increment $h=1e-4$ for each direction.
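As an illustration, the two default schemes can be sketched in plain Python. This is not the OpenTURNS implementation, only a minimal re-derivation of the formulas, applied to the analytical function of this Use Case:

```python
# Plain-Python sketch of the default finite difference schemes
# (h = 1e-5 for the gradient, h = 1e-4 for the hessian).

def centered_gradient(f, x, h=1.0e-5):
    """Centered finite difference gradient of a scalar function f : R^n -> R."""
    grad = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

def centered_hessian(f, x, h=1.0e-4):
    """Centered finite difference hessian of a scalar function f : R^n -> R."""
    n = len(x)
    hess = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = list(x), list(x), list(x), list(x)
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            hess[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * h * h)
    return hess

# (x1, x2) -> -(6 - x1 + x2^2) ; the exact gradient at (1, 2) is (1, -4)
f = lambda x: -(6.0 - x[0] + x[1] ** 2)
print(centered_gradient(f, [1.0, 2.0]))
```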

The example here is the analytical function myAnalyticalFunction defined by the formula :

 $$\mathrm{myAnalyticalFunction} : \mathbb{R}^2 \to \mathbb{R}, \qquad (x_1, x_2) \mapsto y = -(6 - x_1 + x_2^2)$$

In the case of functions $scalarF : \mathbb{R} \to \mathbb{R}$, a simplified constructor exists that only requires the description of the scalar input and the formula. We give the example where $scalarF(x) = x^2$.

Requirements

• none

Results

• some functions : myAnalyticalFunction, scalarF (type: NumericalMathFunction)

Python script for this Use Case :

script_docUC_LSF_Analytical.py

```python
####################
# Case where f : R^n --> R^p
####################

# Describe the input vector of dimension 2
inputFunc = Description(("x1", "x2"))

# Describe the output vector of dimension 1
outputFunc = Description(("Output 1",))

# Give the formulas
formulas = Description(outputFunc.getSize())
formulas[0] = "-(6 - x1 + x2^2)"
print("formulas=", formulas)

# Create the analytical function
myAnalyticalFunction = NumericalMathFunction(inputFunc, outputFunc, formulas)

# or directly
myAnalyticalFunction_2 = NumericalMathFunction(
    ("x1", "x2"), ("Output 1",), ("-(6 - x1 + x2^2)",))

####################
# Case where f : R --> R
####################

# For example : x --> x^2
scalarF = NumericalMathFunction('x', 'x^2')
```

### 2.1.2 UC : From a function defined in a Python script

The objective of this Use Case is to create the limit state function from a function defined in a Python script. OpenTURNS automatically provides the function with an implementation of the gradient and the hessian : by default,

• the gradient evaluation method is the centered finite difference method, with the differential increment $h=1e-5$ for each direction,

• the hessian evaluation method is the centered finite difference method, with the differential increment $h=1e-4$ for each direction.

It is possible to change the evaluation method for the gradient or the hessian. The following Use Case shows how to proceed.

In order to be able to use the function with the openturns library, it is necessary to define a class which derives from OpenTURNSPythonFunction, as indicated below. The examples here are the functions modelePYTHON and modelePYTHON2 :

 $$\mathrm{modelePYTHON} : \mathbb{R}^4 \to \mathbb{R}, \qquad (E, F, L, I) \mapsto -\frac{F L^3}{3 E I} \qquad (8)$$
 $$\mathrm{modelePYTHON2} : \mathbb{R}^3 \to \mathbb{R}^2, \qquad (a, b, c) \mapsto (-a^2, abc) \qquad (9)$$
Requirements

• none

Results

• the python function associated to the model : modelePYTHON (type: OpenTURNSPythonFunction)
• the limit state functions : model1, model2 (type: NumericalMathFunction)

Python script for this Use Case : script_docUC_LSF_PythonScript.py

```python
################################
# CASE 1 : function : R^4 --> R
################################

# In order to be able to use that function with the openturns library,
# it is necessary to define a class which derives from OpenTURNSPythonFunction

class modelePYTHON(OpenTURNSPythonFunction):

    # the following method defines the input size (4) and the output size (1)
    def __init__(self):
        OpenTURNSPythonFunction.__init__(self, 4, 1)

    # the following method gives the implementation of modelePYTHON
    def _exec(self, x):
        E = x[0]
        F = x[1]
        L = x[2]
        I = x[3]
        return [-(F * L * L * L) / (3. * E * I)]

# Use that function defined in the python script with the openturns library :
# create a NumericalMathFunction from modelePYTHON
model1 = NumericalMathFunction(modelePYTHON())

################################
# CASE 2 : function : R^3 --> R^2
################################

# Alternatively, we can use the constructor PythonFunction
# from a regular python function.

def f(x):
    a = x[0]
    b = x[1]
    c = x[2]
    y = [-a * a, a * b * c]
    return y

model2 = PythonFunction(3, 2, f)
```

### 2.1.3 UC : Some particular functions : linear combination, aggregation, composition

The objective of this Use Case is to create some particular functions :

• the scalar linear combination linComb of vectorial functions $vectFctColl = (f_1, \cdots, f_N)$ where

 $$\forall\, 1 \le i \le N, \quad f_i : \mathbb{R}^n \longrightarrow \mathbb{R}^p$$

with specific scalar weights $scalWeight = (c_1, \cdots, c_N) \in \mathbb{R}^N$ :

 $$\mathrm{linComb} : \mathbb{R}^n \longrightarrow \mathbb{R}^p, \qquad \underline{X} \longmapsto \sum_{i=1}^{N} c_i f_i(\underline{X})$$
• the vectorial linear combination vectLinComb of a set of scalar functions $scalFctColl = (f_1, \cdots, f_N)$ where

 $$\forall\, 1 \le i \le N, \quad f_i : \mathbb{R}^n \longrightarrow \mathbb{R}$$

with specific vectorial weights $vectWeight = (\underline{c}_1, \cdots, \underline{c}_N)$ where

 $$\forall\, 1 \le i \le N, \quad \underline{c}_i \in \mathbb{R}^p$$
 $$\mathrm{vectLinComb} : \mathbb{R}^n \longrightarrow \mathbb{R}^p, \qquad \underline{X} \longmapsto \sum_{i=1}^{N} \underline{c}_i f_i(\underline{X})$$
• the aggregated function agregFct of a set of functions $(f_1, \cdots, f_N)$ where

 $$f_i : \mathbb{R}^n \longrightarrow \mathbb{R}^{p_i}$$
 $$\mathrm{agregFct} : \mathbb{R}^n \longrightarrow \mathbb{R}^p, \qquad \underline{X} \longmapsto \left(f_1(\underline{X}), \cdots, f_N(\underline{X})\right)^t$$

with

 $$p = \sum_{i=1}^{N} p_i$$
• the indicator function $l$ of an event defined by a function $f$, a comparison operator and a threshold $s$. For example, if the comparison operator is $>$, then

 $$l = 1_{\{f > s\}}$$
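A plain-Python sketch of these four constructions may help fix ideas. The NumericalMathFunction constructors of the script do all of this natively; the helper names below are illustrative only:

```python
# Plain-Python sketch of the four particular functions of this Use Case.

def lin_comb(fcts, scal_weights):
    """Scalar linear combination of vectorial functions : sum_i c_i f_i(X)."""
    def h(x):
        p = len(fcts[0](x))
        return [sum(c * f(x)[k] for c, f in zip(scal_weights, fcts)) for k in range(p)]
    return h

def vect_lin_comb(fcts, vect_weights):
    """Vectorial linear combination of scalar functions : sum_i c_i f_i(X)."""
    def h(x):
        p = len(vect_weights[0])
        return [sum(c[k] * f(x) for c, f in zip(vect_weights, fcts)) for k in range(p)]
    return h

def aggregate(fcts):
    """Stack the outputs of several functions into one vector."""
    def h(x):
        out = []
        for f in fcts:
            out.extend(f(x))
        return out
    return h

def indicator(f, threshold):
    """Indicator function of the event {f(X) > threshold}."""
    return lambda x: 1.0 if f(x) > threshold else 0.0

f1 = lambda x: [x[0], x[1]]
f2 = lambda x: [x[1], x[0]]
print(lin_comb([f1, f2], [2.0, 3.0])([1.0, 2.0]))  # 2 f1 + 3 f2 at (1, 2)
```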

OpenTURNS automatically evaluates the analytical expressions of the gradient and the hessian, unless the analytical expression of the function is not differentiable everywhere. In that case, OpenTURNS falls back on a finite difference method :

• the gradient evaluation method is the centered finite difference method, with the differential increment $h=1e-5$ for each direction,

• the hessian evaluation method is the centered finite difference method, with the differential increment $h=1e-4$ for each direction.

Requirements

• some collections of scalar and vectorial functions : scalFctColl, vectFctColl (type: NumericalMathFunctionCollection)
• a list of scalar weights : scalWeight (type: NumericalPoint)
• a list of vectorial weights : vectWeight (type: NumericalSample)
• a function : function (type: NumericalMathFunction)

Results

• some particular functions : linComb, vectLinComb, agregFct, indFactor (type: NumericalMathFunction)

Python script for this UseCase :

script_docUC_LSF_SomeParticularFunctions.py

```python
# Create the scalar linear combination of vectorial functions
linComb = NumericalMathFunction(vectFctColl, scalWeight)

# Create the vectorial linear combination of scalar functions
vectLinComb = NumericalMathFunction(scalFctColl, vectWeight)

# Create the agregated function
agregFct = NumericalMathFunction(vectFctColl)

# Create the indicator function :
# define a threshold and a comparison operator
threshold = 3.0
comparisonOperator = Greater()
indFactor = NumericalMathFunction(function, comparisonOperator, threshold)
```

### 2.1.4 UC : Introducing some deterministic variables, optimizing memory and CPU time

The objective of this Use Case is to restrict a previously declared model to some of its variables, in an optimized way.

Let us consider the same context as in the previous Use Case. The idea here is to avoid introducing the potentially huge matrix $\underline{\underline{A}}$, as well as the gradient matrix and hessian tensor of the functions increase and poutre. For that last point, it is sufficient to attach a gradient matrix and a hessian tensor to the final function poutreReduced through a finite difference technique.

The function increase is defined as follows :

 $$\mathrm{increase} : \mathbb{R}^{n_{prob}} \to \mathbb{R}^{n}, \qquad \underline{X}_{prob} = \begin{pmatrix} inputProb1 \\ \vdots \\ inputProbNprob \end{pmatrix} \mapsto \begin{pmatrix} inputProb1 \\ \vdots \\ inputProbNprob \\ valDet1 \\ \vdots \\ valDetNdet \end{pmatrix}$$

where $(valDet1, ..., valDetNdet)$ are the $n_{det}$ values of the deterministic components of $\underline{X}$.

The same example is re-written in the following Use Case.

Requirements

• the initial limit state function : poutre (type: LinearNumericalMathFunction, $\mathbb{R}^4 \to \mathbb{R}$)

Results

• the increase function (type: NumericalMathFunction, $\mathbb{R}^2 \to \mathbb{R}^4$)
• the new limit state function : poutreReduced = poutre $\circ$ increase (type: NumericalMathFunction, $\mathbb{R}^2 \to \mathbb{R}$)

Python script for this UseCase :

script_docUC_LSF_DeterministicVar2.py

```python
# Dimension of the random input vector
stochasticDimension = 2

# Dimension of the deterministic input vector
deterministicDimension = 2

# Dimension of the input vector of the limit state function 'poutre'
inputDim = poutre.getInputDimension()

# Fix deterministic values for the two last variables
# of the input vector (E, F, L, I)
# L
X2 = 10.0
# I
X3 = 5.0

# Create the analytical function 'increase'
increase = NumericalMathFunction(
    ["E", "F"], ["E", "F", "L", "I"], ["E", "F", str(X2), str(X3)])

# Create the new limit state function :
# 'poutreReduced = poutre o increase'
poutreReduced = NumericalMathFunction(poutre, increase)

# Give directly to the 'poutreReduced' function a gradient evaluation method
# thanks to the finite difference technique
# For example, gradient technique : non centered finite difference method
myGradient = NonCenteredFiniteDifferenceGradient(
    NumericalPoint(2, 1.0e-7), poutreReduced.getEvaluation())
print("myGradient = ", myGradient)

# Substitute the gradient
poutreReduced.setGradient(myGradient)

# Give directly to the 'poutreReduced' function a hessian evaluation method
# thanks to the finite difference technique
# type : centered finite difference method
myHessian = CenteredFiniteDifferenceHessian(
    NumericalPoint(2, 1.0e-7), poutreReduced.getEvaluation())
print("myHessian = ", myHessian)

# Substitute the hessian
poutreReduced.setHessian(myHessian)
```
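The same mechanism can be sketched in plain Python, without OpenTURNS. The beam model and the fixed values L = 10 and I = 5 follow the script above; the evaluation point (E, F) is illustrative:

```python
# Plain-Python sketch of the composition trick : fix the deterministic
# components (L, I) by composing the full model with an embedding function.

def poutre(x):
    """Beam model (E, F, L, I) -> -F L^3 / (3 E I), as in the previous Use Case."""
    E, F, L, I = x
    return [-(F * L ** 3) / (3.0 * E * I)]

def increase(x):
    """Embed the random inputs (E, F) into the full input vector (E, F, L, I)."""
    return [x[0], x[1], 10.0, 5.0]

# poutreReduced = poutre o increase : R^2 -> R
poutre_reduced = lambda x: poutre(increase(x))

print(poutre_reduced([2.0e9, 300.0]))
```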

### 2.1.5 UC : Defining a piecewise function according to a classifier

The objective of this Use Case is to define a piecewise function according to a classifier:

 $$f(\underline{x}) = \begin{cases} f_1(\underline{x}) & \forall\, \underline{x} \in \mathrm{Class}\ 1 \\ \quad\vdots \\ f_k(\underline{x}) & \forall\, \underline{x} \in \mathrm{Class}\ k \\ \quad\vdots \\ f_N(\underline{x}) & \forall\, \underline{x} \in \mathrm{Class}\ N \end{cases} \qquad (2.1)$$

where the $N$ classes are defined by the classifier.

The classifier is a MixtureClassifier based on a Mixture distribution defined as:

 $$p(\underline{x}) = \sum_{i=1}^{N} w_i\, p_i(\underline{x}) \qquad (11)$$

The rule to assign a point to a class is the following : $\underline{x}$ is assigned to the class $i = \arg\max_k \log w_k p_k(\underline{x})$.

The grade of $\underline{x}$ with respect to the class $k$ is $\log w_k p_k(\underline{x})$.
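The assignment rule can be sketched in plain Python for a one-dimensional mixture of two normal components. The weights and parameters here are illustrative, not those of the bivariate example of this Use Case:

```python
# Plain-Python sketch of the rule i = argmax_k log(w_k p_k(x)).
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def grade(x, w, mu, sigma):
    """Grade of x with respect to a class : log(w p(x))."""
    return math.log(w * normal_pdf(x, mu, sigma))

def classify(x, weights, mus, sigmas):
    """Assign x to the class maximizing its grade."""
    grades = [grade(x, w, m, s) for w, m, s in zip(weights, mus, sigmas)]
    return grades.index(max(grades))

# Two equally weighted classes centered at -1 and +1 :
# points left of 0 go to class 0, points right of 0 go to class 1
print(classify(-0.5, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]))
```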

The example here is a bivariate classifier that classes points among 2 classes, using the mixture distribution defined by:

 $$p(\underline{x}) = \frac{1}{2}\left(\varphi_1(\underline{x}) + \varphi_2(\underline{x})\right) \qquad (12)$$

with $\varphi_i$ the probability density function of the $Normal(\underline{\mu}_i, \underline{\sigma}, \underline{\underline{R}}_i)$ where $\underline{\mu}_1 = (-1, 1)^t$, $\underline{\mu}_2 = (1, 1)^t$, $\underline{\sigma} = (1, 1)^t$, $\underline{\underline{R}}_1[0, 1] = -0.99$ and $\underline{\underline{R}}_2[0, 1] = 0.99$. The function $f$ is defined by:

 $$f(\underline{x}) = \begin{cases} -\underline{x} & \forall\, \underline{x} \in \mathrm{Class}\ 1 \\ +\underline{x} & \forall\, \underline{x} \in \mathrm{Class}\ 2 \end{cases}$$
Requirements

• none

Results

• the moe, model of expert mixture (type: ExpertMixture)

Python script for this Use Case :

script_docUC_LSF_ExpertMixture.py

```python
# Create the distribution of the MixtureClassifier
aCollection = DistributionCollection(0)
R = CorrelationMatrix(2)
R[0, 1] = -0.99
aCollection.add(Normal([-1.0, 1.0], [1.0, 1.0], R))
R[0, 1] = 0.99
aCollection.add(Normal([1.0, 1.0], [1.0, 1.0], R))

distClassifier = Mixture(aCollection)

# Create a mixture classifier
classifier = MixtureClassifier(distClassifier)

# Create local experts
experts = Basis()
experts.add(NumericalMathFunction("x", "-x"))
experts.add(NumericalMathFunction("x", "x"))

# Create a mixture of experts
moe = ExpertMixture(experts, classifier)
moeNMF = NumericalMathFunction(moe)

print("Mixture of experts=", moe)

# Evaluate the mixture of experts on some points
for i in range(5):
    p = NumericalPoint(1, -0.3 + 0.8 * i / 4.0)
    print("moe   ( %.6g )=" % p[0], moe(p))
    print("moeNMF( %.6g )=" % p[0], moeNMF(p))
```

### 2.1.6 UC : Manipulation of a NumericalMathFunction

The objective of this Use Case is to describe the main functionalities of OpenTURNS to manipulate a numerical function $f:{ℝ}^{n}⟶{ℝ}^{p}$.

OpenTURNS enables :

• to ask the dimension of its input and output vectors, with the methods getInputDimension, getOutputDimension,

• to evaluate itself, its gradient and hessian, with the methods gradient, hessian. The evaluation of the function is a vector of dimension $p$, the gradient is a matrix with $p$ rows and $n$ columns, the hessian is a tensor of order 3 with $p$ rows, $n$ columns and $n$ sheets,

• to set a finite difference method for the evaluation of the gradient or the hessian, with the methods setGradientImplementation, setHessianImplementation,

• to evaluate the number of times the function or its gradient or its hessian have been evaluated since the beginning of the python session, with the methods getEvaluationCallsNumber, getGradientCallsNumber, getHessianCallsNumber,

• to disable or enable (enabled by default) the history mechanism with the methods disableHistory, enableHistory,

• to get all the values evaluated by the function and the associated inputs with the methods getInputHistory, getOutputHistory

• to clear the history clearHistory,

• to ask the description of its input and output vectors, with the methods getInputDescription, getOutputDescription,

• to extract its components if $p>1$, which are functions ${f}_{i}:{ℝ}^{n}⟶ℝ$, with the method getMarginal,

• to ask for its parameters with the method getParameters,

• to define its parameters, with the method setParameters,

• to compose two functions,

• to ask for the valid operators in OpenTURNS, the valid constants and functions, with the methods GetValidOperators, GetValidConstants, GetValidFunctions.

Furthermore, OpenTURNS implements a history mechanism for all the NumericalMathFunction types : when activated thanks to the method enableHistory, it stores all the input and output values of the function.

It is also possible to draw function graphs, for any function $f : \mathbb{R}^n \to \mathbb{R}^p$ where we note $\underline{x} = (x_1, \cdots, x_n)$ and $f(\underline{x}) = (f_1(\underline{x}), \cdots, f_p(\underline{x}))$, with $n \ge 1$ and $p \ge 1$. OpenTURNS can draw :

• the graph of any marginal $f_k : \mathbb{R}^n \to \mathbb{R}$ with respect to the variation of $x_j$ in the interval $[x_j^{min}, x_j^{max}]$, when all the other components of $\underline{x}$ are fixed to the corresponding components of the central point, noted $\underline{CP}$. Then OpenTURNS draws the graph $t \in [x_j^{min}, x_j^{max}] \mapsto f_k(CP_1, \cdots, CP_{j-1}, t, CP_{j+1}, \cdots, CP_n)$. The method is draw with the appropriate arguments.

• the iso-curves of the marginal $f_k$ with respect to the variation of $(x_i, x_j)$ in the interval $[x_i^{min}, x_i^{max}] \times [x_j^{min}, x_j^{max}]$, when all the other components of $\underline{x}$ are fixed to the corresponding components of the central point $\underline{CP}$. Then OpenTURNS draws the graph $(t, u) \in [x_i^{min}, x_i^{max}] \times [x_j^{min}, x_j^{max}] \mapsto f_k(CP_1, \cdots, CP_{i-1}, t, CP_{i+1}, \cdots, CP_{j-1}, u, CP_{j+1}, \cdots, CP_n)$.

The number of points to draw the curve can be specified.

There is a simplified call to the method when $n=p=1$ or when $n=2$ and $p=1$.

Requirements

• some functions $\mathbb{R}^n \to \mathbb{R}^p$ : myFunction, func1, func2, func3, g (for more details, see above) (type: NumericalMathFunction)

Results

• none

Functions used in the script have the following features :

• $myFunction:{ℝ}^{2}\to {ℝ}^{2}$ ,

• $func1:ℝ\to ℝ$ ,

• $func2:{ℝ}^{2}\to ℝ$ ,

• $func3:{ℝ}^{3}\to {ℝ}^{2}$

• $g:{ℝ}^{2}\to {ℝ}^{2}$.

Python script for this Use Case :

script_docUC_LSF_NMFManipulation.py

Here are the graphs drawn by the script for the functions :

• $func1:ℝ\to ℝ$ such that $func1\left(x\right)={x}^{2}$ ,

• $func2:{ℝ}^{2}\to ℝ$ such that $func2\left(x,y\right)=xy$ ,

• $func3 : \mathbb{R}^3 \to \mathbb{R}^2$ such that $func3(x, y, z) = (1 + 2x + y + z^3,\ 2 + \sin(x + 2y) - z^4 \sin(z))$.

Figure 28 : Graph of $z \mapsto func3_2(x_0, y_0, z)$, with $(x_0, y_0) = (1.0, 2.0)$ and $z \in [-1.5, 1.5]$ ; iso-curves of $(x, z) \mapsto func3_2(x, y_0, z)$, with $y_0 = 2.0$, $x \in [-1.5, 1.5]$ and $z \in [-2.5, 2.5]$.

Figure 29 : Graph of $x \mapsto func1(x)$, with $x \in [-1.5, 1.5]$ ; iso-curves of $(x, y) \mapsto func2(x, y)$, with $x \in [-1.5, 1.5]$ and $y \in [-2.5, 2.5]$.

## 2.2 Creation of the output variable of interest from the limit state function and the input random vector

The objective of the section is to determine the output variable of interest directly from a limit state function and a random input vector declared previously.

### 2.2.1 UC : Creation of the output random vector

The objective of this Use Case is to create the output random variable of interest, defined as the image through the limit state function of the input random vector.

Details on the definition of random mixture variables may be found in the Reference Guide ( see files Reference Guide - Step B – Random Mixture : affine combination of independent univariate distributions ).

Requirements

• the limit state function : myFunction (type: NumericalMathFunction)
• the random input vector : inputRV (type: RandomVector whose implementation is a UsualRandomVector)

Results

• the output variable of interest : outputRV = myFunction(inputRV) (type: RandomVector whose implementation is a CompositeRandomVector)

Python script for this UseCase :

script_docUC_OVI_FromLSF.py

```python
# Create the output variable of interest 'outputRV = myFunction(inputRV)'
outputRV = RandomVector(myFunction, inputRV)
```

### 2.2.2 UC : Extraction of a random subvector from a random vector

The objective of this Use Case is to extract a subvector from a random vector, whether it has been defined as a UsualRandomVector (that is, from a distribution, see UC. 1.0.1) or as a CompositeRandomVector (that is, as the image through a limit state function of an input random vector, see UC. 2.2.1).

Let us note $\underline{Y} = (Y_1, \cdots, Y_n)$ a random vector and $I \subset [1, n]$ a set of indices :

• In the first case, the subvector is defined by $\underline{\tilde{Y}} = (Y_i)_{i \in I}$,

• In the second case, where $\underline{Y} = f(\underline{X})$ with $f = (f_1, \cdots, f_n)$, $f_i$ some scalar functions, the subvector is $\underline{\tilde{Y}} = (f_i(\underline{X}))_{i \in I}$.

Requirements

• the random vector : myRandomVector (type: RandomVector whose implementation is a UsualRandomVector or a CompositeRandomVector)

Results

• the extracted random vector : myExtractedRandomVector (type: RandomVector whose implementation is a UsualRandomVector or a CompositeRandomVector)

Python script for this UseCase :

```python
# CASE 1 : Get the marginal of the random vector
# corresponding to the component i
# Careful : numbering begins at 0
myExtractedRandomVector = myRandomVector.getMarginal(i)

# CASE 2 : Get the marginals of the random vector
# corresponding to several components
# described in a list of indices
# For example, components 0, 1, and 5
myExtractedRandomVector = myRandomVector.getMarginal((0, 1, 5))
```

## 2.3 Creation of the output variable of interest defined as an affine combination of input random variables

### 2.3.1 UC : Creation of a Random Mixture

The objective of this Use Case is to define a random vector $\underline{Y}$ as a RandomMixture, which means an affine transform of input random variables :

 $$\underline{Y} = \underline{y}_0 + \underline{\underline{M}}\, \underline{X}$$

where $\underline{y}_0 \in \mathbb{R}^d$ is a deterministic vector with $d \in \{1, 2, 3\}$, $\underline{\underline{M}} \in \mathcal{M}_{d,n}(\mathbb{R})$ a deterministic matrix and $(X_k)_{1 \le k \le n}$ are independent univariate random variables.

Be careful! This notion differs from the Mixture notion, where the combination applies to the probability density functions and not to the univariate random variables :

• Random Mixture (in dimension 1) : $Y = a_0 + \sum_{i=1}^{n} a_i X_i$,

• Mixture : $p_Y = \sum_{i=1}^{n} a_i p_{X_i}$, where $p_{X_i}$ is the probability density function of $X_i$ and $\sum_{i=1}^{n} a_i = 1$.
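A Monte Carlo sketch in plain Python makes the difference tangible: the two constructions share the same mean here, but not the same distribution. The component distributions and weights below are illustrative:

```python
# Contrast between a random mixture (sum of the variables) and a mixture
# (random choice of one variable). Components : X1 ~ N(0,1), X2 ~ N(4,1) ;
# weights a1 = a2 = 0.5 so that both notions are well defined.
import random

random.seed(0)
n = 100000
a = [0.5, 0.5]

def sample_random_mixture():
    # Y = a1 X1 + a2 X2 : combine the variables themselves
    return a[0] * random.gauss(0.0, 1.0) + a[1] * random.gauss(4.0, 1.0)

def sample_mixture():
    # p_Y = a1 p_X1 + a2 p_X2 : draw from one component picked at random
    if random.random() < a[0]:
        return random.gauss(0.0, 1.0)
    return random.gauss(4.0, 1.0)

ys_rm = [sample_random_mixture() for _ in range(n)]
ys_mix = [sample_mixture() for _ in range(n)]
m_rm = sum(ys_rm) / n      # both means are close to 0.5*0 + 0.5*4 = 2
m_mix = sum(ys_mix) / n
v_rm = sum((y - m_rm) ** 2 for y in ys_rm) / n     # close to 0.25 + 0.25 = 0.5
v_mix = sum((y - m_mix) ** 2 for y in ys_mix) / n  # close to 5 : a bimodal law
print(m_rm, v_rm)
print(m_mix, v_mix)
```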

When not specified, the coefficient $\underline{y}_0$ is taken equal to 0.

OpenTURNS evaluates the probability density function and cumulative distribution function of the random variable $\underline{Y}$. So, it is possible to send $\underline{Y}$ any request compatible with a Distribution : moments, quantiles (in dimension 1 only), PDF and CDF evaluations, ...

It is important to note that the distribution evaluation of $\underline{Y}$ needs the evaluation of the characteristic functions of the univariate $X_i$. OpenTURNS proposes an implementation for all its univariate distributions, continuous or discrete. But only some of them have a specific algorithm that evaluates the characteristic function : it is the case of all the discrete distributions and most (but not all) of the continuous ones. In that case, the evaluation is efficient. For the remaining distributions, the generic implementation might be time consuming for large arguments.
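As a plain-Python illustration (not the OpenTURNS algorithm), the characteristic function of an affine combination factorizes over the independent components, $\varphi_Y(t) = e^{i y_0 t} \prod_k \varphi_{X_k}(a_k t)$, and closed forms exist for the Exponential and Normal distributions used in the example of the next Use Case:

```python
# Characteristic function of Y = a0 + sum_k a_k X_k for independent X_k.
import cmath

def cf_exponential(t, lam):
    """phi(t) = lambda / (lambda - i t) for Exponential(lambda)."""
    return lam / complex(lam, -t)

def cf_normal(t, mu, sigma):
    """phi(t) = exp(i mu t - sigma^2 t^2 / 2) for Normal(mu, sigma)."""
    return cmath.exp(complex(-0.5 * sigma ** 2 * t ** 2, mu * t))

def cf_random_mixture(t, a0, weights, cfs):
    """phi_Y(t) = exp(i a0 t) * prod_k phi_Xk(a_k t)."""
    value = cmath.exp(complex(0.0, a0 * t))
    for a, cf in zip(weights, cfs):
        value *= cf(a * t)
    return value

# Y = 2 + 5 X1 + X2 with X1 ~ Exponential(1.5) and X2 ~ Normal(4, 1)
cfs = [lambda t: cf_exponential(t, 1.5), lambda t: cf_normal(t, 4.0, 1.0)]
print(cf_random_mixture(0.0, 2.0, [5.0, 1.0], cfs))  # phi_Y(0) = 1 by definition
```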

Furthermore, note that $\underline{Y}$ is not a CompositeRandomVector; in dimension 1, it can therefore not be used by a FORM/SORM algorithm, a QuadraticCumul algorithm or even a Simulation algorithm... In order to use such algorithms, it is necessary to transform the RandomMixture into a CompositeRandomVector, for example through the identity function $f : \mathbb{R} \to \mathbb{R}$ quickly defined (see the following python script).

The example here is an output variable of interest defined as the following combination :

 $$Y = 2 + 5 X_1 + X_2$$

where :

• $X_1$ follows an Exponential distribution $\mathcal{E}(\lambda = 1.5)$,

• $X_2$ follows a Normal distribution $\mathcal{N}(\mu = 4, \sigma^2 = 1)$.

The UC asks $Y$ for its mean, variance, probability density graph, quantile of order $90\%$, and its probability of exceeding 3.
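A quick closed-form check of the first two requests, in plain Python (independence of $X_1$ and $X_2$ is assumed, as in the Use Case):

```python
# For Y = 2 + 5 X1 + X2 with X1, X2 independent :
# E[Y] = 2 + 5 E[X1] + E[X2] and Var[Y] = 25 Var[X1] + Var[X2].

lam = 1.5                        # X1 ~ Exponential(lambda)
mean_x1, var_x1 = 1.0 / lam, 1.0 / lam ** 2
mean_x2, var_x2 = 4.0, 1.0       # X2 ~ Normal(4, 1)

mean_y = 2.0 + 5.0 * mean_x1 + mean_x2   # 2 + 10/3 + 4 = 28/3
var_y = 25.0 * var_x1 + var_x2           # 25/2.25 + 1

print(mean_y, var_y)
```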

Requirements

• none

Results

• the random mixture $Y$ : myRandomMixtureY (type: RandomMixture)

Python script for this UseCase :

script_docUC_OVI_RandomMixture.py

```python
# Create the univariate distributions
# X1 : Exponential(1.5)
X1 = Exponential(1.5)
# X2 : Normal(4,1)
X2 = Normal(4, 1)

# Put them in a DistributionCollection
distribList = [X1, X2]

# Create the NumericalPoint of the distribution weights :
# coefficients a1, a2
weight = NumericalPoint([5., 1.])

# Create the constant coefficient a0
a0 = 2.0

# Create the Random Mixture Y = a0 + Sum(ai Xi)
myRandomMixtureY = RandomMixture(distribList, weight, a0)

# Or create a Random Mixture where a0 = 0 : Y = Sum(ai Xi)
myRandomMixtureY_2 = RandomMixture(distribList, weight)

# Or create a Random Mixture where all the weights (a1, a2) are equal to 1
myRandomMixtureY_3 = RandomMixture(distribList, a0)

# Ask myRandomMixtureY its mean, variance, quantile of order 0.9, its
# probability to exceed 3
mean = myRandomMixtureY.getMean()[0]
variance = myRandomMixtureY.getCovariance()[0, 0]
quantile90 = myRandomMixtureY.computeQuantile(0.90)[0]
proba = myRandomMixtureY.computeComplementaryCDF(3.0)

# Ask myRandomMixtureY to draw its pdf
pdfY = myRandomMixtureY.drawPDF()

# Visualize the graph without saving it
# View(pdfY).show()

# Transform the RandomMixture into a RandomVector
myRandomVectorY = RandomVector(myRandomMixtureY)
```

Figure 30 : Probability density function and cumulative distribution function of a Random Mixture.

## 2.4 Creation of the output variable of interest from the result of a polynomial chaos expansion

### 2.4.1 UC : Creation of the output variable of interest from the result of a polynomial chaos expansion

The objective of this Use Case is to define the output variable of interest as the result of a polynomial chaos algorithm, which defines a particular response surface (refer to 4.3).

Details on the polynomial chaos expansion may be found in the Reference Guide ( see files Reference Guide - Step Resp. Surf. – Polynomial Chaos Expansion ).

Requirements

• the result structure of a polynomial chaos algorithm : polynomialChaosResult (type: FunctionalChaosResult)

Results

• the new output variable of interest : newOuputVariableOfInterest (type: RandomVector)

Python script for this UseCase :

```python
# Create the new output variable of interest
# based on the meta model
# evaluated from the polynomial chaos algorithm
newOuputVariableOfInterest = RandomVector(polynomialChaosResult)
```

### 2.4.2 UC : Creation of a specialized random vector for the global sensitivity analysis using a polynomial chaos expansion

The objective of this Use Case is to define the output variable of interest as a specialized random vector that allows the User to compute the mean and covariance from the coefficients of the decomposition on the polynomial hilbertian basis, as well as the Sobol indices and total indices.

Details on the Sobol indices may be found in the Reference Guide ( see files Reference Guide - Step C' – Sensitivity analysis using Sobol indices ).

If $g : \mathbb{R}^n \longrightarrow \mathbb{R}$ is a model and $Y = g(\underline{X})$ the random output variable, with $\underline{X}$ a random vector, we define the Sobol index associated to $\underline{i} = (i_1, \cdots, i_k)$ as follows, where we suppose $g$ and $\underline{X}$ have the required properties :

 $$IS(\underline{i}) = \frac{\mathrm{Var}\left[\mathbb{E}\left[Y \mid X_{i_1} \cdots X_{i_k}\right]\right]}{\mathrm{Var}[Y]}$$

The total Sobol index associated to $\underline{i} = (i_1, \cdots, i_k)$ is defined as :

 $$TIS(\underline{i}) = \sum_{\underline{j} \in I(\underline{i})} IS(\underline{j})$$

where $I(\underline{i}) = \left\{ (j_1, \cdots, j_p),\ p \in [k, n],\ \text{such that}\ \{i_1, \cdots, i_k\} \subset \{j_1, \cdots, j_p\} \right\}$.
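These two definitions can be sketched in plain Python on a toy chaos-like decomposition: with an orthonormal basis, each index is a ratio of sums of squared coefficients. This mirrors the role of FunctionalChaosRandomVector but is not its implementation, and the toy coefficients are invented for illustration:

```python
# Toy chaos-like decomposition : each term is (support of the multi-index,
# coefficient) ; with an orthonormal basis, Var[Y] is the sum of squared
# coefficients of the non-constant terms, and the indices are variance ratios.

def sobol_index(coeffs, group):
    """IS(group) : share of variance of the terms involving only variables
    of the group (Var[E[Y | X_group]] / Var[Y])."""
    group = set(group)
    var = sum(c * c for supp, c in coeffs if supp)
    part = sum(c * c for supp, c in coeffs if supp and set(supp) <= group)
    return part / var

def sobol_total_index(coeffs, group):
    """TIS(group) : share of variance of the terms involving at least one
    variable of the group."""
    group = set(group)
    var = sum(c * c for supp, c in coeffs if supp)
    part = sum(c * c for supp, c in coeffs if set(supp) & group)
    return part / var

coeffs = [((), 1.0),      # constant term : no contribution to the variance
          ((0,), 2.0),    # term in X0 only
          ((1,), 1.0),    # term in X1 only
          ((0, 1), 1.0)]  # interaction term in (X0, X1)

print(sobol_index(coeffs, [0]))        # 4 / (4 + 1 + 1)
print(sobol_total_index(coeffs, [0]))  # (4 + 1) / (4 + 1 + 1)
```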

Requirements

• the result structure of a polynomial chaos algorithm : polynomialChaosResult (type: FunctionalChaosResult)

Results

• the new output variable of interest : newOuputVariableOfInterest (type: FunctionalChaosRandomVector)

Python script for this UseCase :

```python
# Create the new output variable of interest
# based on the meta model
# evaluated from the polynomial chaos algorithm
# in a way that allows to compute Sobol indices
# and total indices
newOuputVariableOfInterest = FunctionalChaosRandomVector(polynomialChaosResult)
print("Sobol index 0=", newOuputVariableOfInterest.getSobolIndex(0))
indices = Indices(2)
indices[0] = 0
indices[1] = 1
print("Sobol index [0, 1]=", newOuputVariableOfInterest.getSobolIndex(indices))
print("Sobol total index 0=", newOuputVariableOfInterest.getSobolTotalIndex(0))
print("Sobol total index [0, 1]=", newOuputVariableOfInterest.getSobolTotalIndex(indices))
```