3 Uncertainty propagation and Uncertainty sources ranking

The objective of this section is to present the functionalities used to propagate uncertainties from the random input vector, through the limit state function, to the output variable of interest ( see Reference Guide - Global Methodology for an uncertainty study - Part C : propagating uncertainty sources ).

Some techniques require the generation of randomly distributed sequences, while others rely on low discrepancy sequences.

3.1 UC : Parametrisation of the Random Generator

The seed of the random generator is automatically initialized to 0. This means that, as soon as an OpenTURNS session is launched, the sequence of random values generated within [0,1] is the same one : if a script is launched several times, within different OpenTURNS sessions, the same results are obtained.

Details on the random generator may be found in the Reference Guide ( see files Reference Guide - Step B – Uniform Random Generator ).

Before any simulation, it is possible to initialise the random generator differently from its default value, or to get its current state.

To initialize the random generator state, proceed as follows :

# INITIALIZE THE RANDOM GENERATOR STATE

# Case 1 : reproducible sequence of generated random vectors
# the seed is reproducible

# Initialise the state of the random generator
# thanks to the functionality SetSeed(n) where n is an UnsignedLong in [0, 2^32 - 1]
# which enables an easy initialisation by the user
RandomGenerator.SetSeed(77)

# or by specifying a complete state of the random generator : particularState
# coming from a previous particularState = RandomGenerator.GetState() :
RandomGenerator.SetState(particularState)

# Case 2 : non reproducible sequence of generated random vectors
# the seed is not reproducible

# Example 1 : the number of the OpenTURNS python session
from os import getpid
RandomGenerator.SetSeed(getpid())

# Example 2 : current time
from os import times
RandomGenerator.SetSeed(int(100 * times()[4]))

# GET THE RANDOM GENERATOR STATE

# Get the complete state of the random generator before simulation
randomGeneratorStateBeforeRandomExperiment = RandomGenerator.GetState()
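
As a quick illustration of reproducibility (a minimal sketch, assuming the openturns module is imported), restoring a previously saved state replays exactly the same pseudo-random values :

# Save a state, draw a value, restore the state and draw again :
# both values are identical
RandomGenerator.SetSeed(77)
savedState = RandomGenerator.GetState()
firstValue = RandomGenerator.Generate()
RandomGenerator.SetState(savedState)
print(firstValue == RandomGenerator.Generate())   # True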


3.2 UC : Generation of low discrepancy sequences

It is possible to generate some low discrepancy sequences in order to approximate some integrals.

Details on low discrepancy sequences may be found in the Reference Guide ( Reference Guide - Low Discrepancy Sequence ).

OpenTURNS proposes the following sequences : Sobol, Faure, Halton, Reverse Halton and Haselgrove, in any dimension n >= 1.

Requirements  

-

 
Results  

a Faure sequence : myFaureSeq

type:

FaureSequence

a Sobol sequence : mySobolSeq

type:

SobolSequence

an Halton sequence: myHaltonSeq

type:

HaltonSequence

an Haselgrove sequence: myHaselgroveSeq

type:

HaselgroveSequence

a Reverse Halton sequence : myReverseHaltonSeq

type:

ReverseHaltonSequence

the points of the sequence : myFirstSequencePoints

type:

NumericalSample

 

Python script for this UseCase :

# Create the Faure sequence of dimension 2
myFaureSeq = FaureSequence(2)

# Create the Sobol sequence of dimension 2
mySobolSeq = SobolSequence(2)

# Create the Halton sequence of dimension 2
myHaltonSeq = HaltonSequence(2)

# Create the Haselgrove sequence of dimension 2
myHaselgroveSeq = HaselgroveSequence(2)

# Create the Reverse Halton sequence of dimension 2
myReverseHaltonSeq = ReverseHaltonSequence(2)

# Generate the first points of the sequence
myFirstSequencePoints = myFaureSeq.generate(10)

To illustrate these sequences, we generate the first 1024 points from the Sobol, Halton and Reverse Halton schemes. In order to ease the comparison with a uniformly distributed sequence, we also draw the first points obtained from the Mersenne Twister algorithm : this last sequence has a greater discrepancy than the other ones.
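
A minimal sketch of how such a comparison sample may be produced; the uniform points are drawn here directly from the Mersenne Twister through RandomGenerator.Generate(), and all names are illustrative :

# Low discrepancy points of dimension 2
sobolPoints = SobolSequence(2).generate(1024)
haltonPoints = HaltonSequence(2).generate(1024)
reverseHaltonPoints = ReverseHaltonSequence(2).generate(1024)

# Pseudo-random uniform points from the Mersenne Twister, for comparison
uniformPoints = NumericalSample(0, 2)
for i in range(1024):
    uniformPoints.add(NumericalPoint([RandomGenerator.Generate(), RandomGenerator.Generate()]))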

Figure 31 : Sobol sequence; Halton sequence.
Figure 32 : Reverse Halton sequence; uniform random sequence.

3.3 Min/Max approach

In this section, we focus on the deterministic approach, which consists in searching for the variation range of the output variable of interest.

3.3.1 UC : Creation of a deterministic design of experiments : Axial, Box, Composite, Factorial patterns

The objective of this Use Case is to create a design of experiments whose scheme is specified in advance, hence called a deterministic design of experiments.

Details on design of experiments may be found in the Reference Guide ( see files Reference Guide - Step C – Min-Max approach using Designs Of Experiment ).

OpenTURNS enables the definition of four types of deterministic designs of experiments : axial, composite, factorial and box.

In order to define a design of experiments, follow the 3 steps below, whatever the type of the design of experiments, where n is the dimension of the space and n_level the number of levels (the same for each direction, except for the Box grid) :

  • Step 1 : Define a reduced and centered grid structure, centered on the null vector of R^n, by specifying the levels which will be considered on each direction,

  • Step 2 : Scale each direction with a specific scale factor, in order to give the appropriate extent to each direction,

  • Step 3 : Translate the scaled grid structure onto a specified center point.

Each design of experiments has a specific method to define its reduced and centered grid structure :

  • Axial: the points grid is obtained by discretizing each direction according to the specified levels, symmetrically with respect to 0. The number of points generated is 1 + 2*n*n_level.

  • Factorial: the points grid is obtained by discretizing each principal diagonal according to the specified levels, symmetrically with respect to 0. The number of points generated is 1 + 2^n*n_level.

  • Composite: the points grid is obtained as the union of an axial and a factorial design of experiments. The number of points generated is 1 + 2*n*n_level + 2^n*n_level.

  • Box: the points grid is obtained by regularly discretizing the unit hypercube [0,1]^n, with the number of intermediate points specified for each direction. The number of points generated is prod_{i=1..n} (2 + n_level(direction i)). These counts are checked in the short sketch below.
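
A quick numerical check of these point counts, for the dimension 2 example used below (plain Python, no OpenTURNS object needed) :

n = 2        # dimension of the space
n_level = 3  # number of levels per direction

print(1 + 2 * n * n_level)                      # axial : 13 points
print(1 + 2 ** n * n_level)                     # factorial : 13 points
print(1 + 2 * n * n_level + 2 ** n * n_level)   # composite : 25 points

# box grid with 2 and 4 intermediate points on the two directions
print((2 + 2) * (2 + 4))                        # 24 points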

In order to scale each direction according to a specified factor and/or to translate the points grid onto a specified center, the methods scale and translate must be used.

The following example works in dimension 2.

Requirements  

none

 
Results  

a centered and reduced grid structure : myCenteredReductedGrid

type:

ExperimentPlane, whose type is Axial, Composite, Factorial or Box

the numerical sample associated with the centered and reduced grid structure, then scaled and translated : mySample

type:

NumericalSample

 

Python script for this UseCase :

# Define a scale factor for each direction
scaledVector = NumericalPoint( (1.5, 2.5) )

# Define the translation towards the final center of the design of experiments
translationVector = NumericalPoint( (2., 3.) )

# Define the different levels of the grid structure
# CARE : for the axial, composite and factorial designs of experiments,
# these levels are all applied along each direction
# Here : 3 levels on each direction
levels = NumericalPoint( (1., 1.5, 3.) )

# For the box design of experiments, levels specifies the number of
# intermediate points on each direction (one per direction)
# Here : direction 1 will be discretized with 2 intermediate points
# and direction 2 with 4 intermediate points
levelsBox = NumericalPoint( (2., 4.) )

# STEP 1 : Define a reduced and centered grid structure

# AXIAL structure
myCenteredReductedGrid = Axial(2, levels)

# COMPOSITE structure
myCenteredReductedGrid = Composite(2, levels)

# FACTORIAL structure
myCenteredReductedGrid = Factorial(2, levels)

# BOX structure
myCenteredReductedGrid = Box(levelsBox)

# Generate the numerical sample (centered and reduced grid structure)
# a NumericalSample is created
mySample = myCenteredReductedGrid.generate()

# Get the number of points of the centered and reduced grid structure
pointsNumber = mySample.getSize()

# STEP 2 : Scale each direction with a specific scale factor
# The NumericalSample is transformed
mySample.scale(scaledVector)

# STEP 3 : Translate the scaled grid structure onto a specified center point
# The NumericalSample is transformed
mySample.translate(translationVector)

# Print the numerical sample
print mySample

Figures 33 to 40 show the different grid structures obtained before and after the scale and translate methods.

Figure 33 : Axial Design of Experiments : initial grid; after scaling.
Figure 34 : Axial Design of Experiments : after scaling and translation.
Figure 35 : Factorial Design of Experiments : initial grid; after scaling.
Figure 36 : Factorial Design of Experiments : after scaling and translation.
Figure 37 : Composite Design of Experiments : initial grid; after scaling.
Figure 38 : Composite Design of Experiments : after scaling and translation.
Figure 39 : Box Design of Experiments : initial grid; after scaling.
Figure 40 : Box Design of Experiments : after scaling and translation.

3.3.2 UC: Creation of a combinatorial generator: subsets, injections, Cartesian products

The objective of this Use Case is to define a combinatorial generator, that is, a design of experiments able to generate all the integer collections satisfying a given combinatorial constraint.

Details on combinatorial generators may be found in the Reference Guide ( see files Reference Guide - Step C – Min-Max approach using Designs Of Experiment ).

OpenTURNS proposes the following combinatorial generators:

  • the Tuples generator, which generates all the elements of a Cartesian product E = {0,...,n_0 - 1} x ... x {0,...,n_{d-1} - 1}, i.e. all the points x of dimension d with integral components such that 0 <= x_0 <= n_0 - 1, ..., 0 <= x_{d-1} <= n_{d-1} - 1. The integers n_0,...,n_{d-1} are supposed to be positive; if one of them is zero, then E is the empty set. The total number of generated points is N = n_0 * ... * n_{d-1}.

  • the K-permutations generator, which generates all the injective functions from {0,...,k-1} into {0,...,n-1}, each one described by a point x of dimension k with integral components, such that the associated injective function phi maps i to phi(i) = x_i. Here k and n are positive integers such that k <= n, otherwise there is no such injective function. The total number of generated points is N = n! / (n-k)!.

  • the Combinations generator, which generates all the subsets of size k of {0,...,n-1}. The subsets are described using points x with integral components such that 0 <= x_0 < ... < x_{k-1} <= n-1. If k > n, there is no such subset. The total number of generated points is N = n! / (k! (n-k)!).

Be aware that the number of generated points grows very rapidly with the magnitude of the parameters, as the small check below shows.
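
A quick sanity check of the point counts for the parameters used in the script below (plain Python, standard library only) :

from math import factorial

# Cartesian product {0,...,3} x {0,...,5} x {0,...,8} : N = n_0 * n_1 * n_2
print(4 * 6 * 9)                                        # 216 tuples
# Injective functions from {0,...,3} into {0,...,5} : N = n! / (n-k)!
print(factorial(6) // factorial(6 - 4))                 # 360 k-permutations
# Subsets of size 4 of {0,...,5} : N = n! / (k! (n-k)!)
print(factorial(6) // (factorial(4) * factorial(2)))    # 15 combinations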

Requirements  

the parameters of the combinatorial generator: (n, k) or (n_0,...,n_{d-1})

type:

UnsignedLong

 
Results  

the sample generated: experimentSample

type:

IndicesCollection

 

Python script for this UseCase :

script_docUC_MinMax_CombinatorialGenerators.py

# Create the generator of the Cartesian product {0,...,3}x{0,...,5}x{0,...,8}
myTuplesGenerator = Tuples([4, 6, 9])
# Generate all the tuples
myTuples = myTuplesGenerator.generate()

# Create the generator of the injective functions from {0,...,3} into {0,...,5}
myKPermutationsGenerator = KPermutations(4, 6)
# Generate all the injective functions
myKPermutations = myKPermutationsGenerator.generate()

# Create the generator of the subsets of {0,...,5} with 4 elements
myCombinationsGenerator = Combinations(4, 6)
# Generate all the subsets
myCombinations = myCombinationsGenerator.generate()


3.3.3 UC: Creation of a random design of experiments : Monte Carlo, LHS patterns

The objective of this Use Case is to define a random design of experiments : the design of experiments does not follow a specified scheme any more. The experiment grid is generated according to a specified distribution and a specified number of points.

Details on design of experiments may be found in the Reference Guide ( see files Reference Guide - Step C – Min-Max approach using Designs Of Experiment ).

OpenTURNS proposes many sampling methods to generate the experiment grid; two of them are detailed here:

  • the Monte Carlo method: the numerical sample is generated by sampling the specified distribution. When called again, the generate method produces a new numerical sample.

  • the LHS method: the numerical sample is generated by sampling the specified distribution with the LHS technique: cells of equal probabilistic content (according to the specified distribution) are determined, such that each row and each column contains exactly one selected cell, then one point is drawn within each selected cell. When called again, the generate method produces a new numerical sample: the point selection within the cells changes but not the cell selection. To change the cell selection, it is necessary to create a new LHSExperiment, as illustrated after the script below.

Before any simulation, it is useful to initialize the state of the random generator, as explained in the Use Case 3.1.

Requirements  

the specified distribution: distribution

type:

Distribution

the number of points of the design of experiments : number

type:

UnsignedLong

 
Results  

the sample generated: experimentSample

type:

NumericalSample

 

Python script for this UseCase:

# Create a Monte Carlo design of experiments
myRandomExp = MonteCarloExperiment(distribution, number)

# Create a LHS design of experiments
myRandomExp = LHSExperiment(distribution, number)

# Generate the design of experiments numerical sample
experimentSample = myRandomExp.generate()
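
A hedged usage sketch of the LHS behaviour described above (names are illustrative) : two successive calls to generate on the same LHSExperiment keep the same cells, whereas a new LHSExperiment re-draws the cells :

myLHSExp = LHSExperiment(distribution, number)
sampleA = myLHSExp.generate()       # first point selection within the LHS cells
sampleB = myLHSExp.generate()       # new points, same cell selection
myOtherLHSExp = LHSExperiment(distribution, number)
sampleC = myOtherLHSExp.generate()  # new cell selection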


3.3.4 UC : Re-use of a specified numerical sample as design of experiments

The objective of this Use Case is to re-use a previously elaborated design of experiments.

Details on design of experiments may be found in the Reference Guide ( see files Reference Guide - Step C – Min-Max approach using Designs Of Experiment ).

This functionality is particularly interesting for the polynomial chaos technique, where the evaluation of the coefficients by regression requires the discretisation of a particular integral : to do so, the User may want to re-use a pre-existing numerical sample (for example, a sparse Smolyak grid) to parameterize algorithms which expect a design of experiments structure (and not a numerical sample).

When called again, the generate method returns the same specified numerical sample.

Requirements  

the previously elaborated numerical sample : mySample

type:

NumericalSample

 
Results  

the experiment based on the numerical sample : myFixedExperiment

type:

FixedExperiment

 

Python script for this UseCase :

# Create a fixed design of experiments
myFixedExperiment = FixedExperiment(mySample)
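
A minimal usage sketch of the behaviour mentioned above (assuming mySample already exists) : successive calls to generate return the very same numerical sample.

# Both calls return the numerical sample given at construction
sample1 = myFixedExperiment.generate()
sample2 = myFixedExperiment.generate()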


3.3.5 UC : Creation of a mixed deterministic / random design of experiments

The objective of this Use Case is to create a deterministic design of experiments whose levels are defined from the probabilistic distribution of the input random vector.

Details on design of experiments may be found in the Reference Guide ( see files Reference Guide - Step C – Min-Max approach using Design of Experimentss ).

The example here is an axial design of experiments whose levels are proportional to the standard deviation of each component of the random input vector, and which is centered on the mean vector of the random input vector. Be careful to remain within the range of the distribution if the latter is bounded!

There are three levels : +/-1, +/-2, +/-3 around a center initially fixed at the origin (the null vector).

The dilatation vector is composed of the standard deviation of each component of the random input vector.

Requirements  

the input vector : input

type:

RandomVector

 
Results  

a design of experiments : myPlane

type:

Axial

a sample of input according to myPlane: inputSample

type:

NumericalSample

 

Python script for this UseCase :

# In order to use the 'sqrt' function
from math import *

# Dimension of the use case : 4
dim = 4

# Give the levels of the design of experiments
# here, 3 levels : +/-1, +/-2, +/-3
levelsNumber = 3
levels = NumericalPoint( (1., 2., 3.) )
levels.setName( "Levels" )

# Create the axial plane centered on the null vector
# and with the levels 'levels'
myPlane = Axial(dim, levels)

# Generate the points according to the structure
# of the design of experiments (in a reduced centered space)
inputSample = myPlane.generate()

# Scale the structure of the design of experiments
# proportionally to the standard deviation of each component of 'input'
# Scaling vector for each dimension of the levels of the structure
# to take into account the dispersion of each component
scaling = NumericalPoint(dim, 0)
scaling[0] = sqrt(input.getCovariance()[0, 0])
scaling[1] = sqrt(input.getCovariance()[1, 1])
scaling[2] = sqrt(input.getCovariance()[2, 2])
scaling[3] = sqrt(input.getCovariance()[3, 3])
inputSample.scale(scaling)

# Translate the scaled sample onto the center of the design of experiments
# Translation vector for each dimension
center = input.getMean()
inputSample.translate(center)


3.3.6 UC : Drawing a design of experiments in dimension 2

The objective of this Use Case is to draw a design of experiments in dimension 2.

Requirements  

the points of a design of experiments : mySample

type:

a NumericalSample

 
Results  

the files containing the graph, in format .EPS, .FIG, .PNG : DoE

type:

-

 

Python script for this UseCase :

# Draw it
from openturns.viewer import View
mySampleDrawable = Cloud(mySample, "blue", "square", "My design of experiments")
graph = Graph("My design of experiments", "x", "y", True)
graph.add(mySampleDrawable)
view = View(graph)
view.save('DoE.png')
view.show()

# In order to see the drawable without creating the associated files
# CARE : it requires to have created the graph structure before
View(mySampleDrawable).show()
# or to see the graph without creating the associated files
View(graph).show()


3.3.7 UC : Min/Max search from a design of experiments and sensitivity analysis

The objective of this Use Case is to evaluate the min and max values of the output variable of interest from a numerical sample and to evaluate the gradient of the limit state function defining the output variable of interest at a particular point.

Details on design of experiments may be found in the Reference Guide ( see files Reference Guide - Step C – Min-Max approach using Design of Experimentss ).

The numerical sample of the output variable of interest may be obtained as follows :

  • create a design of experiments of the input random vector : deterministic scheme (see Use Case 3.3.1), random scheme (see Use Case 3.3.3), mixed deterministic/random scheme (see Use Case 3.3.5), or re-use a previously elaborated design of experiments (see Use Case 3.3.4),

  • evaluate the output variable of interest on each point of the design of experiments .

The example here is the limit state function poutre defined in Eq.() with the random input vector (E,F,L,I).

Requirements  

the numerical sample of the input random vector : inputSample

type:

NumericalSample

the limit state function : poutre

type:

NumericalMathFunction

the input vector where gradient is evaluated : input0

type:

NumericalPoint

 
Results  

the min and max of the output variable of interest

type:

NumericalPoint

the deterministic gradient of the output variable of interest at input 0

type:

Matrix

 

Python script for this UseCase :

# Generate the values of the output variable of interest
# 'output = poutre(input)' corresponding to 'inputSample'
outputSample = poutre(inputSample)
print "outputSample = ", outputSample

# Get the min and the max of the output variable, component by component
min = outputSample.getMin()
max = outputSample.getMax()
print "max = ", max
print "min = ", min

# Get the gradient of 'poutre' with respect to 'input'
# at a particular point 'input0'
sensitivity = poutre.gradient(input0)
print "sensitivity at point input0 = ", sensitivity


3.3.8 UC : Min/Max search with an optimization algorithm

The objective of this Use Case is to evaluate the min and max values of the output variable of interest when the input vector of the limit state function varies within the interval [a, b] of R^n, whose bounds may be infinite.

Details on design of experiments may be found in the Reference Guide ( see files Reference Guide - Step C – Min-Max approach using Optimization Algorithm ).

The example here illustrates how to get the min and max values of the limit state function limitStateFunction, defined from R^4 into R, when the interval [a, b] is :

[a, b] = [a1, b1] x ]-∞, b2] x [a3, +∞[ x ]-∞, +∞[

thanks to the TNC (Truncated Newton Constrained) algorithm parameterized with its default parameters.

Requirements  

the output variable of interest : outputVariable

type:

RandomVector, of type Composite (i.e. defined as the image of a RandomVector through a NumericalMathFunction), which must be of dimension 1

the starting point of the optimization search : startingPoint

type:

NumericalPoint of dimension 4

 
Results  

the interval where the optimization search will be performed : intervalOptim

type:

Interval

the min and max of the output variable of interest

type:

NumericalScalar

the input vectors where the output variable of interest is optimal : optimalInputVector

type:

NumericalPoint

 

Python script for this UseCase :

# STEP 1 : Create the interval where the optimization search will be performed

# Create the collection of the lower bounds and the upper bounds
# In the direction where the bound is infinite,
# only the sign of the value specified will be considered
lowerBoundVector = NumericalPoint( (a1, -1.0, a3, -1.0) )
upperBoundVector = NumericalPoint( (b1, b2, 1.0, 1.0) )

# Create the collection of flags indicating
# whether the bound is finite or infinite
# If the bound is finite, the corresponding flag must be True or 1
# If the bound is infinite, the corresponding flag must be False or 0
finiteLowerBoundVector = BoolCollection( (1, 0, 1, 0) )
finiteUpperBoundVector = BoolCollection( (1, 1, 0, 0) )

# Create the optimization interval
# For each direction i, if the flag is True, the value is the one specified
# in the corresponding BoundVector
# If not, the value is infinite, and the sign is the one of the value specified
# in the corresponding BoundVector
intervalOptim = Interval(lowerBoundVector, upperBoundVector, finiteLowerBoundVector, finiteUpperBoundVector)

# STEP 2 : Create the optimization algorithm TNC

# Extract the limit state function from the output variable of interest
limitStateFunction = outputVariable.getFunction()

# For the search of the min value
myAlgoTNC = TNC(TNCSpecificParameters(), limitStateFunction, intervalOptim, startingPoint, TNC.MINIMIZATION)

# For the search of the max value
myAlgoTNC = TNC(TNCSpecificParameters(), limitStateFunction, intervalOptim, startingPoint, TNC.MAXIMIZATION)

# STEP 3 : Run the search and extract the results

myAlgoTNC.run()
myAlgoTNCResult = BoundConstrainedAlgorithm(myAlgoTNC).getResult()

# Get the optimal value of the output variable of interest
# (min or max one)
optimalValue = myAlgoTNCResult.getOptimalValue()

# Get the input vector value associated to the optimal value
optimalInputVector = myAlgoTNCResult.getOptimizer()


3.4 Random approach : central uncertainty

In this section, we focus on the random approach which aims at evaluating the central tendency of the output variable of interest.

In order to evaluate the central tendency of the output variable of interest described by a numerical sample, it is possible to use all the functionalities described in the Use Case 1.2.14.

The Use Case 3.4.7 describes the correlation analysis we can perform between the input random vector, described by a numerical sample, and the output variable of interest described by a numerical sample too.

3.4.1 UC : Sensitivity analysis : Sobol indices

The objective of the Use Case is to quantify the correlation between the input variables and the output variable of a model described by a numerical function : this is called sensitivity analysis. The Sobol indices make it possible to evaluate the importance of a single variable or of a specific set of variables. Here the Sobol indices are estimated by sampling, from two input samples and a numerical function.

In theory, the Sobol indices range within [0, 1]; the closer the index value is to 1, the more important the variable is with respect to the output of the function. The Sobol indices can be computed at different orders.

The first order indices evaluate the importance of one variable at a time (d indices stored in a NumericalPoint, with d the input dimension of the model).

The second order indices evaluate the importance of every pair of variables (d(d-1)/2 indices stored in a SymmetricMatrix).

The d total order indices measure, for each variable X_i, its total contribution to the output variance, including all its interactions with the other variables (i.e. one minus the contribution of all the variables except X_i).

To evaluate the indices (variances of the conditional expectation of the output variable), OpenTURNS needs two numerical samples of the input variables, independent of each other, of the same size, and generated according to the input variables distribution. The computation of the first and total order indices requires N x (d + 2) calls to the function, and N x (2d + 2) calls for the first, second and total order indices.
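
For instance, the two samples can simply be drawn from the joint input distribution. A minimal sketch, assuming a Distribution object inputDist of dimension d (as used in the script below) exposing the getSample method, and a Monte Carlo budget N :

# Draw two independent samples of size N from the input distribution
N = 1000
inputSample1 = inputDist.getSample(N)
inputSample2 = inputDist.getSample(N)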

Details on the Sobol indices may be found in the Reference Guide ( see files Reference Guide - Step C' – Sensitivity analysis using Sobol indices ).

Requirements  

two independent input samples : inputSample1, inputSample2, whose marginals are independently distributed

type:

NumericalSample

a function : model, whose input dimension must fit the dimension of the two samples

type:

NumericalMathFunction

 
Results  

the different Sobol indices

type:

NumericalPoint, for first and total indices

type:

SymmetricMatrix, for second order indices

 

Python script for this UseCase :

script_docUC_CentralUncertainty_SobolIndices.py

# Initialize computation
sensitivityAnalysis = SensitivityAnalysis(inputSample1, inputSample2, model)

# Allow multithreading if available
sensitivityAnalysis.setBlockSize(int(ResourceMap.Get("parallel-threads")))

# Compute second order indices (first, second and total order indices are
# computed together)
secondOrderIndices = sensitivityAnalysis.getSecondOrderIndices()

# Retrieve first order indices
firstOrderIndices = sensitivityAnalysis.getFirstOrderIndices()

# Retrieve total order indices
totalOrderIndices = sensitivityAnalysis.getTotalOrderIndices()

# Print some indices
print("First order Sobol indice of Y|X1 = %.6f" % firstOrderIndices[0])
print("Total order Sobol indice of Y|X3 = %.6f" % totalOrderIndices[1])
print("Second order Sobol indice of Y|X1,X3 = %.6f" % secondOrderIndices[0, 1])

# Draw first order indices
s1 = NumericalPointWithDescription(firstOrderIndices)
s1.setDescription(inputDist.getDescription())
ifPlot = sensitivityAnalysis.DrawImportanceFactors(s1)
ifPlot.setTitle("First Order Sensitivity Indices")
try:
    from openturns.viewer import View
    View(ifPlot).show()
except:
    pass


3.4.2 UC : Sensitivity analysis : ANCOVA indices

The objective of the Use Case is to estimate a generalization of the Sobol indices for a model with correlated inputs. These indices measure the contribution of the input variables to the variance of the output and distinguish which part of this contribution is due to the variable itself and which part is due to its correlation with the other input parameters.

In theory, the ANCOVA indices range within [0, 1]; the closer to 1 the index is, the greater the model response sensitivity to the variable is. Each index S_i is the sum of a physical (uncorrelated) part S_i^U and a correlated part S_i^C. The correlation has a weak influence on the contribution of X_i if |S_i^C| is low and S_i is close to S_i^U. On the contrary, the correlation has a strong influence on the contribution of the input X_i if |S_i^C| is high and S_i is close to S_i^C.

The ANCOVA indices S_i evaluate the importance of one variable at a time (d indices stored in a NumericalPoint, with d the input dimension of the model). The d uncorrelated parts of the output variance due to each input, S_i^U, are stored in another NumericalPoint, and the effects of the correlation are represented by the indices resulting from the subtraction of these two lists.

To evaluate the indices, OpenTURNS needs a functional chaos result approximating the model response with uncorrelated inputs, and a sample with correlated inputs used to compute the real values of the output. The sample dimension must be equal to the number of inputs of the model.

Details on the ANCOVA decomposition method may be found in the Reference Guide ( see files Reference Guide - Step C' – Sensitivity analysis for models with correlated inputs ).

Requirements  

a functional chaos result : functionalChaosResult, which is built with an independent joint distribution

type:

FunctionalChaosResult

an input sample : inputSample, which contains correlated inputs.

type:

NumericalSample

 
Results  

the different ANCOVA indices

type:

NumericalPoint, for the first-order sensitivity indices

type:

NumericalPoint, for the uncorrelated part of the latter indices

 

Python script for this UseCase :

script_docUC_CentralUncertainty_ANCOVAIndices.py

# Initialize computation of the indices
ancova = ANCOVA(result, sample)
# Compute the ANCOVA indices (first order and uncorrelated indices are
# computed together)
indices = ancova.getIndices()
# Retrieve uncorrelated indices
uncorrelatedIndices = ancova.getUncorrelatedIndices()
# Retrieve correlated indices
correlatedIndices = indices - uncorrelatedIndices

# Print indices
print("ANCOVA indices ", indices)
print("ANCOVA uncorrelated indices ", uncorrelatedIndices)
print("ANCOVA correlated indices ", correlatedIndices)


3.4.3 UC : Sensitivity analysis : FAST indices

The objective of the Use Case is to quantify the correlation between the input variables and the output variable of a model described by a numerical function : this is called sensitivity analysis. The FAST method, based upon the Fourier decomposition of the model response, is a relevant alternative to the classical simulation approach (see Use Case 3.4.1) for computing Sobol sensitivity indices. The FAST indices, like the Sobol indices, make it possible to evaluate the importance of a single variable or of a specific set of variables.

In theory, the FAST indices range within [0, 1]; the closer to 1 the index is, the greater the model response sensitivity to the variable is. The FAST method computes the first and total order indices.

The first order indices evaluate the importance of one variable at a time (d indices stored in a NumericalPoint, with d the input dimension of the model).

The d total order indices measure, for each variable X_i, its total contribution to the output variance, including all its interactions with the other variables.

Details on the FAST method may be found in the Reference Guide ( see files Reference Guide - Step C' – Sensitivity analysis by Fourier decomposition ).

Requirements  

an independent joint distribution : distribution

type:

Distribution

a function : model, whose input dimension must fit the dimension of the distribution

type:

NumericalMathFunction

sample size : N, from which the Fourier series are calculated. It represents the length of the discretization of the s-space.

type:

int

number of resamplings : Nr, which enables the procedure to be repeated Nr times, the final results being the arithmetic means over the Nr estimates.

type:

int

the interference factor : M, usually equal to 4 or higher. It corresponds to the truncation level of the Fourier series, i.e. the number of harmonics that are retained in the decomposition.

type:

int

 
Results  

the different FAST indices

type:

NumericalPoint, for first and total indices

 

Python script for this UseCase :

script_docUC_CentralUncertainty_FASTIndices.py

# Initialize computation of the indices
sensitivityAnalysis = FAST(model, distribution, 400)
# Compute the first order indices (first and total order indices are
# computed together)
firstOrderIndices = sensitivityAnalysis.getFirstOrderIndices()
# Retrieve total order indices
totalOrderIndices = sensitivityAnalysis.getTotalOrderIndices()

# Print indices
print("First order FAST indice of Y|X1 = %.6f" % firstOrderIndices[0])
print("Total order FAST indice of Y|X3 = %.6f" % totalOrderIndices[2])


3.4.4 UC : Sensitivity analysis : Cobweb graph

The Cobweb graph makes it possible to visualize all the combinations of the input variables which lead to a specific range of the output variable.

Let's suppose a model f from R^n into R, where f(X) = Y.

The graph requires a numerical sample inputSample of X and the output sample of Y defined as :

outputSample=f(inputSample)

Figure 41 shows such a graph : each column represents one component X_i of the input vector X. The last column represents the scalar output variable Y.

For each point X^j of inputSample, each component X_i^j is marked on its respective axis and the last mark corresponds to the associated Y^j. A line joins all the marks. Thus, each point of the sample corresponds to a particular line on the graph.

The scales of the axes are quantile based : each axis runs between 0 and 1 and each value is represented by its quantile with respect to its marginal empirical distribution.

It is interesting to select, among those lines, the ones which correspond to a specific range of the output variable. These particular lines are colored differently. This specific range is defined in the quantile based scale of Y or in its specific scale. In that second case, the range is automatically converted into a quantile based scale range.

Figure 41 : The Cobweb graph : a graphical sensitivity analysis tool.

Figure 41 has been obtained with the following example. The model is

f : R^2 → R

where

y = f(x1, x2) = x1^2 + x2

The input random vector X follows a bidimensional Normal distribution with zero mean, unit standard deviations and a correlation factor ρ = -0.6.

A numerical sample of X of size 500 has been generated and the associated output values have been evaluated.
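
A hedged sketch of how this example can be set up; the constructor signatures below (a Normal distribution built from a mean point, a standard deviation point and a correlation matrix, and an analytical NumericalMathFunction) are assumptions consistent with the rest of this guide :

# Correlated bidimensional Normal input : zero mean, unit standard deviations,
# correlation factor -0.6
R = CorrelationMatrix(2)
R[0, 1] = -0.6
inputDistribution = Normal(NumericalPoint(2, 0.0), NumericalPoint(2, 1.0), R)

# Analytical model y = x1^2 + x2
model = NumericalMathFunction(["x1", "x2"], ["y"], ["x1^2 + x2"])

# Input sample of size 500 and associated output sample
inputSample = inputDistribution.getSample(500)
outputSample = model(inputSample)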

The red curves are the lines where the output variable is in the range [0.9,1] in the rank based scale.

We conclude that the high values of Y are mainly obtained for low values of X_1 and high values of X_2.

Requirements  

the input sample inputSample

type:

a NumericalSample

the corresponding output sample outputSample

type:

a NumericalSample

 
Results  

the Cobweb graph

type:

a Graph

 

Python script for this UseCase :

script_docUC_CentralUncertainty_CobWebGraph.py

####################################
# Graph 1 : value based scale to describe the Y range
####################################
minValue = 3
maxValue = 20
myCobweb = VisualTest.DrawCobWeb(
    inputSample, outputSample, minValue, maxValue, 'red', False)
# View(myCobweb).show()

####################################
# Graph 2 : rank based scale to describe the Y range
####################################
minValue = 0.9
maxValue = 1
myCobweb2 = VisualTest.DrawCobWeb(
    inputSample, outputSample, minValue, maxValue, 'red', True)
# View(myCobweb2).show()

# In order to increase the bounding box of the graph
bb = myCobweb2.getBoundingBox()
# define the increasing factor of the bounding box
factor = 1.1
bb[1] = factor * bb[1]
myCobweb2.setBoundingBox(bb)

# Save in a PNG file
myCobweb2.draw('cobWeb', 640, 480, GraphImplementation.PNG)
# Save in any formats
myCobweb2.draw('cobWeb')


3.4.5 UC : Moments evaluation from the Taylor variance decomposition method and evaluation of the associated importance factors

The objective of this Use Case is to evaluate the mean and standard deviation of the output variable of interest thanks to the Taylor variance decomposition method of order one or two.

Details on the Taylor variance decomposition method may be found in the Reference Guide ( see files Reference Guide - Step C – Taylor variance decomposition / Perturbation Method and Step C' – importance Factors derived from Taylor Variance Decomposition Method).

Requirements  

the output variable of interest : output, of dimension 1

type:

RandomVector

 
Results  

Mean and covariance of the variable of interest

type:

NumericalPoint, Matrix

Importance factors from the Taylor variance decomposition method only for output of dimension 1

type:

NumericalPoint

 

Python script for this UseCase :

script_docUC_CentralUncertainty_TaylorVarDecomposition.py

# Create a quadraticCumul algorithm
myQuadraticCumul = QuadraticCumul(output)

# Compute the several elements provided by the quadratic cumul algorithm
# First order mean
print("First order mean=", myQuadraticCumul.getMeanFirstOrder())
# Second order mean
print("Second order mean=", myQuadraticCumul.getMeanSecondOrder())
# Covariance Matrix
print("Covariance=", myQuadraticCumul.getCovariance())
# Importance factors
# CARE : for this calculation only, the output variable of interest must be
# of dimension 1
print("Importance factors=", myQuadraticCumul.getImportanceFactors())

# Graph 1 : Importance Factors graph
importanceFactorsGraph = myQuadraticCumul.drawImportanceFactors()

# In order to see the graph without creating the associated files
# View(importanceFactorsGraph).show()

# Create the .PNG, .EPS and .FIG files
importanceFactorsGraph.draw("ImportanceFactorsDrawingQuadraticCumul")

# Get the value of the limit state function at the mean point
meanValue = myQuadraticCumul.getValueAtMean()

# Get the gradient value of the limit state function at the mean point
meanGradient = myQuadraticCumul.getGradientAtMean()

# Get the hessian value of the limit state function at the mean point
meanHessian = myQuadraticCumul.getHessianAtMean()

Figure 42 shows an importance factors pie chart evaluated from the Taylor variance decomposition method, for the beam example described in Eq.() before, where (a construction sketch of these marginals follows the list) :

  • E follows the Beta(r=0.94, t=3.19, a=2.78e7, b=4.83e7) distribution,

  • F follows the LogNormal(μ=3e5, σ=9e3, γ=1.5e4) distribution,

  • L follows the Uniform(a=250, b=260) distribution,

  • I follows the Beta(r=2.5, t=4.0, a=3.1e2, b=4.5e2) distribution,

  • the four components are independent.
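
A hedged sketch of how these four marginals can be built; the LogNormal is assumed here to be parameterized by the mean, standard deviation and location of the variable itself (LogNormal.MUSIGMA parameter set), so treat the exact constructor arguments as assumptions :

# Marginal distributions of the beam example (independent components)
E_dist = Beta(0.94, 3.19, 2.78e7, 4.83e7)
F_dist = LogNormal(3.0e5, 9.0e3, 1.5e4, LogNormal.MUSIGMA)
L_dist = Uniform(250., 260.)
I_dist = Beta(2.5, 4.0, 3.1e2, 4.5e2)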

Figure 42 : Importance Factors from the Taylor variance decomposition method in the beam example.

3.4.6 UC : Moments evaluation of a random sample of the output variable of interest

The objective of this Use Case is to evaluate the mean and standard deviation of the output variable of interest by generating a random sample of this variable and computing the empirical indicators from that sample.

Details on empirical moments evaluation may be found in the Reference Guide ( see files Reference Guide - Step C – Estimating the mean and variance using the Monte Carlo Method ).

Requirements  

the output variable of interest : output, of dimension 1

type:

RandomVector whose implementation is a CompositeRandomVector

 
Results  

Mean and covariance of the variable of interest

type:

NumericalPoint, CovarianceMatrix

Covariance matrix and its Cholesky factor of the variable of interest

type:

CovarianceMatrix, SquareMatrix

 

Python script for this UseCase :

script_docUC_CentralUncertainty_MomentsEvaluation.py

# Create a random sample of the output variable of interest of size 1000
size = 1000
outputSample = output.getSample(size)

# Get the empirical mean
empiricalMean = outputSample.computeMean()
print("Empirical Mean = ", empiricalMean)

# Get the empirical covariance matrix
empiricalCovarianceMatrix = outputSample.computeCovariance()
print("Empirical Covariance Matrix = ")
print(empiricalCovarianceMatrix)

# Get the Cholesky factor of the covariance matrix C
# C = LL^t
choleskyMatrix = outputSample.computeStandardDeviation()
print("chol = ")
print(choleskyMatrix)


3.4.7 UC : Correlation analysis on samples : Pearson and Spearman coefficients, PCC, PRCC, SRC, SRRC coefficients

This Use Case describes the correlation analysis we can perform between the input random vector, described by a numerical sample, and the output variable of interest described by a numerical sample too.

Details on the correlation coefficients may be found in the Reference Guide ( see files Reference Guide - Step C' – Uncertainty Ranking using Pearson's correlation ).

Requirements  

a numerical sample : inputSample, which may be of dimension n >= 1

type:

NumericalSample

two scalar numerical samples : inputSample2, outputSample

type:

NumericalSample

 
Results  

the different correlation coefficients : PCCcoefficient, PRCCcoefficient, SRCcoefficient, SRRCcoefficient, pearsonCorrelation, spearmanCorrelation

type:

NumericalPoint

 

Python script for this UseCase :

script_docUC_CentralUncertainty_CorrelationAnalysis.py

# PCC coefficients evaluated between the outputSample and each coordinate
# of inputSample
PCCcoefficient = CorrelationAnalysis.PCC(inputSample, outputSample)
print('PCC coefficients = ', PCCcoefficient)

# PRCC evaluated between the outputSample and each coordinate of
# inputSample (based on the rank values)
PRCCcoefficient = CorrelationAnalysis.PRCC(inputSample, outputSample)
print('PRCC coefficients = ', PRCCcoefficient)

# SRC evaluated between the outputSample and each coordinate of inputSample
SRCcoefficient = CorrelationAnalysis.SRC(inputSample, outputSample)
print('SRC coefficients = ', SRCcoefficient)

# SRRC evaluated between the outputSample and each coordinate of
# inputSample (based on the rank values)
SRRCcoefficient = CorrelationAnalysis.SRRC(inputSample, outputSample)
print('SRRC coefficients = ', SRRCcoefficient)

# Pearson Correlation Coefficient
# CARE : inputSample2 must be of dimension 1
pearsonCorrelation = CorrelationAnalysis.PearsonCorrelation(
    inputSample2, outputSample)
print('Pearson correlation = ', pearsonCorrelation)

# Spearman Correlation Coefficient
# CARE : inputSample2 must be of dimension 1
spearmanCorrelation = CorrelationAnalysis.SpearmanCorrelation(
    inputSample2, outputSample)
print('Spearman correlation = ', spearmanCorrelation)


3.4.8 UC : Quantile estimations : Wilks and empirical estimators

The objective of this Use Case is to evaluate a particular quantile, with the empirical estimator or the Wilks one, from a numerical sample of the random variable. Each estimation is associated with a confidence interval, whose level is specified.

Details on probability estimators may be found in the Reference Guide ( see files Reference Guide - Step C – Estimating a quantile by Sampling / Wilks Method ).

Let's suppose we want to estimate the quantile q_α of order α of the variable Y, defined by P(Y <= q_α) = α, from the numerical sample (Y_1,...,Y_n) of size n, with a confidence level equal to β. We note (Y_(1),...,Y_(n)) the numerical sample where the values are sorted in ascending order.

The empirical estimator, noted q_α^emp, and its confidence interval are defined by the expressions :

q_α^emp = Y_(E[nα]),   P( q_α ∈ [Y_(i_n), Y_(j_n)] ) = β   with   i_n = E[nα - a_β √(nα(1-α))]   and   j_n = E[nα + a_β √(nα(1-α))],

where a_β is the quantile of order (1+β)/2 of the standard normal distribution and E[.] denotes the integer part.

The Wilks estimator, noted q_α,β^Wilks, and its confidence interval are defined by the expressions :

q_α,β^Wilks = Y_(n-i),   P( q_α <= q_α,β^Wilks ) >= β   for   i >= 0   and a sample size   n >= N_Wilks(α, β, i)

Once the order i has been chosen, the Wilks number N_Wilks(α, β, i) is evaluated by OpenTURNS, thanks to the static method ComputeSampleSize(α, β, i) of the Wilks object.

In the example, we want to evaluate a quantile α=95%, with a confidence level of β=90% thanks to the 4th maximum of the ordered sample (associated to the order i=3).

Care : i = 0 means that the Wilks estimator is the maximum of the numerical sample (the first maximum of the ordered sample).

Before any simulation, we initialise the state of the random generator.

The method computeQuantile evaluates the empirical quantile from a numerical sample of any dimension n >= 1. However, the evaluation of the confidence interval is provided only in dimension 1.

Furthermore, the Wilks estimator and its confidence interval are evaluated in dimension 1 only.

Requirements  

the output variable of interest of dimension 1 : output

type:

RandomVector

 
Results  

the quantile estimators

type:

NumericalScalar

Confidence Interval length

type:

NumericalScalar

 

Python script for this UseCase :

script_docUC_CentralUncertainty_QuantileEstimation.py

from math import sqrt

# Order of the quantile to estimate
alpha = 0.95

# Confidence level of the estimation
beta = 0.90

# Empirical Quantile Estimator

# Get the numerical sample of the variable
N = 10 ** 4
outputNumericalSample = output.getSample(N)

# Get the empirical estimation
empiricalQuantile = outputNumericalSample.computeQuantile(alpha)

# Confidence interval of the Empirical Quantile Estimator
# Get the indices of the confidence interval bounds
aAlpha = Normal(1).computeQuantile((1 + beta) / 2)[0]
min = int(N * alpha - aAlpha * sqrt(N * alpha * (1 - alpha)))
max = int(N * alpha + aAlpha * sqrt(N * alpha * (1 - alpha)))

# Get the sorted numerical sample
sortedOutputNumericalSample = outputNumericalSample.sort()

# Get the Confidence interval [infQuantile, supQuantile]
infQuantile = sortedOutputNumericalSample[min - 1]
supQuantile = sortedOutputNumericalSample[max - 1]

# Wilks Quantile Estimator

# Get the Wilks number : the minimal number of realizations to perform
# in order to guarantee that the empirical quantile alpha is greater than
# the theoretical one with a probability of beta,
# when the empirical quantile is evaluated with the (n-i)th maximum of the sample.
# For the example, we consider alpha=0.95, beta=0.90 and i=3
# By default, i=0 (empirical quantile = maximum of the sample)
i = 3
wilksNumber = Wilks.ComputeSampleSize(0.95, 0.90, i)

# Get the numerical sample of the variable
outputNumericalSample = output.getSample(wilksNumber)

# Get the sorted numerical sample
sortedOutputNumericalSample = outputNumericalSample.sort()

# Calculate the index of the Wilks quantile
indice = wilksNumber - i

# Get the empirical estimation
wilksQuantile = sortedOutputNumericalSample[indice]
print("wilks Quantile 0.95 = ", wilksQuantile)


3.5 Random approach : threshold exceedance

In this section, we focus on the random approach which aims at evaluating the probability of an event, defined as a threshold exceedance.

3.5.1 UC : Creation of an event in the physical and the standard spaces

This section gives elements to create events in the physical space Event and in the standard space StandardEvent.

Details on isoprobabilistic transformations may be found in the Reference Guide ( see files Reference Guide - Step C – Isoprobabilistic transformation preliminary to FORM-SORM methods ).

The script below creates the event based on the scalar output variable output and defined by :

myEvent={output>4}.
Requirements  

the scalar output variable of interest : output

type:

RandomVector whose implementation is a CompositeRandomVector

 
Results  

the events in the physical and standard spaces : myEvent, myStandardEvent

type:

Event and StandardEvent

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_Event.py

# Create the event Y > 4
threshold = 4
myEvent = Event(output, Greater(), threshold)

# Build a standard event based on an event
myStandardEvent = StandardEvent(myEvent)


3.5.2 UC : Manipulation of a StandardEvent

This section gives elements to manipulate a StandardEvent in OpenTURNS .

Details on isoprobabilistic transformations may be found in the Reference Guide ( see files Reference Guide - Step C – Isoprobabilistic transformation preliminary to FORM-SORM methods ).

The script below manipulates the event based on the scalar output variable output and defined by :

myEvent={output>4}.

and

output=model(input)

The event is expressed within the physical space (input-space) or within the standard space.

Requirements  

the event expressed in the physical space : myEvent

type:

Event

 
Results  

the associated event in the standard space : myStandardEvent

type:

StandardEvent

some realizations of the standard event

type:

NumericalSample

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_StandardEventManipulation.py

# Create the associated standard event in the standard space
myStandardEvent = StandardEvent(myEvent)

# Realization of 'myStandardEvent' as a Bernoulli
print("myStandardEvent realization=", myStandardEvent.getRealization())

# Sample of 10 realizations of 'myStandardEvent' as a Bernoulli
print("myStandardEvent sample=", myStandardEvent.getSample(10))


3.5.3 UC : Creation of an analytical algorithm : FORM/SORM

The objective of this Use Case is to create an analytical algorithm, FORM or SORM, in order to ultimately evaluate the event probability with the FORM or SORM method and all the associated reliability indicators.

Details on FORM algorithm may be found in the Reference Guide ( see files Reference Guide - Step C – FORM ).

Be careful : the output vector of interest, defined in the Event, must be of type CompositeRandomVector, which means defined from the relation : output = limitStateFunction(input).

The constrained optimization algorithms available in OpenTURNS are :

  • Abdo-Rackwitz,

  • Cobyla, which doesn't require the gradient evaluation of the limit state function,

  • SQP.

The convergence is controlled by the evaluation of the following errors, evaluated in the standard space:

  • the absolute error, which is the distance between two successive iterates :

    ε_abs = || u_{n+1} - u_n ||

  • the constraint error, which is the absolute value of the constraint function minus the threshold s :

    ε_cons = | f(u_n) - s |

  • the relative error, which is the relative distance between two successive iterates (with respect to the second iterate) :

    ε_rel = || u_{n+1} - u_n || / || u_{n+1} ||

  • the residual error, which is the orthogonality error (lack of orthogonality between the vector joining the center to the iterate and the constraint surface) :

    ε_res = < u_n , ∇f(u_n) >

The algorithm stops either when all the final error values are less than the maximum values specified by the User (convergence) or when the maximum number of iterations fixed by the User is reached.

It is often useful to initialize the optimization algorithm with the mean of the input random vector, obtained thanks to the method getMean().

Requirements  

the event in physical space : myEvent

type:

Event

the distribution of the input random vector : inputDist

type:

Distribution

 
Results  

the FORM algorithm : myFORMalgo

type:

FORM

the SORM algorithm : mySORMalgo

type:

SORM

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_FORMAlgorithm.py

# Create a NearestPoint algorithm with the Cobyla algorithm
myCobyla = Cobyla()

# It is possible to change the default values of the specific parameters :
myValue = 0.2
myCobyla.setRhoBeg(myValue)

# Change the general parameters of the algorithm
myCobyla.setMaximumIterationNumber(100)
myCobyla.setMaximumAbsoluteError(1.0e-6)
myCobyla.setMaximumRelativeError(1.0e-6)
myCobyla.setMaximumResidualError(1.0e-6)
myCobyla.setMaximumConstraintError(1.0e-6)
print("myCobyla=", myCobyla)

# Create a NearestPoint algorithm
# with the AbdoRackwitz algorithm
myAbdoRackwitz = AbdoRackwitz()
# or with the SQP algorithm
mySQP = SQP()

# Create a FORM or SORM algorithm :
# The first parameter is a NearestPointAlgorithm
# The second parameter is an Event in the physical space
# The third parameter is a starting point for the design point search

# The starting point is fixed to the mean of the input distribution
myStartingPoint = inputDist.getMean()

myAlgoFORM = FORM(myCobyla, myEvent, myStartingPoint)
print("FORM=", myAlgoFORM)

myAlgoSORM = SORM(myCobyla, myEvent, myStartingPoint)
print("SORM=", myAlgoSORM)


3.5.4 UC : Run and results exploitation of a FORM/SORM algorithm : probability estimation, importance factors, reliability indexes, sensitivity factors

The objective of this Use Case is to launch the analytical algorithm and exploit all the associated results :

  • the design point in both physical and standard space,

  • the probability estimation according to the FORM approximation, and the following SORM ones : Tvedt, Hohen-Bichler and Breitung,

  • the Hasofer reliability index and the generalized ones evaluated from the Breitung, Tvedt and Hohen-Bichler approximations,

  • the classical importance factors defined as the normalized director factors of the design point in the U-space :

    α_i^2 = (u_i^*)^2 / β_HL^2    (14)

    if we note u^* the design point in the U-space. These importance factors are accessible with the methods getImportanceFactors(True) or drawImportanceFactors(True), where the boolean True selects formula (14);

  • the innovative importance factors defined in (15) in the elliptical space of the iso-probabilistic transformation, where the marginal distributions are all elliptical, with cumulative distribution function noted E, and not yet decorrelated.

    In the case where the input distribution of X has an elliptical copula C_E, then E has the same type as C_E.

    In the case where the input distribution of X has a copula C which is not elliptical, then E = Φ, where Φ is the CDF of the standard normal distribution.

    This innovative definition of importance factors has the advantage of being one-to-one between the X_i components and the Y_i ones :

    α_i^2 = (y_i^*)^2 / ||y^*||^2    (15)

    where

    y^* = ( E^{-1}∘F_1(X_1^*), E^{-1}∘F_2(X_2^*), ..., E^{-1}∘F_n(X_n^*) ).    (16)

    with X^* the design point in the physical space. If not specified otherwise, the importance factors are evaluated from (15).

    Note that (14) and (15) match in the case of independent components X_i.

  • the sensitivity factors of the Hasofer reliability index and the FORM probability.

  • the coordinates of the mean point in the standard event space, given by (1 / E_1(-β)) ∫_β^{+∞} u_1 p_1(u_1) du_1, where E_1 is the univariate CDF of the spherical distribution of the standard space, p_1 its density and β the reliability index.

Note that it is possible to visualize the convergence quality of the algorithm, by drawing the history of the errors defined in the Use Case 3.5.3.

Details on FORM algorithm may be found in the Reference Guide ( see files Reference Guide - Step C – FORM and - Step Cp – Importance Factors from FORM-SORM methods ).

Warning! Check the quality of your gradient and hessian implementations as the SORM approximation relies on an accurate computation of the main curvatures of the limit state function at the design point, which needs an accurate evaluation of both the gradient and the hessian at this point.
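As a minimal cross-check (a sketch, not part of the original script : it assumes the FORM result resultFORM computed in the script below, and the step h = 1.0e-6 is an arbitrary illustration value), one may compare the analytical gradient of the model with a crude finite-difference estimate at the design point :

# Hedged sketch : crude finite-difference check of the gradient of 'model'
# at the physical space design point
x = resultFORM.getPhysicalSpaceDesignPoint()
h = 1.0e-6
analyticalGradient = model.gradient(x)
for i in range(x.getDimension()):
    xShifted = NumericalPoint(x)
    xShifted[i] = xShifted[i] + h
    finiteDifference = (model(xShifted)[0] - model(x)[0]) / h
    print(i, analyticalGradient[i, 0], finiteDifference)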

Requirements  

the analytical algorithm : myAlgoFORM, myAlgoSORM

type:

FORM or SORM

the limit state function model such as : output = model(input)

type:

NumericalMathFunction

 
Results  

SORM Event probabilities (Breitung, HohenBichler and Tvedt approximations)

type:

NumericalScalar

Reliability Index

type:

NumericalScalar

Importance factors

type:

NumericalPoint

Reliability index Sensitivity factors

type:

AnalyticalSensitivity

Mean point in event standard space

type:

NumericalPoint

graphs

type:

Graph

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_FORMExploitation.py

# Save the number of calls to the limit state function, its gradient and
# hessian already done
modelCallNumberBefore = model.getEvaluationCallsNumber()
modelGradientCallNumberBefore = model.getGradientCallsNumber()
modelHessianCallNumberBefore = model.getHessianCallsNumber()

# To have access to the input and output samples
# after the simulation, activate the History mechanism
model.enableHistory()

# Remove all the values stored in the history mechanism
# Care : it is done regardless of the status of the History mechanism
model.clearHistory()

# Perform the simulation
myAlgoFORM.run()

# Save the number of calls to the limit state function, its gradient and
# hessian done after the simulation
modelCallNumberAfter = model.getEvaluationCallsNumber()
modelGradientCallNumberAfter = model.getGradientCallsNumber()
modelHessianCallNumberAfter = model.getHessianCallsNumber()

# Display the number of evaluations of the limit state function
print("number of evaluations of the model = ",
      modelCallNumberAfter - modelCallNumberBefore)

# Stream out the result
resultFORM = myAlgoFORM.getResult()

# Check the convergence criteria of the algorithm
optimResult = resultFORM.getOptimizationResult()
# In particular, draw the error histories
graphErrors = optimResult.drawErrorHistory()
graphErrors.setLegendPosition('bottom')
# Show(graphErrors)

# Hasofer reliability index
print("Hasofer reliability index=", resultFORM.getHasoferReliabilityIndex())

# Generalized reliability index
# FORM study : the generalized reliability index is the Hasofer one
print("generalized reliability index=",
      resultFORM.getGeneralisedReliabilityIndex())

# Design point in the standard and physical spaces
print("standard space design point=", resultFORM.getStandardSpaceDesignPoint())
print("physical space design point=", resultFORM.getPhysicalSpaceDesignPoint())

# Is the standard point origin in failure space?
print("is standard point origin in failure space? ",
      resultFORM.getIsStandardPointOriginInFailureSpace())

# FORM probability of the event
print("event probability=", resultFORM.getEventProbability())

# Get the mean point in standard event space
print("Mean point in standard event space= ",
      resultFORM.getMeanPointInStandardEventDomain())

# Importance factors : numerical results
# Y-space definition
print("importance factors Y-space =", resultFORM.getImportanceFactors())
# U-space definition
print("importance factors U-space =", resultFORM.getImportanceFactors(True))

# GRAPH 1 : Importance Factors graph

# Y-space definition
importanceFactorsGraph = resultFORM.drawImportanceFactors()
# U-space definition
importanceFactorsGraph = resultFORM.drawImportanceFactors(True)

importanceFactorsGraph.draw("ImportanceFactorsDrawingFORM")

# In order to see the graph without creating the associated files
# View(importanceFactorsGraph).show()

# Hasofer Reliability Index Sensitivity : numerical results
hasoferReliabilityIndexSensitivity = resultFORM.getHasoferReliabilityIndexSensitivity()
print("hasoferReliabilityIndexSensitivity = ",
      hasoferReliabilityIndexSensitivity)

# GRAPH 2 : Hasofer Reliability Index Sensitivity Graphs

reliabilityIndexSensitivityGraphs = resultFORM.drawHasoferReliabilityIndexSensitivity()

# Sensitivity to the parameters of the marginals of
# the input random vector
graph2a = reliabilityIndexSensitivityGraphs[0]
graph2a.draw("HasoferReliabilityIndexMarginalSensitivityDrawing")

# In order to see the graph without creating the associated files
# View(graph2a).show()

# Sensitivity to the other parameters (dependence)
graph2b = reliabilityIndexSensitivityGraphs[1]
graph2b.draw("HasoferReliabilityIndexOtherSensitivityDrawing")

# or, in order to quickly draw it with the default options
# (default options : 640x480, files created in the current directory)
importanceFactorsGraph.draw("ImportanceFactorsDrawingFORM")

# In order to see the graph without creating the associated files
# View(graph2b).show()

# FORM Event Probability Sensitivity : numerical results
eventProbabilitySensitivity = resultFORM.getEventProbabilitySensitivity()
print("eventProbabilitySensitivity = ", eventProbabilitySensitivity)

# GRAPH 3 : FORM Event Probability Sensitivity Graphs

eventProbabilitySensitivityGraphs = resultFORM.drawEventProbabilitySensitivity()

# Sensitivity to the parameters of the marginals of the input random vector
graph3a = eventProbabilitySensitivityGraphs[0]
graph3a.draw("EventProbabilityIndexMarginalSensitivityDrawing")

# In order to see the graph without creating the associated files
# View(graph3a).show()

# Sensitivity to the other parameters (dependence)
graph3b = eventProbabilitySensitivityGraphs[1]
graph3b.draw("EventProbabilityIndexOtherSensitivityDrawing")

# In order to see the graph without creating the associated files
# View(graph3b).show()

# SORM study
# Additional results w.r.t the FORM study

# Perform the simulation
myAlgoSORM.run()
resultSORM = myAlgoSORM.getResult()

# Reliability index
# with Breitung approximation
print("Breitung generalized reliability index=",
      resultSORM.getGeneralisedReliabilityIndexBreitung())

# with HohenBichler approximation
print("HohenBichler generalized reliability index=",
      resultSORM.getGeneralisedReliabilityIndexHohenBichler())

# with Tvedt approximation
print("Tvedt generalized reliability index=",
      resultSORM.getGeneralisedReliabilityIndexTvedt())

# SORM probability of the event
# with Breitung approximation
print("Breitung event probability=", resultSORM.getEventProbabilityBreitung())

# with HohenBichler approximation
print("HohenBichler event probability=",
      resultSORM.getEventProbabilityHohenBichler())

# with Tvedt approximation
print("Tvedt event probability=", resultSORM.getEventProbabilityTvedt())

Figure 43 shows an importance factors pie chart evaluated with the FORM method, for the beam example described in Eq. (), where :

  • E follows the Beta(r=0.94, t=3.19, a=2.78e7, b=4.83e7) distribution,

  • F follows the LogNormal(μ=3e5, σ=9e3, γ=1.5e4) distribution,

  • L follows the Uniform(a=250, b=260) distribution,

  • I follows the Beta(r=2.5, t=4.0, a=3.1e2, b=4.5e2) distribution,

  • the four components are independent.

The output is expressed in centimeters.

The event considered is :

myEvent={output=f(input)-30}.
Importance factors from the FORM method.
Figure 43
Hasofer Reliability Index sensitivities with respect to the marginal parameters.
Figure 44
FORM probability sensitivities with respect to the marginal parameters.
Figure 45

Figure 46 shows the history of the four errors used to define the convergence of the algorithm, as a function of the iteration number.

Error history of the FORM algorithm.
Figure 46

3.5.5 UC : Validate the design point with the Strong Maximum Test

The objective of this Use Case is to detail the manipulation of the Strong Maximum Test.

Details on the Strong Max Test may be found in the Reference Guide ( see files Reference Guide - Step C – Strong Maximum Test : a design point validation ).

The Strong Maximum Test helps to evaluate the quality of the design point resulting from the optimization algorithm. It checks whether the design point computed is :

  • the true design point, which means a global maximum point,

  • a strong design point, which means that there is no other local maximum located on the event boundary whose likelihood is only slightly lower than that of the design point.

This verification is very important in order to give meaning to the FORM and SORM approximations.

We briefly recall here the main principles of the test, illustrated in Figure 47.

The objective is to detect all the points P ˜ * in the ball of radius R ε =β(1+δ ε ) which are potentially the real design point (case of P ˜ 2 * ) or whose contribution to P f is not negligible with regard to the FORM and SORM approximations (case of P ˜ 1 * ). The contribution of a point is considered negligible when its likelihood in the U-space is smaller than ε times the likelihood of the design point. The radius R ε is the distance to the U-space center beyond which points are considered negligible in the evaluation of P f .

In order to catch the potential points located on the sphere of radius R ε (frontier of the zone of prospection), it is necessary to go a little further : that is why the test samples the sphere of radius R=β(1+τδ ε ), with τ>0.

Points on the sampled sphere which are in the vicinity of the design point P * are less interesting than those verifying the event and located far from the design point : the latter might reveal a potential P ˜ * whose contribution to P f has to be taken into account. The vicinity of the design point is defined, with the angular parameter α, as the cone centered on P * and of half-angle α.

The number N of simulations sampling the sphere of radius R is determined so that the test detects, with a probability greater than (1-q), any point verifying the event and located outside the design point vicinity.

The Strong Maximum Test to validate the quality of the design point : uniqueness and strength
Figure 47

The vicinity of the Design Point is the arc of the sampled sphere which is inside the half-space whose frontier is the linearized limit state function at the Design Point (see Figure 48 : the vicinity is the arc included in the half-space D 1 ).

Vicinity of the Design Point in the standard space : the half space D 1
Figure 48

The Strong Maximum Test proceeds as follows. The User selects the parameters :

  • the importance level ε,

  • the accuracy level τ,

  • the confidence level (1-q) or the number of points N used to sample the sphere. These parameters can be deduced from one another.

The Strong Maximum Test will sample the sphere of radius $\beta(1+\tau\delta_\varepsilon)$, where $\delta_\varepsilon = \sqrt{1 - \frac{2\ln(\varepsilon)}{\beta^2}} - 1$.

The test will detect, with a probability greater than (1-q), any point of 𝒟 f whose contribution to P f is not negligible (i.e. whose density value in the U-space is greater than ε times the density value at the design point).
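As a small numerical illustration of these relations (a sketch : β = 3.0 is an arbitrary illustration value, ε and τ are the values used in the script below) :

from math import log, sqrt

beta = 3.0          # illustration value of the reliability index
epsilon = 0.15      # importance level
tau = 3.0           # accuracy level
deltaEps = sqrt(1.0 - 2.0 * log(epsilon) / beta ** 2) - 1.0
print('delta_eps = ', deltaEps)
print('R_eps     = ', beta * (1.0 + deltaEps))        # zone of prospection
print('R sampled = ', beta * (1.0 + tau * deltaEps))  # sphere actually sampled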

The Strong Maximum Test provides :

  • set 1 : all the points detected on the sampled sphere that are in 𝒟 f and outside the design point vicinity, with the corresponding value of the limit state function. These points are given thanks to the method getFarDesignPointVerifyingEventPoints. The respective values of the limit state function at these points are given thanks to the method getFarDesignPointVerifyingEventValues.

  • set 2 : all the points detected on the sampled sphere that are in 𝒟 f and in the design point vicinity, with the corresponding value of the limit state function. These points are given thanks to the method getNearDesignPointVerifyingEventPoints. The respective values of the limit state function at these points are given thanks to the method getNearDesignPointVerifyingEventValues.

  • set 3 : all the points detected on the sampled sphere that are outside 𝒟 f and outside the design point vicinity, with the corresponding value of the limit state function. These points are given thanks to the method getFarDesignPointViolatingEventPoints. The respective values of the limit state function at these points are given thanks to the method getFarDesignPointViolatingEventValues.

  • set 4 : all the points detected on the sampled sphere that are outside 𝒟 f but in the vicinity of the design point, with the corresponding value of the limit state function. These points are given thanks to the method getNearDesignPointViolatingEventPoints. The respective values of the limit state function at these points are given thanks to the method getNearDesignPointViolatingEventValues.

Points are described by their coordinates in the X-space.

The Reference Guide helps to quantify the parameters of the test.

As the Strong Maximum Test involves the computation of N values of the limit state function, which is computationally intensive, it is interesting to have more than just an indication about the quality of OP ̲ * . In fact, the test gives some information about the trace of the limit state function on the sphere of radius β(1+τδ ε ) centered on the origin of the U-space. Two cases can be distinguished:

  • Case 1: set 1 is empty. We are confident that OP ̲ * is a design point verifying the hypothesis according to which most of the contribution to P f is concentrated in the vicinity of OP ̲ * . By using the values of the limit state function on the sample (U ̲ 1 , ..., U ̲ N ), we can check whether the limit state function is reasonably linear in the vicinity of OP ̲ * , which can validate the second hypothesis of FORM.

    If the behaviour of the limit state function is not linear, we can decide to use an importance sampling version of the Monte Carlo method for computing the probability of failure (refer to Reference Guide - Step C – Importance Sampling ). However, the information obtained through the Strong Maximum Test, according to which OP ̲ * is the actual design point, is quite essential : it allows us to construct an effective importance sampling density, e.g. a multidimensional Gaussian distribution centered on OP ̲ * .

  • Case 2: set 1 is not empty. There are two possibilities:

    1. We have found some points that suggest that OP ̲ * is not a strong maximum, because for some points of the sampled sphere, the value taken by the limit state function is slightly negative;

    2. We have found some points that suggest that OP ̲ * is not even the global maximum, because for some points of the sampled sphere, the value taken by the limit state function is very negative.

      In the first case, we can decide to use an importance sampling version of the Monte Carlo method for computing the probability of failure, but with a mixture of, e.g., multidimensional Gaussian distributions centered on the U i located in 𝒟 f (refer to Reference Guide - Step C – Importance Sampling ). In the second case, we can restart the search of the design point from the detected U i .

Requirements  

the event of the FORM analysis described in the standard space myStandardEvent

type:

StandardEvent

the design point expressed in the standard space standardSpaceDesignPoint

type:

a NumericalPoint

 
Results  

all the points in 𝒟 f and outside the design point vicinity : potentialDesignPoints

type:

a NumericalSample

all the points in 𝒟 f and inside the design point vicinity : vicinityDesignPoint

type:

a NumericalSample

all the points outside 𝒟 f and outside the design point vicinity : farSecurityPoints

type:

a NumericalSample

all the points outside 𝒟 f and inside the design point vicinity : vicinitySecurityPoints

type:

a NumericalSample

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_StrongMaxTest.py

# Fix the importance level epsilon of the test
# Care : 0 < epsilon < 1
importanceLevel = 0.15

# Fix the accuracy level tau of the test
# Care : tau > 0
# It is recommended that tau < 4
accuracyLevel = 3

# Fix the confidence level (1-q) of the test
confidenceLevel = 0.99

# Create the Strong Maximum Test
# CARE : the event must be declared in the standard space
# 1. From the confidenceLevel parameter
mySMT_CL = StrongMaximumTest(
    myStandardEvent, standardSpaceDesignPoint, importanceLevel, accuracyLevel,
    confidenceLevel)

# 2. Or from the maximum number of points sampling the sphere
pointsNumber = 1000
mySMT_PN = StrongMaximumTest(
    myStandardEvent, standardSpaceDesignPoint, importanceLevel, accuracyLevel,
    pointsNumber)

# Perform the test
mySMT_CL.run()
mySMT_PN.run()

# Get (or evaluate) the confidence level
# associated to the number of points used to sample the sphere
print('Confidence level = ', mySMT_CL.getConfidenceLevel())

# Get (or evaluate) the number of points used to sample the sphere
# associated to the confidence level used
print('Points Number = ', mySMT_CL.getPointNumber())

# Get all the points verifying the event and outside the design point vicinity
# Get also the values of the limit state function at these points
potentialDesignPoints = mySMT_CL.getFarDesignPointVerifyingEventPoints()
values = mySMT_CL.getFarDesignPointVerifyingEventValues()
print('Potential design points = ', end=' ')
print(potentialDesignPoints)
print('Model values = ')
print(values)

# Get all the points verifying the event and inside the design point vicinity
# Get also the values of the limit state function at these points
vicinityDesignPoint = mySMT_CL.getNearDesignPointVerifyingEventPoints()
values = mySMT_CL.getNearDesignPointVerifyingEventValues()
print('Points verifying the Event in the vicinity of the design point = ', end=' ')
print(vicinityDesignPoint)
print('Model values = ')
print(values)

# Get all the points not verifying the event and outside the design point vicinity
# Get also the values of the limit state function at these points
farSecurityPoints = mySMT_CL.getFarDesignPointViolatingEventPoints()
values = mySMT_CL.getFarDesignPointViolatingEventValues()
print('Points NOT verifying the Event outside the vicinity of the design point = ', end=' ')
print(farSecurityPoints)
print('Model values = ')
print(values)

# Get all the points not verifying the event and inside the design point vicinity
# Get also the values of the limit state function at these points
vicinitySecurityPoints = mySMT_CL.getNearDesignPointViolatingEventPoints()
values = mySMT_CL.getNearDesignPointViolatingEventValues()
print('Points NOT verifying the Event inside the vicinity of the design point = ', end=' ')
print(vicinitySecurityPoints)
print('Model values = ')
print(values)


3.5.6 UC : Creation of a Monte Carlo / LHS / Quasi Monte Carlo / Importance Sampling simulation algorithm

The objective of this Use Case is to create a simulation algorithm in order to evaluate the probability of the specified event according to the specified method : Monte Carlo sampling, LHS sampling, Importance Sampling or the Quasi Monte Carlo method.

The Importance sampling method requires the specification of the importance distribution according to which the numerical sample will be generated.

For the LHS method, the copula of the multi-variate distribution must be independent.

As the quasi-Monte Carlo method is based on deterministic sequences, we do not need to initialize the state of the random generator.

Details on simulation algorithms may be found in the Reference Guide ( see files Reference Guide - Step C – Estimating the probability of an event using Sampling and files around).

Requirements  

the event whose probability we want to evaluate : myEvent

type:

Event

the importance distribution : myImportanceDist

type:

Distribution

 
Results  

the Monte Carlo simulation algorithm : myMCAlgo

type:

Simulation

the LHS simulation algorithm : myLHSAlgo

type:

Simulation

the Importance Sampling simulation algorithm : myISAlgo

type:

Simulation

 

Python script for this UseCase : script_docUC_ThresholdExceedance_SimulationAlgo.py

myEvent = Event(outputVector, Greater(), threshold)

# Create a Monte Carlo algorithm
myMCAlgo = MonteCarlo(myEvent)

# Create a LHS algorithm
# Care : the copula of the multi-variate distribution must be independent
myLHSAlgo = LHS(myEvent)

# Create a Randomized LHS algorithm
# Care : the copula of the multi-variate distribution must be independent
myRandomizedLHSAlgo = RandomizedLHS(myEvent)

# Create an Importance Sampling algorithm
# from the distribution myImportanceDist
myImportanceDist = Normal(NumericalPoint([4, 0]), IdentityMatrix(2))
myISAlgo = ImportanceSampling(myEvent, myImportanceDist)

# Create a Quasi Monte Carlo algorithm
# Care : the copula of the multi-variate distribution must be independent
myQMCAlgo = QuasiMonteCarlo(myEvent)

# Create a Randomized Quasi Monte Carlo algorithm
# Care : the copula of the multi-variate distribution must be independent
myRandomizedQMCAlgo = RandomizedQuasiMonteCarlo(myEvent)


3.5.7 UC : Creation of a Directional Sampling simulation algorithm

The Directional Sampling simulation operates in the standard space. The maximum distance explored in the standard space is equal to 8 by default (this value may be changed with the setMaximumDistance() method).
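A minimal sketch of changing this default (assumptions : the setMaximumDistance() method is carried by the root strategy objects such as SafeAndSlow, RandomDirection has a default constructor, and the value 10.0 is arbitrary) :

myRootStrategy = SafeAndSlow()
myRootStrategy.setMaximumDistance(10.0)
myAlgo = DirectionalSampling(myEvent, RootStrategy(myRootStrategy),
                             SamplingStrategy(RandomDirection()))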

Details on the directional sampling method may be found in the Reference Guide ( see files Reference Guide - Step C – Directional Sampling ).

The Directional Sampling simulation method is defined from :

  • an event,

  • a Root Strategy :

    • RiskyAndFast : for each direction, we check whether there is a sign change of the standard limit state function between the maximum distant point (at distance MaximumDistance from the center of the standard space) and the center of the standard space.

      In case of a sign change, we search for a root in the segment [origin, maximum distant point] with the selected non linear solver.

      As soon as it is found, the segment [root, infinity point] is considered to be within the failure space.

    • MediumSafe : for each direction, we go along the direction by steps of length stepSize from the origin to the maximum distant point (at distance MaximumDistance from the center of the standard space) and we check whether there is a sign change on each segment so formed.

      At the first sign change, we search for a root in the concerned segment with the selected non linear solver. Then, the segment [root, maximum distant point] is considered to be within the failure space.

      If stepSize is small enough, this strategy guarantees that we find the root nearest to the origin.

    • SafeAndSlow : for each direction, we go along the direction by steps of length stepSize from the origin to the maximum distant point (at distance MaximumDistance from the center of the standard space) and we check whether there is a sign change on each segment so formed.

      We go on until the maximum distant point. Then, for all the segments where we detected the presence of a root, we search for the root with the selected non linear solver. We evaluate the contribution of each segment to the failure probability.

      If stepSize is small enough, this strategy guarantees that we find all the roots in the direction, and the contribution of this direction to the failure probability is precisely evaluated.

  • a Non Linear Solver :

    • Bisection : bisection algorithm,

    • Secant : based on the evaluation of a segment between the two last iterated points,

    • Brent : mix of Bisection, Secant and inverse quadratic interpolation.

  • and a Sampling Strategy :

    • RandomDirection : we generate one point on the unit sphere according to the uniform distribution and we consider the two opposite directions so formed. So one set of directions is composed of 2 directions.

    • OrthogonalDirection : this strategy is parameterized by k. We generate one direct orthonormal basis (e 1 , ..., e n ) within the set of orthonormal bases. We consider all the renormalized linear combinations of k vectors among the n vectors of the basis, where the coefficients of the linear combinations are equal to ±1. There are $C_n^k 2^k$ such vectors v i , and we consider the direction defined by each vector v i . So one set of directions is composed of $C_n^k 2^k$ directions (see the sketch after this list).

      If k=1, we consider all the axes of the standard space.
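As a quick sketch of the size of one set of directions for the OrthogonalDirection strategy (illustration values n = 4, k = 2; math.comb requires Python 3.8 or later) :

from math import comb

n, k = 4, 2
print(comb(n, k) * 2 ** k)   # C_n^k * 2^k = 6 * 4 = 24 directions per set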

The example here is a Directional Sampling simulation method defined by :

  • its default parameters (CASE 1) : Root Strategy : Safe and Slow (default), Non Linear Solver : Brent algorithm, Sampling Strategy : Random Direction,

  • some other parameters (CASE 2) : Root Strategy : Medium Safe, Non Linear Solver : Brent algorithm, Sampling Strategy : Orthogonal Direction.

Requirements  

the event in physical space : myEvent

type:

Event

 
Results  

Directional Sampling algorithm

type:

Simulation

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_DirectionalSampling.py

# CASE 1 : Directional Sampling from an event
# Root Strategy by default : Safe And Slow
# Non Linear Solver : Brent algorithm
# Sampling Strategy : Random Direction

# Create a Directional Sampling algorithm
myAlgo = DirectionalSampling(myEvent)

# CASE 2 : Directional Sampling from an event, a root strategy
# and a directional sampling strategy
# Root Strategy : MediumSafe
# Non Linear Solver : Brent algorithm
# Sampling Strategy : Orthogonal Direction

# Create a Directional Sampling algorithm
# dimension of the input random vector
dimInput = 2
myAlgo2 = DirectionalSampling(myEvent, RootStrategy(MediumSafe()),
                              SamplingStrategy(OrthogonalDirection(dimInput, 2)))


3.5.8 UC : Parametrization of a simulation algorithm

The objective of this Use Case is to parameterize a simulation algorithm : parameters linked to the number of points generated, the precision of the probability estimator and the numerical sample storage strategy.

Details on the simulation algorithms may be found in the Reference Guide ( see files Reference Guide - Step C – Estimating the probability of an event using Sampling ).

The probability P is evaluated with a simulation method by the estimator P n defined as :

P_n = \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad X_i = \frac{1}{p} \sum_{k=1}^{p} Y_k^i \qquad (17)

The random variable $Y_k^i$ depends on the simulation method used :

  • with the Monte Carlo method, $Y_k^i = \mathbf{1}_{event}$,

  • with the Importance Sampling method, where the importance distribution pdf is denoted f and the pdf of the input vector distribution is denoted p, $Y_k^i = \mathbf{1}_{event}\, \frac{p}{f}$,

  • with the LHS method, Y k i is the Monte Carlo estimator when the sampling is restricted to one cell,

  • with the Directional Simulation, $Y_k^i$ is the sum, over one set of directions given by the Sampling Strategy, of the contribution of each direction to the event probability, this contribution being evaluated by the Root Strategy. With the RandomDirection Sampling Strategy, one set of directions is made of 2 directions. With the OrthogonalDirection Sampling Strategy parameterized by the integer q, one set of directions is made of $C_n^q 2^q$ directions.

The parameter n is called the OuterSampling and the parameter p the BlockSize.

In the Monte Carlo method, the limit state function is evaluated n*p times. In the Directional Simulation method, the limit state function is evaluated on average n*p*(mean number of evaluations of the limit state function on one set of directions) times.
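For instance (illustration values) : a Monte Carlo run with OuterSampling n = 250 and BlockSize p = 4 evaluates the limit state function n × p = 1000 times.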

For the Directional Simulation method, it is recommended to fix BlockSize=1.

The simulation algorithm runs the simulations until its stopping condition is verified, which is the case as soon as one of the following criteria is fulfilled :

  • the number n of external iterations has reached its maximum value, specified by the method setMaximumOuterSampling,

  • the coefficient of variation of P n has gone under its maximum value, specified by the method setMaximumCoefficientOfVariation,

  • the standard deviation of P n has gone under its maximum value, specified by the method setMaximumStandardDeviation.

To deactivate one of these criteria, set it to a value that will never be reached : a very large value for the maximum outer sampling, and the value 0 for the maximum coefficient of variation and the maximum standard deviation.
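For instance, a sketch where only the maximum outer sampling criterion remains active (illustration values, assuming myAlgo is a Simulation algorithm) :

myAlgo.setMaximumOuterSampling(100000)
myAlgo.setMaximumCoefficientOfVariation(0.0)
myAlgo.setMaximumStandardDeviation(0.0)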

If not specified, OpenTURNS implements some default values :

  • MaximumOuterSampling = 1000,

  • BlockSize = 1,

  • MaximumCoefficientOfVariation = 0.1,

  • MaximumStandardDeviation = 0.0.

OpenTURNS makes it possible to :

  • store the numerical sample of the input random vector and the associated one of the output random vector which have been used to evaluate the Monte Carlo probability estimator.

  • draw the convergence graph of the probability estimator with the confidence curves associated to a specified level. Values of P n and σ n 2 (empirical variance of the estimator P n ) are stored according to a particular HistoryStrategy that we specify thanks to the method setConvergenceStrategy proposed by the simulation algorithm.

In order to prevent a memory problem, the User has the possibility to choose the storage strategy used to save the numerical samples. Four strategies are proposed :

  • the Null strategy where nothing is stored. This strategy is proposed by the Null class which requires no argument.

  • the Full strategy where every point is stored. Be careful! The memory will be exhausted for huge samples. This strategy is proposed by the Full class which requires no argument.

  • the Last strategy where only the N last points are stored, where N is specified by the User. This strategy is proposed by the Last class which requires the number of points to store to be specified.

  • the Compact strategy where a regularly spaced sub-sample is stored. The minimum size N of the stored numerical sample is specified by the User. OpenTURNS proceeds as follows :

    1. it stores the first 2N simulations : the size of the stored sample is 2N,

    2. it selects only 1 out of 2 of the stored simulations : then the size of the stored sample decreases to N (this is the compact step),

    3. it stores the next N simulations when selecting 1 out of 2 of the next simulations : the size of the stored sample is 2N,

    4. it selects only 1 out of 2 of the stored simulations : then the size of the stored sample decreases to N,

    5. it stores the next N simulations when selecting 1 out of 4 of the next simulations : the size of the stored sample is 2N,

    6. then it keeps on until reaching the stop criteria.

    The stored numerical sample will have a size between N and 2N. This strategy is proposed by the Compact class which requires the number of points to store to be specified (a pure-Python sketch of this schedule is given after the example below).

The following example illustrates the different storage strategies :

  Initial Sample =

  1 2 3 4 5 6 7 8 9 10 11 12

 

  Null strategy sample =

 

 

  Full strategy sample =

  1 2 3 4 5 6 7 8 9 10 11 12

 

  Last strategy sample (large storage :  36  last points) =

  1 2 3 4 5 6 7 8 9 10 11 12

 

  Last strategy sample (small storage :  4  last points) =

  9 10 11 12

 

  Compact strategy sample (large storage :  36  points) =

  1 2 3 4 5 6 7 8 9 10 11 12

 

  Compact strategy sample (small storage :  4  points) =

  2 4 6 8 10 12
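The following pure-Python sketch (a hypothetical helper, not an OpenTURNS class) reproduces the Compact schedule described above and the small-storage example just given :

def compactStore(values, N):
    stored = []
    step = 1                       # keep 1 value out of 'step' generated values
    for i, v in enumerate(values):
        if (i + 1) % step == 0:
            stored.append(v)
        if len(stored) == 2 * N:
            stored = stored[1::2]  # compact step : keep 1 stored point out of 2
            step *= 2
    return stored

print(compactStore(list(range(1, 13)), 4))   # [2, 4, 6, 8, 10, 12]
print(compactStore(list(range(1, 13)), 36))  # [1, 2, ..., 12]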

The following Use Case illustrates the implementation of all the storage strategies.

Requirements  

a simulation algorithm : myAlgo

type:

Simulation

 
Results  

none

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_SimulationAlgoParametrization.py

# Create a Monte Carlo algorithm
myAlgo = MonteCarlo(myEvent)

# The simulation sampling is subsampled in samples of BlockSize size
# (distribution service).
# For the Monte Carlo, LHS and Importance Sampling methods, we recommend
# to set BlockSize to the number of available CPUs if the limit state function
# is cheap to evaluate; otherwise it is recommended to set BlockSize to a high
# value (Care : the fewer OuterSampling iterations, the fewer points in the
# convergence graph!).
myAlgo.setBlockSize(1)

# Define the 3 stopping criteria :
# Criterion 1 : Define the Maximum Coefficient of Variation of the
# probability estimator
myAlgo.setMaximumCoefficientOfVariation(0.1)

# Criterion 2 : Define the Maximum Outer Sampling of the simulation
myAlgo.setMaximumOuterSampling(10000)

# Criterion 3 : Define the Maximum Standard Deviation of the estimator
myAlgo.setMaximumStandardDeviation(0.1)

# Define the HistoryStrategy to store the values of $P_n$ and $\sigma_n$
# used to draw the convergence graph

# Null strategy
myAlgo.setConvergenceStrategy(Null())

# Full strategy
myAlgo.setConvergenceStrategy(Full())

# Compact strategy : N points
N = 1000
myAlgo.setConvergenceStrategy(Compact(N))

# Last strategy : N points
myAlgo.setConvergenceStrategy(Last(N))


3.5.9 UC : Run and results exploitation of a simulation algorithm : probability estimation, estimator variance, confidence interval, convergence graph, stored samples

The objective of this Use Case is to launch the evaluation of the event probability with a simulation algorithm and to get all the results proposed by OpenTURNS : the probability estimation, the estimator variance, the confidence interval, the convergence graph of the estimator, the stored input and output numerical samples.

Details on the simulation algorithm method may be found in the Reference Guide ( see files Reference Guide - Step C –Estimating the probability of an event using Sampling ).

Before any simulation, it is possible to initialise or get the state of the random generator, as explained in the Use Case 3.1.

It is often useful to evaluate the exact number of calls to the limit state function that the simulation has required. One way to get it is to ask for the number of times the limit state function has been evaluated so far, just before and just after the simulation algorithm run.

In the Monte Carlo simulation case, this number can be directly obtained by multiplying the BlockSize parameter with the OuterSampling one.

The following Use Case illustrates both situations.

It also illustrates how to :

  • evaluate the probability estimator P N of the real probability p,

  • evaluate the confidence interval of level α around the probability estimator, from the formula :

    IC_\alpha = \left[ P_N - t_\alpha \sqrt{\frac{P_N(1-P_N)}{N}} \; ; \; P_N + t_\alpha \sqrt{\frac{P_N(1-P_N)}{N}} \right]

    where $t_\alpha = \Phi^{-1}_{(0,1)}\left(\frac{1+\alpha}{2}\right)$. OpenTURNS evaluates the length l of the confidence interval, defined as $l = 2\, t_\alpha \sqrt{\frac{P_N(1-P_N)}{N}}$. Thus we have $\mathbb{P}\left(p \in IC_\alpha\right) = \alpha$ (a numerical check is sketched after this list).

  • draw the convergence graph of the probability estimator, with the confidence curves,

  • get the numerical sample used all along the simulation run,

  • get the mean point of all the simulations (X i ̲), 1 ≤ i ≤ n, generated by the Simulation algorithm that fell into the event domain in the physical space :

    \underline{X}^*_{event} = \frac{1}{n} \sum_{i=1}^{n} \underline{X}_i \, \mathbf{1}_{event}(\underline{X}_i) \qquad (18)

    Be careful : this notion is only valid for the Monte Carlo or LHS samplings, as the mean is evaluated from relation (18) with uniform weights over the realizations X i ̲.

  • get some importance factors estimated from the coordinates of the mean point (18) mapped into the standard space :

    \alpha_i = \frac{(U_i^*)^2}{\|\underline{U}^*\|^2} \qquad (19)

    where

    \underline{U}^* = T\left(\underline{X}^*_{event}\right) \qquad (20)

    Be careful : the same restriction as previously applies.
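A small numerical check of the confidence interval length (a sketch with illustration values P_N = 0.01, N = 10000, α = 0.95; Normal() is the standard normal distribution) :

from math import sqrt

alpha = 0.95
P_N = 0.01
N = 10000
t_alpha = Normal().computeQuantile(0.5 * (1.0 + alpha))[0]   # about 1.96
length = 2.0 * t_alpha * sqrt(P_N * (1.0 - P_N) / N)
print(length)   # to be compared with result.getConfidenceLength(alpha)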

Requirements  

the simulation algorithm : myAlgo

type:

Simulation

the associated limit state function : model

type:

NumericalMathFunction

 
Results  

the probability estimation

type:

NumericalScalar

Confidence Interval length

type:

NumericalScalar

Variance of the simulation probability estimator

type:

NumericalScalar

the input and output samples used during the algorithm, and the values of the probability estimator and its variance

type:

NumericalSample

the convergence graphs

type:

Graph

the mean point in the event domain and its importance factors

type:

NumericalPoint

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_SimulationAlgoExploitation.py

# Save the number of calls to the model, its gradient and hessian done so far
modelCallNumberBefore = model.getEvaluationCallsNumber()
modelGradientCallNumberBefore = model.getGradientCallsNumber()
modelHessianCallNumberBefore = model.getHessianCallsNumber()

# Perform the simulation
myAlgo.run()

# Save the number of calls to the model, its gradient and hessian done after
# the simulation
modelCallNumberAfter = model.getEvaluationCallsNumber()
modelGradientCallNumberAfter = model.getGradientCallsNumber()
modelHessianCallNumberAfter = model.getHessianCallsNumber()

# Display the number of evaluations of the model
print("number of evaluations of the model = ",
      modelCallNumberAfter - modelCallNumberBefore)

# Stream out the complete simulation result structure
result = myAlgo.getResult()

# Get the mean point in the event domain
# Care : only for the Monte Carlo and LHS sampling methods
meanPointEvent = result.getMeanPointInEventDomain()

# Get the associated importance factors
# Care : only for the Monte Carlo and LHS sampling methods
impFactorsSimulation = result.getImportanceFactors()

# Display the number of iterations executed and the number of
# evaluations of the model
# ONLY in the Monte Carlo case
print("number of evaluations of the model = ",
      result.getOuterSampling() * result.getBlockSize())

# Get the values of the stopping criteria
# Criterion 1 : Display the Coefficient of Variation of the estimator
print("Coefficient of Variation of Pn = ", result.getCoefficientOfVariation())

# Criterion 2 : Display the Outer Sampling of the simulation
print("Outer Sampling of the simulation = ", result.getOuterSampling())

# Criterion 3 : Display the Standard Deviation of the estimator
print("Standard Deviation of the estimator = ", result.getStandardDeviation())

# Display the simulation event probability
probability = result.getProbabilityEstimate()
print("simulation probability estimation = ", probability)

# Display the variance of the simulation probability estimator
print("Variance of the simulation probability estimator = ",
      result.getVarianceEstimate())

# Display the confidence interval length centered around the
# simulation probability estimate
# IC = [Probability - 0.5*length, Probability + 0.5*length]
# level 0.95
length95 = result.getConfidenceLength(0.95)
print("0.95 Confidence Interval length = ", length95)
print("IC at 0.95 = [", probability - 0.5 * length95,
      "; ", probability + 0.5 * length95, "]")

# Draw the convergence graph and the confidence interval of level alpha
# By default, alpha = 0.95
alpha = 0.90
convergenceGraph = myAlgo.drawProbabilityConvergence(alpha)

# In order to see the graph without creating the associated files
# View(convergenceGraph).show()

# Create the files in all formats
convergenceGraph.draw("convergenceGraphe")

# View the PNG file within the TUI
# View(convergenceGraph).show()

# Get the numerical samples of the input and output random vectors
# stored according to the History Strategy specified
# and used to evaluate the probability estimator and its variance
inputSampleStored = model.getHistoryInput().getSample()
outputSampleStored = model.getHistoryOutput().getSample()

# Get the values of the estimator and its variance
# stored according to the History Strategy specified
# and used to draw the convergence graph
convergenceSample = myAlgo.getConvergenceStrategy().getSample()
estimator_probability_sample = convergenceSample.getMarginal(0)
estimator_variance_sample = convergenceSample.getMarginal(1)


3.5.10 UC : Probability evaluation from an analytical method (FORM/SORM) followed by a simulation method centered on the design point

This Use Case illustrates the following method to evaluate the event probability, combining the FORM or SORM method with a simulation method :

  • perform a FORM or SORM study in order to find the design point,

  • perform an importance sampling study centered around the design point : the importance distribution operates in the standard space and is the standard distribution of the standard space (the standard elliptical distribution in the case of an elliptic copula of the input random vector, the standard normal one in all the other cases).

The importance sampling technique in the standard space may be of two kinds :

  • the numerical sample is generated according to the new importance distribution : this technique is called post analytical importance sampling,

  • the numerical sample is generated according to the new importance distribution and is controlled by the value of the linearised limit state function : this technique is called post analytical controlled importance sampling.

This post analytical importance sampling algorithm is created from the result structure of a FORM or SORM algorithm, obtained in the Use Case 3.5.3.

It is parameterised exactly as a simulation algorithm, through the parameters OuterSampling, BlockSize, ..., defined in the Use case 3.5.8.

The results may be extracted and exploited exactly as defined in the Use case 3.5.9.

Let us note that the post FORM/SORM importance sampling method may also be implemented thanks to the ImportanceSampling object as defined in the Use Case 3.5.6, where the importance distribution is defined in the standard space : this requires that the event initially defined in the physical space be transformed into the standard space, as explained in the Use Case 3.5.1.
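A minimal sketch of this alternative route (assuming myStandardEvent comes from the Use Case 3.5.1 and myAnalyticalResult from a FORM/SORM run; the identity covariance is an arbitrary illustration choice) :

standardSpaceDesignPoint = myAnalyticalResult.getStandardSpaceDesignPoint()
dim = standardSpaceDesignPoint.getDimension()
myImportanceDist = Normal(standardSpaceDesignPoint, IdentityMatrix(dim))
myManualISAlgo = ImportanceSampling(myStandardEvent, myImportanceDist)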

The controlled importance sampling technique is only accessible within the post analytical context.

Details on the simulation algorithm method may be found in the Reference Guide ( see files Reference Guide - Step C – FORM and Importance Sampling ).

Requirements  

the result of a FORM or SORM study : myAnalyticalResult

type:

FORMResult or a SORMResult

 
Results  

the post analytical importance sampling simulation algorithm : myPostAnalyticalISAlgo

type:

PostAnalyticalImportanceSampling

the post analytical controlled importance sampling simulation algorithm : myPostAnalyticalControlledISAlgo

type:

PostAnalyticalControlledImportanceSampling

 

Python script for this UseCase :

script_docUC_ThresholdExceedance_FORMandImportanceSampling.py

# Create the post analytical importance sampling simulation algorithm
myPostAnalyticalISAlgo = PostAnalyticalImportanceSampling(myAnalyticalResult)
myPostAnalyticalISAlgo.run()
print('myPostAnalyticalISAlgo - Results = ',
      myPostAnalyticalISAlgo.getResult())

# Create the post analytical controlled importance sampling simulation
# algorithm
myPostAnalyticalControlledISAlgo = PostAnalyticalControlledImportanceSampling(
    myAnalyticalResult)
myPostAnalyticalControlledISAlgo.run()
print('myPostAnalyticalControlledISAlgo - Results = ',
      myPostAnalyticalControlledISAlgo.getResult())

