5 Stochastic process

This section details how to create, manipulate and propagate some stochastic processes.

In this document, we note X : Ω×𝒟 → ℝ^d a multivariate stochastic process of dimension d, where ω ∈ Ω is an event, 𝒟 is a domain of ℝ^n and t̲ ∈ 𝒟 is a multivariate index. We note X_t̲ : Ω → ℝ^d the random variable at index t̲ ∈ 𝒟 defined by X_t̲(ω) = X(ω, t̲).

If n=1, t may be interpreted as a time stamp to recover the classical notation of a stochastic process.

If the process is a second order process, we note m : 𝒟 → ℝ^d its mean function, C : 𝒟×𝒟 → ℳ_{d×d}(ℝ) its covariance function and R : 𝒟×𝒟 → ℳ_{d×d}(ℝ) its correlation function.

We recall here some useful definitions.

Spatial (temporal) and Stochastic Mean: The spatial mean of the process X is the function m : Ω → ℝ^d defined by:

m(\omega) = \frac{1}{|\mathcal{D}|} \int_{\mathcal{D}} X(\omega)(\underline{t})\, d\underline{t} \quad (28)

If n = 1 and if the mesh is a regular grid (t_0, …, t_{N-1}), then the spatial mean corresponds to the temporal mean defined by:

m(\omega) = \frac{1}{t_{N-1} - t_0} \int_{t_0}^{t_{N-1}} X(\omega)(t)\, dt \quad (29)

The spatial mean is estimated from one realization of the process (see the use case on Field or Time series).

The stochastic mean of the process X is the function g : 𝒟 → ℝ^d defined by:

g(\underline{t}) = \mathbb{E}\left[X_{\underline{t}}\right] \quad (30)

The stochastic mean is estimated from a sample of realizations of the process (see the use case on the Process sample).

For an ergodic process, the stochastic mean and the spatial mean are equal and constant (equal to the constant vector noted c̲):

\forall \omega \in \Omega, \; \forall \underline{t} \in \mathcal{D}, \quad m(\omega) = g(\underline{t}) = \underline{c} \quad (31)

Normal process: A stochastic process is normal if all its finite dimensional joint distributions are normal, which means that for all k ∈ ℕ and I_k ∈ ℕ^{*k}, with card I_k = k, there exist m̲_1, …, m̲_k ∈ ℝ^d and C̲̲_{1,…,k} ∈ ℳ_{kd,kd}(ℝ) such that :

\mathbb{E}\left[\exp\left(i\, \underline{X}_{I_k}^t \underline{U}_k\right)\right] = \exp\left(i\,\underline{U}_k^t \underline{M}_k - \frac{1}{2}\underline{U}_k^t \underline{\underline{C}}_{1,\dots,k} \underline{U}_k\right) \quad (32)

where \underline{X}_{I_k}^t = (X_{\underline{t}_1}^t, \dots, X_{\underline{t}_k}^t), \underline{U}_k^t = (\underline{u}_1^t, \dots, \underline{u}_k^t), \underline{M}_k^t = (\underline{m}_1^t, \dots, \underline{m}_k^t) and \underline{\underline{C}}_{1,\dots,k} is the symmetric matrix :

\underline{\underline{C}}_{1,\dots,k} = \begin{pmatrix}
C(\underline{t}_1, \underline{t}_1) & C(\underline{t}_1, \underline{t}_2) & \cdots & C(\underline{t}_1, \underline{t}_k) \\
 & C(\underline{t}_2, \underline{t}_2) & \cdots & C(\underline{t}_2, \underline{t}_k) \\
 & & \ddots & \vdots \\
 & & & C(\underline{t}_k, \underline{t}_k)
\end{pmatrix} \quad (33)

A normal process is entirely defined by its mean function m and its covariance function C (or correlation function R).

Weak stationarity (second order stationarity) : A process X is weakly stationary or stationary of second order if its mean function is constant and its covariance function is invariant by translation :

\forall (\underline{s}, \underline{t}) \in \mathcal{D}, \; m(\underline{t}) = m(\underline{s}) \qquad \text{and} \qquad \forall (\underline{s}, \underline{t}, \underline{h}) \in \mathcal{D}, \; C(\underline{s}, \underline{s}+\underline{h}) = C(\underline{t}, \underline{t}+\underline{h}) \quad (34)

We note C^{stat}(τ̲) for C(s̲, s̲+τ̲), as this quantity does not depend on s̲.

In the continuous case, 𝒟 must be equal to ℝ^n, as it is invariant under any translation. In the discrete case, 𝒟 is a lattice δ_1ℤ × ⋯ × δ_nℤ where ∀i, δ_i > 0.
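As a worked example (assuming the scalar exponential covariance, the scalar version of the Exponential model used in several use cases below), take d = 1, a constant mean and

C(\underline{s}, \underline{t}) = \sigma^2 \exp\left(-\lambda \|\underline{t} - \underline{s}\|\right).

Then C(\underline{s}, \underline{s}+\underline{h}) = \sigma^2 \exp(-\lambda \|\underline{h}\|) does not depend on \underline{s}, so a process with this covariance is weakly stationary and C^{stat}(\underline{\tau}) = \sigma^2 \exp(-\lambda \|\underline{\tau}\|).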

Stationarity : A process X is stationary if its distribution is invariant under translation: for all k ∈ ℕ, (t̲_1, …, t̲_k) ∈ 𝒟^k and h̲ ∈ ℝ^n, we have:

(X_{\underline{t}_1}, \dots, X_{\underline{t}_k}) \stackrel{\mathcal{D}}{=} (X_{\underline{t}_1+\underline{h}}, \dots, X_{\underline{t}_k+\underline{h}}) \quad (35)

where \stackrel{\mathcal{D}}{=} denotes equality in distribution.

Spectral density function: If X is a zero-mean weakly stationary continuous process and if, for all (i, j), C^{stat}_{i,j} : ℝ^n → ℝ is L^1(ℝ^n) (i.e. ∫_{ℝ^n} |C^{stat}_{i,j}(τ̲)| dτ̲ < +∞), we define the bilateral spectral density function S : ℝ^n → ℋ^+(d), where ℋ^+(d) ⊂ ℳ_d(ℂ) is the set of d-dimensional positive definite hermitian matrices, as the Fourier transform of the covariance function C^{stat}:

\forall \underline{f} \in \mathbb{R}^n, \quad S(\underline{f}) = \int_{\mathbb{R}^n} \exp\left(-2i\pi \langle \underline{f}, \underline{\tau} \rangle\right) C^{stat}(\underline{\tau})\, d\underline{\tau} \quad (36)

Furthermore, if, for all (i, j), S_{i,j} : ℝ^n → ℂ is L^1(ℝ^n) (i.e. ∫_{ℝ^n} |S_{i,j}(f̲)| df̲ < +∞), C^{stat} may be evaluated from S as follows :

C^{stat}(\underline{\tau}) = \int_{\mathbb{R}^n} \exp\left(2i\pi \langle \underline{f}, \underline{\tau} \rangle\right) S(\underline{f})\, d\underline{f} \quad (37)

In the discrete case, the spectral density is defined for a zero-mean weakly stationary process, where 𝒟 = δ_1ℤ × ⋯ × δ_nℤ with ∀i, δ_i > 0, and where the previous integrals are replaced by sums.
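As a worked example (the exponential covariance is an assumed illustration, not taken from a use case), for n = d = 1 and C^{stat}(\tau) = \sigma^2 e^{-\lambda|\tau|}, equation (36) gives

S(f) = \sigma^2 \int_{\mathbb{R}} e^{-2i\pi f \tau}\, e^{-\lambda|\tau|}\, d\tau = \frac{2\lambda\sigma^2}{\lambda^2 + 4\pi^2 f^2},

which is positive for every frequency f, and applying (37) to this S recovers C^{stat}.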


5.1 UC : Creation of a mesh

This section details how to create a mesh associated to a domain 𝒟 ⊂ ℝ^n.

A mesh is defined from vertices in ℝ^n and a topology that connects the vertices: the simplices. The simplex Indices([i_1, …, i_{n+1}]) connects the vertices of indices (i_1, …, i_{n+1}) in ℝ^n. In dimension 1, a simplex is an interval Indices([i_1, i_2]); in dimension 2, it is a triangle Indices([i_1, i_2, i_3]).

The mesh can be imported from a MSH file.

OpenTURNS makes it easy to create a mesh which is a box of dimension 1 or 2, regularly meshed in all its directions, thanks to the object IntervalMesher.

Consider X : Ω×𝒟 → ℝ^d a multivariate stochastic process of dimension d, where 𝒟 ⊂ ℝ^n. The mesh is a discretization of the domain 𝒟.

Requirements
  • vertices : myVertices (type: NumericalSample)
  • simplices : mySimplices (type: IndicesCollection)
  • a MSH file : myMSHFile.msh (type: a MSH file)

Results
  • meshes of dimension 1 and 2 : myMesh1D, myMesh2D, myMSHmesh, myMeshBox (type: Mesh)

Python script for this UseCase :

script_docUC_StocProc_Mesh.py

from openturns import *

########################
# Case 1: Define a mesh from its vertices and simplices

# Define a one dimensional mesh
vertices = [[0.5], [1.5], [2.1], [2.7]]
simplices = IndicesCollection([[0, 1], [1, 2], [2, 3]])
myMesh1D = Mesh(vertices, simplices)

# Define a bi dimensional mesh
vertices = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0],
            [1.5, 1.0], [2.0, 1.5], [0.5, 1.5]]
simplices = IndicesCollection(
    [[0, 1, 2], [1, 2, 3], [2, 3, 4], [2, 4, 5], [0, 2, 5]])
myMesh2D = Mesh(vertices, simplices)

# Import a mesh from a MSH file
# myMSHmesh = Mesh.ImportFromMSHFile('myMSHFile.msh')

# Draw the meshes
mygraph1 = myMesh1D.draw()
# Show(mygraph1)
mygraph2 = myMesh2D.draw()
# Show(mygraph2)

########################
# Case 2: Define a mesh which is a regularly meshed box
# in dimension 1 or 2

# Define the number of intervals in each direction of the box
myIndices = Indices([5, 10])
myMesher = IntervalMesher(myIndices)

# Create the mesh of the box [0., 2.] * [0., 4.]
lowerBound = [0., 0.]
upperBound = [2., 4.]
myInterval = Interval(lowerBound, upperBound)
myMeshBox = myMesher.build(myInterval)
mygraph3 = myMeshBox.draw()
# Show(mygraph3)

The first example, illustrated in Figure 53, is the 1D case of the above script, where the mesh is defined in ℝ by 4 nodes and 3 intervals.

The second example, illustrated in Figure 53, is the 2D case of the above script, where the mesh is defined in ℝ² by 6 nodes and 5 triangles.

The third example, illustrated in Figure 54, is the 2D case of the above script, where the mesh is the box [0., 2.]×[0., 4.], regularly meshed with 5 intervals along the first direction and 10 intervals along the second direction.

The same kind of mesh, defined from the box [1., 2.]×[0., π], regularly meshed with 50 intervals along each direction and mapped through the function f(r, θ) = (r cos θ cos 5θ, r sin θ sin 5θ), is drawn in Figure 54; a possible construction is sketched below.
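A possible construction of this mapped mesh follows (a sketch only: it assumes that Mesh exposes getSimplices() and that a NumericalMathFunction can be applied directly to the NumericalSample of vertices; variable names are illustrative):

import math
from openturns import *

# Regular mesh of the box [1., 2.] x [0., pi], 50 intervals per direction
myMesher = IntervalMesher(Indices([50, 50]))
polarMesh = myMesher.build(Interval([1.0, 0.0], [2.0, math.pi]))

# Mapping f(r, theta) = (r cos(theta) cos(5 theta), r sin(theta) sin(5 theta))
f = NumericalMathFunction(['r', 'theta'],
                          ['r*cos(theta)*cos(5*theta)',
                           'r*sin(theta)*sin(5*theta)'])

# Apply the mapping to the vertices and keep the same topology
mappedVertices = f(polarMesh.getVertices())
mappedMesh = Mesh(mappedVertices, polarMesh.getSimplices())
# Show(mappedMesh.draw())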

Figure 55 shows a bidimensional mesh made of 19750 triangles and 10001 nodes.

A mesh in dimension 1.
A mesh in dimension 2.
Figure 53
A mesh defined from a Box.
A box mesh mapped through f(r,θ)=(rcosθcos5θ,rsinθsin5θ).
Figure 54
A mesh in dimension 2.
Figure 55

5.2 UC : Creation of a time grid

This section details first how to create a regular time grid. Note that a time grid is a particular mesh of 𝒟=[0,T].

A regular time grid is a regular discretization of the interval [0, T] into N points, noted (t_0, …, t_{N-1}).

The time grid can be defined using (t_{Min}, Δt, N), where N is the number of points in the time grid, Δt is the time step between two consecutive instants and t_0 = t_{Min}. Then, t_k = t_{Min} + kΔt and t_{Max} = t_{Min} + (N-1)Δt.

Consider X : Ω×𝒟 → ℝ^d a multivariate stochastic process of dimension d, where n = 1, 𝒟 = [0, T] and t ∈ 𝒟 is interpreted as a time stamp. Then the mesh associated to the process X is a (regular) time grid.

Requirements
  • none

Results
  • a time grid : myRegularGrid (type: RegularGrid)

Python script for this UseCase :

script_docUC_StocProc_TimeGrid.py

from openturns import *

# Define the lower bound of the time grid, the number of steps
# and the time step between two instants
tMin = 0.
timeStep = 0.1
n = 10

# Create the RegularGrid
myRegularGrid = RegularGrid(tMin, timeStep, n)

# Get the first and the last instants,
# the step and the number of points of a RegularGrid
myMin = myRegularGrid.getStart()
myMax = myRegularGrid.getEnd()
myStep = myRegularGrid.getStep()
myRegularGridSize = myRegularGrid.getN()


5.3 UC : Manipulation of a process

The objective here is to manipulate a multivariate stochastic process X : Ω×𝒟 → ℝ^d, where 𝒟 ⊂ ℝ^n, discretized on the mesh ℳ.

A continuous realization of the process X is the numerical math function defined, for a given ω ∈ Ω, by:

X(\omega) : \mathcal{D} \to \mathbb{R}^d \quad (38)

According to the process, the continuous realizations are built :

  • either using a dedicated functional model if it exists: e.g. a functional basis process (see Figures 56 to 57);

  • or using an interpolation from a discrete realization of the process on ℳ: in dimension d = 1, a linear interpolation and, in dimension d ≥ 2, a piecewise constant function (the value at a given position is equal to the value at the nearest vertex of the mesh of the process).

We get a continuous realization thanks to the method getContinuousRealization.

In addition, it is possible:

  • to extract its marginal process for j ∈ [0, d-1] : X_j : Ω×𝒟 → ℝ, thanks to the method getMarginalProcess;

  • to get one or several realization(s) of the process, thanks to the methods getRealization, getSample;

  • to get its mesh, thanks to the method getMesh and its time grid, thanks to the method getTimeGrid when the mesh can be interpreted as a regular time grid;

  • to check whether the process is normal or stationary, thanks to the methods isNormal and isStationary.

Requirements
  • a stochastic process : myProcess (type: Process)

Results
  • a time grid or a mesh : myTimeGrid, myMesh (type: RegularGrid, Mesh)
  • a field or a sample of fields : myField, myFieldSample (type: Field, ProcessSample)
  • a stochastic process : myMarginalProcess (type: Process)
  • a continuous realization : myContReal (type: NumericalMathFunction)

Python script for this Use Case : script_docUC_StocProc_Process.py

from openturns import *

# Get the dimension d of the process
dimension = myProcess.getDimension()

# Get the mesh of the process
myMesh = myProcess.getMesh()

# Get the time grid of the process
# only when the mesh can be interpreted as a regular time grid
myTimeGrid = myProcess.getTimeGrid()

# Get a realization of the process
myField = myProcess.getRealization()

# Get several realizations of the process
number = 10
myFieldSample = myProcess.getSample(number)

# Get a continuous realization of the process
myContReal = myProcess.getContinuousRealization()

# Get its first marginal
myContReal_marg1 = myContReal.getMarginal(0)

# Get the corners of the mesh
minMesh = myMesh.getVertices().getMin()
maxMesh = myMesh.getVertices().getMax()
myGraph = myContReal_marg1.draw(minMesh[0], maxMesh[0])
# Show(myGraph)

# Get the marginal of the process at index 1
# Careful! Numbering starts at 0
# Not yet implemented for some processes
# myMarginalProcess = myProcess.getMarginalProcess(1)

# Get the marginal of the process at the indices in 'indices'
# Not yet implemented for some processes
# indices = Indices([0, 1])
# myMarginalProcess_2D = myProcess.getMarginalProcess(indices)

# Check whether the process is normal
print(myProcess.isNormal())

# Check whether the process is stationary
print(myProcess.isStationary())

The two plots of Figure 56 draw two continuous realizations of the functional basis process X : Ω×[-4, 4]² → ℝ defined by:

X(\omega, (x, y)) = \sum_{i=1}^{3}\sum_{j=1}^{3} \alpha_{i,j}(\omega) \sin(ix) \sin(jy) \quad (39)

where the random variables α_{i,j} are i.i.d. according to the standard normal distribution.

One realization of the functional basis process.
One realization of the functional basis process.
Figure 56

In Figure 57, we draw one realization versus one continuous realization of the functional basis process X : Ω×[0, 1] → ℝ defined by:

X(\omega, x) = \sum_{i=1}^{10} \Xi_i(\omega) \sin(2i\pi x) \quad (40)

on the regular grid of [0, 1] discretized with 21 points, where the coefficients Ξ_i are independent and respectively distributed according to Normal(0, σ = 1.0/i).

When we ask for a realization, we get a field (each vertex x k of the mesh is associated to a value of the process X). The method draw draws a linear interpolation of the values: see the blue line.

When we ask for a continuous realization, we get a function X(ω) : [0, 1] → ℝ defined by:

X(\omega)(x) = \sum_{i=1}^{10} \xi_i(\omega) \sin(2i\pi x) \quad (41)

where ξ i =Ξ i (ω). The method draw draws the graph of the function, which is continuous and can be evaluated on other points than those of the mesh: see the red line.

One realization versus one continuous realization of the function basis process X.
Figure 57
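For reference, the process of Figure 57 can be built along the following lines (a sketch only: the FunctionalBasisProcess(distribution, basis, mesh) constructor and the variable names are assumptions to be checked against the installed OpenTURNS version):

import math
from openturns import *

# Regular grid of [0, 1] with 21 points
myTimeGrid = RegularGrid(0.0, 0.05, 21)

# Basis (sin(2 i pi x))_{i=1..10} and coefficients Xi_i ~ Normal(0, 1/i)
myBasisColl = NumericalMathFunctionCollection(0)
myCoefDist = DistributionCollection(0)
for i in range(1, 11):
    formula = 'sin(%.15g*x)' % (2.0 * math.pi * i)
    myBasisColl.add(NumericalMathFunction(['x'], [formula]))
    myCoefDist.add(Normal(0.0, 1.0 / i))

# Assumed constructor: distribution of the coefficients, basis, mesh
myFBProcess = FunctionalBasisProcess(ComposedDistribution(myCoefDist),
                                     Basis(myBasisColl), myTimeGrid)

# One field (values at the 21 vertices) and one continuous realization
myField = myFBProcess.getRealization()
myContReal = myFBProcess.getContinuousRealization()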

In Figure 58, we define a normal process of dimension 1 whose covariance model is Exponential(a, λ) with a = 1.0 and λ = 4.0. The associated mesh is the regular grid of [0, 1] discretized with 21 points.

When we ask for a realization, we get a field and the method draw draws a linear interpolation of the values: see the blue line.

When we ask for a continuous realization, we get a function X(ω):[0,1] which is built using a linear interpolation of the values of the field: see the red line.

In that case, both methods getRealization and getContinuousRealization lead to the same graph.

One realization versus one continuous realization of temporal normal process of dimension 1.
Figure 58

5.4 UC : Manipulation of a field

The objective here is to create and manipulate a field. A field is the aggregation of a mesh ℳ of a domain 𝒟 ⊂ ℝ^n and a sample of values in ℝ^d associated to each vertex of the mesh.

We note (t̲_0, …, t̲_{N-1}) the vertices of ℳ and (x̲_0, …, x̲_{N-1}) the associated values in ℝ^d.

The spatial mean of a field is defined by:

\frac{1}{N}\sum_{i=0}^{N-1} \underline{x}_i \quad (42)

It is possible to export the field to the VTK format thanks to the method exportToVTKFile, which enables its visualization with e.g. ParaView: see Figure 59.

A field is stored in the Field object that stores the mesh and the values at each vertex of the mesh.

The spatial mean is evaluated thanks to the method getSpatialMean. When the mesh is a regular time grid, the spatial mean corresponds to a temporal mean which is also evaluated with the method getTemporalMean.

Note that if the mesh is of dimension 1 or 2, it is possible to draw one marginal of the field as follows, where the marginal index is noted j ∈ [0, d-1]:

  • drawMarginal(j, False) draws the marginal j of the field with no interpolation between the points. Then, in dimension 1, we get the cloud (t_i, v_i^j)_{0≤i≤N-1}. In dimension 2, we draw a bullet at each vertex t̲_i whose color depends on the value v_i^j of the process: see Figure 59.

  • drawMarginal(j) draws the marginal j of the field with linear interpolation between the points: see Figure 59.

A field can be obtained as a realization of a multivariate stochastic process X : Ω×𝒟 → ℝ^d of dimension d, where 𝒟 ⊂ ℝ^n, when the realization is discretized on the mesh ℳ of 𝒟. The values (x̲_0, …, x̲_{N-1}) of the field are defined by:

\forall i \in [0, N-1], \quad \underline{x}_i = X(\omega)(\underline{t}_i) \quad (43)

If each simplex of the mesh has the same volume, then the spatial mean (42) is an estimation of the spatial mean of the process X defined in (28).

If the mesh can be interpreted as a regular time grid, then the spatial mean (42) is an estimation of the temporal mean of the process X defined in (29).
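As a quick illustration (a sketch assuming a field myField as in this use case), the spatial mean returned by getSpatialMean is simply the sample mean of the values, as in (42):

from openturns import *

# The spatial mean of a field equals the mean of its values (42)
mySpatMean = myField.getSpatialMean()
myManualMean = myField.getValues().computeMean()
print('getSpatialMean     = ', mySpatMean)
print('mean of the values = ', myManualMean)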

Requirements
  • a mesh : myMesh (type: Mesh)
  • some values : myValues (type: NumericalSample)
  • a process : myProcess (type: Process)
  • the mesh of the field : myMesh (type: Mesh)
  • the values of the field : myValues (type: NumericalSample)

Results
  • two fields : myField, myField2 (type: Field)
  • the spatial mean : mySpatMean (type: NumericalPoint)
  • a value of the field : myValue_i (type: NumericalPoint)
  • the values of the field : myGeneratedValues (type: NumericalSample)

Python script for this UseCase :

script_docUC_StocProc_Field.py

from openturns import *

###########################
# Case 1: Create a field from a mesh and some values
myField = Field(myMesh, myValues)

###########################
# Case 2: Get a field from the process myProcess
myField2 = myProcess.getRealization()

# Get all the values of the field
myGeneratedValues = myField.getValues()

# Get the value of the field at the vertex i
i = 1
myValue_i = myGeneratedValues[i]

# Compute the spatial mean of the field
mySpatMean = myField.getSpatialMean()

# Draw the field without interpolation
myGraph1 = myField.drawMarginal(0, False)
# Show(myGraph1)

# Draw the field with interpolation
myGraph2 = myField.drawMarginal(0)
# Show(myGraph2)

# Export to the VTK format
myField.exportToVTKFile('myFile.vtk')

Consider the temporal normal process of dimension 1 whose covariance model is Exponential(a, λ) with (a, λ) = (1.0, 0.2) and the bidimensional mesh which is the box [0, 2]×[0, 1], regularly discretized with 80 points in the first dimension and 40 points in the second one.

We draw a realization of this process on the mesh with no interpolation and with linear interpolation: see the first two plots of Figure 59.

The last plot of Figure 59 shows the field as visualized with ParaView.

One field with no interpolation.
Field of Figure 59 using linear interpolation.
Field of Figure 59 visualized using ParaView.
Figure 59

5.5 UC : Manipulation of a time series

The objective here is to create and manipulate a time series. A time series is a particular field whose mesh can be considered as a regular time grid (t_0, …, t_{N-1}).

The spatial mean of a time series corresponds to a temporal mean and is evaluated as defined in (42).

It is possible to draw a time series, using interpolation between the values: see the use case on the Field.

A time series can be obtained as a realization of a multivariate stochastic process X : Ω×[0, T] → ℝ^d of dimension d, where [0, T] is discretized according to the regular grid (t_0, …, t_{N-1}). The values (x̲_0, …, x̲_{N-1}) of the time series are defined by:

\forall i \in [0, N-1], \quad \underline{x}_i = X(\omega)(t_i) \quad (44)

A time series is stored in the TimeSeries object that stores the regular time grid and the associated values.

Requirements
  • a time grid : myTimeGrid (type: RegularGrid)
  • some values : myValues (type: NumericalSample)

Results
  • a time series : myTimeSeries (type: TimeSeries)

Python script for this UseCase :

script_docUC_StocProc_TimeSeries.py

from openturns import *

###########################
# Case 1: Create a time series from a time grid and values
# Careful! The number of steps of the time grid must correspond
# to the size of the values
myTimeSeries = TimeSeries(myTimeGrid, myValues)

###########################
# Case 2: Get a time series from the process myProcess
myTimeSeries2 = myProcess.getRealization()

# Get the number of values of the time series
print('Size = ', myTimeSeries.getSize())

# Get the dimension of the values observed at each time
print('Dimension = ', myTimeSeries.getDimension())

# Get the value Xi at index i
# Careful! Numbering starts at i=0
i = 37
print('Xi = ', myTimeSeries.getValueAtIndex(i))

# Get the value Xi of the observed time series
# at the time closest to myTime
myTime = 3.4
print('Xi at the time closest to myTime = ',
      myTimeSeries.getValueAtNearestTime(myTime))

# Get the time series at index i : (ti, Xi)
i = 37
print('(ti, Xi) = ', myTimeSeries[i])

# Get the marginal value at index i of the time series
i = 37
# get the time stamp:
print('ti = ', myTimeSeries[i, 0])
# get the first component of the corresponding value:
print('Xi1 = ', myTimeSeries[i, 1])

# Get all the values (X1, .., Xn) of the time series
allValues = myTimeSeries.getSample()

# Compute the temporal mean
# It corresponds to the mean of the values of the time series
myTemporalMean = myTimeSeries.getTemporalMean()

# Draw the marginal i of the time series
# Careful! Numbering starts at i=0
# using linear interpolation
myMarginalTimeSerie = myTimeSeries.drawMarginal(0)
# Show(myMarginalTimeSerie)

# with no interpolation
myMarginalTimeSerie2 = myTimeSeries.drawMarginal(0, False)
# Show(myMarginalTimeSerie2)

Consider the temporal normal process of dimension 1 whose covariance model is Exponential(a, λ) with (a, λ) = (1.0, 0.2) and the time grid on [0, 1] regularly discretized with 101 points.

We draw a realization of this process on the time grid with no interpolation and with linear interpolation: see the two plots of Figure 60.

One time series with no interpolation.
Time series of Figure 60 using linear interpolation.
Figure 60

5.6 UC : Manipulation of a process sample

The objective here is to create and manipulate a process sample. A process sample is a collection of fields which share the same mesh ℳ ⊂ ℝ^n.

The method computeMean evaluates the mean of the values associated to the same vertex t̲_i of the common mesh ℳ. If K is the number of fields of the process sample, the method evaluates:

\frac{1}{K}\sum_{k=1}^{K} \underline{x}_i^k \quad (45)

where (x̲_0^k, …, x̲_{N-1}^k) are the values of the field k associated to the vertices (t̲_0, …, t̲_{N-1}) of ℳ. It creates a numerical sample of size N and dimension d.

The method computeSpatialMean evaluates the spatial mean defined in (42) for each field contained in the process sample. It creates a numerical sample of size K and dimension d.

A process sample can be obtained as K realizations of a multivariate stochastic process X : Ω×𝒟 → ℝ^d of dimension d, where 𝒟 ⊂ ℝ^n, when the realizations are discretized on the same mesh ℳ of 𝒟. The values (x̲_0^k, …, x̲_{N-1}^k) of the field k are defined by:

\forall i \in [0, N-1], \quad \underline{x}_i^k = X(\omega_k)(\underline{t}_i) \quad (46)

The mean defined in (45) is an estimation of the stochastic mean of the process X defined in (30).

The quantile-per-component vector of level q of the random variable X_{t̲_i} is the vector of the marginal quantiles of level q of X_{t̲_i}. The method computeQuantilePerComponent(q) creates a field that associates to each vertex t̲_i the quantile-per-component vector of level q of X_{t̲_i}. The marginal quantiles are evaluated from the empirical distribution defined by the process sample.
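As an illustration of these methods (a sketch assuming a process myProcess as in the Requirements below; the number of realizations K = 100 and the quantile levels are arbitrary choices):

from openturns import *

# Estimate the stochastic mean (30) and a 95% envelope from K realizations
K = 100
myProcessSample = myProcess.getSample(K)

# Field of the vertex-wise means, see (45)
myMeanField = myProcessSample.computeMean()

# Fields of the vertex-wise 2.5% and 97.5% quantiles per component
myLowerField = myProcessSample.computeQuantilePerComponent(0.025)
myUpperField = myProcessSample.computeQuantilePerComponent(0.975)

# Draw the first marginal of the mean field
myGraph = myMeanField.drawMarginal(0)
# Show(myGraph)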

Requirements
  • a field : myField (type: Field)
  • a process : myProcess (type: Process)

Results
  • a process sample : myProcessSample, myProcessSample_1 (type: ProcessSample)
  • the stochastic mean field : myMeanField (type: Field)
  • the spatial mean : myMeanNS (type: NumericalSample)
  • the field of the quantiles per component : myQuantileField (type: Field)

Python script for this Use Case :

script_docUC_StocProc_ProcessSample.py

from openturns import *

############################
# Case 1: Create a process sample
# by duplicating the same field
number = 10
myProcessSample_1 = ProcessSample(number, myField)

############################
# Case 2: Create a process sample of size 10
# from a process
myProcessSample = myProcess.getSample(10)

# Add a field to the process sample
myProcessSample.add(myField)

# Get the field of index i=2
myFieldIndexI = myProcessSample[2]

# Compute the mean of the process sample
# The result is a field
myMeanField = myProcessSample.computeMean()

# Compute the spatial mean of the process sample
# The result is a numerical sample
myMeanNS = myProcessSample.computeSpatialMean()

# Compute the quantiles per component associated to the level q
# The result is a field
q = 0.50
myQuantileField = myProcessSample.computeQuantilePerComponent(q)


5.7 Transformation of fields

5.7.1 UC : Trend computation

A multivariate stochastic process X : Ω×𝒟 → ℝ^d of dimension d, where 𝒟 ⊂ ℝ^n, may be written as the sum of a trend function f_trend : ℝ^n → ℝ^d and a stationary multivariate stochastic process X_stat : Ω×𝒟 → ℝ^d of dimension d as follows:

\forall \omega \in \Omega, \; \forall \underline{t} \in \mathcal{D}, \quad X(\omega, \underline{t}) = X_{stat}(\omega, \underline{t}) + f_{trend}(\underline{t}) \quad (47)

The objective here is to identify the trend function f_trend from a given field of the process X and then to remove it from the initial field. The resulting field is a realization of the process X_stat.

OpenTURNS also enables the User to define the function f_trend directly and to remove it from the initial field to get the resulting stationary field.

Among various methods, one consists in fixing a functional basis and estimating f_trend with a least squares method. We consider the functional basis:

\mathcal{B} = (f_1, f_2, \dots, f_K) \quad \text{with} \quad f_j : \mathbb{R}^n \to \mathbb{R}^d, \; j \in \{1, 2, \dots, K\}

on which the trend function f_trend is decomposed :

f_{trend}(\underline{t}) = \sum_{j=1}^{K} \alpha_j f_j(\underline{t}) \quad (48)

The coefficients α_j have to be computed, for example thanks to the least squares method. However, when the number of available data is of the same order as K, the least squares system is ill-posed and a more elaborate algorithm may be used. Some algorithms combine cross-validation techniques and advanced regression strategies, in order to provide the estimation of the coefficients associated to the best model among the basis functions (sparse model). For example, we can use the least angle regression (LAR) method for the choice of sparse models. Then, fitting algorithms like leave-one-out, coupled to the regression strategy, assess the error on the prediction and enable the selection of the best sparse model.

We note (x̲_0, …, x̲_{N-1}) the values of the initial field associated to the mesh ℳ of 𝒟 ⊂ ℝ^n, where x̲_i ∈ ℝ^d, and (x̲_0^stat, …, x̲_{N-1}^stat) the values of the resulting stationary field.

OpenTURNS creates a trend function factory thanks to the object TrendFactory, from :

  • a regression strategy that generates a basis using the Least Angle Regression (LAR) method thanks to the object LAR,

  • and a fitting algorithm that estimates the empirical error on each sub-basis using the leave one out strategy, thanks to the object CorrectedLeaveOneOut or the KFold algorithm thanks to the object KFold.

Then, OpenTURNS estimates the trend transformation from the initial field (x̲_0, …, x̲_{N-1}) and a function basis thanks to the method build of the object TrendFactory, which produces an object of type TrendTransform. This object enables the User to :

  • add the trend to a given field (w̲_0, …, w̲_{N-1}) defined on the same mesh ℳ: the resulting field shares the same mesh as the initial field. The addition is done with the operator ().

    For example, it may be useful to add the trend to a realization of the stationary process X stat in order to get a realization of the process X;

  • get the function f trend defined in (48) that evaluates the trend thanks to the method getEvaluation();

  • create the inverse trend transformation, in order to remove the trend from the initial field (x̲_0, …, x̲_{N-1}) and to create the resulting stationary field (x̲_0^stat, …, x̲_{N-1}^stat) such that:

    \underline{x}_i^{stat} = \underline{x}_i - f_{trend}(\underline{t}_i) \quad (49)

    where t̲_i is the vertex associated with the value x̲_i.

    The creation of the inverse trend function -f_trend is done thanks to the method getInverse(), which produces an object of type InverseTrendTransform that can be evaluated on a field thanks to the operator ().

    For example, it may be useful in order to get the stationary field (x ̲ 0 stat ,,x ̲ N-1 stat ) and then analyse it with methods adapted to stationary processes (ARMA decomposition for example).

Requirements
  • some functions g, h (type: NumericalMathFunction)
  • a basis sequence factory : myBasisSequenceFactory (type: BasisSequenceFactory)
  • a fitting algorithm : myFittingAlgorithm (type: FittingAlgorithm)
  • a basis of functions : myFunctionBasis (type: NumericalMathFunctionCollection)
  • a time series : myYt (type: TimeSeries)

Results
  • a trend factory : myTrendFactory (type: TrendFactory)
  • a trend transformation and its inverse (to apply with ()) : myTrendTransform, myInverseTrendTransform (type: TrendTransform, InverseTrendTransform)
  • the trend function evaluation f : myEvaluation_f (type: EvaluationImplementation)

Python script for this Use Case :

script_docUC_StocProc_TrendComputation.py

from openturns import *

################################
# CASE 1 : we estimate the trend from the field
################################

# Define the regression strategy using the LAR method
myBasisSequenceFactory = LAR()

# Define the fitting algorithm using the
# Corrected Leave One Out or KFold algorithms
myFittingAlgorithm = CorrectedLeaveOneOut()
myFittingAlgorithm_2 = KFold()

# Define the basis functions
# For example composed of 5 functions
func1 = NumericalMathFunction(["t", "s"], ["1"])
func2 = NumericalMathFunction(["t", "s"], ["t"])
func3 = NumericalMathFunction(["t", "s"], ["s"])
func4 = NumericalMathFunction(["t", "s"], ["t^2"])
func5 = NumericalMathFunction(["t", "s"], ["s^2"])
myFunctionBasis = NumericalMathFunctionCollection(0)
myFunctionBasis.add(func1)
myFunctionBasis.add(func2)
myFunctionBasis.add(func3)
myFunctionBasis.add(func4)
myFunctionBasis.add(func5)

# Define the trend function factory algorithm
myTrendFactory = TrendFactory(myBasisSequenceFactory, myFittingAlgorithm)

# Check the definition of the created factory
print('regression strategy : ', myTrendFactory.getBasisSequenceFactory())
print('fitting strategy : ', myTrendFactory.getFittingAlgorithm())

# Create the trend transformation of type TrendTransform
myTrendTransform = myTrendFactory.build(myYField, Basis(myFunctionBasis))

# Check the estimated trend function
print("Trend function = ", myTrendTransform)

################################
# CASE 2 : we impose the trend (or its inverse)
################################

# The function g computes the trend : R^2 -> R
# g :      R^2 --> R
#          (t,s) --> 1+2t+2s
g = NumericalMathFunction(["t", "s"], ["1+2*t+2*s"])
gTemp = TrendTransform(g)

# Get the inverse trend transformation
# from the trend transform already defined
myInverseTrendTransform = myTrendTransform.getInverse()
print('Inverse trend function = ', myInverseTrendTransform)

# Sometimes it is more useful to define
# the opposite trend h : R^2 -> R
# in fact h = -g
h = NumericalMathFunction(["t", "s"], ["-(1+2*t+2*s)"])
myInverseTrendTransform_2 = InverseTrendTransform(h)

################################
# Remove the trend from the field myYField
# myXField = myYField - f(t,s)
myXField2 = myTrendTransform.getInverse()(myYField)
# or from the class InverseTrendTransform
myXField3 = myInverseTrendTransform(myYField)

# Add the trend to the field myXField2
# myYField = f(t,s) + myXField2
myInitialYField = myTrendTransform(myXField2)

# Get the trend function f(t,s)
myEvaluation_f = myTrendTransform.getEvaluation()

# Evaluate the trend function f at a particular vertex
# which is the lower corner of the mesh
myMesh = myYField.getMesh()
vertices = myMesh.getVertices()
vertex = vertices.getMin()
trend_t = myEvaluation_f(vertex)


5.7.2 UC : Box Cox transformation

The objective of this Use Case is to estimate a Box Cox transformation from a field whose values are all positive (possibly after a shift to ensure positiveness) and to apply it to the field.

We consider X : Ω×𝒟 → ℝ^d a multivariate stochastic process of dimension d, where 𝒟 ⊂ ℝ^n and ω ∈ Ω is an event. We suppose that the process is L²(Ω).

We note X_t̲ : Ω → ℝ^d the random variable at the vertex t̲ ∈ 𝒟, defined by X_t̲(ω) = X(ω, t̲).

If the variance of X_t̲ depends on the vertex t̲, the Box Cox transformation maps the process X into the process Y such that the variance of Y_t̲ is constant (at first order at least) with respect to t̲.

We present here :

  • the estimation of the Box Cox transformation from a given field of the process X,

  • the action of the Box Cox transformation on a field generated from X.

We note h : ℝ^d → ℝ^d the Box Cox transformation which maps the process X into the process Y : Ω×𝒟 → ℝ^d, where Y = h(X), such that Var[Y_t̲] is independent of t̲ at first order.

We suppose that X_t̲ is a positive random variable for any t̲. To satisfy that constraint, it may be necessary to consider the shifted process X + α̲.

We illustrate some usual Box Cox transformations h in the scalar case (d = 1), using the Taylor expansion of h : ℝ → ℝ at the mean point of X_t̲.

In the multivariate case, we estimate the Box Cox transformation component by component and we define the multivariate Box Cox transformation as the aggregation of the marginal Box Cox transformations.

Marginal Box Cox tranformation:

The first order Taylor expansion of h around E[X_t̲] writes:

\forall \underline{t} \in \mathcal{D}, \quad h(X_{\underline{t}}) = h(\mathbb{E}[X_{\underline{t}}]) + (X_{\underline{t}} - \mathbb{E}[X_{\underline{t}}])\, h'(\mathbb{E}[X_{\underline{t}}])

which leads to:

\mathbb{E}[h(X_{\underline{t}})] = h(\mathbb{E}[X_{\underline{t}}])

and then:

\mathrm{Var}[h(X_{\underline{t}})] = h'(\mathbb{E}[X_{\underline{t}}])^2 \, \mathrm{Var}[X_{\underline{t}}]

To have Var[h(X_t̲)] constant with respect to t̲ at first order, we need :

h'(\mathbb{E}[X_{\underline{t}}]) = k \, \mathrm{Var}[X_{\underline{t}}]^{-1/2} \quad (50)

Now, we make some additional hypotheses on the relation between 𝔼X t ̲ and Var X t ̲ :

  • If we suppose that Var[X_t̲] ∝ E[X_t̲], then (50) leads to h(y) ∝ √y and we take h(y) = √y, y > 0;

  • If we suppose that Var[X_t̲] ∝ (E[X_t̲])², then (50) leads to h(y) ∝ log y and we take h(y) = log y, y > 0;

  • More generally, if we suppose that Var[X_t̲] ∝ (E[X_t̲])^β, then (50) leads to the function h_λ parameterized by the scalar λ:

    h_\lambda(y) = \begin{cases} \dfrac{y^\lambda - 1}{\lambda} & \lambda \neq 0 \\ \log(y) & \lambda = 0 \end{cases} \quad (51)

    where λ = 1 - β/2.

The inverse Box Cox transformation is defined by:

h_\lambda^{-1}(y) = \begin{cases} (\lambda y + 1)^{1/\lambda} & \lambda \neq 0 \\ \exp(y) & \lambda = 0 \end{cases} \quad (52)
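As an illustration (a plain Python sketch of formulas (51) and (52), not an OpenTURNS call; the function names are made up):

from math import log, exp

def h(y, lam):
    # Box Cox transformation h_lambda (51), for y > 0
    if lam == 0.0:
        return log(y)
    return (y ** lam - 1.0) / lam

def h_inv(z, lam):
    # Inverse Box Cox transformation h_lambda^{-1} (52)
    if lam == 0.0:
        return exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)

# Round trip: h_inv(h(y)) == y (up to rounding)
print(h_inv(h(3.0, 0.5), 0.5))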

Estimation of the Box Cox tranformation:

The parameter λ is estimated from a given field of the process X as follows.

The estimation of λ given below is optimized for the case where h_λ(X_t̲) ∼ 𝒩(β, σ²) at each vertex t̲. If that is not the case, the estimation can only be considered as a proposal, with no guarantee.

The parameters (β, σ, λ) are then estimated by maximum likelihood. We note Φ_{β,σ} and φ_{β,σ} respectively the cumulative distribution function and the probability density function of the 𝒩(β, σ²) distribution.

For all vertices t̲, we have :

\forall v \geq 0, \quad \mathbb{P}(X_{\underline{t}} \leq v) = \mathbb{P}\left(h_\lambda(X_{\underline{t}}) \leq h_\lambda(v)\right) = \Phi_{\beta,\sigma}\left(h_\lambda(v)\right) \quad (53)

from which we derive the probability density function p of X_t̲ for all vertices t̲ :

p(v) = h_\lambda'(v)\, \varphi_{\beta,\sigma}(h_\lambda(v)) = v^{\lambda-1} \varphi_{\beta,\sigma}(h_\lambda(v)) \quad (54)

Using the density (54), the likelihood of the values (x_0, …, x_{N-1}) writes:

L(\beta, \sigma, \lambda) = \underbrace{\frac{1}{(2\pi)^{N/2}} \times \frac{1}{(\sigma^2)^{N/2}} \times \exp\left[-\frac{1}{2\sigma^2}\sum_{k=0}^{N-1}\left(h_\lambda(x_k)-\beta\right)^2\right]}_{\Psi(\beta,\sigma)} \times \prod_{k=0}^{N-1} x_k^{\lambda-1} \quad (55)

We notice that, for each fixed λ, the likelihood equation is proportional to the likelihood equation which estimates (β, σ²). Thus, the maximum likelihood estimators of (β(λ), σ²(λ)) for a given λ are:

\hat{\beta}(\lambda) = \frac{1}{N}\sum_{k=0}^{N-1} h_\lambda(x_k), \qquad \hat{\sigma}^2(\lambda) = \frac{1}{N}\sum_{k=0}^{N-1}\left(h_\lambda(x_k) - \hat{\beta}(\lambda)\right)^2 \quad (56)

Substituting (56) into (55) and taking the log-likelihood, we obtain:

\ell(\lambda) = \log L(\hat{\beta}(\lambda), \hat{\sigma}(\lambda), \lambda) = C - \frac{N}{2}\log\left[\hat{\sigma}^2(\lambda)\right] + (\lambda - 1)\sum_{k=0}^{N-1}\log(x_k) \quad (57)

where C is a constant with respect to λ.

The parameter λ̂ is the one maximizing ℓ(λ) defined in (57).
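The following plain Python sketch shows this estimation mechanism (a crude grid search over λ on made-up positive data; OpenTURNS performs an equivalent optimization internally when building the transformation, see below):

from math import log

def h(y, lam):
    # Box Cox transformation (51)
    return log(y) if lam == 0.0 else (y ** lam - 1.0) / lam

def log_likelihood(lam, x):
    # profile log-likelihood (57), up to the constant C
    N = len(x)
    z = [h(v, lam) for v in x]
    beta = sum(z) / N                                  # (56)
    sigma2 = sum((zi - beta) ** 2 for zi in z) / N     # (56)
    return -0.5 * N * log(sigma2) + (lam - 1.0) * sum(log(v) for v in x)

# Crude grid search (a real optimizer would be used in practice)
x = [1.2, 0.8, 2.5, 1.7, 0.9, 3.1, 1.1]   # illustrative positive values
lambdas = [k / 100.0 for k in range(-200, 201)]
best = max(lambdas, key=lambda lam: log_likelihood(lam, x))
print('estimated lambda =', best)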

OpenTURNS objects:

The OpenTURNS object BoxCoxFactory enables the creation of a Box Cox transformation factory.

Then, OpenTURNS estimates the Box Cox transformation h_λ̲ from the initial field values (x̲_0, …, x̲_{N-1}) thanks to the method build of the object BoxCoxFactory, which produces an object of type BoxCoxTransform.

If the field values (x̲_0, …, x̲_{N-1}) have some negative values, it is possible to translate the values by a given shift α̲, which has to be mentioned either at the creation of the object BoxCoxFactory or when using the method build.

Then the Box Cox transformation is the composition of h λ ̲ and this translation.

The object BoxCoxTransform enables the User to:

  • transform the field values (x̲_0, …, x̲_{N-1}) of dimension d into the values (y̲_0, …, y̲_{N-1}) with stabilized variance, such that for each vertex t̲_i we have:

    \underline{y}_i = h_{\underline{\lambda}}(\underline{x}_i) \quad (58)

    or

    \underline{y}_i = h_{\underline{\lambda}}(\underline{x}_i + \underline{\alpha}) \quad (59)

    thanks to the operator (). The field based on the values y̲_i shares the same mesh as the initial field;

  • create the inverse Box Cox transformation, such that:

    \underline{x}_i = h_{\underline{\lambda}}^{-1}(\underline{y}_i) \quad (60)

    or

    \underline{x}_i = h_{\underline{\lambda}}^{-1}(\underline{y}_i) - \underline{\alpha} \quad (61)

    thanks to the method getInverse(), which produces an object of type InverseBoxCoxTransform that can be evaluated on a field thanks to the operator (). The new field shares the same mesh as the initial field.

Requirements
  • some fields : myField (type: Field)
  • λ̲ : myLambda (type: NumericalPoint)

Results
  • a Box Cox factory : myBoxCoxFactory (type: BoxCoxFactory)
  • a Box Cox transformation and its inverse : myModelTransform, myInverseModelTransform (type: BoxCoxTransform, InverseBoxCoxTransform)
  • λ̲ : estimatedLambda (type: NumericalPoint)
  • some mapped fields : myStabilizedField (type: Field)

Python script for this Use Case :

script_docUC_StocProc_BoxCox.py

from openturns import *

################################
# CASE 1 : we estimate the Box Cox transformation from the data
################################

# Initiate a BoxCoxFactory
myBoxCoxFactory = BoxCoxFactory()

# We estimate the lambda parameter from the field myField
# In dimension greater than one, the estimation is done marginal by marginal
# We suppose here that all values of the field are positive
myModelTransform = myBoxCoxFactory.build(myField)
print(myModelTransform)

# Get the estimated lambda
estimatedLambda = myModelTransform.getLambda()

################################
# CASE 2 : we impose the Box Cox transformation
################################

# It is possible to impose the lambda factor myLambda
# for example in dimension 1
myLambda = 0.01
myModelTransform_lambda = BoxCoxTransform(myLambda)

################################
# Get the stabilized field

# Apply the transformation to myField
# myStabilizedField = h(myField) or h(myField + alpha)
myStabilizedField = myModelTransform(myField)
myStabilizedField.setName('Stabilized TS')

# Get the inverse of the Box Cox transformation
myInverseModelTransform = myModelTransform.getInverse()

# Apply it to the stabilized field
# myInitialField = h^-1(myStabilizedField) or h^-1(myStabilizedField) - alpha
myInitialField = myInverseModelTransform(myStabilizedField)

Consider the temporal normal process of dimension 1 whose covariance model is Exponential(a, λ) with (a, λ) = (1.0, 0.2) and whose bidimensional mesh is the box [0, 2]×[0, 1], regularly discretized with 40 points in the first dimension and 20 points in the second one.

Then we map this process X into the process Y = exp(X) in order to get a non-stationary positive process.

Then we get a field generated by the process Y and we apply the Box Cox transformation. The two plots of Figure 61 respectively draw the initial field and the stabilized one. We note that the variations in the values have decreased (colors are more uniform after the Box Cox transformation). The estimated λ ≃ 0.

One field from the Y process.
Field of Figure 61 after the Box Cox transformation.
Figure 61

5.8 ARMA stochastic process

Consider a stationary multivariate stochastic process X : Ω×[0, T] → ℝ^d of dimension d, where X_t : Ω → ℝ^d is the random variable at time t. Under some general conditions, X can be modeled by an ARMA(p, q) model, defined at the time stamp t by:

X_t + \underline{\underline{A}}_1 X_{t-1} + \dots + \underline{\underline{A}}_p X_{t-p} = \varepsilon_t + \underline{\underline{B}}_1 \varepsilon_{t-1} + \dots + \underline{\underline{B}}_q \varepsilon_{t-q} \quad (62)

where the coefficients of the recurrence are matrices in ℳ_d(ℝ) and (ε_t)_t is a white noise discretized on the same time grid as the process X.

The coefficients (A̲̲_1, …, A̲̲_p) form the Auto Regressive (AR) part of the model, while the coefficients (B̲̲_1, …, B̲̲_q) form the Moving Average (MA) part.

We introduce the homogeneous system associated to (62) :

X_t + \underline{\underline{A}}_1 X_{t-1} + \dots + \underline{\underline{A}}_p X_{t-p} = 0 \quad (63)

To get stationary solutions of (62), it is necessary to study the characteristic polynomial defined in (64) :

\Phi(\underline{r}) = \underline{r}^p + \sum_{i=1}^{p} a_i \underline{r}^{p-i} \quad (64)

The solutions of (63) are of the form P(t)\,\underline{r}_i^t, where the \underline{r}_i are the roots of the polynomial \Phi(\underline{r}) defined in (64) and P is a polynomial whose degree is the multiplicity of the root \underline{r}_i.

The processes P(t)\,\underline{r}_i^t decrease with time if and only if the moduli of all the components of the roots \underline{r}_i are less than 1 :

\forall i, \; \forall j \in [1, d], \quad |r_{ij}| < 1 \quad (65)

Once the coefficients of the ARMA(p, q) model are given, OpenTURNS evaluates the roots of the polynomial \Phi(\underline{r}) and checks the previous condition (65). The roots \underline{r}_i are the eigenvalues of the matrix \underline{\underline{M}}, which writes in dimension d as :

\underline{\underline{M}} = \begin{pmatrix}
\underline{\underline{0}}_d & \underline{\underline{1}}_d & \underline{\underline{0}}_d & \cdots & \underline{\underline{0}}_d \\
\underline{\underline{0}}_d & \underline{\underline{0}}_d & \underline{\underline{1}}_d & \cdots & \underline{\underline{0}}_d \\
\vdots & & & \ddots & \vdots \\
\underline{\underline{0}}_d & \underline{\underline{0}}_d & \cdots & \underline{\underline{0}}_d & \underline{\underline{1}}_d \\
-\underline{\underline{A}}_1 & -\underline{\underline{A}}_2 & -\underline{\underline{A}}_3 & \cdots & -\underline{\underline{A}}_p
\end{pmatrix} \quad (66)

and in dimension 1 as :

\underline{\underline{M}} = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
-\alpha_1 & -\alpha_2 & -\alpha_3 & \cdots & -\alpha_p
\end{pmatrix} \quad (67)

The matrix \underline{\underline{M}} is known as the companion matrix.
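As a quick numerical illustration of condition (65) (a sketch using numpy, which is not used elsewhere in this document, applied to the AR(4) coefficients of the example of section 5.8.2):

import numpy as np

# Roots of the characteristic polynomial (64) for the scalar AR(4) model
# X_t + 0.4 X_{t-1} + 0.3 X_{t-2} + 0.2 X_{t-3} + 0.1 X_{t-4} = eps_t
coeffs = [1.0, 0.4, 0.3, 0.2, 0.1]       # r^4 + a_1 r^3 + ... + a_4
roots = np.roots(coeffs)
print('moduli of the roots:', np.abs(roots))
print('stationary:', bool(np.all(np.abs(roots) < 1.0)))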


5.8.1 UC : Creation of an ARMA process

The creation of an ARMA model requires the AR and MA coefficients, which are :

  • a list of scalars in the unidimensional case: (a_1, …, a_p) for the AR coefficients and (b_1, …, b_q) for the MA coefficients, defined thanks to a NumericalPoint;

  • a list of square matrices (A̲̲_1, …, A̲̲_p) for the AR coefficients and (B̲̲_1, …, B̲̲_q) for the MA coefficients, defined thanks to a SquareMatrixCollection.

It also requires the definition of a white noise ε̲ that contains the same time grid as the one of the process.

The current state of an ARMA model is characterized by its last p values and the last q values of its white noise. It is possible to get that state thanks to the method getState.

It is possible to create an ARMA with a specific current state. That specific current state is taken into account to generate possible futures, but not to generate realizations (in order to respect the stationarity property of the model).

At the creation step, OpenTURNS checks whether the ARMA(p, q) process is stationary, by evaluating the roots of the polynomial (64) associated to the homogeneous system (63). When the process is not stationary, OpenTURNS sends a message to warn the User.

Requirements
  • the coefficients : myARList, myMAList (type: NumericalPoint or SquareMatrixCollection)
  • a white noise : myWhiteNoise (type: WhiteNoise)
  • the last realizations : myLastValues, myLastNoiseValues (type: NumericalSample)

Results
  • the AR and MA coefficients : myARCoef, myMACoef (type: ARMACoefficients)
  • an ARMA process : myARMA (type: ARMA)
  • an ARMA state : myARMAState (type: ARMAState)

Python script for this UseCase :

script_docUC_StocProc_ARMA_Creation.py

from openturns import *

#####################################
# CASE 1 : Without specifying the current state
#####################################

# Create the AR and MA coefficients

# From the lists of the coefficients,
# which are scalars (NumericalPoint) in dimension 1
# and square matrices in dimension d > 1
myARCoef = ARMACoefficients(myARList)
myMACoef = ARMACoefficients(myMAList)

# Create the ARMA model

# From the ARMA coefficients and the white noise,
# without specifying the current state
myARMA = ARMA(myARCoef, myMACoef, myWN_1d)

#####################################
# CASE 2 : Specifying the current state
# Useful to get possible futures from the current state
#####################################

# Define the current state of the ARMA

# Set the last p values of the process
# and the last q values of the noise
myARMAState = ARMAState(myLastValues, myLastNoiseValues)

# Create the ARMA model

# From the AR-MA coefficients, the white noise and a specific state
myARMA = ARMA(myARCoef, myMACoef, myWN_1d, myARMAState)


5.8.2 UC : Manipulation of an ARMA process

Once an ARMA(p,q) model has been created, it is possible to get :

  • its linear recurrence thanks to the method print,

  • its AR and MA coefficients thanks to the methods getARCoefficients, getMACoefficients,

  • its white noise thanks to the method getWhiteNoise, that contains the time grid of the process,

  • its current state, that is its last p values and the last q values of its white noise, thanks to the method getState,

  • a realization thanks to the method getRealization,

  • a sample of realizations thanks to the method getSample,

  • a possible future of the model, which is a possible prolongation of the current state over the next n_prol instants, thanks to the method getFuture.

  • n possible futures of the model, which correspond to n possible prolongations of the current state over the next n_prol instants, thanks to the method getFuture(n_prol, n).

It is important to note that :

  • when asking for a realization of the stationary process modeled by ARMA(p,q), one has to obtain a realization that does not depend on the current state of the process;

  • whereas, when one asks for a possible future extending a particular current state of the process, the realization of the model must depend on that current state.

How to proceed to respect these constraints?

If we note X̲_1(ω, t) and X̲_2(ω, t) two distinct solutions of (62) associated to two distinct initial states, then the process D̲(ω, t) = X̲_2(ω, t) - X̲_1(ω, t) is a solution of the homogeneous equation associated to (62) and thus decreases with time under condition (65). Let us note N_ther the number such that :

\left(\max_{i,j} |r_{ij}|\right)^{N_{ther}} < \varepsilon \;\Longleftrightarrow\; N_{ther} > \frac{\ln \varepsilon}{\ln \max_{i,j} |r_{ij}|} \quad (68)

where the r_i are the roots of the polynomial (64) and ε is the machine precision (ε = 2^{-53} ≃ 10^{-16}). Then, after N_ther instants, the process D̲(ω, t) has vanished, which means that the processes X̲_1(ω, t) and X̲_2(ω, t) no longer differ. As a conclusion, after N_ther instants, the realization of the ARMA process no longer depends on the initial state.
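As an order of magnitude (with an assumed value of the largest modulus, not one computed from a particular model), if \max_{i,j}|r_{ij}| = 0.61 and \varepsilon = 2^{-53} \simeq 1.1 \times 10^{-16}, then (68) gives

N_{ther} > \frac{\ln(2^{-53})}{\ln(0.61)} \simeq \frac{-36.7}{-0.49} \simeq 74.3,

i.e. about 75 thermalization instants, which is the order of magnitude observed for the AR(4) example of the next use case.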

That is why, when generating a realization of the ARMA model, OpenTURNS performs a thermalization step that simply consists in simulating the model over N_ther additional instants, discarding the N_ther first values and retaining only the remaining ones. That step ensures that the realization of the process does not depend on the initial state.

By default, the number N_ther is evaluated according to (68) by the method computeNThermalization. The User can access it with the method getNThermalization and can change the value with the method setNThermalization. (In order to restore the default value of N_ther, it is necessary to call the method computeNThermalization again.)

On the contrary, in the context of getting a possible future from a specified current state, the User should make sure that the number of additional instants N_it over which the process is extended satisfies N_it < N_ther, because beyond N_ther the future has no link with the present.

More precisely, after N_it^* instants such that :

\left(\max_{i,j} |r_{ij}|\right)^{N_{it}^*} < \max_i \sigma_i \;\Longleftrightarrow\; N_{it}^* > \frac{\ln \max_i \sigma_i}{\ln \max_{i,j} |r_{ij}|} \quad (69)

where the σ_i are the components of the covariance matrix of the white noise ε̲, the influence of the initial state is of the same order as the influence of the white noise.

Let us note that when the ARMA model is created without specifying the current state, OpenTURNS automatically performs a thermalization step at the creation of the ARMA object.

Before asking for the generation of a possible future, the User has to specify the current state of the ARMA model, thanks to the constructor that takes the current state into account. In that case, OpenTURNS does not perform the thermalization step.

As an ARMA model is a stochastic process, the object ARMA inherits the methods of the Process object. Thus, it is possible to get its marginal processes, its time grid, its dimension and to get several realizations at a time of the process.

Requirements
  • an ARMA process : myARMA (type: ARMA)

Results
  • the AR and MA coefficients : myARCoef, myMACoef (type: ARMACoefficients)
  • an ARMA state : myARMAState (type: ARMAState)
  • the last realizations : myLastValues, myLastNoiseValues (type: NumericalSample)
  • a white noise : myWhiteNoise (type: WhiteNoise)
  • a time series : ts (type: TimeSeries)

Python script for this Use Case :

script_docUC_StocProc_ARMA_Manipulation.py

from openturns import *

# Check the linear recurrence
print("myARMA = ", myARMA)

# Get the coefficients of the recurrence
myARCoef = myARMA.getARCoefficients()
myMACoef = myARMA.getMACoefficients()
print('AR coeff = ', myARCoef)
print('MA coeff = ', myMACoef)

# Get the white noise
myWhiteNoise = myARMA.getWhiteNoise()

# Generate one time series
ts = myARMA.getRealization()
ts.setName('my time series')

# Draw the time series : marginal index 0
myTSGraph = ts.drawMarginal(0)
# View(myTSGraph).show()

# Generate k time series
k = 5
myProcessSample = myARMA.getSample(k)

# Then get the current state of the ARMA
myARMAState = myARMA.getState()
# From myARMAState, get the last values
myLastValues = myARMAState.getX()
# From the ARMAState, get the last noise values
myLastEpsilonValues = myARMAState.getEpsilon()

# Get the number of iterations before getting a stationary state
nThermal = myARMA.getNThermalization()
print('Thermalization number = ', nThermal)

# It may be important to evaluate it with another precision epsilon
epsilon = 1e-8
newThermalValue = myARMA.computeNThermalization(epsilon)
myARMA.setNThermalization(newThermalValue)

# Make a prediction from the current state of the ARMA
# on the next Nit instants
Nit = 100
# at first, specify a current state myARMAState
myARMA = ARMA(myARCoef, myMACoef, myWhiteNoise, myARMAState)

# then, generate a possible future
possibleFuture = myARMA.getFuture(Nit)

# Generate N possible futures on the Nit next points
N = 5
possibleFuture_N = myARMA.getFuture(Nit, N)
possibleFuture_N.setName('Possible futures')

# Draw the futures : marginal index 0
myFutureGraph = possibleFuture_N.drawMarginal(0)
# View(myFutureGraph).show()

The example illustrated below is a 1D ARMA process with p=4, q=0 with the following coefficients :

X t +0.4X t-1 +0.3X t-2 +0.2X t-3 +0.1X t-4 =ε t

The ε considered is a Normal white noise with σ=0.01. This process is stationary.

Figures 62 and 63 respectively draw one realization of the stationary process, a sample of 5 realizations of the process, then one and 5 possible futures from the first realization over the next 50 instants (let us note that the thermalization number given by (68) is 75, greater than 50).

One realization of ARMA(4,0).
5 realizations of ARMA(4,0).
Figure 62
One possible future of ARMA(4,0) on the next 50 instants.
5 possible futures of ARMA(4,0) on the next 50 instants.
Figure 63

5.8.3 UC : Estimation of a scalar ARMA process

The objective of the Use Case is to estimate an ARMA model from a scalar stationary time series using the Whittle estimator and a centered normal white noise.

The data can be a unique time series or several time series collected in a process sample.

If the User specifies the order (p, q), OpenTURNS fits an ARMA(p, q) model to the data by estimating the coefficients (a_1, …, a_p), (b_1, …, b_q) and the variance σ² of the white noise.

If the User specifies a range of orders (p, q) ∈ Ind_p × Ind_q, where Ind_p = [p_1, p_2] and Ind_q = [q_1, q_2], OpenTURNS finds the best ARMA(p, q) model that fits the data and estimates the corresponding coefficients. The best model is selected with respect to the AIC_c criterion (corrected Akaike Information Criterion), defined by :

AIC_c = -2\log L_w + \frac{2(p+q+1)\,m}{m-p-q-2}

where m is half the number of points of the time grid of the process sample (if the data are a process sample) or in a block of the time series (if the data are a time series).

Two other criteria are computed for each order (p,q) :

  • the AIC criterion :

    AIC = -2\log L_w + 2(p+q+1)

  • and the BIC criterion :

    BIC = -2\log L_w + 2(p+q+1)\log(p+q+1)

The BIC criterion leads to a model that gives a better prediction; the AIC criterion selects the best model that fits the given data; the AIC_c criterion improves on it by penalizing too high an order that would artificially fit the data.

For each order (p, q), the estimation of the coefficients (a_k)_{1≤k≤p}, (b_k)_{1≤k≤q} and the variance σ² is done using the Whittle estimator, which is based on the maximization of the likelihood function in the frequency domain.

The principle is detailed hereafter for the case of a time series : in the case of a process sample, the estimator is similar except for the periodogram which is computed differently.

We consider a time series associated to the time grid (t_0, …, t_{n-1}) and a particular order (p, q). Using the notation of (62), the spectral density function of the ARMA(p, q) process writes :

f(\lambda, \underline{\theta}, \sigma^2) = \frac{\sigma^2}{2\pi}\, \frac{\left|1 + b_1 \exp(-i\lambda) + \dots + b_q \exp(-iq\lambda)\right|^2}{\left|1 + a_1 \exp(-i\lambda) + \dots + a_p \exp(-ip\lambda)\right|^2} = \frac{\sigma^2}{2\pi}\, g(\lambda, \underline{\theta}) \quad (70)

where \underline{\theta} = (a_1, a_2, \dots, a_p, b_1, \dots, b_q) and \lambda is the frequency value.

The Whittle log-likelihood writes :

\log L_w(\underline{\theta}, \sigma^2) = -\sum_{j=1}^{m} \log f(\lambda_j, \underline{\theta}, \sigma^2) - \frac{1}{2\pi}\sum_{j=1}^{m} \frac{I(\lambda_j)}{f(\lambda_j, \underline{\theta}, \sigma^2)} \quad (71)

where :

  • I is the non-parametric estimate of the spectral density, expressed in the Fourier space (frequencies in [0, 2π] instead of [-f_max, f_max]). OpenTURNS uses the Welch estimator by default;

  • λ_j is the j-th Fourier frequency, λ_j = 2πj/n, j = 1, …, m, with m the largest integer ≤ (n-1)/2.

We estimate the (p+q+1) scalar coefficients by maximizing the log-likelihood function. The corresponding equations lead to the following relation :

\sigma^{2*} = \frac{1}{m}\sum_{j=1}^{m} \frac{I(\lambda_j)}{g(\lambda_j, \underline{\theta}^*)} \quad (72)

where \underline{\theta}^* maximizes the reduced log-likelihood :

\log L_w(\underline{\theta}) = m\left(\log(2\pi) - 1\right) - m \log\left(\frac{1}{m}\sum_{j=1}^{m}\frac{I(\lambda_j)}{g(\lambda_j, \underline{\theta})}\right) - \sum_{j=1}^{m} \log g(\lambda_j, \underline{\theta}) \quad (73)
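To make the quantity I(λ_j) concrete, the raw periodogram can be sketched as follows (an illustration only: the 1/(2πn) normalization is an assumption consistent with the convention of (70), the data are made up, numpy is not used elsewhere in this document, and OpenTURNS uses the smoothed Welch estimator by default):

import numpy as np

# Raw periodogram I(lambda_j) of a zero-mean series x of length n
x = np.random.normal(size=200)            # illustrative zero-mean series
n = len(x)
m = (n - 1) // 2                          # largest integer <= (n-1)/2
dft = np.fft.fft(x)                       # sum_k x_k exp(-2 i pi j k / n)
I = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
lambdas = 2.0 * np.pi * np.arange(1, m + 1) / n   # Fourier frequencies
I_j = I[1:m + 1]                          # I(lambda_1), ..., I(lambda_m)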

The Whittle estimation requires that :

  • the roots of the polynomial 1 + a_1 X + ⋯ + a_p X^p lie outside the unit disc, which guarantees the stationarity of the process;

  • the roots of the polynomial 1 + b_1 X + ⋯ + b_q X^q lie outside the unit disc, which guarantees the invertibility of the process.

OpenTURNS proceeds as follows :

  • the object WhittleFactory is created with either a specified order (p, q) or a range Ind_p × Ind_q. By default, the Welch estimator (object Welch) is used with its default parameters;

  • for each order (p, q), the parameters are estimated by maximizing the reduced Whittle log-likelihood (73), thanks to the method build of the object WhittleFactory. This method applies to a time series or a process sample.

    If the User wants to get the quantified criteria AIC c ,AIC and BIC of the model ARMA(p,q), he has to specify it by giving a NumericalPoint of size 0 (NumericalPoint()) as input parameter of the method build.

  • the output of the estimation is, in all the cases, one unique ARMA : the ARMA with the specified order or the optimal one with respect to the AIC c criterion.

  • in the case of a range Ind p ×Ind q , the User can get all the estimated models thanks to the method getHistory of the object WhittleFactory. If the build has been parameterized by a NumericalPoint of size 0, the User also has access to all the quantified criteria.

Requirements
  • a time series or a collection of time series : myTimeSeries, myProcessSample (type: TimeSeries or ProcessSample)

Results
  • the Whittle factory : myFactory (type: WhittleFactory)
  • a spectral model factory : mySpectralFactory (type: WelchFactory)
  • the white noise : myWhiteNoise (type: WhiteNoise)
  • an ARMA process : myARMA, arma (type: ARMA)
  • a set of information criteria : myCriterion (type: NumericalPoint)
  • a collection of Whittle estimates : myWhittleHistory (type: WhittleFactoryStateCollection)

Python script for this UseCase :

script_docUC_StocProc_ARMA_Estimation_Whittle.py

from openturns import *

###################################
# CASE 1 : we specify a (p,q) order
###################################

# Specify the order (p,q)
p = 4
q = 2

# Build a Whittle factory
# with default SpectralModelFactory (WelchFactory)
myFactory_42 = WhittleFactory(p, q)

# Check the default SpectralModelFactory
print("Default Spectral Model Factory = ",
      myFactory_42.getSpectralModelFactory())

# To set the spectral model factory
# For example, set WelchFactory as SpectralModelFactory
# with the Hanning filtering window
# The Welch estimator splits the time series in four blocks without overlap
myFilteringWindow = Hanning()
mySpectralFactory = WelchFactory(myFilteringWindow, 4, 0)
myFactory_42.setSpectralModelFactory(mySpectralFactory)
print("The new Spectral Model Factory = ",
      myFactory_42.getSpectralModelFactory())

###################################
# CASE 2 : we specify a range of (p,q) orders
###################################

# Range for p = [1, 2, 4]
pIndices = Indices([1, 2, 4])
# Range for q = [4, 5, 6]
qIndices = Indices(3)
# fill from 4 by step = 1
qIndices.fill(4, 1)

# Build a Whittle factory with default SpectralModelFactory (WelchFactory)
# this time using ranges of order p and q
myFactory_Range = WhittleFactory(pIndices, qIndices)

###################################
# Coefficients estimation
###################################

# Estimate the ARMA model from a time series
# To get the quantified AICc, AIC and BIC criteria
myCriterion = NumericalPoint()

myARMA_42 = myFactory_42.build(TimeSeries(myTimeSeries), myCriterion)

# Estimate the ARMA model from a process sample
myARMA_Range = myFactory_Range.build(myProcessSample, myCriterion)

###################################
# Results exploitation
###################################

# Get the white noise of the (best) estimated ARMA
myWhiteNoise = myARMA_Range.getWhiteNoise()

# When specified, get the quantified criteria
# for the best model
print("Criteria AICc = ", myCriterion[0])
print("Criteria AIC = ", myCriterion[1])
print("Criteria BIC = ", myCriterion[2])

# Get all the estimated models
myWhittleHistory = myFactory_Range.getHistory()

# Print all the models and the criteria in the history
for i in range(myWhittleHistory.getSize()):
    model_i = myWhittleHistory[i]
    arma = model_i.getARMA()
    print("Order(p,q) = ", model_i.getP(), ", ", model_i.getQ())
    print("AR coeff = ", model_i.getARCoefficients())
    print("MA coeff = ", model_i.getMACoefficients())
    print("White Noise - Sigma = ", model_i.getSigma2())
    print("Criteria AICc, AIC, BIC = ", model_i.getInformationCriteria())
    print('')


5.8.4 UC : Estimation of a multivariate ARMA process

The objective of the Use Case is to estimate a multivariate ARMA model from a stationary time series using the maximum likelihood estimator and a centered normal white noise.

The data can be a unique time series or several time series collected in a process sample.

Let (t i ,X ̲ i ) 0in-1 be a multivariate time series of dimension d generated by an ARMA process represented by equation (62), where (p,q) are supposed to be known. We assume that the white noise ε is distributed according to the normal distribution with zero mean and with covariance matrix Σ ̲ ̲ ε =σ 2 Q ̲ ̲ where |Q ̲ ̲|=1 .

The normality of the white noise implies the normality of the process. If we note W ̲=(X ̲ 0 ,,X ̲ n-1 ), then W ̲ is normal with zero mean. Its covariance matrix writes 𝔼(W ̲W ̲ t )=σ 2 Σ W ̲ which depends on the coefficients (A ̲ ̲ k ,B ̲ ̲ l ) for k=1,...,p and l=1,...,q and on the matrix Q ̲ ̲.

The likelihood of W ̲ writes :

L(\underline{β}, σ^2 | \underline{W}) = (2πσ^2)^{-\frac{dn}{2}} \, |Σ_{\underline{W}}|^{-\frac{1}{2}} \, \exp\left(-\frac{1}{2σ^2} \underline{W}^t Σ_{\underline{W}}^{-1} \underline{W}\right)   (74)

where β ̲=(A ̲ ̲ k ,B ̲ ̲ l ,Q ̲ ̲),k=1,...,p, l=1,...,q and where |.| denotes the determinant.

The difficulty arises from the great size (dn×dn) of Σ_W̲, which is a dense matrix in the general case. Mauricio J. A. (The exact likelihood function of a vector autoregressive moving average model, 1996) proposes an efficient algorithm to evaluate the likelihood function. The main point is to use a change of variable that leads to a block-diagonal sparse covariance matrix.
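For reference, the Gaussian log-likelihood (74) can be written down directly once Σ_W̲ has been assembled; a minimal numpy sketch of this dense formulation (precisely the form that Mauricio's algorithm is designed to avoid), where the assembly of Σ_W̲ from the ARMA coefficients is assumed to be done elsewhere:

import numpy as np

# Log of the Gaussian likelihood (74); W is the stacked observation vector
# of size d*n, Sigma_W the (dn x dn) covariance matrix of W / sigma^2.
def arma_log_likelihood(W, Sigma_W, sigma2):
    dn = W.size
    _, logdet = np.linalg.slogdet(Sigma_W)
    quad = W @ np.linalg.solve(Sigma_W, W)
    return (-0.5 * dn * np.log(2.0 * np.pi * sigma2)
            - 0.5 * logdet
            - quad / (2.0 * sigma2))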

The estimation requires that :

  • the eigenvalues of the companion matrix associated to the polynomial I̲̲ + A̲̲_1 X + ⋯ + A̲̲_p X^p lie outside the unit disc, which guarantees the stationarity of the process;

  • the eigenvalues of the companion matrix associated to the polynomial I̲̲ + B̲̲_1 X + ⋯ + B̲̲_q X^q lie outside the unit disc, which guarantees the invertibility of the process.

OpenTURNS estimates (β ̲,σ 2 ) thanks to the ARMALikelihoodFactory object and its method build, acting on a time series or on a sample of time series. It produces a result of type ARMA.

Note that no evaluation of selection criteria such as AIC and BIC is done.

Requirements  

order of the model and dimension of the underlying process: p, q, d

type:

integer

a time series or collection of time series : myTimeSeries, mySample

type:

TimeSeries, ProcessSample

 
Results  

the ARMA likelihood factory : myFactory

type:

ARMALikelihoodFactory

the white noise : myWhiteNoise

type:

WhiteNoise

an ARMA process myARMA

type:

ARMA

 

Python script for this Use Case :

# Build a factory
myFactory = ARMALikelihoodFactory(p, q, d)

# Estimate the ARMA model coefficients from a time series
myARMA = myFactory.build(myTimeSeries)

# Estimate the ARMA model coefficients from a process sample
myARMA = myFactory.build(mySample)

# Get the white noise of the estimated ARMA
myWhiteNoise = myARMA.getWhiteNoise()


5.9 Normal processes

5.9.1 UC : Creation of a parametric stationary covariance function

Let X:Ω×𝒟 d be a multivariate stationary normal process where 𝒟 n . The process is supposed to be zero mean. It is entirely defined by its covariance function C stat :𝒟 d×d (), defined by C stat (τ ̲)=𝔼X s ̲ X s ̲+τ ̲ t for all s ̲ n .

If the process is continuous, then 𝒟= n . In the discrete case, 𝒟 is a lattice.

This use case illustrates how the User can create a covariance function from parametric models. OpenTURNS implements the multivariate Exponential model as a parametric model for the covariance function C stat .

The multivariate exponential model: This model defines the covariance function C stat by:

τ ̲𝒟,C stat (τ ̲)=A ̲ ̲Δ ̲ ̲(τ ̲)R ̲ ̲Δ ̲ ̲(τ ̲)A ̲ ̲ (75)

where R ̲ ̲ d×d ([-1,1]) is a correlation matrix, Δ ̲ ̲(τ ̲) d×d () is defined by:

Δ ̲ ̲(τ ̲)=Diag(e -λ 1 |τ|/2 ,,e -λ d |τ|/2 ) (76)

and A ̲ ̲ d×d () is defined by:

A ̲ ̲=Diag(a 1 ,,a d ) (77)

with λ_i > 0 and a_i > 0 for any i.

We call a ̲ the amplitude vector and λ ̲ the scale vector.

The expression of C stat is the combination of:

  • the matrix R ̲ ̲ that models the spatial correlation between the components of the process X at any vertex t ̲ (since the process is stationary):

    t ̲𝒟,R ̲ ̲= Cor X t ̲ ,X t ̲ (78)
  • the matrix Δ ̲ ̲(τ ̲) that models the correlation between the marginal random variables X t ̲ i and X t ̲+τ ̲ i :

    Cor X t ̲ i ,X t ̲+τ ̲ i =e -λ i |τ| (79)
  • the matrix A ̲ ̲ that models the variance of each marginal random variable:

    Var X_t̲ = (a_1^2, …, a_d^2)   (80)

This model is such that:

C_{ij}^{stat}(\underline{τ}) = a_i e^{-λ_i |τ|/2} \, R_{i,j} \, a_j e^{-λ_j |τ|/2}, \quad i ≠ j

C_{ii}^{stat}(\underline{τ}) = a_i e^{-λ_i |τ|/2} \, R_{i,i} \, a_i e^{-λ_i |τ|/2} = a_i^2 e^{-λ_i |τ|}   (81)

It is possible to define the exponential model from the spatial covariance matrix C ̲ ̲ spat rather than the correlation matrix R ̲ ̲ :

t ̲𝒟,C ̲ ̲ spat =𝔼X t ̲ X t ̲ t =A ̲ ̲R ̲ ̲A ̲ ̲ (82)

OpenTURNS implements the multivariate exponential model thanks to the object ExponentialModel which is created from :

  • the scale and amplitude vectors (λ ̲,a ̲): in that case, by default R ̲ ̲=I ̲ ̲;

  • the scale and amplitude vectors and the spatial correlation matrix (λ ̲,a ̲,R ̲ ̲);

  • the scale and amplitude vectors and the spatial covariance matrix (λ ̲,a ̲,C ̲ ̲); Then C ̲ ̲ is mapped into the associated correlation matrix R ̲ ̲ according to (82) and the previous constructor is used.

Requirements  

a ̲, λ ̲ : amplitude, scale

type:

NumericalPoint

R ̲ ̲ : spatialCorrelation

type:

CorrelationMatrix

C ̲ ̲ : spatialCovariance

type:

CovarianceMatrix

 
Results  

a covariance model : myCovarianceModel

type:

StationaryCovarianceModel

 

Python script for this UseCase :

script_docUC_StocProc_StationaryCovarianceFunction_Param.py

# Create the covariance model
# for example : the Exponential model with spatial dimension = 1
spatialDimension = 1

# from the amplitude and scale, no spatial correlation
myCovarianceModel = ExponentialModel(spatialDimension, amplitude, scale)

# from the amplitude, scale and spatialCorrelation
myCovarianceModel = ExponentialModel(
    spatialDimension, amplitude, scale, spatialCorrelation)

# or from the scale and spatialCovariance
myCovarianceModel = ExponentialModel(
    spatialDimension, scale, spatialCovariance)
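As a cross-check of the parametric model, formula (75) can also be evaluated by hand; a minimal numpy sketch with illustrative values for a̲, λ̲ and R̲̲ (d = 2):

import numpy as np

# Illustrative values (not taken from the use case)
a = np.array([1.0, 2.0])      # amplitude
lam = np.array([1.0, 3.0])    # scale
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])    # spatial correlation

def c_stat(tau):
    # Formula (75): C_stat(tau) = A Delta(tau) R Delta(tau) A
    A = np.diag(a)
    Delta = np.diag(np.exp(-lam * abs(tau) / 2.0))
    return A @ Delta @ R @ Delta @ A

print(c_stat(0.0))   # at tau = 0 : the spatial covariance A R A
print(c_stat(1.5))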


5.9.2 UC : Creation of a User defined stationary covariance function

This use case illustrates how the User can define his own stationary covariance model.

A stationary covariance model C stat is defined by : C stat :𝒟 d×d () where C stat (τ ̲) is a covariance matrix of dimension d.

Note that 𝒟= n in the continuous case and 𝒟 is a lattice in the discrete case.

OpenTURNS allows the User to define his own stationary covariance model thanks to the object UserDefinedStationaryCovarianceModel defined from :

  • a mesh of dimension n defined by the vertices (τ ̲ 0 ,,τ ̲ N-1 ) and the associated simplices,

  • a collection of covariance matrices stored in the object CovarianceMatrixCollection noted (C ̲ ̲ 0 ,,C ̲ ̲ N-1 ) where C ̲ ̲ k d×d () for 0kN-1.

Then OpenTURNS builds a stationary covariance function which is a piecewise constant function on 𝒟 defined by:

∀ τ̲ ∈ 𝒟, C^{stat}(τ̲) = C̲̲_k where k is such that τ̲_k is the vertex of the mesh nearest to τ̲.

Care: in its version 1.3, OpenTURNS only implements the case n=1 where the mesh is a regular time grid (t 0 ,,t N-1 ) discretizing 𝒟=[0,T].

Requirements  

a mesh : myShiftMesh

type:

Mesh

a collection of covariance matrices : myCovarianceCollection

type:

CovarianceMatrixCollection

one vertex : tau

type:

NumericalPoint

 
Results  

a stationary covariance model : myCovarianceModel

type:

StationaryCovarianceModel

 

Python script for this UseCase :

script_docUC_StocProc_StationaryCovarianceFunction_UserDefined.py

# Create the covariance model
myCovarianceModel = UserDefinedStationaryCovarianceModel(
    myShiftMesh, myCovarianceCollection)

# Get the covariance function computed at the vertex tau
myCovarianceMatrix = myCovarianceModel(tau)

In the following example, we illustrate how to create a covariance model of dimension d=1 and when the domain 𝒟=[0,T].

We model the covariance function C^{stat} : 𝒟 → ℝ defined by C^{stat}(τ) = 1/(1+τ^2). In this example, the domain 𝒟 = [0,20] is discretized with the time step Δt = 0.5.
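A minimal sketch of this construction, assuming the constructor signatures used in the script above and a regular grid with 41 vertices on [0,20]:

from openturns import (RegularGrid, CovarianceMatrix,
                       CovarianceMatrixCollection,
                       UserDefinedStationaryCovarianceModel)

tMin, dt, N = 0.0, 0.5, 41
myShiftMesh = RegularGrid(tMin, dt, N)

# Fill the collection with C(tau) = 1 / (1 + tau^2) at each vertex
myCovarianceCollection = CovarianceMatrixCollection()
for i in range(N):
    tau = tMin + i * dt
    matrix = CovarianceMatrix(1)
    matrix[0, 0] = 1.0 / (1.0 + tau * tau)
    myCovarianceCollection.add(matrix)

myCovarianceModel = UserDefinedStationaryCovarianceModel(
    myShiftMesh, myCovarianceCollection)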

The Figure 64 draws the covariance function.

User defined stationary covariance model in dimension 1 when 𝒟=[0,T].
Figure 64

5.9.3 UC : Creation of a User defined covariance function

This use case illustrates how the User can define his own covariance model.

A covariance function C is defined by : C:𝒟×𝒟𝕄 d×d () where C(s ̲,t ̲) is a covariance matrix of dimension d.

The domain 𝒟 is discretized on a mesh.

OpenTURNS allows the User to define his own covariance model thanks to the object UserDefinedCovarianceModel defined from :

  • a mesh n defined by the vertices (t ̲ 0 ,,t ̲ N-1 ) and the associated simplices,

  • a collection of N(N+1)/2 covariance matrices stored in the object CovarianceMatrixCollection, noted (C̲̲_{k,ℓ})_{0 ≤ ℓ ≤ k ≤ N-1} where C̲̲_{k,ℓ} ∈ ℳ_{d×d}(ℝ).

    Care: The covariance matrices (C ̲ ̲ i,j ) 0jiN-1 must be given in the following order:

    C ̲ ̲ 0,0 ,C ̲ ̲ 1,0 ,C ̲ ̲ 1,1 ,C ̲ ̲ 2,0 ,C ̲ ̲ 2,1 ,C ̲ ̲ 2,2 ,

    which corresponds to the lower triangular part of the global covariance matrix:

    C̲̲_{0,0}
    C̲̲_{1,0}  C̲̲_{1,1}
    C̲̲_{2,0}  C̲̲_{2,1}  C̲̲_{2,2}
    ⋮

Using that collection of covariance matrices, OpenTURNS builds a covariance function which is a piecewise constant function defined on 𝒟×𝒟 by:

(s ̲,t ̲)𝒟×𝒟,C(s ̲,t ̲)=C ̲ ̲ k(s ̲),k(t ̲)

where k(s̲) is such that t̲_{k(s̲)} is the vertex of the mesh nearest to s̲.

It follows that:

C(t ̲ k ,t ̲ )=C ̲ ̲ k,

Concerning the collection of covariance matrices that is used to build the discretized covariance model, we have that:

  • the matrix C̲̲_{k,ℓ} is stored at the index n = ℓ + k(k+1)/2;

  • inversely, the matrix stored at index n in the collection of covariance matrices is the matrix C̲̲_{k,ℓ} where:

    k = \left\lfloor \frac{\sqrt{8n+1} - 1}{2} \right\rfloor

    and

    ℓ = n - \frac{k(k+1)}{2}
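These two indexing rules can be written as small helper functions; a pure Python sketch for illustration:

from math import floor, sqrt

def flat_index(k, l):
    # position of C_{k,l} (0 <= l <= k) in the flat collection
    return l + k * (k + 1) // 2

def matrix_indices(n):
    # inverse mapping : flat index n -> (k, l)
    k = int(floor((sqrt(8 * n + 1) - 1) / 2))
    l = n - k * (k + 1) // 2
    return k, l

print(flat_index(2, 1))    # 4 : C_{2,1} is the fifth matrix of the collection
print(matrix_indices(4))   # (2, 1)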
Requirements  

a mesh : myMesh

type:

Mesh

a collection of covariance matrices : myCovarianceCollection

type:

CovarianceMatrixCollection

two vertices : s,t

type:

NumericalPoint

 
Results  

a covariance model : myCovarianceModel

type:

UserDefinedCovarianceModel

 

Python script for this UseCase :

script_docUC_StocProc_NonStationaryCovarianceFunction_UserDefined.py

# Create the covariance model
myCovarianceModel = UserDefinedCovarianceModel(myMesh, myCovarianceCollection)

# Get the covariance function computed at (s,t)
# for example (1.5, 2.5)
s = 1.5
t = 2.5
myCovarianceMatrix = myCovarianceModel(s, t)

In the following example, we illustrate the piecewise constant covariance that OpenTURNS builds from a collection of covariance matrices that comes from the continuous covariance function C:𝒟×𝒟 defined by:

C(s,t) = \exp\left(-\frac{4|s-t|}{1+s^2+t^2}\right)   (83)

where the domain 𝒟 = [-4,4] is discretized on a regular time grid with N = 64 points: (t_0, …, t_63).

As we have n = 1 and d = 1, the covariance matrices are scalars and the collection corresponds to (C(t_i, t_j))_{0 ≤ j ≤ i ≤ 63}.
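A minimal sketch of the construction of such a collection in the prescribed order, assuming a regular grid on [-4,4] with N = 64 vertices and taking (83) for the covariance values:

from math import exp
from openturns import (RegularGrid, CovarianceMatrix,
                       CovarianceMatrixCollection)

N = 64
myMesh = RegularGrid(-4.0, 8.0 / (N - 1), N)

# Collection filled in the order C_{0,0}, C_{1,0}, C_{1,1}, C_{2,0}, ...
myCovarianceCollection = CovarianceMatrixCollection()
for k in range(N):
    tk = myMesh.getValue(k)
    for l in range(k + 1):
        tl = myMesh.getValue(l)
        matrix = CovarianceMatrix(1)
        matrix[0, 0] = exp(-4.0 * abs(tk - tl) / (1.0 + tk * tk + tl * tl))
        myCovarianceCollection.add(matrix)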

The Figure 65 draws the iso contours of the continuous model C and of the piecewise constant model built by OpenTURNS: the discretized model approaches the continuous one with good precision.

User defined non stationary covariance model in dimension 1 on 𝒟=[-4,4].
Figure 65

5.9.4 UC : Estimation of a stationary covariance function

Let X:Ω×𝒟 d be a multivariate stationary normal process of dimension d. We only treat here the case where the domain is of dimension 1: 𝒟 (n=1).

If the process is continuous, then 𝒟=. In the discrete case, 𝒟 is a lattice.

X is supposed a second order process with zero mean. It is entirely defined by its covariance function C stat :𝒟 d×d (), defined by C stat (τ)=𝔼X s X s+τ t for all s𝒟.

In addition, we suppose that its spectral density function S: + (d) is defined, where + (d) d () is the set of d-dimensional positive definite hermitian matrices.

The objective of this use case is to estimate C stat from a field or a sample of fields from the process X, using first the estimation of the spectral density function and then mapping S into C stat using the inversion relation (37), when it is possible.

As the mesh is a time grid (n=1), the fields can be interpreted as time series.

The estimation algorithm is outlined hereafter.

Let (t_0, t_1, …, t_{N-1}) be the time grid on which the process is observed and let (X̲_0, …, X̲_{M-1}) be M independent realizations of X, or M segments of one realization of the process.

Using (37), the covariance function writes:

C_{i,j}^{stat}(τ) = \int_{ℝ} e^{2iπfτ} S_{i,j}(f) \, df   (84)

where C i,j stat is the element (i,j) of the matrix C stat (τ) and S i,j (f) the one of S(f). The integral (84) is approximated by its evaluation on the finite domain Ω:

C_{i,j}^{stat}(τ) = \int_{Ω} e^{2iπfτ} S_{i,j}(f) \, df   (85)

Let us consider the partition of the domain as follows:

  • Ω = [-Ω_c, Ω_c] is subdivided into M segments Ω = ⋃_{k=1}^{M} ℳ_k with ℳ_k = [f_k - Δf/2, f_k + Δf/2];

  • Δf is the frequency step, Δf = 2Ω_c / M;

  • f_k are the frequencies at which the spectral density is computed, f_k = -Ω_c + (k - 1/2)Δf = (2k-1-M)Δf/2 with k = 1, 2, …, M.

The equation (85) can be rewritten as:

C_{i,j}^{stat}(τ) = \sum_{k=1}^{M} \int_{ℳ_k} e^{2iπfτ} S_{i,j}(f) \, df

We focus on the integral on each subdomain k . Using numerical approximation, we have:

\int_{ℳ_k} e^{2iπfτ} S_{i,j}(f) \, df ≈ Δf \, S_{i,j}(f_k) \, e^{2iπ f_k τ}

The values of τ must be in correspondence with the frequency values, with respect to the Shannon criterion. Thus the temporal domain of estimation is the following:

  • Δt is the time step, Δt = 1/(2Ω_c), so that ΔfΔt = 1/M;

  • 𝒯̃ = [-T, T] is subdivided into M segments 𝒯̃ = ⋃_{m=1}^{M} 𝒯_m with 𝒯_m = [t_m - Δt/2, t_m + Δt/2];

  • t_m are the time values at which the covariance is estimated, t_m = -MΔt/2 + (m - 1/2)Δt = (2m-1-M)Δt/2.

The estimate of the covariance value at time value τ m depends on the quantities of form:

\int_{ℳ_k} e^{2iπfτ_m} S_{i,j}(f) \, df ≈ Δf \, S_{i,j}(f_k) \, e^{2iπ f_k τ_m}   (86)

We develop the expression of f k and τ m and we get:

2m-1-M = 2(m-1) - (M-1)

2k-1-M = 2(k-1) - (M-1)

Thus,

(2m-1-M)(2k-1-M)=4(m-1)(k-1)-(M-1)(2m-1-M)-2(k-1)(M-1)

and :

(2m-1-M)(2k-1-M) \, \frac{Δt}{2} \, \frac{Δf}{2} = \frac{(m-1)(k-1)}{M} - \frac{(M-1)(2m-1-M)}{4M} - \frac{(k-1)(M-1)}{2M}

We denote :

δ(m) = \exp\left(-i\frac{π}{2M}(M-1)(2m-1-M)\right), \qquad φ_k = \exp\left(-i\frac{π}{M}(k-1)(M-1)\right) S_{i,j}(f_k)

Finally, we get the following expression for the integral in (86) :

\int_{ℳ_k} e^{2iπfτ_m} S_{i,j}(f) \, df ≈ Δf \, e^{2i\frac{π}{M}(m-1)(k-1)} \, δ(m) \, φ_k

It follows that :

C_{i,j}^{stat}(τ_m) = Δf \, δ(m) \sum_{k=1}^{M} φ_k^{*} \, e^{2i\frac{π}{M}(m-1)(k-1)}   (87)

In the equation (87), we recognize a discrete inverse Fourier transform.
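A minimal numpy sketch of equation (87) for a scalar process, written as the direct sum (in practice an inverse FFT evaluates it faster):

import numpy as np

# S_values : sampled spectral values S(f_k), k = 1..M, on [-omega_c, omega_c]
# returns : covariance values C(tau_m), m = 1..M, following (87)
def covariance_from_spectrum(S_values, omega_c):
    M = len(S_values)
    df = 2.0 * omega_c / M
    k = np.arange(1, M + 1)
    m = np.arange(1, M + 1)
    phi = np.exp(-1j * np.pi / M * (k - 1) * (M - 1)) * S_values
    delta = np.exp(-1j * np.pi / (2.0 * M) * (M - 1) * (2 * m - 1 - M))
    E = np.exp(2j * np.pi / M * np.outer(m - 1, k - 1))
    return df * delta * (E @ np.conj(phi))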

OpenTURNS builds an estimation of the stationary covariance function on a ProcessSample or TimeSeries using the previous algorithm implemented in the StationaryCovarianceModelFactory class. The result consists in a UserDefinedStationaryCovarianceModel which is easy to manipulate.

Such an object is composed of a time grid and a collection of K square matrices of dimension d. K corresponds to the number of time steps of the final time grid on which the covariance is estimated. When estimated from a time series , the UserDefinedStationaryCovarianceModel may have a time grid different from the initial time grid of the time series.

Requirements  

a time series myTimeSeries

type:

TimeSeries

a sample of time series mySample

type:

ProcessSample

a spectral model factory mySpectralFactory

type:

SpectralModelFactory

 
Results  

a factory myFactory

type:

StationaryCovarianceModelFactory

a stationary covariance model: myEstimatedModel_TS, myEstimatedModel_PS,

type:

UserDefinedStationaryCovarianceModel

 

Python script for this Use Case:

script_docUC_StocProc_StationaryCovarianceFunction_Estimation.py

# Build a factory of stationary covariance function
myCovarianceFactory = StationaryCovarianceModelFactory()

# Set the spectral factory algorithm
myCovarianceFactory.setSpectralModelFactory(mySpectralFactory)

# Check the current spectral factory
print(myCovarianceFactory.getSpectralModelFactory())

#########################################
# Case 1 : Estimation on a ProcessSample

# The spectral model factory computes the spectral density function
# without using the block and overlap arguments of the Welch factories
myEstimatedModel_PS = myCovarianceFactory.build(mySample)

#########################################
# Case 2 : Estimation on a TimeSeries

# The spectral model factory computes the spectral density function using
# the block and overlap arguments of spectral model factories
myEstimatedModel_TS = myCovarianceFactory.build(myTimeSeries)

#########################################
# Manipulation of the estimated model

# Evaluate the covariance function at each time step
# Care : if estimated from a time series, the time grid has changed
for i in range(N):
    tau = myTimeGrid.getValue(i)
    cov = myEstimatedModel_PS(tau)

The following example illustrates the case where the available data is a sample of 10^4 realizations of the process, defined on the time grid [0,10], discretized every Δt = 0.1. The covariance model is the Exponential model parameterized by λ = 1 and a = 1, i.e. C^{stat}(τ) = exp(-|τ|).

The Figure (66) draws the graph of the exact covariance function and its estimation.

Covariance function C(0,t) for t in the time grid : estimation and exact model.
Figure 66

5.9.5 UC : Estimation of a non stationary covariance function

Let X:Ω×𝒟 d be a multivariate normal process of dimension d where 𝒟 n . X is supposed to be a second order process and we note C:𝒟×𝒟 d×d () its covariance function.

The objective of this use case is to estimate C from several fields generated by the process X. We suppose that the process is not stationary.

We denote (t ̲ 0 ,,t ̲ N-1 ) the vertices of the common mesh and (x ̲ 0 k ,,x ̲ N-1 k ) the associated values of the field k. We suppose that we have K fields.

We recall that the covariance function C writes:

(s ̲,t ̲)𝒟×𝒟,C(s ̲,t ̲)=𝔼X s ̲ -m(s ̲)X t ̲ -m(t ̲) t (88)

where the mean function m:𝒟 d is defined by :

t ̲𝒟,m(t ̲)=𝔼X t ̲ (89)

First, OpenTURNS estimates the covariance function C on the vertices of the mesh. At each vertex t̲_i, we use the empirical mean estimator applied to the K fields to estimate :

  1. m(t̲_i) at the vertex t̲_i:

    \forall t̲_i, \quad m(t̲_i) ≃ \frac{1}{K} \sum_{k=1}^{K} \underline{x}_i^{k}   (90)

  2. C(t̲_i, t̲_j) at the vertices (t̲_i, t̲_j):

    \forall (t̲_i, t̲_j) ∈ 𝒟×𝒟, \quad C(t̲_i, t̲_j) ≃ \frac{1}{K} \sum_{k=1}^{K} \left(\underline{x}_i^{k} - m(t̲_i)\right) \left(\underline{x}_j^{k} - m(t̲_j)\right)^t   (91)
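The empirical estimators (90) and (91) can be sketched directly; a minimal numpy version, assuming K scalar fields (d = 1) sharing the same mesh and stored as a (K, N) array:

import numpy as np

def estimate_mean_and_covariance(fields):
    # fields : array of shape (K, N), one row per field
    K = fields.shape[0]
    mean = fields.mean(axis=0)                 # (90) : m(t_i)
    centered = fields - mean
    covariance = centered.T @ centered / K     # (91) : C(t_i, t_j)
    return mean, covariance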

Then, OpenTURNS builds a piecewise constant covariance function defined on 𝒟×𝒟 by:

\forall (s̲, t̲) ∈ 𝒟×𝒟, \quad C(s̲, t̲) = C(t̲_k, t̲_l)

where k is such that t̲_k is the vertex of the mesh nearest to s̲, and l is such that t̲_l is the vertex nearest to t̲.

OpenTURNS uses the object NonStationaryCovarianceModelFactory which creates a UserDefinedCovarianceModel.

Requirements  

a set of fields myFieldSample

type:

ProcessSample

 
Results  

a factory myFactory

type:

NonStationaryCovarianceModelFactory

a covariance model: myEstimatedModel

type:

UserDefinedCovarianceModel

 

Python script for this Use Case:

script_docUC_StocProc_NonStationaryCovarianceFunction_Estimation.py

# Build a covariance model factory
myFactory = NonStationaryCovarianceModelFactory()

# Estimation on the ProcessSample
myEstimatedModel = myFactory.build(myFieldSample)

In the following example, we illustrate the estimation of the non stationary covariance model C : 𝒟×𝒟 → ℝ, with 𝒟 = [-4,4], defined by:

C(s,t) = \exp\left(-\frac{4|s-t|}{1+s^2+t^2}\right)   (92)

The domain 𝒟 is discretized on a mesh which is a time grid with 64 points.

We build a normal process X : Ω×[-4,4] → ℝ with zero mean and C as covariance function. OpenTURNS discretizes the covariance model C using C(t_k, t_ℓ) for each pair of vertices (t_k, t_ℓ) of the mesh.

We generate N = 10^3 fields from the process X, from which we estimate the covariance model C. The Figure 67 draws the iso contours of the estimated model compared to the theoretical one.

Estimation of a non stationary covariance model of a scalar process on [-4,4].
Figure 67

5.9.6 UC : Creation of a parametric spectral density function

Let X:Ω×𝒟 d be a multivariate stationary normal process of dimension d. We only treat here the case where the domain is of dimension 1: 𝒟 (n=1).

If the process is continuous, then 𝒟=. In the discrete case, 𝒟 is a lattice.

X is supposed to be a second order process with zero mean and we suppose that its spectral density function S: + (d) defined in (36) exists. + (d) d () is the set of d-dimensional positive definite hermitian matrices.

This use case illustrates how the User can create a spectral density function from parametric models. OpenTURNS implements the Cauchy spectral model as a parametric model for the spectral density function S.

The Cauchy spectral model: It is associated with the Exponential covariance model. The Cauchy spectral model is defined by :

S_{ij}(f) = \frac{4 R_{ij} \, a_i a_j (λ_i + λ_j)}{(λ_i + λ_j)^2 + (4πf)^2}, \quad 1 ≤ i, j ≤ d   (93)

where R ̲ ̲, a ̲ and λ ̲ are the parameters of the Exponential covariance model defined in section 5.9.1. The relation (93) can be explicited with the spatial covariance function C ̲ ̲ spat (τ) defined in (82).

OpenTURNS defines this model thanks to the object CauchyModel.

Requirements  

a ̲, λ ̲ : amplitude, scale

type:

NumericalPoint

R ̲ ̲, : spatialCorrelation

type:

CorrelationMatrix

C ̲ ̲ s , : spatialCovariance

type:

CovarianceMatrix

a time grid : myTimeGrid

type:

RegularGrid

 
Results  

a spectral model : mySpectralModel_Corr, mySpectralModel_Cov

type:

SpectralModel

 

Python script for this UseCase :

script_docUC_StocProc_DensitySpectralFunction_Param.py

# Create the spectral model
# for example : the Cauchy model

# from the amplitude, scale and spatialCorrelation
mySpectralModel_Corr = CauchyModel(amplitude, scale, spatialCorrelation)

# or from the scale and spatialCovariance
mySpectralModel_Cov = CauchyModel(scale, spatialCovariance)
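As a cross-check, formula (93) can also be evaluated by hand; a minimal numpy sketch with illustrative values for a̲, λ̲ and R̲̲ (d = 2):

import numpy as np

# Illustrative values (not taken from the use case)
a = np.array([1.0, 2.0])      # amplitude
lam = np.array([1.0, 3.0])    # scale
R = np.array([[1.0, 0.3],
              [0.3, 1.0]])    # spatial correlation

def cauchy_spectral_density(f):
    # Formula (93), evaluated for all (i, j) at once
    lam_sum = lam[:, None] + lam[None, :]
    return (4.0 * R * np.outer(a, a) * lam_sum
            / (lam_sum ** 2 + (4.0 * np.pi * f) ** 2))

print(cauchy_spectral_density(0.5))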


5.9.7 UC : Creation of a User defined spectral density function

Let X:Ω×𝒟 d be a multivariate stationary normal process of dimension d. We only treat here the case where the domain is of dimension 1: 𝒟 (n=1).

If the process is continuous, then 𝒟=. In the discrete case, 𝒟 is a lattice.

X is supposed to be a second order process with zero mean and we suppose that its spectral density function S: + (d) defined in (36) exists. + (d) d () is the set of d-dimensional positive definite hermitian matrices.

This use case illustrates how the User can define his own spectral density function. OpenTURNS allows it thanks to the object UserDefinedSpectralModel defined from :

  • a frequency grid (-f c ,,f c ) with step δf, stored in the object RegularGrid,

  • a collection of hermitian matrices 𝕄 d () stored in the object HermitianMatrixCollection, which are the images of each point of the frequency grid through the density spectral function.

OpenTURNS builds a piecewise constant function on [-f_c, f_c], where the intervals on which the spectral density function is constant are centered on the points of the frequency grid and have length δf. Then, it is possible to evaluate the spectral density function for a given frequency thanks to the method computeSpectralDensity: if the frequency is not inside the interval [-f_c, f_c], OpenTURNS returns an exception. Otherwise, it returns the hermitian matrix of the subinterval of [-f_c, f_c] that contains the given frequency.

Requirements  

a frequency grid : myFrequencyGrid

type:

RegularGrid

a collection of hermitian matrices : myHermitianCollection

type:

HermitianMatrixCollection

 
Results  

a spectral model : mySpectralModel

type:

SpectralModel

 

Python script for this UseCase :

script_docUC_StocProc_DensitySpectralFunction_UserDefined.py

# Create the spectral model
mySpectralModel = UserDefinedSpectralModel(myFrequencyGrid, myHermitianCollection)

# Get the spectral density function computed at the first frequency value
firstFrequency = myFrequencyGrid.getStart()
frequencyStep = myFrequencyGrid.getStep()
myFirstHermitian = mySpectralModel(firstFrequency)

# Get the spectral function at firstFrequency + 0.3 * frequencyStep
mySpectralModel(firstFrequency + 0.3 * frequencyStep)

In the following example, we illustrate how to create a modified low pass model of dimension d=1 with exponential decrease defined by : S: where

  • Frequency value f should be positive,

  • for f<5Hz, the spectral density function is constant : S(f)=1.0,

  • for f > 5 Hz, the spectral density function is equal to S(f) = \exp\left(-2.0 (f - 5.0)^2\right).

The frequency grid is ]0,f c ]=]0,10] with δf=0.2 Hz. The Figure 68 draws the spectral density.
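A minimal sketch of this construction, assuming the frequency grid ]0,10] with δf = 0.2 (50 points) and the constructor signature used in the script above:

from math import exp
from openturns import (RegularGrid, HermitianMatrix,
                       HermitianMatrixCollection, UserDefinedSpectralModel)

fStep = 0.2
fNumber = 50
myFrequencyGrid = RegularGrid(fStep, fStep, fNumber)

# Fill the collection with the low pass model described above
myHermitianCollection = HermitianMatrixCollection()
for i in range(fNumber):
    f = myFrequencyGrid.getValue(i)
    matrix = HermitianMatrix(1)
    matrix[0, 0] = 1.0 if f <= 5.0 else exp(-2.0 * (f - 5.0) ** 2)
    myHermitianCollection.add(matrix)

mySpectralModel = UserDefinedSpectralModel(myFrequencyGrid, myHermitianCollection)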

User defined spectral model
Figure 68

5.9.8 UC : Estimation of a spectral density function

Let X:Ω×𝒟 d be a multivariate stationary normal process of dimension d. We only treat here the case where the domain is of dimension 1: 𝒟 (n=1).

If the process is continuous, then 𝒟=. In the discrete case, 𝒟 is a lattice.

X is supposed to be a second order process with zero mean and we suppose that its spectral density function S: + (d) defined in (36) exists. + (d) d () is the set of d-dimensional positive definite hermitian matrices.

The objective of this use case is to estimate the spectral density function S from data, which can be a sample of time series or one time series.

Depending on the available data, we proceed differently :

  • if the data correspond to several independent realizations of the process (stored in a ProcessSample object), a statistical estimate is performed using statistical average of a realization-based estimator;

  • if the data correspond to one realization of the process at different time stamps (stored in a TimeSeries object), the process being observed during a long period of time, an ergodic estimate is performed using a time average of an ergodic-based estimator.

The estimation of the spectral density function from data may use some parametric or non parametric methods.

The Welch method is a non parametric estimation technique, known to perform well. We detail it in the case where the available data on the process is a time series whose values (x̲_0, …, x̲_{N-1}) are associated to the time grid (t_0, …, t_{N-1}), which is a discretization of the domain [0,T].

We assume that the process has a spectral density S defined on |f|T 2.

The method is based on the segmentation of the time series into K segments of length L, possibly overlapping (size of overlap R).

Let X ̲ 1 (j),j=0,1,...,L-1 be the first such segment. Then :

X ̲ 1 (j)=X ̲(j),j=0,1,...,L-1

Applying the same decomposition,

X ̲ 2 (j)=X ̲(j+(L-R)),j=0,1,...,L-1

and finally :

X ̲ K (j)=X ̲(j+(K-1)(L-R)),j=0,1,...,L-1

The objective is to get a statistical estimator from these K segments. We define the periodogram associated with the segment X ̲ k by:

\underline{X}_k(f_p, T) = Δt \sum_{n=0}^{L-1} \underline{x}(nΔt) \exp\left(-j \frac{2πpn}{N}\right), \quad p = 0, …, L-1

\hat{G}_{\underline{x}}(f_p, T) = \frac{2}{T} \, \underline{X}_k(f_p, T) \, \underline{X}_k(f_p, T)^{*t}, \quad p = 0, …, L/2 - 1

with Δt = T/N and f_p = \frac{p}{T} = \frac{p}{N} \frac{1}{Δt}.

It has been proven that the periodogram has bad statistical properties. Indeed, two quantities summarize the properties of an estimator: its bias and its variance. The bias is the expected error one makes on the average using only a finite number of time series of finite length, whereas the covariance is the expected fluctuations of the estimator around its mean value. For the periodogram, we have:

  • Bias = 𝔼\left[\hat{G}_{\underline{x}}(f_p,T) - G_{\underline{X}}(f_p)\right] = \left(\frac{1}{T} W_B(f_p,T) - δ_0\right) * G_{\underline{X}}(f_p) where W_B(f_p,T) = \left(\frac{\sin πfT}{πfT}\right)^2 is the squared module of the Fourier transform of the function w_B(t,T) (Bartlett window) defined by:

    w_B(t,T) = 1_{[0,T]}(t)   (94)

    This estimator is biased, but the bias vanishes when T → ∞ since \lim_{T→∞} \frac{1}{T} W_B(f_p,T) = δ_0.

  • Covariance ≃ \left(\frac{1}{T} W_B(f_p,T) * G_{\underline{X}}(f_p)\right) G_{\underline{X}}(f_p) as T → ∞, which means that the fluctuations of the periodogram are of the same order of magnitude as the quantity to be estimated, and thus the estimator is not convergent.

The periodogram's lack of convergence may be easily fixed if we consider the averaged periodogram over K independent time series or segments:

\hat{G}_{\underline{x}}(f_p, T) = \frac{2}{KT} \sum_{k=0}^{K-1} \underline{X}^{(k)}(f_p,T) \, \underline{X}^{(k)}(f_p,T)^{*t}   (95)

The averaging process has no effect on the significant bias of the periodogram.

The use of a tapering window w(t,T) may significantly reduce it. The time series x ̲(t) is replaced by a tapered time series w(t,T)x ̲(t) in the computation of X ̲(f p ,T). One gets :

𝔼\left[\hat{G}_{\underline{x}}(f_p,T) - G_{\underline{X}}(f_p)\right] = \left(\frac{1}{T} W(f_p,T) - δ_0\right) * G_{\underline{X}}(f_p)   (96)

where W(f p ,T) is the square module of the Fourier transform of w(t,T) at the frequency f p . A judicious choice of tapering function such as the Hanning window w H can dramatically reduce the bias of the estimate:

w_H(t,T) = \sqrt{\frac{8}{3}} \left(1 - \cos^2\left(\frac{πt}{T}\right)\right) 1_{[0,T]}(t)   (97)

OpenTURNS builds an estimation of the spectrum on a TimeSeries by fixing the number of segments, the overlap size parameter and a FilteringWindows. The available ones are :

  • The Hamming window

    w(t,T) = \frac{1}{K} \left(0.54 - 0.46 \cos\left(\frac{2πt}{T}\right)\right) 1_{[0,T]}(t)   (98)

    with K = \sqrt{0.54^2 + \frac{0.46^2}{2}}

  • The Hanning window described in (97), which is generally the most useful one.

The result consists in a UserDefinedSpectralModel which is simple to manipulate.

Furthermore, OpenTURNS builds an estimation of the spectral density function on a process sample by considering that the k-th segment is the k-th time series of the process sample. The User should pay attention that the data must be centered; otherwise the User has to center them himself.
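For illustration, a schematic numpy version of the averaged, tapered periodogram (95)-(97) for a scalar zero-mean series, with the segment length L and overlap R of the text (this is a sketch of the principle, not the OpenTURNS implementation):

import numpy as np

def welch_periodogram(x, dt, L, R):
    # x : zero-mean scalar samples, dt : time step, L : segment length, R : overlap
    T = L * dt
    t = (np.arange(L) + 0.5) * dt
    w = np.sqrt(8.0 / 3.0) * (1.0 - np.cos(np.pi * t / T) ** 2)  # Hanning taper (97)
    starts = range(0, len(x) - L + 1, L - R)
    G = np.zeros(L // 2)
    K = 0
    for s in starts:
        X = dt * np.fft.fft(w * x[s:s + L])        # tapered segment transform
        G += 2.0 / T * np.abs(X[:L // 2]) ** 2     # segment periodogram
        K += 1
    freqs = np.arange(L // 2) / T
    return freqs, G / K                            # averaged estimate (95)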

Requirements  

number of segments mySegmentNumber

type:

integer

size of overlap myOverlapSize

type:

integer

a time series myTimeSeries

type:

TimeSeries

a set of time series mySample

type:

ProcessSample

 
Results  

a spectral model : myEstimatedModel_TS, myEstimatedModel_PS

type:

UserDefinedSpectralModel

 

Python script for this Use Case :

script_docUC_StocProc_DensitySpectralFunction_Estimation.py

# Build a spectral model factory
myFactory = WelchFactory(Hanning(), mySegmentNumber, myOverlapSize)

# Estimation on a TimeSeries or on a ProcessSample
myEstimatedModel_TS = myFactory.build(myTimeSeries)
myEstimatedModel_PS = myFactory.build(mySample)

# Change the filtering window
myFactory.setFilteringWindows(Hamming())

# Get the FFT algorithm
myFFT = myFactory.getFFTAlgorithm()

# Get the frequency grid
frequencyGrid = myEstimatedModel_PS.getFrequencyGrid()

The following example illustrates the case where the available data is a sample of 10^3 realizations of the process, defined on the time grid [0,102.3], discretized every Δt = 0.1. The spectral model of the process is the Cauchy model parameterized by λ̲ = (5) and a̲ = (3).

The Figure (69) draws the graph of the real spectral model and its estimation from the sample of time series.

Comparison of spectral density : estimation with the Welch method vs Cauchy Model
Figure 69

5.9.9 UC : Creation of stationary parametric second order model

This use case details how to create a stationary second order model that ensures the consistency between the covariance function C^{stat} (see section 5.9.1) and the spectral density function S defined in (36). We only treat here the case where the domain is of dimension 1: 𝒟 ⊂ ℝ (n=1).

If the process is continuous, then 𝒟=. In the discrete case, 𝒟 is a lattice.

The consistency is ensured through the relation (36); in some cases it is not possible, for example when the spectral model is defined but the associated covariance model has no analytical expression.

OpenTURNS saves the complete information of a second order model in the object SecondOrderModel.

A second order model can be used to create zero-mean stationary normal processes, stored either in a TemporalNormalProcess object or in a SpectralNormalProcess one (see the use case of section 5.9.10).

OpenTURNS implements the parametric second order model ExponentialCauchy where the covariance function is the Exponential model (see the use case 5.9.1) and the associated spectral density function is the Cauchy model (see the use case 5.9.6) .

Requirements  

a ̲, λ ̲ : amplitude, scale

type:

NumericalPoint

R ̲ ̲ : spatialCorrelation

type:

CorrelationMatrix

C ̲ ̲ : spatialCovariance

type:

CovarianceMatrix

 
Results  

a second order model : mySecondOrderModel

type:

SecondOrderModel

 

Python script for this UseCase :

# Create the second order model
# for example : the Exponential Cauchy

# from the amplitude, scale and spatialCorrelation
mySecondOrderModel = ExponentialCauchy(amplitude, scale, spatialCorrelation)

# or from the scale and spatialCovariance
mySecondOrderModel = ExponentialCauchy(scale, spatialCovariance)


5.9.10 UC : Creation of a normal process

This Use Case details how to create a normal process X:Ω×𝒟 d either from its temporal covariance function or/and its spectral density function S (when it exists).

A normal process defined by its temporal covariance function may have a trend: in that case, the normal process is the sum of the trend function and a zero-mean normal process.

A zero-mean normal process is completely defined either by :

  • its (temporal) covariance function C:𝒟×𝒟 d×d (), which writes, in the stationary case: C stat :𝒟 d×d (). In that case, the normal process is used through its temporal view only, thanks to the object TemporalNormalProcess,

  • or its bilateral spectral density function (when it exists), in the stationary case, S: + (d). In that case, the normal process is used through its spectral view only, thanks to the object SpectralNormalProcess. OpenTURNS restricts that possibility to processes for which the domain is of dimension 1: 𝒟 (n=1).

When the zero-mean process is stationary, in order to manipulate the same normal process through both the temporal and spectral views, it is necessary to create a second order model that insures the coherence between the covariance function C stat and the spectral density function S through the relation (36).

In that purpose, the object SecondOrderModel is built (see the use case 5.9.9) and then used to create a TemporalNormalProcess and the associated SpectralNormalProcess .

The choice between a TemporalNormalProcess and a SpectralNormalProcess based on the same second order model is motivated by performance considerations of the specific algorithms either in terms of memory requirements and CPU requirements. For example, the performance of the algorithms related to the spectral density function (FFT) varies greatly if the number of frequencies is a power of 2 or not.

In the case of a TemporalNormalProcess, we can ask for the trend function of the process thanks to the method getTrend or the covariance model and evaluate the covariance function thanks to the method computeCovariance or discretize it on a specific mesh thanks to the method discretizeCovariance, which creates the matrix defined in (33).

In the case of a SpectralNormalProcess, we can ask for the spectral model and evaluate the bilateral spectral density function S defined in (36) thanks to the method computeSpectralDensity.

We note N the number of vertices of the mesh on which 𝒟 is discretized.

The first call to the method getRealization implies different actions according to the type of the normal process :

  • in case of a TemporalNormalProcess, OpenTURNS builds the covariance matrix C ̲ ̲ 1,,N Nd,Nd () defined in (33), using the method discretizeCovariance. Then C ̲ ̲ 1,,N is factorized using the Cholesky algorithm: C ̲ ̲ 1,,N =G ̲ ̲G ̲ ̲ t .

  • in case of a SpectralNormalProcess, OpenTURNS builds the N bilateral spectral density matrices (S̲̲(f_i))_{1≤i≤N} defined in (36), where (f_i)_{1≤i≤N} are the frequencies associated to the time grid. Then each matrix S̲̲(f_i) is factorized using the Cholesky algorithm: S̲̲(f_i) = H̲̲_i H̲̲̄_i^t.

These matrices G ̲ ̲ and H ̲ ̲ i are used to get some realizations of the process from realizations of a standard normal process (with zero mean and unit covariance matrix).

In order to get the Cholesky factor of C̲̲_{1,…,N} or (S̲̲(f_i))_{1≤i≤N}, we might need to scale the matrices for numerical reasons. This scaling consists in replacing the matrix C̲̲_{1,…,N} (resp. S̲̲(f_i)) by C̲̲_{1,…,N} + ϵ I̲̲ (resp. S̲̲(f_i) + ϵ I̲̲), with I̲̲ the identity matrix and ϵ a small scalar.

In this case, the User gets a warning message indicating the value of ϵ used to obtain the Cholesky factor.
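The regularization described above can be sketched as follows; an illustrative numpy version (not the OpenTURNS implementation):

import numpy as np

# A small jitter epsilon * I is added to the discretized covariance matrix
# until the Cholesky factorization succeeds, then one realization is drawn
# as G * xi with xi a standard normal vector.
def sample_gaussian(C, rng=np.random.default_rng(0)):
    eps = 0.0
    while True:
        try:
            G = np.linalg.cholesky(C + eps * np.eye(C.shape[0]))
            break
        except np.linalg.LinAlgError:
            eps = 10.0 * eps if eps > 0.0 else 1e-12 * np.trace(C) / C.shape[0]
            print("warning: scaling with epsilon =", eps)
    xi = rng.standard_normal(C.shape[0])
    return G @ xi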

Requirements  

a mesh : myMesh

type:

Mesh

a time grid : myTimeGrid

type:

RegularGrid

a trend function : myTrend

type:

TrendTransform

covariance models : myCovarianceModel

type:

StationaryCovarianceModel

a spectral model : mySpectralModel

type:

SpectralModel

a second order model : mySecondOrderModel

type:

SecondOrderModel

 
Results  

normal processes : myTempNormProc1, myTempNormProc2

type:

TemporalNormalProcess

the normal process : mySpectNormProc1, mySpectNormProc2

type:

SpectralNormalProcess

 

Python script for this UseCase :

script_docUC_StocProc_NormalProcess_Creation.py

####################################
# CASE 1 : the normal process is defined by its temporal covariance
# function ONLY

# Create a normal process with the temporal view ONLY
myTempNormProc1 = TemporalNormalProcess(myTrend, myCovarianceModel, myMesh)

####################################
# CASE 2 : the normal process is defined by its spectral density function ONLY
# Stationary process ONLY
# Care! The mesh must be a time grid (n=1)

# Create a normal process with the spectral view ONLY
mySpectNormProc1 = SpectralNormalProcess(mySpectralModel, myTimeGrid)

##########################################################################
# CASE 3 : the normal process is defined by a second order model that
# contains both the temporal covariance function and the associated
# spectral density function
# Stationary process ONLY
# Care! The mesh must be a time grid (n=1)

# Create the normal process to use its temporal properties
myTempNormProc2 = TemporalNormalProcess(mySecondOrderModel, myTimeGrid)

# Create the normal process to use its spectral properties
mySpectNormProc2 = SpectralNormalProcess(mySecondOrderModel, myTimeGrid)

The example illustrated below is a bivariate normal process whose covariance is the Exponential model parameterized by λ̲ = (1,1), a̲ = (1,1) and R̲̲ the identity matrix. We build a TemporalNormalProcess and a SpectralNormalProcess using the same SecondOrderModel and the same RegularGrid. Figures (70) to (71) respectively draw the graphs of:

  • one realization of the temporal process (both marginals are illustrated),

  • a sample of 5 realizations of the process (the first marginal is presented here) based on the covariance of the second order model,

  • one realization of the spectral process (both marginals are illustrated),

  • a sample of 5 realizations of the process (the first marginal is presented here) based on the spectral density of the second order model.

Realization of TemporalNormalProcess
5 realizations of the TemporalNormalProcess
Figure 70
Realization of SpectralNormalProcess
5 realizations of the SpectralNormalProcess
Figure 71

5.10 Other processes

5.10.1 UC : Creation of a White Noise

This section details first how to create and manipulate a white noise.

A second order white noise ε : Ω×𝒟 → ℝ^d is a stochastic process of dimension d such that the covariance function writes C(s̲,t̲) = δ(t̲-s̲) C(s̲,s̲), where C(s̲,s̲) is the covariance matrix of the process at vertex s̲ and δ is the Kronecker function.

A process ε is a white noise if, for any finite family of locations (t̲_i)_{i=1,…,n} in 𝒟, the random variables (ε_{t̲_i})_{i=1,…,n} are independent and identically distributed.

OpenTURNS proposes to model it through the object WhiteNoise defined on a mesh and a distribution with zero mean and finite standard deviation.

If the distribution has a mean different from zero, OpenTURNS warns the User and does not allow the creation of such a white noise.

Requirements  

a distribution : myDist

type:

Distribution

a mesh : myMesh

type:

Mesh

 
Results  

a white noise : myWN

type:

WhiteNoise

 

Python script for this UseCase :

script_docUC_StocProc_WhiteNoise.py

# Create a white noise
myWN = WhiteNoise(myDist, myMesh)

The first example, illustrated in Figure 72, is a univariate white noise distributed according to the standard normal distribution. We draw some realizations on the time grid [0,99] with Δt = 1.0.

The second example, illustrated in Figure 73, is a univariate white noise distributed according to the standard normal distribution. We draw some realizations on a mesh of dimension 2.

The two graphs of Figure 73 respectively draw one realization of the process when the values are interpolated and when they are not.

Realization of a white noise with distribution 𝒩(0,1)
5 realizations of a white noise with distribution 𝒩(0,1).
Figure 72
One realization of the white noise defined on the mesh with distribution 𝒩(0,1) when the values are interpolated.
Previous realization of the white noise defined on the mesh with distribution 𝒩(0,1) when the values are not interpolated.
Figure 73

5.10.2 UC : Creation of a Random Walk

This section details first how to create and manipulate a random walk.

A random walk X:Ω×𝒟 d is a process where 𝒟= discretized on the time grid (t i ) i0 such that:

X_{t_0} = \underline{x}_{t_0}

\forall n > 0, \quad X_{t_n} = X_{t_{n-1}} + ε_{t_n}   (99)

where x ̲ 0 d and ε is a white noise of dimension d.

OpenTURNS proposes to model it through the object RandomWalk defined thanks to the origin, the distribution of the white noise and the time grid.

Requirements  

a numerical point : myOrigin

type:

NumericalPoint

a distribution : myDist

type:

Distribution

a time grid : myTimeGrid

type:

RegularGrid

 
Results  

a random walk myRandomWalk

type:

RandomWalk

 

Python script for this UseCase :

script_docUC_StocProc_RandomWalk.py

# Creation of a random walk myRandomWalk = RandomWalk(myOrigin, myDist, myTimeGrid)

The two graphs of Figure 74 illustrate realizations of a 1D random walk where the white noise distribution is respectively :

  • discrete : the support is the two points {-1,10} with respective weights 0.9 and 0.1,

  • continuous : the Normal distribution 𝒩(0,1).

Realizations of a random walk with the discrete distribution : P(-1)=0.9,P(10)=0.1.
Realizations of a random walk with distribution 𝒩(0,1).
Figure 74

The two graphs of Figure 75 illustrate realizations of a 2D random walk where the white noise distribution is respectively :

  • discrete : the support is the two points {-1,10} with respective weights 0.9 and 0.1,

  • continuous : the 2D Normal distribution 𝒩(0 ̲,1 ̲ ̲).

The realizations are presented in the phase space X 1 versus X 2 .

Realizations of a random walk with the uniform discrete distribution over the points (-1,-2) and (1,3).
5 realizations of a random walk with distribution 𝒩(0 ̲,I ̲ ̲)
Figure 75

5.10.3 UC : Creation of a Functional Basis Process

The objective of this Use Case is to define X:Ω×𝒟 d a multivariate stochastic process of dimension d where 𝒟 n , as a linear combination of K deterministic functions (φ i ) i=1,,K : n d :

X(ω,t ̲)= i=1 K A i (ω)φ i (t ̲)

where A ̲=(A 1 ,,A K ) is a random vector of dimension K.

We suppose that 𝒟 is discretized on a mesh which has N vertices.

A realization of X consists in generating a realization α̲ of the random vector A̲ and in evaluating the functions (φ_i)_{i=1,…,K} on the vertices of the mesh. If we note (x̲_0, …, x̲_{N-1}) the realization of X, where X(ω, t̲_k) = x̲_k, we have:

k[0,N-1],x ̲ k = i=1 K α i φ i (t ̲ k )
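A realization can be sketched directly; an illustrative numpy version with d = 1, K = 3 and a trigonometric basis chosen for the example:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)                      # vertices of the mesh
basis = [lambda t: np.ones_like(t),                 # phi_1
         lambda t: np.sin(2.0 * np.pi * t),         # phi_2
         lambda t: np.cos(2.0 * np.pi * t)]         # phi_3
alpha = rng.standard_normal(len(basis))             # realization of A
x = sum(a * phi(t) for a, phi in zip(alpha, basis)) # values of the field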
Requirements  

the distribution of A ̲: coefDist

type:

Distribution

the mesh : myMesh

type:

Mesh

the functional basis (φ i ) i=1,,K : myBasis

type:

Basis

 
Results  

the functional process myOutputProcess

type:

FunctionalBasisProcess

 

Python script for this UseCase :

script_docUC_StocProc_FunctionalBasisProcess.py

# Create the output process of interest
myOutputProcess = FunctionalBasisProcess(coefDist, Basis(myBasis), myMesh)


5.11 Process transformation

5.11.1 UC : Creation of a Dynamical Function

The objective here is to create dynamical functions, that can act on fields.

OpenTURNS defines a new function called dynamical function and two particular dynamical functions: the spatial function and the temporal function.

Dynamical function:

A dynamical function f dyn :𝒟× d 𝒟 ' × q where 𝒟 n and 𝒟 ' p is defined by:

f dyn (t ̲,x ̲)=(t ' (t ̲),v ' (t ̲,x ̲)) (100)

with t ' :𝒟𝒟 ' and v ' :𝒟× d q .

A dynamical function f dyn transforms a multivariate stochastic process:

X:Ω×𝒟 d (101)

where 𝒟 ⊂ ℝ^n is discretized according to a mesh, into the multivariate stochastic process:

Y=f dyn (X) (102)

such that:

Y:Ω×𝒟 ' q

where the domain 𝒟' ⊂ ℝ^p is discretized according to the mesh ℳ'.

A dynamical function f_dyn also acts on fields and produces fields of possibly different dimension (q ≠ d) and mesh (𝒟 ≠ 𝒟' or ℳ ≠ ℳ').

OpenTURNS v1.3 only proposes dynamical functions for which 𝒟' = 𝒟 and ℳ' = ℳ, which means that t' = Id: the spatial function and the temporal function. It follows that the process Y shares the same mesh as X; only its values change.

Spatial function:

A spatial function f_spat : 𝒟×ℝ^d → 𝒟×ℝ^q is a particular dynamical function that leaves the mesh of a field invariant and is defined by a function g : ℝ^d → ℝ^q such that:

f spat (t ̲,x ̲)=(t ̲,g(x ̲)) (103)

Let's note that the input dimension of f_spat still refers to the dimension d of x̲; its output dimension is equal to q.

The creation of the SpatialFunction object of OpenTURNS requires the numerical math function g and the integer n: the dimension of the vertices of the mesh . This data is required for tests on the compatibility of dimension when a composite process is created using the spatial function.

Temporal function:

A temporal function f_temp : 𝒟×ℝ^d → 𝒟×ℝ^q is a particular dynamical function that leaves the mesh of a field invariant and is defined by a function h : ℝ^n×ℝ^d → ℝ^q such that:

f temp (t ̲,x ̲)=(t ̲,h(t ̲,x ̲)) (104)

Let's note that the input dimension of f_temp still refers to the dimension d of x̲; its output dimension is equal to q.

The creation of the TemporalFunction object of OpenTURNS requires the numerical math function h and the integer n: the dimension of the vertices of the mesh .

The use case illustrates the creation of a spatial (dynamical) function from the function g: 2 2 such as :

g(x ̲)=(x 1 2 ,x 1 +x 2 ) (105)

and the creation of a temporal (dynamical) function from the function h: 2 × 2 such as :

h(t ̲,x ̲)=(t 1 +t 2 +x 1 2 +x 2 2 ) (106)
Requirements  

some functions g,h

type:

NumericalMathFunction

 
Results  

a spatial function mySpatialFunction

type:

SpatialFunction

a temporal function myTemporalFunction

type:

TemporalFunction

two dynamical functions myDynamicalFunctionFromSpatial and myDynamicalFunctionFromTemporal

type:

DynamicalFunction

 

Python script for this use case :

script_docUC_LSF_DynamicalFunction.py

# Create the function g : R^d --> R^q
# for example : R^2 --> R^2
#               (x1,x2) --> (x1^2, x1+x2)
g = NumericalMathFunction(['x1', 'x2'], ['x1^2', 'x1+x2'])

# Create the function h : R^n*R^d --> R^q
# for example : R^2*R^2 --> R
#               (t1,t2,x1,x2) --> (t1+t2+x1^2+x2^2)
h = NumericalMathFunction(['t1', 't2', 'x1', 'x2'], ['t1+t2+x1^2+x2^2'])

###########################################
# Creation of a spatial dynamical function
###########################################

# Convert g : R^d --> R^q into a spatial function
# n is the dimension of the mesh
# of the field on which g will be applied
n = 2
mySpatialFunction = SpatialFunction(g, n)
print("spatial function=", mySpatialFunction)

###########################################
# Creation of a temporal dynamical function
###########################################

# Convert h : R^n*R^d --> R^q into a temporal function
# n is the dimension of the mesh
# here n = dimension of (t1,t2)
n = 2
myTemporalFunction = TemporalFunction(h, n)
print("temporal function=", myTemporalFunction)


5.11.2 UC : Trend addition, Box Cox transformation, Composite process

The objective here is to create a process Y as the image through a dynamical function f dyn of another process X:

Y=f dyn (X)

General case:

In the general case, X:Ω×𝒟 d is a multivariate stochastic process of dimension d where 𝒟 n , Y:Ω×𝒟 ' q a multivariate stochastic process of dimension q where 𝒟 ' p and f dyn :𝒟× d 𝒟 ' × q and f dyn is defined in (100).

OpenTURNS builds the transformed process Y thanks to the object CompositeProcess from the data: f dyn and the process X.

OpenTURNS proposes two kinds of dynamical function: the spatial functions defined in (103) and the temporal functions defined in (104).

Trend modifications:

Very often, we have to remove a trend from a process or to add it. If we note f trend : n d the function modelling a trend, then the dynamical function which consists in adding the trend to a process is the temporal function f temp :𝒟× d n × d defined by:

f temp (t ̲,x ̲)=(t ̲,x ̲+f trend (t ̲)) (107)

OpenTURNS can directly convert the numerical math function f_trend into the temporal function f_temp thanks to the TrendTransform object.

Then, the process Y is built with the object CompositeProcess from the data: f temp and the process X such that:

ωΩ,t ̲𝒟,Y(ω,t ̲)=X(ω,t ̲)+f trend (t ̲)

Box Cox transformation:

If the transformation of the process X into Y corresponds to the Box Cox transformation f BoxCox : d d which transforms X into a process Y with stabilized variance, then the corresponding dynamical function is the spatial function f spat :𝒟× d 𝒟× d defined by:

f spat (t ̲,x ̲)=(t ̲,f BoxCox (x ̲)) (108)

OpenTURNS enables to directly convert the numerical math function f BoxCox into the spatial function f spat thanks to the SpatialFunction object.

Then, the process Y is built with the object CompositeProcess from the data: f spat and the process X such that:

ωΩ,t ̲𝒟,Y(ω,t ̲)=f BoxCox (X(ω,t ̲))
Requirements  

a stochastic process : myXtProcess

type:

Process

a dynamical function : myDynFct

type:

DynamicalFunction

the trend function : fTrend

type:

NumericalMathFunction

 
Results  

the trend transformation : fTemp

type:

TrendTransform

the Box Cox transformation : fSpat

type:

BoxCoxTransform

some transformed processes : myYtProcess, myYtProcess_trend, myYtProcess_boxcox

type:

CompositeProcess

 

Python script for this Use Case :

script_docUC_StocProc_CompositeProcess.py

#######################
# Case 1 : General case
#######################

# Create the image Y
myYtProcess = CompositeProcess(myDynFct, myXtProcess)

# Get the antecedent : myXtProcess
print('My antecedent process = ', myYtProcess.getAntecedent())

# Get the dynamical function
# which performs the transformation
print('My dynamical function = ', myYtProcess.getFunction())

##############################
# Case 2 : Addition of a trend
##############################

# Create the temporal function fTemp
# from the function fTrend
fTemp = TrendTransform(fTrend)

# Create the composite process
# myYtProcess_trend = myXtProcess + fTrend(t)
myYtProcess_trend = CompositeProcess(fTemp, myXtProcess)

#################################
# Case 3 : Box Cox transformation
#################################

# Create a Box Cox transformation
# for example for a process of dimension 2
# fBoxCox : R^2 --> R^2
#            (x1,x2) --> (log(x1),log(x2))
# fSpat will be applied on a process with a bidimensional mesh
n = 2
fSpat = SpatialFunction(BoxCoxTransform([0.0, 0.0]), n)

# Create the process
# myYtProcess_boxcox = BoxCoxTransform(myXtProcess)
myYtProcess_boxcox = CompositeProcess(fSpat, myXtProcess)


5.12 Stationarity test

5.12.1 UC : Dickey Fuller stationarity tests

This use case details some Dickey Fuller stationarity tests and the strategy proposed by OpenTURNS. The tests are applied on a scalar time series or a sample of scalar time series.

Two forms of non stationarity can be distinguished :

  • deterministic non stationarity when the marginal distribution of the process is time dependent. For example, the mean is time dependent (temporal trend) or the variance is time dependent;

  • stochastic non stationarity when the random perturbation at time t does not vanish with time (for example a random walk).

Specific tests exist to detect the first case. The Dickey-Fuller tests only focus on the stochastic non stationarity. It assumes that the underlying process, discretized on the time grid (t 0 ,,t n-1 ) writes :

X t =a+bt+ρX t-1 +ε t (109)

where ρ>0 and where a or b or both (a,b) can be assumed to be equal to 0.

When (a0 and b=0), the model (109) is said to have a drift. When (a=0 and b0), the model (109) is said to have a linear trend.

In the model (109), the only way to have stochastic non stationarity is to have ρ=1 (if ρ>1, then the time series diverges with time which is readily seen in data). The Dickey-Fuller test in the general case is a unit root test to detect whether ρ=1 against ρ<1 :

\mathcal{H}_0 : ρ = 1 \quad \text{against} \quad \mathcal{H}_1 : ρ < 1   (110)

The test statistics and its limit distribution depend on the a priori knowledge we have on a and b. In the absence of a priori knowledge on the structure of the model, several authors have proposed a global strategy to cover all the subcases of the model (109), depending on the possible values of a and b. Figure 76 details the strategy implemented in OpenTURNS, recommended by Enders (Applied Econometric Time Series, W. Enders, second edition, John Wiley & Sons, 2004).

Dickey Fuller strategy
Figure 76

We note (X_1, …, X_n) the data, W(r) the Wiener process, and

W_a(r) = W(r) - \int_0^1 W(r)\,dr, \qquad W_b(r) = W_a(r) - 12\left(r - \frac{1}{2}\right) \int_0^1 \left(s - \frac{1}{2}\right) W(s)\,ds

We assume the model 1:

X t =a+bt+ρX t-1 +ε t (111)

The coefficients (a,b,ρ) are estimated by (a ^ n ,b ^ n ,ρ ^ n ) using ordinary least-squares fitting, which leads to:

\underbrace{\begin{pmatrix} n-1 & \sum_{i=1}^{n} t_i & \sum_{i=2}^{n} y_{i-1} \\ \sum_{i=1}^{n} t_i & \sum_{i=1}^{n} t_i^2 & \sum_{i=2}^{n} t_i y_{i-1} \\ \sum_{i=2}^{n} y_{i-1} & \sum_{i=2}^{n} t_i y_{i-1} & \sum_{i=2}^{n} y_{i-1}^2 \end{pmatrix}}_{\underline{\underline{M}}} \begin{pmatrix} \hat{a}_n \\ \hat{b}_n \\ \hat{ρ}_n \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} t_i y_i \\ \sum_{i=2}^{n} y_{i-1} y_i \end{pmatrix}   (112)

We first test :

\mathcal{H}_0 : ρ = 1 \quad \text{against} \quad \mathcal{H}_1 : ρ < 1   (113)

thanks to the Student statistics :

t_{ρ=1} = \frac{\hat{ρ}_n - 1}{\hat{σ}_{ρ_n}}   (114)

where σ ρ n is the least square estimate of the standard deviation of ρ ^ n , given by:

\hat{σ}_{ρ_n} = \sqrt{ \left[\underline{\underline{M}}^{-1}\right]_{33} \, \frac{1}{n-1} \sum_{i=2}^{n} \left(y_i - (\hat{a}_n + \hat{b}_n t_i + \hat{ρ}_n y_{i-1})\right)^2 }   (115)

The statistics t_{ρ=1} converges in distribution to the Dickey-Fuller distribution associated to the model with drift and linear trend:

t_{ρ=1} \xrightarrow{d} \frac{\int_0^1 W_b(r)\,dW(r)}{\int_0^1 W_b(r)^2\,dr}   (116)

The null hypothesis 0 from (113) is accepted when t ρ=1 >C α where C α is the test threshold of level α, which is tabulated in table 4.

α   0.01   0.05   0.1  
C α   -3.96   -3.41   -3.13  
 
Quantiles of the Dickey-Fuller statistics for the model with drift and linear trend
Table 4
  • If the null hypothesis 0 from (113) is rejected, we test whether b=0 :

    \mathcal{H}_0 : b = 0 \quad \text{against} \quad \mathcal{H}_1 : b ≠ 0   (117)

    where the statistics t_n = \frac{|\hat{b}_n|}{\hat{σ}_{b_n}} converges in distribution to the Student distribution Student(ν = n-4), and where \hat{σ}_{b_n} is the least square estimate of the standard deviation of \hat{b}_n, given by:

    \hat{σ}_{b_n} = \sqrt{ \left[\underline{\underline{M}}^{-1}\right]_{22} \, \frac{1}{n-1} \sum_{i=2}^{n} \left(y_i - (\hat{a}_n + \hat{b}_n t_i + \hat{ρ}_n y_{i-1})\right)^2 }   (118)
    • If \mathcal{H}_0 from (117) is rejected, then the model 1 (111) is confirmed, and the test (113) proved that the unit root is rejected: ρ < 1. We then conclude that the final model is X_t = a + bt + ρX_{t-1} + ε_t with ρ < 1, which is a trend stationary model.

    • If 0 from (117) is accepted, then the model 1 (111) is not confirmed, since the trend presence is rejected and the test (113) is not conclusive (since based on a wrong model). We then have to test the second model (121).

  • If the null hypothesis 0 from (113) is accepted, we test whether (ρ,b)=(1,0) :

    \mathcal{H}_0 : (ρ, b) = (1, 0) \quad \text{against} \quad \mathcal{H}_1 : (ρ, b) ≠ (1, 0)   (119)

    with the Fisher statistics :

    \hat{F}_1 = \frac{(S_{1,0} - S_{1,b})/2}{S_{1,b}/(n-3)}   (120)

    where S_{1,0} = \sum_{i=2}^{n} \left(y_i - (\hat{a}_n + y_{i-1})\right)^2 is the sum of the squared errors of the model 1 (111) assuming \mathcal{H}_0 from (119), and S_{1,b} = \sum_{i=2}^{n} \left(y_i - (\hat{a}_n + \hat{b}_n t_i + \hat{ρ}_n y_{i-1})\right)^2 is the same sum when we make no assumption on ρ and b.

    The statistics \hat{F}_1 converges in distribution to the Fisher-Snedecor distribution F(2, n-3). The null hypothesis \mathcal{H}_0 from (119) is accepted when \hat{F}_1 < Φ_α where Φ_α is the test threshold of level α.

    • If 0 from (119) is rejected, then the model 1 (111) is confirmed since the presence of linear trend is confirmed. And the test (113) proved that the unit root is accepted : ρ=1. We then conclude that the model is : X t =a+bt+X t-1 +ε t which is a non stationary model.

    • If 0 from (119) is accepted, then the model 1 (111) is not confirmed, since the presence of the linear trend is rejected and the test (113) is not conclusive (since based on a wrong model). We then have to test the second model (121).

We assume the model 2:

X t =a+ρX t-1 +ε t (121)

The coefficients (a,ρ) are estimated as follows :

\underbrace{\begin{pmatrix} n-1 & \sum_{i=2}^{n} y_{i-1} \\ \sum_{i=2}^{n} y_{i-1} & \sum_{i=2}^{n} y_{i-1}^2 \end{pmatrix}}_{\underline{\underline{N}}} \begin{pmatrix} \hat{a}_n \\ \hat{ρ}_n \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=2}^{n} y_{i-1} y_i \end{pmatrix}   (122)

We first test:

$\mathcal{H}_0 : \rho = 1$ against $\mathcal{H}_1 : \rho < 1$   (123)

thanks to the Student statistic:

$t_{\rho=1} = \dfrac{\hat{\rho}_n - 1}{\hat{\sigma}_{\rho_n}}$   (124)

where $\hat{\sigma}_{\rho_n}$ is the least squares estimate of the standard deviation of $\hat{\rho}_n$, given by:

$\hat{\sigma}_{\rho_n} = \sqrt{\left(\underline{\underline{N}}^{-1}\right)_{22} \; \dfrac{1}{n-1} \sum_{i=2}^{n} \left( y_i - (\hat{a}_n + \hat{\rho}_n y_{i-1}) \right)^2}$   (125)

The statistic $t_{\rho=1}$ converges in distribution to the Dickey-Fuller distribution associated to the model with drift and no linear trend:

$t_{\rho=1} \xrightarrow{\mathcal{L}} \dfrac{\int_0^1 W^a(r)\, dW(r)}{\int_0^1 W^a(r)^2\, dr}$   (126)

The null hypothesis $\mathcal{H}_0$ from (123) is accepted when $t_{\rho=1} > C_\alpha$, where $C_\alpha$ is the test threshold of level $\alpha$, tabulated in Table 5.

$\alpha$     0.01    0.05    0.1
$C_\alpha$   -3.43   -2.86   -2.57

Quantiles of the Dickey-Fuller statistic for the model with drift
Table 5
  • If the null hypothesis $\mathcal{H}_0$ from (123) is rejected, we test whether $a = 0$:

    $\mathcal{H}_0 : a = 0$ against $\mathcal{H}_1 : a \neq 0$   (127)

    using the statistic $t_n = \dfrac{|\hat{a}_n|}{\hat{\sigma}_{a_n}}$, which converges in distribution to the Student distribution with $\nu = n-3$ degrees of freedom, where $\hat{\sigma}_{a_n}$ is the least squares estimate of the standard deviation of $\hat{a}_n$, given by:

    $\hat{\sigma}_{a_n} = \sqrt{\left(\underline{\underline{N}}^{-1}\right)_{11} \; \dfrac{1}{n-1} \sum_{i=2}^{n} \left( y_i - (\hat{a}_n + \hat{\rho}_n y_{i-1}) \right)^2}$   (128)
    • If $\mathcal{H}_0$ from (127) is rejected, then the model 2 (121) is confirmed, and the test (123) showed that the unit root is rejected: $\rho < 1$. We then conclude that the final model is $X_t = a + \rho X_{t-1} + \varepsilon_t$ with $\rho < 1$, which is a stationary model.

    • If $\mathcal{H}_0$ from (127) is accepted, then the model 2 (121) is not confirmed, since the presence of the drift is rejected and the test (123) is not conclusive (it is based on a misspecified model). We then have to test the third model (131).

  • If the null hypothesis $\mathcal{H}_0$ from (123) is accepted, we test whether $(\rho, a) = (1, 0)$:

    $\mathcal{H}_0 : (\rho, a) = (1, 0)$ against $\mathcal{H}_1 : (\rho, a) \neq (1, 0)$   (129)

    with a Fisher test. The statistic is:

    $\hat{F}_2 = \dfrac{(SCR_{2,c} - SCR_2)/2}{SCR_2/(n-2)}$   (130)

    where $SCR_{2,c}$ is the sum of the squared errors of the model 2 (121) assuming $\mathcal{H}_0$ from (129), and $SCR_2$ is the same sum when no assumption is made on $\rho$ and $a$.

    The statistic $\hat{F}_2$ converges in distribution to the Fisher-Snedecor distribution $F(2, n-2)$. The null hypothesis $\mathcal{H}_0$ from (129) is accepted when $\hat{F}_2 < \Phi_\alpha$, where $\Phi_\alpha$ is the test threshold of level $\alpha$.

    • If $\mathcal{H}_0$ from (129) is rejected, then the model 2 (121) is confirmed, since the presence of the drift is confirmed, and the test (123) showed that the unit root is accepted: $\rho = 1$. We then conclude that the model is $X_t = a + X_{t-1} + \varepsilon_t$, which is a non-stationary model.

    • If $\mathcal{H}_0$ from (129) is accepted, then the model 2 (121) is not confirmed, since the presence of the drift is rejected and the test (123) is not conclusive (it is based on a misspecified model). We then have to test the third model (131).

We assume the model 3:

$X_t = \rho X_{t-1} + \varepsilon_t$   (131)

The coefficient $\rho$ is estimated as follows:

$\hat{\rho}_n = \dfrac{\sum_{i=2}^{n} y_{i-1} y_i}{\sum_{i=2}^{n} y_{i-1}^2}$   (132)

We first test:

$\mathcal{H}_0 : \rho = 1$ against $\mathcal{H}_1 : \rho < 1$   (133)

thanks to the Student statistic:

$t_{\rho=1} = \dfrac{\hat{\rho}_n - 1}{\hat{\sigma}_{\rho_n}}$   (134)

where $\hat{\sigma}_{\rho_n}$ is the least squares estimate of the standard deviation of $\hat{\rho}_n$, given by:

$\hat{\sigma}_{\rho_n} = \sqrt{\dfrac{\frac{1}{n-1} \sum_{i=2}^{n} \left( y_i - \hat{\rho}_n y_{i-1} \right)^2}{\sum_{i=2}^{n} y_{i-1}^2}}$   (135)

The statistic $t_{\rho=1}$ converges in distribution to the Dickey-Fuller distribution associated to the random walk model:

$t_{\rho=1} \xrightarrow{\mathcal{L}} \dfrac{\int_0^1 W(r)\, dW(r)}{\int_0^1 W(r)^2\, dr}$   (136)
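For this last model, the estimator and the statistic reduce to simple sums. A minimal NumPy sketch (illustrative only, not the OpenTURNS implementation):

import numpy as np

def ar1_statistic(y):
    # Didactic sketch of (132), (134) and (135) for the model X_t = rho*X_{t-1} + eps_t
    y = np.asarray(y, dtype=float)
    n = len(y)
    rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])            # estimator (132)
    s2 = np.sum((y[1:] - rho_hat * y[:-1]) ** 2) / (n - 1)    # residual variance
    sigma_rho = np.sqrt(s2 / (y[:-1] @ y[:-1]))               # estimate (135)
    return (rho_hat - 1.0) / sigma_rho                        # Student statistic (134)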

The null hypothesis $\mathcal{H}_0$ from (133) is accepted when $t_{\rho=1} > C_\alpha$, where $C_\alpha$ is the test threshold of level $\alpha$, tabulated in Table 6.

$\alpha$     0.01    0.05    0.1
$C_\alpha$   -2.57   -1.94   -1.62

Quantiles of the Dickey-Fuller statistic for the random walk model
Table 6
  • If $\mathcal{H}_0$ from (133) is rejected, we then conclude that the model is $X_t = \rho X_{t-1} + \varepsilon_t$ with $\rho < 1$, which is a stationary model.

  • If $\mathcal{H}_0$ from (133) is accepted, we then conclude that the model is $X_t = X_{t-1} + \varepsilon_t$, which is a non-stationary model.
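The whole strategy of Figure 76 can be summarized as the following decision tree. This is a schematic Python sketch: the boolean entries of the reject mapping are hypothetical stand-ins for the outcomes of the tests (113) to (133); they are not OpenTURNS identifiers.

def dickey_fuller_strategy(reject):
    # Schematic sketch of the Enders strategy of Figure 76.
    # reject[label] is True when the null hypothesis of the corresponding test
    # is rejected at the chosen level; the labels are illustrative only.
    # Model 1: X_t = a + b*t + rho*X_{t-1} + eps_t
    if reject["(113) rho=1"]:
        if reject["(117) b=0"]:
            return "model 1 with rho < 1: trend stationary"
    elif reject["(119) (rho,b)=(1,0)"]:
        return "model 1 with rho = 1: non-stationary, drift and linear trend"
    # Model 1 not confirmed: model 2, X_t = a + rho*X_{t-1} + eps_t
    if reject["(123) rho=1"]:
        if reject["(127) a=0"]:
            return "model 2 with rho < 1: stationary"
    elif reject["(129) (rho,a)=(1,0)"]:
        return "model 2 with rho = 1: random walk with drift, non-stationary"
    # Model 2 not confirmed: model 3, X_t = rho*X_{t-1} + eps_t
    if reject["(133) rho=1"]:
        return "model 3 with rho < 1: stationary AR(1)"
    return "model 3 with rho = 1: random walk, non-stationary"

For instance, dickey_fuller_strategy({"(113) rho=1": True, "(117) b=0": True}) returns the trend stationary conclusion of the first branch.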

OpenTURNS implements the estimation of the coefficients of the different models by the following methods of the object DickeyFuller:

  • coefficients of (112): method estimateDriftAndLinearTrendModel

  • coefficients of (122): method estimateDriftModel

  • coefficients of (132): method estimateAR1Model

The global Dickey-Fuller strategy is implemented in the method testUnitRoot of the object DickeyFuller. For more advanced users, the individual tests of the Dickey-Fuller strategy are implemented by the following methods of the object DickeyFuller:

  • test (113) : method testUnitRootInDriftAndLinearTrendModel

  • test (117) : method testNoUnitRootAndNoLinearTrendInDriftAndLinearTrendModel

  • test (119) : method testUnitRootAndNoLinearTrendInDriftAndLinearTrendModel

  • test (123) : method testUnitRootInDriftModel

  • test (127) : method testNoUnitRootAndNoDriftInDriftModel

  • test (129) : method testUnitRootAndNoDriftInDriftModel

  • test (133) : method testUnitRootInAR1Model

Requirements  

a time series : myTimeSeries

type:

TimeSeries

the level of the test : level

type:

NumericalScalar

 
Results  

a test class : myDickeyFullerTest

type:

DickeyFullerTest

a test result : myTestResult

type:

TestResult

 

Python script for this Use Case :

# Initiate a DickeyFullerTest class
myDickeyFullerTest = DickeyFullerTest(myTimeSeries)

# H0 : rho = 1
# Test = True <=> the time series is non stationary
# p-value threshold : probability of the H0 reject zone : 1 - 0.95
# p-value : probability (test variable decision > test variable decision evaluated on the time series)
# Test = True <=> p-value > p-value threshold

# Test of the model with a linear trend
myTestResult1 = myDickeyFullerTest.testLinearTrendModel(level)

# Test of the model with a drift
myTestResult2 = myDickeyFullerTest.testDriftModel(level)

# Test of the model with a drift and a linear trend
myTestResult3 = myDickeyFullerTest.testDriftAndLinearTrendModel(level)

# Run the global strategy test
myTestResult = myDickeyFullerTest.runStrategy(level)


5.13 Event based on process

5.13.1 UC : Creation of an event based on a process

This section gives elements to create an event based on a multivariate stochastic process.

Let X:Ω×𝒟 d be a stochastic process of dimension d, where 𝒟 n is discretized on the mesh . We suppose that contains N vertices.

We define the event as:

$\mathcal{E}(X) = \left\{ \exists\, \underline{t} \in \mathcal{M} \;|\; X_{\underline{t}} \in \mathcal{A} \right\}$   (137)

where 𝒜 is a domain of d .

A particular domain $\mathcal{A}$ is the Cartesian product of the type:

$\mathcal{A} = \prod_{i=1}^{d} [a_i, b_i]$

In that case, OpenTURNS defines $\mathcal{A}$ by its two extreme points $\underline{a}$ and $\underline{b}$.

In the general case, $\mathcal{A}$ is a Domain object that is able to check whether it contains a given point of $\mathbb{R}^d$.
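As an illustration, such a box domain can be built with the OpenTURNS Interval class, which is defined precisely by its two extreme points (the numerical bounds below are arbitrary):

from openturns import Interval

# Box domain A = [1, 2] x [1, 2], defined by its two extreme points
myDomainA = Interval([1.0, 1.0], [2.0, 2.0])
print(myDomainA.contains([1.5, 1.5]))  # True: the point belongs to A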

OpenTURNS creates an Event object from the process $X$ and the domain $\mathcal{A}$. Then, it is possible to get a realization of the event $\mathcal{E}$, which is the scalar $1_{\mathcal{E}(X)}(\underline{x}_0, \dots, \underline{x}_{N-1})$ if $(\underline{x}_0, \dots, \underline{x}_{N-1})$ is a realization of $X$ on $\mathcal{M}$.

Requirements  

the domain 𝒜: myDomainA

type:

Domain

the process : myProcess

type:

Process

 
Results  

the Event : myEvent

type:

Event

 

Python script for this Use Case :

script_docUC_StocProc_Event.py

myEvent = Event(myProcess, myDomainA)


5.13.2 UC : Monte Carlo Probability of an event based on a process

The objective of this Use Case is to evaluate the probability of an event based on a stochastic process, using the Monte Carlo estimator.

Let X:Ω×𝒟 d be a stochastic process of dimension d, where 𝒟 n is discretized on the mesh .

We define the event as:

$\mathcal{E}(X) = \left\{ \exists\, \underline{t} \in \mathcal{M} \;|\; X_{\underline{t}} \in \mathcal{A} \right\}$   (138)

where 𝒜 is a domain of d .

We estimate the probability $p = \mathbb{P}(\mathcal{E}(X))$ with the Monte Carlo estimator.

The Monte Carlo algorithm is manipulated the same way as in the case where the event is based on a random variable independent of time. Details on the manipulation of the Monte Carlo algorithm and its results are presented in the Use Case 3.5.9.

Requirements  

the domain 𝒜: myDomainA

type:

Domain

the process X: myProcess

type:

Process

 
Results  

the event : myEvent

type:

Event

the Monte-Carlo algorithm : myMonteCarloAlgo

type:

MonteCarlo

 

Python script for this Use Case :

script_docUC_StocProc_MonteCarlo.py

# Create an event from a Process and a Domain
myEvent = Event(myProcess, myDomainA)

# Create a Monte-Carlo algorithm based on myEvent
myMonteCarloAlgo = MonteCarlo(myEvent)

# Define the maximum number of simulations
myMonteCarloAlgo.setMaximumOuterSampling(1000)

# Define the block size
myMonteCarloAlgo.setBlockSize(100)

# Define the maximum coefficient of variation
myMonteCarloAlgo.setMaximumCoefficientOfVariation(0.0025)

# Run the algorithm
myMonteCarloAlgo.run()

# Get the result
result = myMonteCarloAlgo.getResult()
print(result)

# Draw the convergence graph
convGraph = myMonteCarloAlgo.drawProbabilityConvergence(0.95)

We illustrate the algorithm on the example of the bidimensional white noise process $\varepsilon : \Omega \times \mathcal{D} \rightarrow \mathbb{R}^2$, where $\mathcal{D} \subset \mathbb{R}$, distributed according to the bidimensional standard normal distribution (with zero mean, unit variance and independent marginals).

We consider the domain $\mathcal{A} = [1, 2] \times [1, 2]$. Then the event writes:

$\mathcal{E}(\varepsilon) = \left\{ \exists\, t \in \mathcal{M} \;|\; \varepsilon_t \in \mathcal{A} \right\}$

For each time stamp $t$, the probability $p_1$ that the process enters the domain $\mathcal{A}$ at time $t$ writes, using the independence of the marginals:

$p_1 = \mathbb{P}(\varepsilon_t \in \mathcal{A}) = (\Phi(2) - \Phi(1))^2$

with Φ the cumulative distribution function of the scalar standard Normal distribution.

As the process is discretized on a time grid of size $N$, and using the independence of the white noise between two different time stamps as well as the fact that the white noise follows the same distribution at each time $t$, the final probability $p$ writes:

$p = \mathbb{P}(\mathcal{E}(\varepsilon)) = 1 - (1 - p_1)^N$   (139)

With $K = 10^4$ realizations, the Monte Carlo estimator yields $p_K = 0.1627$, to be compared with the exact value $p = 0.17008$ for a time grid of size $N = 10$.
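The exact value is easy to reproduce; the short Python sketch below evaluates $\Phi$ with the OpenTURNS Normal distribution (any implementation of the standard normal CDF would do):

from openturns import Normal

# Exact probability for the white noise example: A = [1,2] x [1,2], time grid of size N = 10
Phi = Normal().computeCDF        # CDF of the scalar standard normal distribution
p1 = (Phi(2.0) - Phi(1.0)) ** 2  # probability of entering A at one time stamp
p = 1.0 - (1.0 - p1) ** 10       # probability (139) over the whole time grid
print(p)                         # approximately 0.170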

Figure 77 shows the convergence graph of the estimator; the confidence interval of level 95% is $CI_{0.95} = [0.168, 0.173]$.

Convergence of the Monte-Carlo estimator based on a process event.
Figure 77

