Abstract

The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable by means of suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity, the latter acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method aiming at providing reliable numerical estimates for uncertain physical quantities that define the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach to analyze some equity and interest rate models; in particular, we take into consideration those based on the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as concrete numerical approximation results, considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with those obtained with the Monte Carlo approach, both in its standard and in its enhanced (quasi-Monte Carlo) version.

1. Introduction

In this paper, we use the Polynomial Chaos Expansion (PCE) technique to study the strong solution of the following Itô stochastic differential equation (SDE):
\[
dX_t = a(t, X_t)\,dt + b(t, X_t)\,dW_t, \qquad X_0 = x_0, \tag{1}
\]
where $x_0$ is the initial value of the unknown stochastic process $X = \{X_t\}_{t \in [0,T]}$, the coefficients $a$ and $b$ represent the drift and the volatility of the process, respectively, and $W = \{W_t\}_{t \in [0,T]}$ is a standard Brownian motion, considered up to a certain finite time $T$ to which we refer, taking into account the financial setting, as the maturity time, for example, of some underlying. According to [1, Section 4.5], if the coefficients $a$ and $b$ are smooth enough, then we have the existence and uniqueness of the solution to (1).

The PCE approach to the aforementioned problem belongs to the so-called Uncertainty Quantification (UQ) methods, which constitute a fundamental step towards the validation of numerical techniques to be used for critical decisions; see, for example, [2, 3] and the references therein. The term polynomial chaos was originally introduced by Wiener in his 1938 paper [4], in which he applies his generalized harmonic analysis (see, e.g., [5]). Ghanem and Spanos in [6, Section 2.4, Subsection 3.3.6] then combined the PCE with the finite element method to compute the solution of PDEs in the presence of uncertain parameters. Since then, such an approach has been applied in several frameworks: for the simulation of probabilistic chemical reactions, see [7]; for the generation of stochastic optimal trajectories, see [8]; for sensitivity analysis, see [9]; for the refinement of specific numerical methods, see [10]; and, for a rather large set of problems arising in engineering and computational fluid dynamics (CFD), see, for example, [11, Chapter 6], [12, 13], and the references therein.

Strictly speaking, the PCE recovers a random variable as a linear combination of polynomial functionals whose arguments are known random variables, called germs or basic variables. Several methods are available to compute the coefficients of such a combination. In particular, we choose the Nonintrusive Spectral Projection (NISP) method, which employs a set of deterministic realizations of the model; see, for example, [11, Chapter 3] for further details.

In order to show the advantages of the PCE approach, the results of our PCE implementation have been compared with the ones obtained using both the standard Monte Carlo (MC) and the quasi-Monte Carlo (QMC) techniques. The former is based on pseudorandom sampling, and its convergence properties basically rely on the law of large numbers and the central limit theorem. The latter uses low-discrepancy sequences to simulate the process, and it is theoretically based on the Koksma-Hlawka inequality (see [14]), which provides error bounds for computations involving the QMC method; see, for example, [15, Chapter 5].

Our analysis focuses on three well-known one-dimensional SDEs, namely the Geometric Brownian Motion equity model (see, e.g., [16, Chapter 11, Section 3]), the Vasicek model [17], and the Cox-Ingersoll-Ross (CIR) interest rate model [18]. We also refer to [19, Chapter 3, Section 3.2.3] for more details and references about such models.

We underline that the first two cases are meaningful because the related SDEs admit an analytical solution, so that we can provide a rigorous study of the convergence properties of the PCE method applied to them. In the CIR case, where no analytical solution is available, the application of the PCE technique also allows us to derive some considerations on the convergence properties of the related approximating solutions.

The paper is organized as follows: in Section 2, the PCE machinery is described in a general setting, while Section 3 deals with the NISP approach. Section 4 points out how to apply the PCE in order to decompose the solution of the considered SDE, while Section 5 focuses on the relation between the usual numerical methods for solving (1) and the PCE technique. In Sections 6, 7, and 8, we consider numerical applications of the PCE approach to approximate the GBM equity model and the Vasicek and CIR interest rate models, respectively. In Section 9, we give an overview of the PCE results in terms of accuracy and computational time cost for each of the aforementioned models. Eventually, in the Appendix, we provide a table with the evaluation of the first six Hermite polynomials on a set of 40 equally spaced points of a fixed interval of the real line.

2. Polynomial Chaos Expansion

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, where $\Omega$ is the set of elementary events, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$, and $\mathbb{P}$ is a probability measure on $\mathcal{F}$.

We consider the Hilbert space $L^2(\Omega, \mathcal{F}, \mathbb{P})$ of scalar, real-valued random variables, whose generic element is a random variable $X$ defined on $(\Omega, \mathcal{F}, \mathbb{P})$ such that
\[
\mathbb{E}\bigl[X^2\bigr] = \int_{\Omega} X(\omega)^2 \, d\mathbb{P}(\omega) < \infty .
\]

Notice that, as for Lebesgue spaces, the elements are equivalence classes of random variables. Note that $L^2(\Omega, \mathcal{F}, \mathbb{P})$ is a Hilbert space endowed with the following scalar product:
\[
\langle X, Y \rangle = \mathbb{E}[XY],
\]
with $\|X\| = \sqrt{\langle X, X\rangle}$ being the usual norm. To shorten the notation, we write $L^2 := L^2(\Omega, \mathcal{F}, \mathbb{P})$, and the related convergence is always referred to as mean square convergence or strong convergence.

Among the elements of $L^2$, there is the class of basic random variables, which are used, as arguments of suitable functionals, to decompose the quantity of interest, for example, the solution of the SDE (1) at time $T$. We notice that not every random variable can be used for such a decomposition, because it has to satisfy (see, e.g., [20, Section 3]) at least the following two properties:

(i) the basic random variable $\xi$ has finite raw moments of all orders;
(ii) the distribution function $F_\xi$ of the basic random variable $\xi$ is continuous, with $f_\xi$ being its probability density function (pdf).

Because the increments of the Brownian motion are independent and normally distributed, from now on we consider a standard Gaussian random variable $\xi \sim \mathcal{N}(0,1)$ as the basic random variable. The latter choice is further motivated in Remark 1 in Section 3.1.

The elements of $L^2$ can be gathered in two groups: the first one contains the basic random variables, indicated by $\xi$, which rule the decomposition; the second one is composed of the generic elements, say $X$, which we want to decompose using the elements of the first group.

Let us denote by $\sigma(\xi) \subseteq \mathcal{F}$ the $\sigma$-algebra generated by the basic random variable $\xi$. If we want to polynomially decompose a random variable $X$ in terms of $\xi$, then $X$ has at least to be measurable with respect to the $\sigma$-algebra $\sigma(\xi)$. Exploiting the Doob-Dynkin lemma [21, Lemma 1.13], $X$ is $\sigma(\xi)$-measurable if and only if there exists a Borel measurable function $g \colon \mathbb{R} \to \mathbb{R}$ such that $X = g(\xi)$. In what follows, without loss of generality, we restrict ourselves to consider the decomposition in $L^2(\Omega, \sigma(\xi), \mathbb{P})$. Moreover, the basic random variable determines the class of orthogonal polynomials $\{\psi_k\}_{k \ge 0}$ indicated as the generalized polynomial chaos (gPC) basis. We underline that their orthogonality property is detected by means of the measure induced by $\xi$ in the image space $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, where $\mathcal{B}(\mathbb{R})$ denotes the Borel $\sigma$-algebra of $\mathbb{R}$. In particular, for each pair of indices $i, j \ge 0$, we have
\[
\langle \psi_i(\xi), \psi_j(\xi) \rangle = \mathbb{E}\bigl[\psi_i(\xi)\,\psi_j(\xi)\bigr] = \int_{\mathbb{R}} \psi_i(x)\,\psi_j(x)\, f_\xi(x)\, dx = \mathbb{E}\bigl[\psi_i(\xi)^2\bigr]\,\delta_{ij}. \tag{5}
\]

Because $\xi \sim \mathcal{N}(0,1)$, the related gPC basis is represented by the family of Hermite polynomials defined on the whole real line; namely, $\psi_k = He_k$, with
\[
He_0(x) = 1, \qquad He_1(x) = x, \qquad He_{k+1}(x) = x\,He_k(x) - k\,He_{k-1}(x), \quad k \ge 1, \tag{6}
\]
so that $\mathbb{E}[He_k(\xi)^2] = k!$. Figure 1 provides the graph of the first six orthonormal polynomials, obtained by scaling each $He_k$ in (6) by its norm in $L^2$; namely, $He_k$ is divided by $\sqrt{k!}$. See also the Appendix, where these polynomials are evaluated on equally spaced points of a fixed interval.
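For the reader's convenience, the following minimal Python sketch (here and below, code snippets are illustrative and do not reproduce the Scilab toolbox actually used; the helper names he and he_norm are ours) evaluates the Hermite polynomials of (6) through their recurrence, rescales them as in Figure 1, and numerically checks the orthogonality relation (5) with respect to the standard Gaussian weight.

import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def he(n, x):
    # Probabilists' Hermite polynomial He_n(x) via the three-term recurrence
    # He_0(x) = 1, He_1(x) = x, He_{k+1}(x) = x He_k(x) - k He_{k-1}(x), cf. (6).
    x = np.asarray(x, dtype=float)
    h_prev, h = np.ones_like(x), x.copy()
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def he_norm(n, x):
    # Orthonormal rescaling He_n / sqrt(n!), as plotted in Figure 1.
    return he(n, x) / math.sqrt(math.factorial(n))

# Orthogonality check (5) against the standard Gaussian weight, using the
# Gauss-Hermite rule for exp(-x^2 / 2) renormalised by sqrt(2 * pi).
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)
gram = [[np.sum(weights * he_norm(i, nodes) * he_norm(j, nodes)) for j in range(4)] for i in range(4)]
print(np.round(gram, 6))  # approximately the 4 x 4 identity matrix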

Such polynomials are a maximal system in $L^2(\Omega, \sigma(\xi), \mathbb{P})$; therefore, every finite second-order random variable $X$ can be approximated as follows:
\[
X \approx X^P := \sum_{k=0}^{P} c_k\, \psi_k(\xi), \tag{7}
\]
for suitable coefficients $c_k$ depending on the random variable $X$ itself. We refer to (7) as the truncated PCE, at degree $P$, of $X$. Exploiting the previous definitions and considering the orthogonality property (5) of the polynomials $\psi_k$, we have
\[
c_k = \frac{\langle X, \psi_k(\xi)\rangle}{\langle \psi_k(\xi), \psi_k(\xi)\rangle} = \frac{\mathbb{E}[X\,\psi_k(\xi)]}{\mathbb{E}[\psi_k(\xi)^2]}, \qquad k = 0, \dots, P, \tag{8}
\]
and, reading the right-hand side of (7) as a deterministic function of the germ, we also obtain
\[
X^P(x) = \sum_{k=0}^{P} c_k\, \psi_k(x), \qquad x \in \mathbb{R}. \tag{9}
\]
Moreover, $X^P$ converges in the mean square sense to $X$ as $P \to \infty$; see, for example, [20, Section 3.1].

We underline that the PCE of $X$ approximates the statistics of $X$ using the coefficients appearing in (7); for example, the first two centered moments are determined by
\[
\mathbb{E}\bigl[X^P\bigr] = c_0, \tag{10}
\]
\[
\operatorname{Var}\bigl[X^P\bigr] = \sum_{k=1}^{P} c_k^2\, \mathbb{E}\bigl[\psi_k(\xi)^2\bigr]. \tag{11}
\]
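Assuming that the coefficients $c_k$ of (7) are expressed in the (non-normalized) Hermite basis, so that $\mathbb{E}[\psi_k(\xi)^2] = k!$, the statistics (10) and (11) reduce to the following short Python helper, which we reuse in later sketches.

import math
import numpy as np

def pce_moments(c):
    # Mean and variance of X^P = sum_k c_k He_k(xi), xi ~ N(0, 1), cf. (10)-(11):
    # E[He_k(xi)] = 0 for k >= 1 and E[He_k(xi)^2] = k!.
    c = np.asarray(c, dtype=float)
    mean = c[0]
    var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, len(c)))
    return mean, var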

3. Nonintrusive Spectral Projection

We consider the case in which a given functional $g$ governs the dynamics of a quantity we want to study. We assume that $g$ is characterized by a random output $X = g(\xi)$ depending on the input $\xi$, which is a random variable too. The goal is to describe the behavior of the output $X$ in dependence of the variations of $\xi$ and with respect to the action of the functional $g$. In what follows, without loss of generality, we consider the one-dimensional case, because the results we obtain can be easily generalized to higher dimensions. In our setting, $g$ represents the solution functional of an SDE describing a specific equity or interest rate model (see Sections 6, 7, and 8 for further details), and we also allow the output to depend on a set of parameters, gathered in a vector $\theta = (\theta_1, \dots, \theta_m)$, which, in fact, characterizes the model dynamics; namely,
\[
X = g(\xi; \theta).
\]
In order to shorten the notation, the parameters $\theta$ are omitted in what follows. Our aim is to exploit the so-called Nonintrusive Spectral Projection (NISP) method to obtain the truncated PCE of the output $X$, taking into account that the basic random variable $\xi$ inherits all the assumptions made in the previous section. Therefore, we have the following spectral projection:
\[
X \approx X^P = \sum_{k=0}^{P} c_k\, \psi_k(\xi),
\]
where the coefficients $c_k$ are defined as in (8).

We recall (see, e.g., [11, Section 3.3]) that the NISP approach computes the scalar product in (8) by Gaussian quadrature formulae; in particular, since $\xi \sim \mathcal{N}(0,1)$, we have the following Gauss-Hermite type result:
\[
c_k \approx \frac{1}{\mathbb{E}[\psi_k(\xi)^2]} \sum_{q=1}^{Q} g(x_q)\, \psi_k(x_q)\, w_q, \qquad k = 0, \dots, P, \tag{14}
\]
where the $x_q$ and the $w_q$, $q = 1, \dots, Q$, are the quadrature nodes and the weights of the one-dimensional Gaussian integration rule associated with the standard Gaussian density. Moreover, (14) requires the evaluation of the process at a well defined set of $Q$ realizations of the input random variable $\xi$. By the very definition of NISP, $X^P$ achieves spectral convergence with respect to $X$ for increasing degree $P$; see, for example, [11, Appendix B].
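A possible Python sketch of the quadrature formula (14), reusing the helper he introduced in Section 2 and the Gauss-Hermite nodes and weights provided by NumPy for the weight $e^{-x^2/2}$, is the following; the function name nisp_coefficients is ours and only illustrates the projection step, not the actual toolbox routine.

import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def nisp_coefficients(g, P, Q=None):
    # Gauss-Hermite approximation (14) of the projection coefficients
    # c_k = E[g(xi) He_k(xi)] / E[He_k(xi)^2], k = 0, ..., P, with xi ~ N(0, 1).
    Q = P + 1 if Q is None else Q
    nodes, weights = hermegauss(Q)            # rule for the weight exp(-x^2 / 2)
    weights = weights / np.sqrt(2.0 * np.pi)  # renormalise to the N(0, 1) expectation
    g_nodes = g(nodes)                        # the Q model evaluations g(x_q)
    return np.array([np.sum(weights * g_nodes * he(k, nodes)) / math.factorial(k)
                     for k in range(P + 1)])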

We note that the number $Q$ of quadrature points is not linked, a priori, to the degree $P$ of the truncation. Instead, the precision of the coefficients is related to $Q$; thus, a reasonable and standard choice, which we adopt in what follows, is to take $Q = P + 1$, because the integrands appearing in (14) involve polynomials of degree at least $2P$.

Moreover, in the following, we refer to the $x_q$ both as quadrature nodes and as a set of realizations of the basic random variable.

3.1. Flowchart of the NISP Computation

In order to better explain the NISP approach, let us describe it using the associated flowchart in Figure 2.

We underline that the set of orthogonal polynomials, which constitutes a basis in $L^2$, is computed off-line, once the basic random variable has been given, independently of the particular model we want to decompose. Analogously, the same happens for the set of realizations $x_q$ of $\xi$ and for the weights $w_q$ needed to apply the Gaussian quadrature method.

A different situation concerns the simulations of the process and the related computations. In fact, in this case, we have to deal with data of the type $g(x_q)$, which are sensitive to changes of the model parameters. This is why such a difference has been highlighted in Figure 2 using yellow and blue rectangles, which represent the off-line computations and the model-dependent computations, respectively.

Remark 1. Let us focus our attention on the basic random variable. Other choices are available, each paired with its own family of orthogonal polynomials, but the Gaussian choice is motivated by the fact that the Wiener increments are distributed as Gaussian variables. In particular, in our computations, we consider $\xi \sim \mathcal{N}(0,1)$. The latter choice has also been imposed by software restrictions. In particular, we have used the Scilab toolbox, which provides a powerful environment to implement the PCE decomposition, also giving the user a very flexible and efficient tool for the concrete computations involved by the PCE method itself. In particular, such a toolbox implements only Hermite polynomials, which require a Gaussian input random variable in order to satisfy (5); see, for example, [22, 23] for further details.

Remark 2. A sampling of size $N$ of the PCE $X^P$ is achieved by detecting a unique sampling of size $N$, $\{\xi^{(1)}, \dots, \xi^{(N)}\}$, of the basic random variable $\xi$, and then applying (9) to each realization; namely,
\[
X^P\bigl(\xi^{(i)}\bigr) = \sum_{k=0}^{P} c_k\, \psi_k\bigl(\xi^{(i)}\bigr), \qquad i = 1, \dots, N,
\]
and therefore the collection $\{X^P(\xi^{(i)})\}_{i=1,\dots,N}$ is the required PCE sampling.
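A minimal Python sketch of the sampling procedure of Remark 2, reusing the helper he of Section 2 and assuming that the coefficients c have been computed, for example, by the NISP sketch of Section 3, reads as follows.

import numpy as np

def pce_sample(c, n, rng=None):
    # Sampling of the truncated expansion (Remark 2): draw n realisations of
    # xi ~ N(0, 1) and evaluate (9) at each of them with coefficients c.
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(n)
    return sum(c[k] * he(k, xi) for k in range(len(c)))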

4. Polynomial Chaos Expansion and Stochastic Differential Equations

We consider a stochastic process which satisfies the following SDE:
\[
dX_t = a(t, X_t)\,dt + b(t, X_t)\,dW_t, \qquad X_0 = x_0. \tag{17}
\]
Our aim is to write the PCE of the process at a given positive and finite time $T$, namely of the random variable $X_T$ which represents the process at that time. Without loss of generality, in what follows we consider the one-dimensional version of (17); hence, $X_T$ is a scalar, real-valued random variable. In particular, we focus our attention on three specific models of the type described by (17), namely, the Geometric Brownian Motion (GBM) and the Vasicek and CIR interest rate models. Because in each of the aforementioned models the random variable $X_T$ has finite variance, there exists a suitable probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which $X_T$ is well defined and such that $X_T \in L^2(\Omega, \mathcal{F}, \mathbb{P})$. It follows that the PCE method can be applied to approximate $X_T$, provided a functional $g$ is defined such that $X_T = g(\xi)$, for a suitable choice of the basic variable $\xi$, as seen in Section 3.1.

5. Usual Numerical Methods for SDE and PCE

Numerical methods for solving (1) exploit the independent increments of the Wiener process. For instance, by considering the Euler-Maruyama scheme (see, e.g., [1, Section 10.2]), for each $j = 0, \dots, M-1$, we have
\[
X_{t_{j+1}} = X_{t_j} + a\bigl(t_j, X_{t_j}\bigr)\,\Delta t_j + b\bigl(t_j, X_{t_j}\bigr)\,\Delta W_j,
\]
where $\Delta t_j = t_{j+1} - t_j$, $\Delta W_j = W_{t_{j+1}} - W_{t_j}$, and $0 = t_0 < t_1 < \dots < t_M = T$ is a set of increasing time steps. Moreover, $X_{t_0} = x_0$ is the value of the process at the starting time and $\Delta W_j \sim \mathcal{N}(0, \Delta t_j)$. To shorten the notation, let $\Delta W_j = \sqrt{\Delta t_j}\,\xi_j$, with $\xi_j \sim \mathcal{N}(0,1)$, for each time step; hence, at each time step, the solution can be written as follows:
\[
X_{t_{j+1}} = X_{t_j} + a\bigl(t_j, X_{t_j}\bigr)\,\Delta t_j + b\bigl(t_j, X_{t_j}\bigr)\,\sqrt{\Delta t_j}\,\xi_j,
\]
therefore adding a new independent random variable $\xi_j$ for every $j = 0, \dots, M-1$. Hence, following the general approach discussed in Section 3.1, in order to apply the PCE method, we consider a set of i.i.d. Gaussian random variables $\xi_0, \dots, \xi_{M-1}$. Then, the solution of (17) at time $T$ is
\[
X_T = g\bigl(\xi_0, \dots, \xi_{M-1}\bigr),
\]
where $\{\xi_j\}_{j=0,\dots,M-1}$ is the aforementioned set of i.i.d. Gaussian variables, and we are in the position to apply the PCE decomposition; see, for example, [20, Section 3.2] for further details concerning the multivariate decomposition. We underline that, even if we consider the Euler-Maruyama scheme, the previous approach can be applied considering any other method which is based on the independent increments of the Brownian motion, such as the Milstein method; see, for example, [1, Chapter 10].
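A minimal Python sketch of the Euler-Maruyama recursion recalled above, for a generic drift a and diffusion b passed as functions, is reported below; it is only meant to illustrate how one Gaussian germ per time step enters the scheme.

import numpy as np

def euler_maruyama(a, b, x0, T, M, rng=None):
    # One path of the Euler-Maruyama scheme on a uniform grid with M steps:
    # X_{j+1} = X_j + a(t_j, X_j) dt + b(t_j, X_j) sqrt(dt) xi_j, xi_j ~ N(0, 1) i.i.d.
    rng = np.random.default_rng() if rng is None else rng
    dt = T / M
    t, x = 0.0, x0
    for _ in range(M):
        x = x + a(t, x) * dt + b(t, x) * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x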

Let us recall that, to get reliable results, the number $M$ of time steps has to be large enough, which implies a high computational cost, as the complexity of the NISP method increases dramatically when many inputs are involved. The latter problem is referred to as the curse of dimensionality; see [11].

6. Geometric Brownian Motion

In what follows, we analyze the Geometric Brownian Motion (GBM), defined as the solution of the following stochastic differential equation:
\[
dX_t = \mu X_t\,dt + \sigma X_t\,dW_t, \qquad X_0 = x_0 > 0, \tag{22}
\]
where $\mu \in \mathbb{R}$, $\sigma > 0$, and $W_t$ is the usual Wiener process. In particular, let us focus our attention on $X_T$ for a fixed, finite maturity $T > 0$.

6.1. PCE Approximation

The PCE method is based on the definition of a suitable functional of the basic random variable. We consider (22), taking into account the dependence of the process on the basic random variable $\xi$.

Following the approach discussed in [24, Chapter 2] and [25, Chapter 4], instead of solving (22) directly, we have studied a suitable transformation of it, in order to reduce the instability in the numerical simulation. By defining $Z_t = \ln X_t$ and applying the Itô-Doeblin lemma, we get
\[
dZ_t = \Bigl(\mu - \frac{\sigma^2}{2}\Bigr)\,dt + \sigma\,dW_t, \tag{23}
\]
and, once $Z_t$ is computed, for any time $t$, the original process is recovered as
\[
X_t = e^{Z_t}. \tag{24}
\]

Because of this, the solution of (22) at time $T$ can be written through a functional $g$, such that
\[
X_T = g(\xi). \tag{25}
\]
Moreover, $g$ embeds the transformed process (23) combined with (24); in practice, we work with the approximate functional
\[
X_T \approx \tilde g(\xi), \tag{26}
\]
obtained by numerically integrating (23), driven by the Brownian path associated with the realization of $\xi$, and then applying (24). In particular, exploiting (25), it is natural to set $\xi$ as a Gaussian random variable; more specifically, we take $\xi \sim \mathcal{N}(0,1)$, due to the restrictions imposed by the software we have used to numerically implement the PCE method; see Section 3.1. We note that if the GBM model is used to describe the behaviour of an interest rate, for example, considering interest rates for equity markets, the related output at a fixed time $T$, that is $X_T$, depends not only on $\xi$ but also on other parameters, gathered in the vector $\theta = (\mu, \sigma, x_0)$, which characterize its dynamics. To shorten the notation, in what follows, the dependence on $\theta$ is omitted.

Turning back to the analysis of $X_T$, we apply the NISP method, exploiting (14), in order to obtain the following truncated PCE:
\[
X_T \approx X_T^P = \sum_{k=0}^{P} c_k\, \psi_k(\xi).
\]
In particular, the coefficients $c_k$, $k = 0, \dots, P$, require the evaluations $\tilde g(x_q)$, $q = 1, \dots, Q$, of the process (26) at the quadrature nodes $x_q$ defined by the Gaussian quadrature formula, as stated in (14).
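The following Python sketch illustrates the whole GBM pipeline of this section, reusing the helpers nisp_coefficients and pce_moments sketched in Sections 3 and 2; we stress that the closed-form functional used below (the exactly integrated transformed process (23)-(24) driven by $W_T = \sqrt{T}\,\xi$) and the parameter values are illustrative assumptions of ours, not the values of Table 1.

import numpy as np

# Illustrative parameter values (not those of Table 1) and the assumed
# closed-form realisation of the functional (26): the exactly integrated
# transformed process (23)-(24) driven by W_T = sqrt(T) * xi.
mu, sigma, x0, T, P = 0.05, 0.2, 1.0, 1.0, 6

def g(xi):
    return np.exp(np.log(x0) + (mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * xi)

c = nisp_coefficients(g, P)               # NISP helper sketched in Section 3
mean_pce, var_pce = pce_moments(c)        # moments helper sketched in Section 2

mean_exact = x0 * np.exp(mu * T)
var_exact = x0 ** 2 * np.exp(2.0 * mu * T) * (np.exp(sigma ** 2 * T) - 1.0)
print(abs(mean_exact - mean_pce), abs(var_exact - var_pce))  # absolute errors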

6.2. Numerical Application

In what follows, we give a PCE approximation of the solution to the SDE (22), for some particular values of the parameters involved; see Table 1.

As (22) has an analytical solution, such a solution constitutes our benchmark; in particular, at time $T$, we have
\[
X_T = x_0 \exp\Bigl(\Bigl(\mu - \frac{\sigma^2}{2}\Bigr)T + \sigma W_T\Bigr), \tag{28}
\]
whose mean and variance are, respectively,
\[
\mathbb{E}[X_T] = x_0\, e^{\mu T}, \qquad \operatorname{Var}[X_T] = x_0^2\, e^{2\mu T}\bigl(e^{\sigma^2 T} - 1\bigr).
\]

Figure 3 and Table 2 show the absolute errors of the mean and of the variance, namely
\[
\mathrm{err}^{\mathrm{abs}}_{\mathrm{mean}} = \bigl|\mathbb{E}[X_T] - \mathbb{E}[X_T^P]\bigr|, \qquad \mathrm{err}^{\mathrm{abs}}_{\mathrm{var}} = \bigl|\operatorname{Var}[X_T] - \operatorname{Var}[X_T^P]\bigr|,
\]
while Figure 4 and Table 2 show the absolute values of the relative errors,
\[
\mathrm{err}^{\mathrm{rel}}_{\mathrm{mean}} = \frac{\mathrm{err}^{\mathrm{abs}}_{\mathrm{mean}}}{\mathbb{E}[X_T]}, \qquad \mathrm{err}^{\mathrm{rel}}_{\mathrm{var}} = \frac{\mathrm{err}^{\mathrm{abs}}_{\mathrm{var}}}{\operatorname{Var}[X_T]}.
\]
Due to the logarithmic scale of Figure 3, we consider absolute values, allowing us to highlight the order of magnitude of the errors rather than their sign. We note that such data have been computed for an increasing set of degrees $P$.

Both the mean and the variance errors are stationary, despite the expected spectral convergence for increasing degree $P$. This can be explained by the presence of an error, not due to the PCE method itself, that corrupts the expected spectral convergence of $X_T^P$. There are only two possible causes: the approximation of (22) with (26) and the error coming from the numerical computation of the evaluations $\tilde g(x_q)$ used in (14).

Let us focus our attention on the latter: as displayed in Figures 5 and 6 and in Table 3, by increasing the precision used to compute the evaluations $\tilde g(x_q)$, the errors decrease. Hence, the PCE errors are influenced by the approximation which is required to compute the coefficients. In Figure 5, we can see the spectral convergence of the error of the PCE approximation up to the 4th degree for the mean and, respectively, up to the 6th degree for the variance. Note that, despite the increased accuracy, the error eventually displays again a stationary behavior; since no further improvement of the numerical method can be implemented, we conclude that this is the only remaining source of error.

Coming back to Table 1, let us compute a sampling of size $N$ of $X_T^P$ for a fixed degree $P$, obtained with the Latin Hypercube Sampling (LHS) technique (see [26, 27] and also [28] for further references), which is compared with the probability density function of the analytical solution to (22) at time $T$, whose distribution is lognormal; namely,
\[
f_{X_T}(x) = \frac{1}{x\,\sigma\sqrt{2\pi T}} \exp\Bigl(-\frac{\bigl(\ln x - \ln x_0 - (\mu - \sigma^2/2)T\bigr)^2}{2\sigma^2 T}\Bigr), \qquad x > 0.
\]
In Figure 7, the corresponding histogram is shown, together with a standard Monte Carlo sampling of $X_T$ of the same size.

Moreover, we have compared two quantiles $q_\alpha$ of the PCE approximation $X_T^P$, for two fixed levels $\alpha \in (0,1)$, with their analytical values, for an increasing set of degrees $P$. Recall that the standard statistic for $q_\alpha$ is the $\alpha$-th sample quantile, namely, the $\lfloor N\alpha \rfloor$-th realization of the ordered sampling of $X_T^P$, where $N$ is the size of the sampling and $\lfloor \cdot \rfloor$ denotes the integer part of the real number within the brackets; see, for example, [29] for further details.
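A minimal Python sketch of the sample quantile estimator recalled above (one of the simplest choices; see [29] for refinements) is the following.

import numpy as np

def sample_quantile(sample, alpha):
    # alpha-th sample quantile: the floor(N * alpha)-th order statistic of the
    # sorted sample (1-based indexing), as recalled in the text; see [29].
    s = np.sort(np.asarray(sample, dtype=float))
    k = max(int(np.floor(len(s) * alpha)), 1)
    return s[k - 1]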

In particular, we aim at computing the absolute error of the sample quantiles with respect to the analytical values, which, for the lognormal solution (28), are
\[
q_\alpha = x_0 \exp\Bigl(\Bigl(\mu - \frac{\sigma^2}{2}\Bigr)T + \sigma\sqrt{T}\, z_\alpha\Bigr),
\]
where $z_\alpha$ represents the $\alpha$-quantile of the normal random variable with zero mean and unit variance. Moreover, the values of Table 1 are considered; thus, the errors are defined as
\[
\mathrm{err}_{q_\alpha} = \bigl|q_\alpha - \hat q_\alpha^{\,P}\bigr|,
\]
for the two chosen levels $\alpha$ and for each degree $P$, where $\hat q_\alpha^{\,P}$ denotes the sample quantile of $X_T^P$. The obtained values are shown in Figure 8 and in Table 4.

7. Vasicek Interest Rate Model

In what follows, we consider the Vasicek model (see [17]; we also refer to [30, Section 4.4.3] for further details), which is defined by
\[
dr_t = \alpha(\beta - r_t)\,dt + \sigma\,dW_t, \tag{35}
\]
with initial value $r_0$, where $\alpha$, $\beta$, and $\sigma$ are positive, real-valued constants. The analytical solution to the SDE (35), at time $T$, is
\[
r_T = r_0\, e^{-\alpha T} + \beta\bigl(1 - e^{-\alpha T}\bigr) + \sigma \int_0^T e^{-\alpha(T - s)}\,dW_s. \tag{36}
\]
Therefore, $r_T$ is normally distributed and its mean and variance are given by
\[
\mathbb{E}[r_T] = r_0\, e^{-\alpha T} + \beta\bigl(1 - e^{-\alpha T}\bigr), \qquad \operatorname{Var}[r_T] = \frac{\sigma^2}{2\alpha}\bigl(1 - e^{-2\alpha T}\bigr).
\]
Note that we have used the notation $r_t$ in order to highlight that the Vasicek model (35) is used to describe the evolution of interest rates. We also underline that such a model belongs to the class of the so-called one-factor short rate models, which includes, for example, the Rendleman-Bartter model, the CIR model, the Hull-White model, and the Ho-Lee model, because $r_t$ depends only on one market risk factor; see, for example, [31, 32] and [19, Section 3.2.1].

It is worth mentioning that the Vasicek model allows for negative values of the interest rate evaluated at a given time $t$; hence, its use in the credit market framework is often criticized by practitioners.

7.1. PCE Approximation

Analogously to what we have done concerning the study of the GBM model in Section 6, we can rewrite the solution to (35) using a functional $g$ such that
\[
r_T = g(\xi), \tag{38}
\]
for every fixed time $T$, which is often referred to as the maturity time. As in Section 6, we shorten the notation by omitting the dependence on the vector of parameters $\theta = (\alpha, \beta, \sigma)$, and we consider the basic random variable to be Gaussian; namely, we take $\xi \sim \mathcal{N}(0,1)$, in order to respect the restrictions imposed by the software we have used to numerically implement the PCE method.

Applying the NISP procedure up to degree $P$, the PCE decomposition of $r_T$ is given by
\[
r_T \approx r_T^P = \sum_{k=0}^{P} c_k\, \psi_k(\xi),
\]
where the coefficients $c_k$ are detected using (14), evaluating $g$ on a set of realizations of the basic random variable $\xi$, namely, considering the quadrature nodes $x_1, \dots, x_Q$. Moreover, in what follows, we set $Q = P + 1$. The obtained simulated values for $r_T$ are indicated as $g(x_q)$, $q = 1, \dots, Q$.
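As an illustration, the following Python sketch builds a Vasicek functional for (38) under the assumption that the Itô integral in (36) is represented exactly through the single germ $\xi$; the parameter values are placeholders, not those of Table 5, and the helpers nisp_coefficients and pce_moments are those sketched in Sections 3 and 2.

import numpy as np

# Illustrative parameter values (not those of Table 5); the functional below
# assumes the exact Gaussian representation of the Ito integral in (36).
alpha, beta, sigma, r0, T, P = 0.5, 0.03, 0.1, 0.02, 1.0, 6

std_T = sigma * np.sqrt((1.0 - np.exp(-2.0 * alpha * T)) / (2.0 * alpha))
g = lambda xi: r0 * np.exp(-alpha * T) + beta * (1.0 - np.exp(-alpha * T)) + std_T * xi

c = nisp_coefficients(g, P)   # NISP helper sketched in Section 3
print(pce_moments(c))         # matches the mean and variance of r_T up to round-off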

7.2. Numerical Application

Let us implement the theoretical approach developed in the previous section through a working example, considering the Vasicek model parameters reported in Table 5.

Figure 9 displays the absolute error as well as the relative error of the variance, while Table 6 gathers the absolute errors and the relative errors of the mean and of the variance of $r_T^P$, respectively, with respect to the degree $P$. In particular, for each $P$, the absolute errors for the mean and for the variance are as follows:
\[
\mathrm{err}^{\mathrm{abs}}_{\mathrm{mean}} = \bigl|\mathbb{E}[r_T] - \mathbb{E}[r_T^P]\bigr|, \qquad \mathrm{err}^{\mathrm{abs}}_{\mathrm{var}} = \bigl|\operatorname{Var}[r_T] - \operatorname{Var}[r_T^P]\bigr|,
\]
while the relative errors are given by
\[
\mathrm{err}^{\mathrm{rel}}_{\mathrm{mean}} = \frac{\mathrm{err}^{\mathrm{abs}}_{\mathrm{mean}}}{\mathbb{E}[r_T]}, \qquad \mathrm{err}^{\mathrm{rel}}_{\mathrm{var}} = \frac{\mathrm{err}^{\mathrm{abs}}_{\mathrm{var}}}{\operatorname{Var}[r_T]}.
\]
Due to the logarithmic (semilogy) scale used for the graph in Figure 9, we consider absolute values, highlighting the order of magnitude of the errors rather than their sign.

The absolute and relative errors of the mean and of the variance are stationary (see Table 6); thus, there is an error that inhibits the spectral convergence of the PCE. The latter could be caused by the approximation of (35) with (38); indeed, the numerical approximations made during the computation of the evaluations $g(x_q)$ produce an error of fixed order.

Let us first analyze the error of the variance by considering a fixed degree $P$ and a decreasing sequence of values of the relevant parameter approaching zero. The results are displayed in Figure 10, which points out how the error of the variance is influenced by the parameters; this agrees with the values achieved in the previous computation (Table 6).

Coming back to the values in Table 5, let us compute a sampling of size $N$ of $r_T^P$, for a fixed degree $P$, whose histogram (Figure 11) is compared with the probability density function of the normal random variable with mean $\mathbb{E}[r_T]$ and variance $\operatorname{Var}[r_T]$ given above. Moreover, in Figure 11, we also show a sampling of the same size obtained exploiting the standard Monte Carlo technique.

Let us compute two quantiles $q_\alpha$ of the PCE approximation $r_T^P$, for the same two levels $\alpha$ considered in Section 6. Proceeding as in Section 6, we consider the $\alpha$-th sample quantile, namely, the $\lfloor N\alpha\rfloor$-th realization of the ordered sampling of $r_T^P$, where $N$ is the size of the sampling and $\lfloor\cdot\rfloor$ denotes the integer part of the real number within the brackets. We then compute the absolute error of the sample quantiles versus the analytical values $q_\alpha = F^{-1}(\alpha)$, where $F$ denotes the cumulative distribution function of the normal random variable with mean $\mathbb{E}[r_T]$ and variance $\operatorname{Var}[r_T]$. The aforementioned errors, for the two levels $\alpha$, are shown in Figure 12 and in Table 7.

8. CIR Interest Rate Model

In what follows, we consider the PCE method for the CIR interest rate model, introduced in 1985 by Cox et al.; see [18]. Consider
\[
dr_t = \alpha(\beta - r_t)\,dt + \sigma\sqrt{r_t}\,dW_t, \qquad r_0 > 0, \tag{44}
\]
where $\alpha$, $\beta$, and $\sigma$ are positive, real-valued constants and $W_t$ is a standard Wiener process. Let us recall that (44) generalizes the Vasicek model seen in Section 7; in fact, even if the drift term is common between the two, the diffusion factor is different and, in particular, it equals $\sigma\sqrt{r_t}$ for the CIR model, ensuring that negative interest rates are avoided if $r_0$ and the parameters are positive. Moreover, if $2\alpha\beta \ge \sigma^2$, the process cannot reach zero. It is worth mentioning that, unlike the case of the Vasicek model, there is not an analytical solution to (44); nevertheless, the CIR process is ergodic and possesses a stationary distribution; see [19, Section 3.2.3], [16, Section 23.5], and [33] for further details.

The statistics of the CIR model, for a fixed time $T$, are
\[
\mathbb{E}[r_T] = r_0\, e^{-\alpha T} + \beta\bigl(1 - e^{-\alpha T}\bigr), \qquad \operatorname{Var}[r_T] = r_0\,\frac{\sigma^2}{\alpha}\bigl(e^{-\alpha T} - e^{-2\alpha T}\bigr) + \beta\,\frac{\sigma^2}{2\alpha}\bigl(1 - e^{-\alpha T}\bigr)^2 .
\]

8.1. PCE Approximation

In order to apply the PCE method to the process in (44) evaluated at a specific time $T$, we consider the stochastic process
\[
Z_t = \sqrt{r_t}, \tag{46}
\]
and then, applying the Itô-Doeblin lemma, $Z_t$ satisfies the following SDE:
\[
dZ_t = \biggl(\frac{\alpha\beta/2 - \sigma^2/8}{Z_t} - \frac{\alpha}{2}\,Z_t\biggr)dt + \frac{\sigma}{2}\,dW_t. \tag{47}
\]
Therefore, we have canceled the interaction between the state of the original process and the increments of the Wiener process. The transformation defined by (46) (see, e.g., [24, Chapter 2] and [25, Chapter 4] for details) allows us to reduce the instability of the simulations.

As seen when studying the GBM model (Section 6) and the Vasicek model (Section 7), exploiting the fact that the increments of the Wiener process in (47) are normally distributed, we can rewrite the (approximate) solution of (44), through (46) and (47), using a suitable functional $\tilde g$, in order to obtain
\[
r_T \approx \tilde g(\xi), \tag{48}
\]
for every fixed end time $T$. Moreover, proceeding as in the previous Sections 6 and 7, we set the basic variable $\xi \sim \mathcal{N}(0,1)$, and we shorten the notation by omitting the explicit dependence of the CIR model on the parameter vector $\theta = (\alpha, \beta, \sigma)$, with $\alpha$, $\beta$, and $\sigma$ being positive constants. Applying the PCE, truncated at degree $P$, to $r_T$, we obtain
\[
r_T \approx r_T^P = \sum_{k=0}^{P} c_k\, \psi_k(\xi),
\]
where the coefficients $c_k$ are detected using (14), evaluating $\tilde g$ on a set of realizations of the basic random variable; namely, we compute $\tilde g$ at the quadrature nodes $x_q$, taking $Q = P + 1$, and we refer to these values as $\tilde g(x_q)$, $q = 1, \dots, Q$.
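A minimal Python sketch of the transformed-process integration, given an array of Brownian increments, is reported below; it assumes the Feller condition and a grid fine enough to keep the transformed state positive, and it is reused in Section 9.3.

import numpy as np

def cir_transformed_path(alpha, beta, sigma, r0, T, dW):
    # Integrate the transformed dynamics (47) for Z = sqrt(r) with the
    # Euler-Maruyama scheme (equivalent to the Milstein scheme on (44); see
    # Section 9.3), given the array dW of Brownian increments; map back via r = Z^2.
    # Assumes 2 * alpha * beta >= sigma^2 and a grid fine enough to keep Z > 0.
    dt = T / len(dW)
    z = np.sqrt(r0)
    for dw in dW:
        drift = (0.5 * alpha * beta - 0.125 * sigma ** 2) / z - 0.5 * alpha * z
        z = z + drift * dt + 0.5 * sigma * dw
    return z ** 2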

8.2. Numerical Application

In order to develop a concrete example for the application of the PCE to the CIR model, we consider the parameter values shown in Table 8.

Figure 13 and Table 9 show the absolute errors of the mean and of the variance, namely
\[
\mathrm{err}^{\mathrm{abs}}_{\mathrm{mean}} = \bigl|\mathbb{E}[r_T] - \mathbb{E}[r_T^P]\bigr|, \qquad \mathrm{err}^{\mathrm{abs}}_{\mathrm{var}} = \bigl|\operatorname{Var}[r_T] - \operatorname{Var}[r_T^P]\bigr|,
\]
while Figure 14 and Table 9 display the absolute values of the relative errors,
\[
\mathrm{err}^{\mathrm{rel}}_{\mathrm{mean}} = \frac{\mathrm{err}^{\mathrm{abs}}_{\mathrm{mean}}}{\mathbb{E}[r_T]}, \qquad \mathrm{err}^{\mathrm{rel}}_{\mathrm{var}} = \frac{\mathrm{err}^{\mathrm{abs}}_{\mathrm{var}}}{\operatorname{Var}[r_T]},
\]
where, as we have done when discussing the GBM and the Vasicek models, due to the logarithmic scale, we report absolute values, to highlight the order of magnitude of the errors rather than their sign. The data are computed for an increasing set of degrees $P$, the reference statistics being those given in Section 8.

The errors in Figure 13 and in Table 9 are stationary; hence, there is no spectral convergence when the degree of truncation increases, as we have already observed for the GBM model and for the Vasicek model. The latter can be implied by the model approximation we have chosen, namely, replacing (44) with (48), and/or by the numerical approximations made during the computation of the evaluations $\tilde g(x_q)$. To discriminate between the two, we increase the precision in computing $\tilde g(x_q)$ and use these values to compute again the PCE for the same set of degrees.

The new values obtained for the mean, the variance, and the errors are shown in Table 10 and in Figures 15 and 16. Although more precise values are computed, the stationary behavior still holds. Therefore, the lack of spectral convergence is due to having replaced (44) with (48).

Even if the CIR model does not admit a closed-form solution (see, e.g., [30, Section 4.4.4]), it possesses a stationary distribution. Namely, following [19, Section 3.2.3], $r_T$ features a noncentral chi-squared distribution. In particular, let us denote by $f_{r_T}$ the analytical probability density function of $r_T$, for a positive end time $T$; then
\[
f_{r_T}(x) = \frac{1}{c}\, f_{\chi^2(d, \lambda)}\Bigl(\frac{x}{c}\Bigr),
\]
where the subscript denotes the random variable considered. Moreover, the constants are defined as
\[
c = \frac{\sigma^2\bigl(1 - e^{-\alpha T}\bigr)}{4\alpha}, \qquad d = \frac{4\alpha\beta}{\sigma^2}, \qquad \lambda = \frac{r_0\, e^{-\alpha T}}{c}.
\]
The last two constants are the degrees of freedom and the noncentrality parameter, respectively. For further features, see [19, Section 3.2.3]. In order to achieve a natural number for the degrees of freedom $d$, the parameters are chosen accordingly; the starting value is still $r_0$, while the ending time is set to a fixed $T$.
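The density recalled above can be evaluated, for instance, with SciPy's noncentral chi-squared distribution, as in the following sketch (the function name cir_pdf is ours).

import numpy as np
from scipy.stats import ncx2

def cir_pdf(x, alpha, beta, sigma, r0, T):
    # Analytical pdf of r_T: r_T = c * X with X noncentral chi-squared with
    # d degrees of freedom and noncentrality lam, as recalled above.
    c = sigma ** 2 * (1.0 - np.exp(-alpha * T)) / (4.0 * alpha)
    d = 4.0 * alpha * beta / sigma ** 2
    lam = r0 * np.exp(-alpha * T) / c
    return ncx2.pdf(np.asarray(x, dtype=float) / c, d, lam) / c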

Therefore, let us employ an LHS sampling of size $N$ of the PCE $r_T^P$ and a standard Monte Carlo sampling of the same size of $r_T$, exploiting its noncentral chi-squared distribution. The results are displayed in Figure 17 and compared with the analytical probability density function $f_{r_T}$.

Let us compute two quantiles $q_\alpha$ of the PCE approximation $r_T^P$, for the same two levels $\alpha$ considered in Section 6. Proceeding as in Section 6, we consider the $\alpha$-th sample quantile, namely, the $\lfloor N\alpha\rfloor$-th realization of the ordered sampling of $r_T^P$, where $N$ is the size of the sampling and $\lfloor\cdot\rfloor$ denotes the integer part of the real number within the brackets. We then compute the absolute error of the sample quantiles versus the analytical values $q_\alpha = F_{r_T}^{-1}(\alpha)$, where $F_{r_T}$ denotes the cumulative distribution function of $r_T$, that is, of the noncentral chi-squared law recalled above. The aforementioned errors, for the two levels $\alpha$, are shown in Figure 18 and in Table 11.

Remark 3. In each of the above considered cases, a lack of spectral convergence arises. Nevertheless, such a point is connected neither with the model nor with the related parameters; in fact, it is a general consequence of the approximations implied by the concrete application of the PCE machinery used to compute the solution at a given positive time $T$.

9. Polynomial Chaos Expansion Compared with MC and QMC

In this section, we compare the PCE method with the standard Monte Carlo (MC) method and with the quasi-Monte Carlo (QMC) one. The accuracy in approximating the statistics of each studied model, namely, the GBM, the Vasicek, and the CIR models, evaluated at time $T$, and the computational time costs needed to detect their mean and variance are considered as comparison criteria.

It is worth mentioning that both the MC and the QMC methods are here applied, whenever possible, to the analytical solutions; hence, from the point of view of our models, we can proceed in this way only for the GBM and Vasicek cases, while, for the CIR model, we apply them in combination with the Euler-Maruyama scheme.

The usual Monte Carlo approximations require a set of $N$ independent realizations $X_T^{(1)}, \dots, X_T^{(N)}$ of the process at time $T$. Then, the mean and the variance are determined as
\[
\hat\mu_N = \frac{1}{N}\sum_{i=1}^{N} X_T^{(i)}, \tag{55}
\]
\[
\hat\sigma_N^2 = \frac{1}{N-1}\sum_{i=1}^{N} \bigl(X_T^{(i)} - \hat\mu_N\bigr)^2, \tag{56}
\]
and, exploiting the law of large numbers, we have that both (55) and (56) converge to the corresponding true values; hence, their standard errors, for example, $\hat\sigma_N/\sqrt{N}$ for the mean, are more informative than the single-execution error. The QMC method uses a low-discrepancy sequence, generated by the Sobol algorithm, which maximizes the uniformity of the sample points; see, for example, [15, Chapter 5]. Then, it uses such values to approximate the statistics we are interested in or, generally speaking, to compute integrals of the following type:
\[
\int_{\mathbb{R}} h(x)\, d\mu_\xi(x), \tag{58}
\]
where $\mu_\xi$ is the measure induced by the considered random variable, which characterizes the related image space. By suitable choices of $h$, we can approximate the mean of the random variable and its variance, respectively. Moreover, in the QMC computations, the parameters for accuracy are given by the absolute errors of the estimated mean and variance with respect to the exact values. We underline that, for both MC and QMC, we consider only the effective time to compute the statistics, without taking into account the time consumed for the generation of the grid of samplings which is necessary to simulate the process. Moreover, we highlight that, in each of the cases we consider, namely, the GBM, the Vasicek, and the CIR models, all the computations concerning the PCE method have followed the schemes developed in Sections 6, 7, and 8, respectively, with accuracy estimated by means of the absolute errors of the mean and of the variance.
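The following Python sketch contrasts the two estimators for the mean of a generic functional $g(\xi)$, $\xi \sim \mathcal{N}(0,1)$: a plain MC average with its standard error, cf. (55), and a QMC average based on SciPy's scrambled Sobol points mapped to Gaussian samples through the inverse cdf; the sample size is preferably a power of two for the Sobol sequence.

import numpy as np
from scipy.stats import norm, qmc

def mc_mean(g, n, rng=None):
    # Standard MC estimate (55) of E[g(xi)], xi ~ N(0, 1), with its standard error.
    rng = np.random.default_rng() if rng is None else rng
    y = g(rng.standard_normal(n))
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

def qmc_mean(g, n):
    # QMC estimate: scrambled Sobol points in (0, 1) mapped to N(0, 1) samples
    # through the inverse Gaussian cdf, then averaged.
    u = qmc.Sobol(d=1, scramble=True).random(n)
    return g(norm.ppf(u[:, 0])).mean()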

9.1. Geometric Brownian Motion

We consider the GBM SDE (22), where the parameters $\mu$, $\sigma$, $x_0$, and the maturity $T$ are fixed. The MC and QMC methods approximate the statistics of the solution to the GBM model exploiting its analytical solution (28), for an increasing size $N$ of the samplings; for each value of $N$, we then compute the standard errors for the MC method and the absolute errors for the QMC one. In order to compare the accuracy and the computational time costs of the three aforementioned methods, we consider the plot in Figure 19, where the $x$-axis represents the computational time cost of each method, while the errors of the statistics, namely, the MC standard errors, the QMC absolute errors, and the PCE absolute errors of the mean and of the variance, are displayed along the $y$-axis. Moreover, the $x$-axis is normalized with respect to the highest time among the MC, QMC, and PCE values; hence, we report the related relative computational time costs, which are more informative than the absolute ones. We also note that each point of the plot represents the computations for a different number of realizations, namely, the sample size $N$ for MC and QMC and the number of quadrature nodes $Q$ for the PCE method.

Figure 19 points out the high accuracy as well as the very low computational effort required by PCE to get the solution.

The values of the computational time costs for the PCE are displayed in Table 12, those for the MC and QMC methods in Table 13, and the errors made using both the MC and QMC methods in Table 14.

The computational time costs required for the detection of the PCE coefficients, see (14), and of its statistics, see (10) and (11), are negligible if compared with the time required to evaluate the process (26) at the quadrature nodes $x_q$ defined by the Gaussian quadrature formula, as stated in (14).

This motivates the data in Table 12.

9.2. Vasicek Interest Rate Model

Proceeding as in Section 9.1, we consider the SDE (35) characterizing the Vasicek model, for a fixed set of values of its parameters $\alpha$, $\beta$, and $\sigma$, and we apply both the MC and QMC methods to the analytical solution (36) of (35) in order to compute the statistics of $r_T$ at time $T$ for an increasing size $N$ of the samplings. We use the same type of plot exploited in the previous section, in order to show the accuracy, the computational time costs, and the errors of the PCE, the MC, and the QMC methods, respectively.

In particular, Figure 20 points out the high accuracy of the PCE and the corresponding low computational effort needed to get these results: the key point is the low number of simulations required to approximate the solution of (35).

In order to underline the efficiency and accuracy of the PCE method, let us show the related computational time costs (Table 15), compared to those of the MC and QMC methods (Table 16). We also highlight the numerical values concerning the absolute errors achieved using the PCE method (Table 15), as well as the MC and QMC approaches (Table 17).

As observed in the GBM case, the computational time costs required for the detection of the PCE coefficients, see (14), and of its statistics, see (10) and (11), are negligible if compared with the time required to evaluate the process in (38) at the quadrature nodes $x_q$, as we can see from the data displayed in Table 15.

9.3. CIR Interest Rate Model

Even if the CIR model does not have a solution in closed form, we can still apply both the MC and the QMC methods, considering the numerical solution of the SDE (44).

As described in [24], the Euler-Maruyama scheme applied to the transformed process (47) is actually the Milstein scheme applied to the original SDE (44). Hence, let us integrate numerically the transformed SDE (47) from $0$ up to $T$ using $M$ time steps. The mean and the variance are then estimated, via MC and QMC, using a set of samplings of increasing size $N$, for a fixed choice of the CIR model parameters $\alpha$, $\beta$, and $\sigma$. We note that, while the MC method applies straightforwardly, the QMC implementation requires some further discussion, which is, in fact, close to the analysis made in Section 5: the numerical scheme allows us to write the solution at time $T$ as a functional of all the independent increments of the Wiener process, one for each time step.

As the dimensionality of the problem equals the number $M$ of time steps, an $M$-dimensional low-discrepancy sequence is computed, whose size $N$ represents the number of points belonging to this sequence. In solving the SDE, the Euler-Maruyama scheme then prescribes that each independent increment be replaced by an entry of a point belonging to such a multidimensional low-discrepancy sequence, suitably mapped to a Gaussian increment. Eventually, by applying (58) to the detected solutions, we get the required statistics.
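A minimal Python sketch of this QMC construction, reusing the transformed-process integrator cir_transformed_path of Section 8.1, is reported below; each Sobol point of dimension $M$ is mapped to the $M$ Gaussian increments of one path.

import numpy as np
from scipy.stats import norm, qmc

def cir_qmc_statistics(alpha, beta, sigma, r0, T, M, n):
    # Each point of an M-dimensional Sobol sequence supplies the M Gaussian
    # increments of one discretised path, integrated with cir_transformed_path.
    u = qmc.Sobol(d=M, scramble=True).random(n)
    dt = T / M
    samples = np.empty(n)
    for i in range(n):
        dW = norm.ppf(u[i]) * np.sqrt(dt)
        samples[i] = cir_transformed_path(alpha, beta, sigma, r0, T, dW)
    return samples.mean(), samples.var(ddof=1)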

As we did in Sections 9.1 and 9.2, in Figure 21 we compare the computational time costs and the related accuracy reached by the PCE, MC, and QMC methods, respectively. The data used to apply the PCE technique are computed following the approach developed in Section 8. We recall that the plot is characterized by setting along the $x$-axis the computational time costs of the methods, while the MC standard errors, the QMC absolute errors, and the PCE absolute errors of the statistics are placed along the $y$-axis. Moreover, the time costs are normalized with respect to the highest time value among MC, QMC, and PCE; thus, we display their relative computational time costs, hence achieving more information than merely using the related absolute values.

Let us underline that Figure 21 points out the high accuracy as well as the very low computational effort required by the PCE method to get the solution. We can appreciate such an effectiveness of the PCE approach by comparing the computational effort required by the PCE (Table 18) and by the MC and QMC methods (Table 19). Moreover, the PCE errors point out a better accuracy with respect to both the MC and the QMC methods; see Tables 19 and 20.

The computational time costs required for the detection of the PCE coefficients, see (14), and of its statistics, see (10) and (11), are negligible if compared with the time required to evaluate the process in (48) at the quadrature nodes $x_q$. This motivates the time data in Table 18.

10. Conclusions

In this paper, we show how the Polynomial Chaos Expansion (PCE) method can be effectively used to approximate the solution of a given SDE, exploiting only some basic properties of the Brownian motion, which is assumed to drive the stochastic process we are interested in, and a suitable transformation of the original SDE we have taken into consideration. In particular, we have applied the PCE technique to approximate the solutions of some of the most relevant equity and interest rate models, namely, the Geometric Brownian Motion (GBM) model, the Vasicek model, and the Cox-Ingersoll-Ross (CIR) model.

The provided numerical examples discuss in depth the convergence properties of the PCE method. Indeed, a detailed analysis of the error properties of the approximated statistics is given when an analytical solution is available, also providing a comparison with the related analytical probability density function. This is not the case for the CIR model, conversely to what happens for both the GBM and the Vasicek models. Hence, in this latter case, the PCE approach is even more interesting, also because it gives a general method to study those stochastic processes which are defined by SDEs that do not admit a closed-form solution.

We show by concrete examples how the PCE method outperforms the standard Monte Carlo (MC) method as well as the quasi-Monte Carlo (QMC) one. In particular, a close comparison between the PCE, MC, and QMC techniques points out the advantage of using the PCE approach, especially in terms of computational costs. The latter result is due to the fact that the PCE method requires just a few realizations of the random variable we want to approximate. In particular, even if the basic operations required by the PCE are more complicated than those implied by the MC or the QMC approach (quadrature formulas versus plain weighted averages), the advantages of using the PCE technique are substantial, as shown, for example, by the analysis of the CIR model in Tables 18, 19, and 20.

Last but not least, it is worth mentioning that the PCE approach can be split into two distinct stages: the off-line computations and the actual PCE approximation of the random variable we want to study. Thus, the basic random variable, the orthogonal polynomials, and the data used in the Gaussian quadrature formulas are computed once and for all, because they do not depend on the particular model, for example, the GBM, the Vasicek, or the CIR model in our analysis. Hence, a clear saving of computational effort is achieved when the model is sensitive to time variation or if a recalibration of the involved parameters is required.

Appendix

Value of the Polynomials

In Table 21, the first six orthonormal Hermite polynomials are evaluated on a set of 40 equally spaced points of a fixed interval of the real line (cf. Figure 1).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.