Abstract

Hajek’s univariate stochastic comparison result is generalised to multivariate stochastic sum processes, with univariate convex data functions for processes without drift and with univariate nondecreasing convex data functions for processes with drift. As a consequence, strategies for a class of multivariate optimal control problems can be determined by maximizing variance. An example is passport options written on multivariate traded accounts. The argument describes a narrow path between the impossibility of generalisations to jump processes and the impossibility of admitting more general data functions.

1. Introduction and Statement of Results

Mean stochastic comparison results may have applications in many areas beyond mathematical finance. However, it seems that they were first applied in order to find optimal strategies for passport options in the univariate case in [1], where the result in [2] is used. More general univariate processes are considered in [3]. In any case, path continuity seems to be essential, as it can be shown that Hajek’s comparison result cannot be generalised to Poisson processes even in the univariate case (cf. [4]). More recent research on controlled options with applications of Hamilton-Jacobi-Bellman equations can be found in [5]. However, the results for passport options are still univariate. Multivariate problems are essentially different, as optimal strategies may depend on the correlations between the processes. For example, if a trading account is driven by lognormal price processes with correlated Brownian motions and bounded trading positions, then the solution of the associated optimal control problem for the trading positions reduces, under mild assumptions, to the maximization of the basket volatility. Hence, the signs of the correlations (and the space-time dependence of these signs) can change an optimal strategy essentially. This also indicates that multivariate mean comparison results are significant extensions of univariate results.
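To fix ideas, consider the following sketch in hypothetical notation (the symbols $q_{i}$, $\sigma_{i}$, and $\rho_{ij}$ are introduced here only for illustration): if the trading positions $q_{i}$ are bounded, say $\left|q_{i}\right|\le 1$, and the lognormal assets have volatilities $\sigma_{i}$ and correlations $\rho_{ij}$, then the quantity to be maximised pointwise is the basket volatility
\[
\sigma_{\mathrm{basket}}(q)=\Bigl(\sum_{i,j=1}^{n}q_{i}\,q_{j}\,\sigma_{i}\,\sigma_{j}\,\rho_{ij}\Bigr)^{1/2}.
\]
For $n=2$ and equal volatilities the maximum is attained at $q=(1,1)$ (up to an overall sign) if $\rho_{12}\ge 0$ and at $q=(1,-1)$ if $\rho_{12}<0$, which shows how the sign of the correlation flips the optimal position.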

Applications of the extension of Hajek’s results to stochastic sums were described in [6, 7], but a full proof was not given in those notes. Here we give a short complete proof of related results. Hajek’s results are recovered by a different method of proof.

In the following, $C\!\left(\mathbb{R}\right)$ denotes the space of continuous functions on the set of real numbers $\mathbb{R}$, $W=\left(W_{1},\dots,W_{n}\right)$ denotes a standard $n$-dimensional Brownian motion, and $E_{x}$ denotes the expectation of a process starting at $x$. Furthermore, for an $\mathbb{R}^{n}$-valued process $X$ the $i$th component of this process is denoted by $X_{i}$. For processes without drift we prove the following.

Theorem 1. Let $f\in C\!\left(\mathbb{R}\right)$ be convex, and assume that $f$ satisfies an exponential growth condition. Assume that $\lambda_{i}$, $1\le i\le n$, are some positive real constants. Furthermore, let $X$ be Itô diffusions with $X(0)=x$, where
\[
dX(t)=\sigma\bigl(X(t)\bigr)\,dW(t)
\]
with an $n\times n$-matrix-valued bounded Lipschitz-continuous function $\sigma=\left(\sigma_{ij}\right)$, and let $\Lambda$ denote the diagonal comparison matrix determined by the constants $\lambda_{i}$. If $\sigma\sigma^{T}\le\Lambda$, then for $t\ge 0$ the expectation $E_{x}\,f\bigl(\sum_{i=1}^{n}X_{i}(t)\bigr)$ is dominated by the corresponding expectation for the driftless Gaussian comparison process with covariance matrix $\Lambda$ started at $x$. Here, the symbol $\le$ refers to the usual order of positive matrices. Furthermore, if in addition $f''>0$ (in the sense of distributions), then this result holds with strict inequalities.
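For orientation, the univariate driftless prototype of such an estimate (Hajek’s comparison result, as discussed in the Introduction) can be stated roughly as follows, in notation chosen here only for illustration: if $dX(t)=\sigma\bigl(X(t)\bigr)\,dW(t)$, $X(0)=x$, with $0\le\sigma(\cdot)\le\lambda$ for a constant $\lambda>0$, and $f$ is convex, then
\[
E_{x}\,f\bigl(X(t)\bigr)\;\le\;E_{x}\,f\bigl(x+\lambda\,W(t)\bigr),\qquad t\ge 0 .
\]
Theorem 1 is the multivariate stochastic-sum analogue of this estimate.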

For processes with drift we prove the following.

Theorem 2. Let $f\in C\!\left(\mathbb{R}\right)$ be nondecreasing and convex, and assume that $f$ satisfies an exponential growth condition. Assume that $\lambda_{i}$, $1\le i\le n$, are some positive real constants. Furthermore, let $X$ be Itô diffusions with nonzero drift and with $X(0)=x$, where
\[
dX(t)=b\bigl(X(t)\bigr)\,dt+\sigma\bigl(X(t)\bigr)\,dW(t)
\]
with bounded Lipschitz-continuous drift functions $b=\left(b_{i}\right)$ and an $n\times n$-matrix-valued bounded Lipschitz-continuous function $\sigma=\left(\sigma_{ij}\right)$, and let $\Lambda$ denote the diagonal comparison matrix determined by the constants $\lambda_{i}$. If $\sigma\sigma^{T}\le\Lambda$ and the drift $b$ is dominated by the constant drift of the Gaussian comparison process, then for $t\ge 0$ the expectation $E_{x}\,f\bigl(\sum_{i=1}^{n}X_{i}(t)\bigr)$ is dominated by the corresponding expectation for the Gaussian comparison process with drift started at $x$. Here, the inequality between the drifts is understood componentwise. Furthermore, if in addition $f''>0$ (in the sense of distributions), then this result holds with strict inequalities.
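Again for orientation, the univariate prototype with drift (stated roughly, in illustrative notation) reads: if $dX(t)=b\bigl(X(t)\bigr)\,dt+\sigma\bigl(X(t)\bigr)\,dW(t)$, $X(0)=x$, with $b(\cdot)\le\beta$ and $0\le\sigma(\cdot)\le\lambda$ for constants $\beta$ and $\lambda>0$, and $f$ is nondecreasing and convex, then
\[
E_{x}\,f\bigl(X(t)\bigr)\;\le\;E_{x}\,f\bigl(x+\beta t+\lambda\,W(t)\bigr),\qquad t\ge 0 .
\]
The monotonicity of $f$ cannot be dropped here, since a drift bound controls the process only from above.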

Remark 3. Bounded Lipschitz-continuity, that is, the condition that for some constant $L>0$ the bounds $\left|\sigma_{ij}(x)\right|\le L$ and $\left|\sigma_{ij}(x)-\sigma_{ij}(y)\right|\le L\left|x-y\right|$ hold for all $x,y\in\mathbb{R}^{n}$ and all $1\le i,j\le n$, implies the existence of a solution of the equation for $X$ which is continuous in time in the stochastic sense. The same holds for the drift coefficients $b_{i}$ of Theorem 2. The proofs are based on a generalisation of ODE arguments (a Picard-type iteration, sketched below) to infinite-dimensional function spaces and can be found in elementary standard textbooks such as [8].
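For orientation, the standard construction behind this existence statement is the Picard-type iteration (written here in the notation of Theorem 2; for Theorem 1 set $b\equiv 0$):
\[
X^{(0)}(t)=x,\qquad
X^{(k+1)}(t)=x+\int_{0}^{t}b\bigl(X^{(k)}(s)\bigr)\,ds+\int_{0}^{t}\sigma\bigl(X^{(k)}(s)\bigr)\,dW(s),\qquad k\ge 0,
\]
which converges under the bounded Lipschitz condition in the mean-square sense, uniformly on compact time intervals, to the unique strong solution with continuous paths.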

2. Proof of Theorem 1

We first remark that the initial data function has to be univariate: for a general multivariate data function the results do not hold, because simple examples show that convexity can be strongly violated in this general situation. Since classical representations of the value functions in terms of the probability density (the fundamental solution) are not convolutions, we use the adjoint of the fundamental solution. For this and other technical reasons we need some more regularity of the data function and of the diffusion matrix in order to treat the problem at an analytical level. We will then observe that the pointwise result is preserved as we consider certain limits of data and coefficient functions which remove the additional regularity assumptions. First we need some regularity assumptions which ensure the existence of the fundamental solution and of the adjoint fundamental solution in a classical sense, that is, with pointwise well-defined spatial derivatives up to second order and a pointwise well-defined partial time derivative up to first order (in the domain where they are continuous). For the sake of possible generalisations in the next section we consider a more general operator with first-order and potential terms (see the explicit form below). We include the potential term coefficient because such a coefficient appears in the adjoint even if it vanishes in the original operator. Recall that the adjoint operator is obtained from the operator by integration by parts. Here we use Einstein notation, that is, repeated indices are summed over, and we write $a=\sigma\sigma^{T}$, that is, $a_{ij}=\sum_{k}\sigma_{ik}\sigma_{jk}$, for the sake of brevity. In this section we will assume that the drift and potential coefficients of the operator vanish. Note that even in this restrictive situation the adjoint has, in general, nonvanishing first-order and potential coefficients. For our purposes it suffices to assume that the coefficients depend on the spatial variables only (the generalisation to additional time dependence is straightforward). In order that the adjoint exists in a classical sense the coefficients should have bounded continuous derivatives.
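With this notation, and with the scaling convention of the Itô generator (the factor $\tfrac{1}{2}$ is fixed here only for orientation), the operator and its formal adjoint read
\[
Lu=\tfrac{1}{2}\sum_{i,j=1}^{n}a_{ij}\,\partial_{x_{i}}\partial_{x_{j}}u+\sum_{i=1}^{n}b_{i}\,\partial_{x_{i}}u+c\,u,
\qquad
L^{*}v=\tfrac{1}{2}\sum_{i,j=1}^{n}\partial_{x_{i}}\partial_{x_{j}}\bigl(a_{ij}\,v\bigr)-\sum_{i=1}^{n}\partial_{x_{i}}\bigl(b_{i}\,v\bigr)+c\,v .
\]
Expanding the adjoint gives
\[
L^{*}v=\tfrac{1}{2}\sum_{i,j=1}^{n}a_{ij}\,\partial_{x_{i}}\partial_{x_{j}}v+\sum_{i=1}^{n}b_{i}^{*}\,\partial_{x_{i}}v+c^{*}\,v,
\qquad
b_{i}^{*}=\sum_{j=1}^{n}\partial_{x_{j}}a_{ij}-b_{i},
\qquad
c^{*}=\tfrac{1}{2}\sum_{i,j=1}^{n}\partial_{x_{i}}\partial_{x_{j}}a_{ij}-\sum_{i=1}^{n}\partial_{x_{i}}b_{i}+c,
\]
so that even for $b\equiv 0$ and $c\equiv 0$ the adjoint has, in general, nonvanishing first-order and potential coefficients.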

We assume the following:
(i) The coefficients $a_{ij}$ are of class $C^{2}\!\left(\mathbb{R}^{n}\right)\cap H^{2}\!\left(\mathbb{R}^{n}\right)$, where $C^{m}\!\left(\mathbb{R}^{n}\right)$ denotes the space of real-valued $m$-times continuously differentiable functions and $H^{s}\!\left(\mathbb{R}^{n}\right)$ denotes the standard Sobolev space of order $s$. In the next section we assume in addition that the drift coefficients $b_{i}$ are of class $C^{1}\!\left(\mathbb{R}^{n}\right)\cap H^{1}\!\left(\mathbb{R}^{n}\right)$ for all $1\le i\le n$. For the following considerations concerning the adjoint we assume corresponding regularity of the potential coefficient if a potential coefficient is considered.
(ii) We have uniform ellipticity; that is, there exists $\lambda_{0}>0$ such that
\[
\sum_{i,j=1}^{n}a_{ij}(x)\,\xi_{i}\xi_{j}\;\ge\;\lambda_{0}\left|\xi\right|^{2}\qquad\text{for all }x,\xi\in\mathbb{R}^{n}.
\]
We use one observation concerning the adjoint. Note that the adjoint equations are known in the context of probability theory as the forward and backward Kolmogorov equations. The derivation of these equations (cf. Feller’s classical treatment) shows that the density and its adjoint are equal and that it is possible to switch from a representation of Cauchy problem solutions in terms of the backward density to an equivalent representation in terms of the forward equation. For example, forward and backward representations of option prices for regime switching models are used in [9]. However, the essential additional observation we need here is the relation between partial derivatives of densities and of their adjoint densities. We use again Einstein’s notation for classical derivatives.
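In this notation the equality of the density and its adjoint density referred to above is the standard duality
\[
p(t,x;y)=p^{*}(t,y;x),\qquad t>0,\ x,y\in\mathbb{R}^{n},
\]
where $p$ denotes the fundamental solution of $\partial_{t}u=Lu$ and $p^{*}$ the fundamental solution of $\partial_{t}v=L^{*}v$; in particular, for fixed $x$ the function $(t,y)\mapsto p(t,x;y)$ solves the forward (adjoint) equation.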

Lemma 4. Assume that conditions (i) and (ii) hold, let $p$ be the fundamental solution of $\partial_{t}u=Lu$, and let $p^{*}$ be the fundamental solution of the adjoint equation $\partial_{t}v=L^{*}v$. Then for $t>0$ both $p$ and $p^{*}$ have spatial derivatives up to order 2, and the spatial derivatives of $p$ can be expressed in terms of spatial derivatives of $p^{*}$ and of the coefficient functions.

Proof. For $t>0$ we show that the stated relations hold. Let $B_{R}$ be the ball of radius $R$ around zero. Using Green’s identity on $B_{R}$, together with Gaussian upper bounds for the fundamental solution and its first-order spatial derivatives, we see that the boundary terms vanish in the limit $R\uparrow\infty$. This leads to the corresponding integral identities, and in the limit $R\uparrow\infty$ we get the relations stated.
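The version of Green’s identity used in this proof reads, with the convention for $L$ and $L^{*}$ fixed above (an assumption of this sketch) and with $\nu$ the outer normal on $\partial B_{R}$,
\[
\int_{B_{R}}\bigl(v\,Lu-u\,L^{*}v\bigr)\,dx
=\int_{\partial B_{R}}\sum_{i}\Bigl[\tfrac{1}{2}\sum_{j}\bigl(v\,a_{ij}\,\partial_{x_{j}}u-u\,\partial_{x_{j}}(a_{ij}v)\bigr)+b_{i}\,u\,v\Bigr]\nu_{i}\,dS,
\]
applied to the densities $p$ and $p^{*}$; the Gaussian upper bounds ensure that the boundary integral vanishes as $R\uparrow\infty$.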

For technical reasons we need some further approximations concerning the data. As we are aiming at a pointwise comparison result, and as we have Gaussian upper bounds at our disposal, it suffices to consider approximating data which are regular and convex in a core region and decay to zero at spatial infinity. We have the following.

Proposition 5. Let $f$ be a real-valued continuous convex function, and let $B_{R}$ be the ball of finite radius $R$ around the origin. Then there is a smooth function $f_{R}$ such that
(i) $f_{R}$ approximates $f$ on $B_{R}$ and decays to zero at spatial infinity;
(ii) on $B_{R}$ the second (classically well-defined) derivative of $f_{R}$ is strictly positive; that is, $f_{R}''>0$ on $B_{R}$.

Proposition 5 can be proved by using regular polynomial interpolation. Here the fact that classical derivatives of second order exist almost everywhere for a continuous convex function can be used. The approximating function $f_{R}$ is not convex in general of course, but it is convex in the core region $B_{R}$. For the value function with data $f_{R}$, using Lemma 4 and integration by parts, the entries of its Hessian can be expressed as integrals of the second derivative of the data function (here, for a univariate function, the second derivative is meant) against the adjoint density and derivatives of the coefficients. Since the second derivative of the data is strictly positive on the core region $B_{R}$, and since by the standard Gaussian estimate recalled below (with some finite constants) the contribution from outside the core region is small, we get that the Hessian of the value function is positive in a smaller core region for $R$ large enough. Furthermore, classical regularity theory (cf. [10] and references therein) tells us that the value function is smooth for positive time.
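The Gaussian estimate invoked here has the standard form: there are finite constants $C,D>0$, depending only on the bounds for the coefficients (and introduced here only for orientation), such that
\[
0\le p(t,x;y)\le\frac{C}{t^{n/2}}\exp\Bigl(-\frac{\left|x-y\right|^{2}}{D\,t}\Bigr),
\qquad
\bigl|\partial_{x_{i}}p(t,x;y)\bigr|\le\frac{C}{t^{(n+1)/2}}\exp\Bigl(-\frac{\left|x-y\right|^{2}}{D\,t}\Bigr),
\]
for $t>0$ and $x,y\in\mathbb{R}^{n}$, and analogously for the adjoint density.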

Remark 6. Actually, the regularity follows from the smoothness of the density for positive time and holds in the more general context of highly degenerate parabolic equations of second order (cf. [11]). In this paper we consider equations with a uniformly elliptic second-order part, because this implies that the density, its adjoint, and the spatial derivatives up to second order decay to zero at spatial infinity. This is not true in general for highly degenerate equations (cf. [12]). Extensions to some classes of degenerate equations are possible (cf. [10]).

It follows that the relevant quadratic expression, formed from the coefficient matrix and the Hessian of the value function evaluated at the respective argument, is nonnegative. Hence the value function is convex in the core region, and as the limit of the Hessian is well defined for positive time we get convexity also in the limit of the data approximation. Now consider the value function associated with the comparison coefficients and the value function associated with the regularised coefficients, both solving Cauchy problems with the same data. Note that their difference satisfies a parabolic equation whose source term is a nonnegative contraction of the difference of the diffusion matrices with the Hessian of the value function. We have the classical Duhamel-type representation of this difference in terms of the fundamental solution of the comparison equation (sketched below). As the source term and the fundamental solution are nonnegative, we conclude that the comparison value function dominates. Now we have proved the main theorem for regularised coefficients and regularised data. Next, for each bounded Lipschitz-continuous dispersion matrix there exists a sequence of matrices with smooth components which approximate it, with the associated diffusion matrices converging accordingly; here the original dispersion matrix related to the process of the main theorem is the one which is assumed to be bounded and Lipschitz-continuous. For data which satisfy analogous approximation conditions we define the corresponding value functions, and the preceding argument shows that the comparison holds for these approximations. This leads to the corresponding comparison for the expectations of processes with bounded continuous dispersion coefficients. Similarly, the data function is obtained as a limit of functions which coincide with it on the core regions. Finally, the value functions can be replaced by expectations with respect to the probability law of the processes, and a limit consideration for data which equal, for each core region, the original data function on that region leads to the statement of the theorem, using a uniform exponential bound of the data functions, the boundedness of the Lipschitz-continuous coefficients, and the Gaussian law of the Brownian motion.
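The comparison step described above can be sketched as follows, in notation introduced here only for illustration. Let $u^{\sigma}$ and $u^{\Lambda}$ denote the value functions for the regularised diffusion matrix $a=\sigma\sigma^{T}$ and for the comparison matrix $\Lambda$, with the same data, and set $w=u^{\Lambda}-u^{\sigma}$. Then, formally,
\[
\partial_{t}w=\tfrac{1}{2}\sum_{i,j}\Lambda_{ij}\,\partial_{x_{i}}\partial_{x_{j}}w+\tfrac{1}{2}\sum_{i,j}\bigl(\Lambda_{ij}-a_{ij}(x)\bigr)\,\partial_{x_{i}}\partial_{x_{j}}u^{\sigma},
\qquad w(0,\cdot)=0,
\]
so that by Duhamel’s principle, with $p^{\Lambda}$ the (nonnegative) fundamental solution of the comparison equation,
\[
w(t,x)=\tfrac{1}{2}\int_{0}^{t}\!\!\int_{\mathbb{R}^{n}}p^{\Lambda}(t-s,x;y)\sum_{i,j}\bigl(\Lambda_{ij}-a_{ij}(y)\bigr)\,\partial_{y_{i}}\partial_{y_{j}}u^{\sigma}(s,y)\,dy\,ds\;\ge\;0,
\]
since $\Lambda-a\ge 0$ in the order of positive matrices and the Hessian of $u^{\sigma}$ is positive semidefinite for convex data.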

3. Additional Note for the Proof of Theorem 2

If the value functions for the original coefficients with drift and for the comparison coefficients with drift solve the corresponding Cauchy problems with the same data, note that their difference satisfies a parabolic equation whose source term now contains, in addition to the contraction of the difference of the diffusion matrices with the Hessian, the contraction of the difference of the drifts with the spatial gradient of the value function. Consider the corresponding Duhamel-type representation of the difference in terms of the fundamental solution of the comparison equation with drift.

As the fundamental solution is nonnegative and the diffusion part of the source term is nonnegative as before, we conclude that the comparison holds if the drift part of the source term is nonnegative as well. As the drift differences are nonnegative componentwise by assumption, this condition reduces to the monotonicity condition that the first-order spatial derivatives of the value function are nonnegative (see the sketch below). The truth of the latter monotonicity condition for the value function (with nondecreasing data) can be proved using the adjoint, by the same trick as in the preceding section.
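In the illustrative notation of the sketch above, with drift functions $b=(b_{i})$ and comparison drift $\beta=(\beta_{i})$, the difference $w$ of the value functions satisfies, formally,
\[
\partial_{t}w=\tfrac{1}{2}\sum_{i,j}\Lambda_{ij}\,\partial_{x_{i}}\partial_{x_{j}}w+\sum_{i}\beta_{i}\,\partial_{x_{i}}w
+\tfrac{1}{2}\sum_{i,j}\bigl(\Lambda_{ij}-a_{ij}\bigr)\,\partial_{x_{i}}\partial_{x_{j}}u^{b,\sigma}
+\sum_{i}\bigl(\beta_{i}-b_{i}\bigr)\,\partial_{x_{i}}u^{b,\sigma},
\qquad w(0,\cdot)=0,
\]
where $u^{b,\sigma}$ denotes the value function for the original coefficients. Nonnegativity of $w$ then follows from the Duhamel representation once $\Lambda-a\ge 0$, $\beta-b\ge 0$ componentwise, the Hessian of $u^{b,\sigma}$ is positive semidefinite, and the monotonicity $\partial_{x_{i}}u^{b,\sigma}\ge 0$ holds, the latter being the point where the nondecreasing data enter.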

Remark 7. These notes are taken from my lecture notes “Die Fundamentallösung Parabolischer Gleichungen und Schwache Schemata Höherer Ordnung für Stochastische Diffusionsprozesse” (“The Fundamental Solution of Parabolic Equations and Weak Schemes of Higher Order for Stochastic Diffusion Processes”) of WS 2005/2006 in Heidelberg, which are not published. The argument given there is published now upon request, as research on applications of comparison principles is ongoing. Originally the relevance of stochastic comparison results was pointed out to the author by P. Laurence and V. Henderson. The main theorems proved here are stated essentially in the conference notes [6, 7] but were not strictly proved there. In those notes applications to American options and to passport options are considered. For example, explicit solutions for optimal strategies related to the optimal control problem of passport options, and the dependence of such a strategy on the correlations between assets, can be obtained. The proof given here can be applied in the univariate case as well and recovers the result of Hajek in [1] using the result of [2].

Competing Interests

The author of this paper declares that there are no competing interests regarding the publication of this paper.