Abstract

We propose a continuous-time autoregressive model for the temperature dynamics with volatility being the product of a seasonal function and a stochastic process. We use the Barndorff-Nielsen and Shephard model for the stochastic volatility. The proposed temperature dynamics is flexible enough to model temperature data accurately, while at the same time being analytically tractable. Futures prices for commonly traded contracts at the Chicago Mercantile Exchange on indices like cooling- and heating-degree days and cumulative average temperatures are computed, as well as option prices on them.

1. Introduction

Protection against undesirable weather events, like, for instance, hurricanes and droughts, has been offered by insurance companies. In recent decades, the securitization of such weather insurance has taken place, and nowadays one can trade weather derivative contracts at the Chicago Mercantile Exchange (CME), or more specialized weather contracts in the OTC market. At the CME, one finds derivatives written on temperature and snowfall indices, measured at different locations worldwide. The market for weather derivatives is emerging and gaining importance as a tool for hedging weather risk.

In this paper we propose a new model for the dynamics of temperature which forms the basis for pricing weather derivatives. The model generalizes the continuous-time autoregressive models suggested in Benth et al. [1] (see also Dornier and Querel [2] and Benth et al. [3]) to allow for stochastic volatility effects. Signs of stochastic volatility in the temperature variations have been detected in many data studies, for example, in an extensive study of daily US temperature series by Campbell and Diebold [4] and in Norwegian data by Benth and Šaltytė-Benth [5]. Campbell and Diebold suggested a seasonal generalized autoregressive conditional heteroskedasticity (GARCH) model to explain their observations, while in this paper we suggest modelling the variations by the stochastic volatility model of Barndorff-Nielsen and Shephard [6].

As it turns out, one may derive reasonably explicit expressions for futures prices of commonly traded contracts at the CME. Our proposed model is flexible in capturing the stylized features of temperature data, as well as being analytically tractable. To account for seasonality, we introduce a multiplicative structure, which is much simpler to analyse than the more complex model suggested by Campbell and Diebold [4], where the seasonality enters additively in the GARCH dynamics. One of the main advantages of our model is that it is set in a continuous-time framework, accommodating a simple application of the theory of derivatives pricing. For example, we find that the volatility of futures prices on the cumulative average temperature is given by the temperature volatility, modified by a Samuelson effect inherited from the autoregressive structure of the temperature dynamics.

The risk premium in derivatives prices is parametrized by a time-dependent market price of risk. In addition, we include a market price of volatility risk. Thus, the risk loading in derivatives prices, interpreted as the risk premium in the financial setting, includes compensation for both temperature risk and volatility risk. In mathematical terms, the pricing measure is obtained by a combination of a Girsanov and an Esscher transform.

We discuss the estimation of the proposed temperature model on data from Stockholm, Sweden. The purpose of the study is not to give a full-blown fitting of the model, but to outline the steps and to justify the model. Next, we discuss some issues related to the mean reversion of the model, where in particular we derive the so-called half-life of our temperature model. The importance of the various terms in the model is discussed.

We present our findings as follows. In the next section we review the basics of the CME market for weather derivatives. Next, in Section 3, our temperature model with seasonal stochastic volatility is defined and analysed. Section 4 contains results on the futures price dynamics for some commonly traded products at the CME. Some empirical considerations of our model and its applications to weather derivatives are presented in Section 5.

2. The Market for Temperature Derivatives

The organized market for temperature derivatives at the CME offers trade in futures contracts "delivering" various temperature indices measured at different locations in the world, including the US, Canada, Australia, Japan, and some countries in Europe. In addition, there are contracts written on snowfall in New York and frost conditions at Schiphol airport in Amsterdam. On the exchange one may also buy plain vanilla European call and put options on the futures.

The main temperature indices are so-called cooling-degree days (CDDs), heating-degree days (HDDs), and cumulative average temperature (CAT). The CAT index is used mainly for European cities. For a measurement period $[T_1,T_2]$, $T_1<T_2$, the CDD index measures the demand for cooling and is defined as the cumulative amount of degrees above a threshold $c$. Mathematically, we express it as
\[
\sum_{t=T_1}^{T_2}\max(T(t)-c,0), \tag{2.1}
\]
with $T(t)$ being the daily average temperature on day $t$, where the average is computed as the mean of the daily maximum and minimum observations. The threshold $c$ is given in the market as 18°C, or 65°F, and is the trigger point at which air-conditioning is switched on. The measurement periods are set to weeks, months, or seasons consisting of two or more months, within the warmer parts of the year. A futures contract written on the CDD pays out an amount of money to the buyer proportional to the index, in US dollars for the American market and in euros for the European one (except London, where the currency is British pounds). By entering such a contract, one essentially exchanges a fixed CDD against a floating one, and thereby one may view these futures as swaps.

An HDD index is defined similarly to the CDD as the cumulative amount of degrees below a threshold $c$ over a measurement period $[T_1,T_2]$, that is,
\[
\sum_{t=T_1}^{T_2}\max(c-T(t),0). \tag{2.2}
\]
The index reflects the demand for heating at a certain location, and measurement periods are typically weeks, months, or seasons in the cold period (winter).

The CAT index is simply the accumulated average temperature over the measurement period, defined as
\[
\sum_{t=T_1}^{T_2}T(t). \tag{2.3}
\]
This index is used for European and Canadian cities at the CME. The CME also operates a daily average index called the Pacific Rim, measured in several Japanese cities. The Pacific Rim index is simply the average of daily temperatures over a measurement period, with the daily temperature defined as the mean of the hourly observed temperatures.

Temperature futures may be used by energy producers to hedge their demand risk (see Perez-Gonzales and Yun [7] for a study of weather hedging by US energy utilities). Other typical users of such futures contracts for hedging purposes are electricity retailers, holiday resorts, producers of soft drinks and beer, or even organisers of outdoor sports events and fairs. Investors have also seen the potential in weather derivatives as a tool for portfolio diversification, since the derivatives are not expected to correlate significantly with the financial markets (see Brockett et al. [8]).

For mathematical convenience, we will use integration rather than summation and define the three indices CDD, HDD, and CAT over a measurement period $[T_1,T_2]$ by
\[
\mathrm{CDD}(T_1,T_2) = \int_{T_1}^{T_2}\max(T(t)-c,0)\,dt, \tag{2.4}
\]
\[
\mathrm{HDD}(T_1,T_2) = \int_{T_1}^{T_2}\max(c-T(t),0)\,dt, \tag{2.5}
\]
\[
\mathrm{CAT}(T_1,T_2) = \int_{T_1}^{T_2}T(t)\,dt, \tag{2.6}
\]
respectively.
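
As a simple illustration of these definitions (a sketch, not part of the original analysis), the following code computes the discrete CDD, HDD, and CAT indices from a vector of daily average temperatures; the temperature data and the threshold value are assumptions made purely for the example.

```python
# Minimal sketch: discrete CDD, HDD, and CAT indices of (2.1)-(2.3) from daily averages.
import numpy as np

def degree_day_indices(temps, c=18.0):
    """Return (CDD, HDD, CAT) for a series of daily average temperatures."""
    temps = np.asarray(temps, dtype=float)
    cdd = np.maximum(temps - c, 0.0).sum()   # cooling-degree days, cf. (2.1)
    hdd = np.maximum(c - temps, 0.0).sum()   # heating-degree days, cf. (2.2)
    cat = temps.sum()                        # cumulative average temperature, cf. (2.3)
    return cdd, hdd, cat

# Example: one month of hypothetical daily averages in degrees Celsius.
rng = np.random.default_rng(0)
temps = 17.0 + 3.0 * rng.standard_normal(30)
print(degree_day_indices(temps))
```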

In this paper we are concerned with modelling the temperature dynamics $T(t)$ accurately in continuous time. Moreover, we aim at deriving futures prices based on our proposed dynamics. In the literature, one finds several streams for pricing weather derivatives. The main approach is to model the stochastic temperature evolution and price by conditional risk-adjusted expectation (see Benth et al. [3]). This is motivated from fixed-income theory, where zero-coupon bonds are priced using a model for the short rate of interest and conditional risk-adjusted expectation (see, e.g., Duffie [9]). Another popular way to price derivatives is the so-called burn analysis (see Jewson and Brix [10]). There, one computes the historical average of the index in question and prices by this figure. One may add a risk loading to this price. A more sophisticated pricing method is the so-called index modelling approach (see Jewson and Brix [10]), where one models the historical index value by a probability distribution. Index modelling was applied by Dorfleitner and Wimmer [11] in an extensive empirical study of US futures prices at the CME. The drawback with these two ways of pricing is that one does not obtain any dynamics for the futures price. If one wants to derive option prices, one must have access to the volatility of the futures price as well as the price itself. Although the burn analysis (index modelling) gives an accurate estimate of the historical index value (distribution), it is expected that a precise model of the temperature dynamics explains the historical average (distribution) nearly as well. In addition, one obtains the stochasticity of the temperature evolution, which enables us to find the futures price dynamics and to price options on such futures.

Brockett et al. [8] apply the indifference pricing approach in a mean-variance setting to value weather derivatives. Davis [12] introduces the marginal utility approach to pricing weather derivatives, which is based on hedging in correlated assets and modelling of the index as a geometric Brownian motion. Platen and West [13] suggest using a world index as a benchmark for weather derivatives pricing. Equilibrium theory has also been a popular approach for pricing weather derivatives. Cao and Wei [14] apply it to analyse US temperature futures (see also Hamisultane [15] for a more recent study).

3. A Model for the Temperature Dynamics with Seasonal Stochastic Volatility

Let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{0\le t\le\tau},P)$ be a complete filtered probability space, with $\tau<\infty$ being a finite time horizon for which all the relevant financial assets in question exist.

We suppose that the temperature $T(t)$ at time $t\ge 0$ is defined as
\[
T(t) = \Lambda(t) + Y(t), \tag{3.1}
\]
where $\Lambda(t)$ is the deterministic seasonal mean function and $Y(t)$ is a stochastic process modelling the random fluctuations around the mean. In other words, $Y(t) = T(t) - \Lambda(t)$ is the deseasonalized temperature.

The seasonal mean function $\Lambda(t)$ is assumed to be a real-valued continuous function. To capture the seasonal variations due to cold and warm periods of the year, and a possible increase in temperatures due to global warming and urbanization, say, a possible structure of $\Lambda(t)$ could be
\[
\Lambda(t) = a + bt + c\sin\left(\frac{2\pi(t-d)}{365}\right), \tag{3.2}
\]
for constants $a$, $b$, $c$, and $d$, and with time $t$ measured in days. Such a specification of the seasonality function was suggested by Alaton et al. [16] in their analysis of temperatures observed in Bromma, Sweden. Benth and Šaltytė Benth [5] applied this seasonality function when modelling Norwegian temperature data and later used it for data measured in Lithuania and Sweden in the papers [1, 17, 18]. Mraoua and Bari [19] apply such a seasonal function to Moroccan data. Other, more sophisticated seasonal functions may of course be used (see Härdle and Lopez Cabrera [20] and Härdle et al. [21]).
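
The following sketch (an illustration only, not the estimation code of the cited studies) fits the seasonal function (3.2) to a daily temperature series by least squares; the synthetic data and the starting values passed to the optimizer are assumptions.

```python
# Minimal sketch: least squares fit of Lambda(t) = a + b*t + c*sin(2*pi*(t - d)/365).
import numpy as np
from scipy.optimize import curve_fit

def seasonal_mean(t, a, b, c, d):
    return a + b * t + c * np.sin(2.0 * np.pi * (t - d) / 365.0)

# Hypothetical data: a few years of daily averages (replace with real observations).
t = np.arange(5 * 365, dtype=float)
rng = np.random.default_rng(1)
temps = seasonal_mean(t, 6.4, 1e-4, 10.4, 161.0) + 2.0 * rng.standard_normal(t.size)

params, _ = curve_fit(seasonal_mean, t, temps, p0=[5.0, 0.0, 10.0, 150.0])
print(dict(zip("abcd", params)))   # estimated a, b, c, d
```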

Empirical analyses of temperature data in Norway, Sweden, and Lithuania (see Benth et al. [1, 5, 17, 18]), and in US and Asian cities (see Cao and Wei [14] and Härdle and Lopez Cabrera [20]), suggest that the deseasonalized temperatures $Y(t)$ have an autoregressive structure on a daily scale. This motivates the use of a continuous-time autoregressive (CAR) model for $Y(t)$ (see Brockwell [22] for a general account of such processes).

For $p\ge 1$, a CAR($p$)-dynamics is defined as follows. Define the $p\times p$ matrix $A$ by
\[
A = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
-\alpha_p & -\alpha_{p-1} & \cdots & -\alpha_2 & -\alpha_1
\end{pmatrix}, \tag{3.3}
\]
with $\alpha_i$ being positive constants for $i=1,\ldots,p$; the upper right $(p-1)\times(p-1)$ block of $A$ is the identity matrix $I$. Observe that the determinant of $A$ is equal to $(-1)^p\alpha_p\ne 0$, and hence $A$ is invertible. For a one-dimensional Brownian motion $B(t)$, we define the $p$-dimensional Ornstein-Uhlenbeck process $\mathbf{X}(t)$ as
\[
d\mathbf{X}(t) = A\mathbf{X}(t)\,dt + \sigma\mathbf{e}_p\,dB(t), \tag{3.4}
\]
where $\mathbf{e}_i$, $i=1,\ldots,p$, is the $i$th canonical unit vector in $\mathbb{R}^p$ and $\sigma>0$ is a constant. A CAR($p$) process is defined as $Y(t)=\mathbf{e}_1'\mathbf{X}(t)$, with $\mathbf{x}'$ being the transpose of a vector $\mathbf{x}$. It seems that a CAR(3) is the most reasonable choice from an empirical point of view (see, e.g., Benth et al. [1] or Härdle and Lopez Cabrera [20], where the volatility $\sigma$ is allowed to be time-dependent to allow for seasonality effects). In passing we note that a similar model has been used for wind speed modelling in New York and Lithuania, resulting in a CAR(4) process for deseasonalized data as the statistically optimal choice (see Šaltytė Benth and Benth [23] and Šaltytė Benth and Šaltytė [24]).
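
To make the construction concrete, the following sketch builds the matrix $A$ from $\alpha_1,\ldots,\alpha_p$ and simulates the CAR($p$) dynamics (3.4) on an Euler grid with constant volatility; the coefficient values, the time step, and the use of a plain Euler scheme are illustrative assumptions, not choices made in the paper.

```python
# Minimal sketch: CAR(p) matrix and Euler simulation of dX = A X dt + sigma e_p dB.
import numpy as np

def car_matrix(alphas):
    """p x p matrix A of (3.3): identity in the upper-right block, last row (-alpha_p, ..., -alpha_1)."""
    p = len(alphas)
    A = np.zeros((p, p))
    A[:-1, 1:] = np.eye(p - 1)
    A[-1, :] = -np.asarray(alphas, dtype=float)[::-1]
    return A

def simulate_car(alphas, sigma=1.0, dt=0.1, n_steps=5000, seed=0):
    """Euler scheme; returns the path of Y(t) = first coordinate of X(t)."""
    p = len(alphas)
    A = car_matrix(alphas)
    e_p = np.eye(p)[:, -1]                 # noise enters the last coordinate only
    rng = np.random.default_rng(seed)
    X = np.zeros(p)
    Y = np.empty(n_steps)
    for i in range(n_steps):
        dB = np.sqrt(dt) * rng.standard_normal()
        X = X + A @ X * dt + sigma * e_p * dB
        Y[i] = X[0]
    return Y

Y = simulate_car([2.0, 1.3, 0.2])          # placeholder CAR(3) coefficients
print(Y[:5])
```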

In this paper we are concerned with an extension of the CAR($p$) model defined in (3.4) to explain seasonality and stochastic volatility effects in the temperature variations. In Benth and Šaltytė Benth [5], it was observed from a study of Norwegian temperatures that the residuals of temperature, after explaining the autoregressive effects, had a clear seasonality and signs of stochastic volatility. This seasonal structure can be partly explained by a seasonal variance. However, one is still left with signs of stochastic volatility, observed as an exponentially decaying autocorrelation function for the squared residuals. Similar observations were made by Campbell and Diebold [4] on US data. They suggest a GARCH volatility structure with an added seasonal level component to model this. However, from a modelling and empirical point of view, it seems much more reasonable to consider a multiplicative structure between the seasonality and the stochasticity of the volatility. It is a rather standard approach in time series analysis to apply multiplicative models when dealing with phenomena taking positive values. Using a multiplicative structure makes it easy to preserve the natural condition of positivity of the overall volatility. Also, empirically it becomes simple to estimate the seasonality and the stochastic volatility contributions in a stepwise procedure. We remark in passing that the seasonal GARCH model of Campbell and Diebold [4] may lead to negative values of the volatility. Our proposed stochastic volatility model lends itself to analytical tractability when it comes to pricing of weather derivatives, which seems to become very complicated when using the seasonal GARCH model of Campbell and Diebold [4].

Hence, we propose the following CAR($p$) model with seasonal stochastic volatility. Suppose that $\mathbf{X}(t)$ follows the dynamics
\[
d\mathbf{X}(t) = A\mathbf{X}(t)\,dt + \phi(t)\mathbf{e}_p\,dB(t), \tag{3.5}
\]
where
\[
\phi(t) = \zeta(t)\times\sigma(t), \tag{3.6}
\]
for a bounded positive continuous function $\zeta(t)$ and $\sigma(t)$ a stochastic process assumed to be positive. We assume that $\zeta$ is strictly bounded away from zero, that is, there exists a constant $\delta>0$ such that $\zeta(t)\ge\delta$ for all $t\le\tau$.

In Benth et al. [1], the model (3.5) was considered with $\sigma(t)=1$, that is, with no stochastic volatility effects. The function $\zeta(t)$ was estimated on data from Stockholm, Sweden, to be a truncated Fourier series of order four with a yearly seasonality, that is, $\zeta(t+k\times 365)=\zeta(t)$ for $k=1,2,\ldots$ and $t\ge 0$, with time measured in days. This seasonal volatility has been observed and modelled for data in Sweden and Lithuania (see Benth et al. [1, 18]), and for US and Asian cities (see Cao and Wei [14], Härdle and Lopez Cabrera [20], and Benth et al. [25]). Recently, Härdle et al. [21] proposed and applied a powerful local adaptive modelling approach to optimally estimate the volatility.

A class of stochastic volatility processes providing a great deal of flexibility in the precise modelling of residual characteristics is given by the Barndorff-Nielsen and Shephard [6] (BNS) model. In mathematical terms, we let
\[
\sigma^2(t) = V(t), \tag{3.7}
\]
with
\[
dV(t) = -\lambda V(t)\,dt + dL(t). \tag{3.8}
\]
Here, $\lambda>0$ is a constant measuring the speed of mean reversion of the volatility process $V(t)$, which reverts towards zero. The process $L(t)$ is assumed to be a subordinator independent of the Brownian motion $B$, that is, a Lévy process with increasing paths. In this way one ensures that $V(t)$ is positive.

In the BNS model a dependency on $\lambda$ in the subordinator is introduced. We choose a subordinator $U(t)$ and define $L(t)=U(\lambda t)$, which again is a subordinator. This is convenient when estimating such a stochastic volatility model. In fact, one may get relatively explicit distributions for the "deseasonalized" residuals $\sqrt{V(t)}\,dB(t)$, which become conditionally normally distributed with mean zero and variance $V(t)$. In stationarity of $V$, this distribution becomes independent of $\lambda$, and therefore one may separate the modelling of the distribution of these residuals from the dependency structure in the paths. For this stochastic volatility model, the squared residuals will have an exponentially decaying autocorrelation function, with decay rate $\lambda$. By superposition of such $V$'s, the autocorrelation function may decay as a sum of exponentials. We refer to Barndorff-Nielsen and Shephard [6] for an extensive analysis of this class of stochastic volatility models. We remark that the BNS stochastic volatility does not become a GARCH dynamics in discrete time, but it shares some similar properties.
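
A minimal simulation sketch of the volatility dynamics (3.7)-(3.8) is given below. The choice of a compound Poisson subordinator with exponential jumps, and all parameter values, are assumptions made purely for illustration; the paper does not prescribe this particular subordinator.

```python
# Minimal sketch: Euler simulation of dV = -lambda*V dt + dL, with L(t) = U(lambda*t)
# and U a compound Poisson subordinator (an illustrative choice).
import numpy as np

def simulate_bns_volatility(lam=0.21, jump_rate=0.5, jump_mean=0.5,
                            v0=1.0, dt=0.1, n_steps=5000, seed=2):
    """Returns a path of sigma^2(t) = V(t)."""
    rng = np.random.default_rng(seed)
    V = np.empty(n_steps + 1)
    V[0] = v0
    for i in range(n_steps):
        # increments of L(t) = U(lambda*t): jumps arrive at rate lambda*jump_rate per unit time
        n_jumps = rng.poisson(lam * jump_rate * dt)
        dL = rng.exponential(jump_mean, size=n_jumps).sum()
        V[i + 1] = V[i] - lam * V[i] * dt + dL   # reversion towards zero plus positive jumps
    return V

V = simulate_bns_volatility()
print(V.mean(), V.max())
```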

In the rest of this paper we suppose that the deseasonalized temperature $Y(t)$ is given by
\[
Y(t) = \mathbf{e}_1'\mathbf{X}(t), \tag{3.9}
\]
where $\mathbf{X}(t)$ is defined in (3.5) and $\sigma(t)$ in (3.7) and (3.8).

4. Temperature Futures Pricing

In this section we derive the futures price dynamics based on the CDD/HDD and CAT indices. This will be done with respect to some pricing measure 𝑄 specified in a moment.

We use the generic notation $\mathrm{TI}(T_1,T_2)$ for a temperature index over the measurement period $[T_1,T_2]$, where TI = HDD, CDD, or CAT. Let $F_{\mathrm{TI}}(t,T_1,T_2)$ be the futures price for a contract returning the index $\mathrm{TI}(T_1,T_2)$. The futures contracts are settled at the end of the measurement period, at time $T_2$, and the time $t$ value of a long position in the contract is
\[
\exp(-r(T_2-t))\left(\mathrm{TI}(T_1,T_2) - F_{\mathrm{TI}}(t,T_1,T_2)\right), \tag{4.1}
\]
where the constant $r>0$ is the risk-free interest rate. Thus, if $Q$ is a probability measure equivalent to $P$, we can define the futures price as
\[
F_{\mathrm{TI}}(t,T_1,T_2) = \mathbb{E}_Q\left[\mathrm{TI}(T_1,T_2)\mid\mathcal{F}_t\right], \tag{4.2}
\]
by using that the contract is costless to enter and assuming that the futures price is adapted to the information at time $t$, that is, to $\mathcal{F}_t$.

One may ask why we do not consider stochastic interest rates in this setup. In fact, it would not be any problem to do so, by, for example, assuming a risk-free spot rate 𝑟(𝑡). However, there is no reason why this should depend on the temperature, and a reasonable model would state independence between 𝑟(𝑡) and 𝑇(𝑡). Thus, from the properties of conditional expectations, we would end up with the same pricing relations as in (4.2).

Note that the pricing measure 𝑄 is only equivalent to 𝑃 and not assumed to be a martingale measure. If temperature would be a tradeable asset, then the no-arbitrage theory of asset pricing (see, e.g., Duffie [9]) would demand that 𝑄 is an equivalent martingale measure, that is, a probability equivalent to 𝑃 under which the discounted temperature is a 𝑄-martingale. Temperature is obviously not a tradeable asset, and the condition on being a martingale measure is not effective.

To restrict the class of pricing probabilities, we consider a deterministic combination of a Girsanov and Esscher transform. The Girsanov transform introduces a market price of risk with respect to the Brownian motion, while the Esscher transform will model a market price of volatility risk. In mathematical terms, we define the probability $Q$ as
\[
Q = Q_B\times Q_L, \tag{4.3}
\]
with
\[
\frac{dQ_B}{dP}\bigg|_{\mathcal{F}_t} = \exp\left(\int_0^t\frac{\theta_B(s)}{\phi(s)}\,dB(s) - \frac12\int_0^t\frac{\theta_B^2(s)}{\phi^2(s)}\,ds\right), \qquad
\frac{dQ_L}{dP}\bigg|_{\mathcal{F}_t} = \exp\left(\int_0^t\theta_L(s)\,dL(s) - \int_0^t\psi_L(\theta_L(s))\,ds\right), \tag{4.4}
\]
for $\theta_B$, $\theta_L$ being two bounded continuous functions. Here, $\psi_L$ is the log-moment generating function of $L$, that is,
\[
\psi_L(\theta) = \ln\mathbb{E}\left[\exp(\theta L(1))\right]. \tag{4.5}
\]
Note that the Novikov condition is satisfied for the Girsanov change of measure since $\sigma^2(s)\ge\sigma^2(0)\exp(-\lambda s)$, $\zeta(t)\ge\delta$, and $\theta_B$ is bounded. Furthermore, we assume exponential integrability of $L$, that is, there exists a constant $k$ such that
\[
\mathbb{E}\left[\exp(kL(1))\right] < \infty. \tag{4.6}
\]
Then the Esscher transform is well defined for all functions $\theta_L$ such that $\sup_{0\le t\le\tau}|\theta_L(t)|\le k$. We restrict our attention to this class of market prices of volatility risk. We find that
\[
dW(t) = dB(t) - \frac{\theta_B(t)}{\phi(t)}\,dt \tag{4.7}
\]
is a $Q$-Brownian motion, independent of $L$. Moreover, following the analysis in Benth et al. [3], it holds that $L$ is an independent increment process under $Q$ and, in the case where $\theta_L$ is a constant, a Lévy process.

There has been some work trying to reveal the existence and structure of the risk premium, or the market price of risk, for temperature derivatives. The theoretical benchmark approach by Platen and West [13] strongly argues against the existence of a risk premium in weather markets. The econometric study of US futures prices at the CME performed by Dorfleitner and Wimmer [11] shows that the index modelling approach gives a very good prediction of prices, pointing towards a zero market price of risk. The results of Cao and Wei [14] are more inconclusive, as they are able to detect a significant (although small) market price of risk based on their equilibrium pricing approach. More recently, Hamisultane [15] showed using New York data that the estimates become very unstable within this pricing framework. Härdle and Lopez Cabrera [20] analyse various specifications of the market price of risk in an extensive empirical study of German futures prices. They find signs of a significant market price of risk for these contracts. Based on the abovementioned studies, one may choose the market price of risk $\theta_B$ to be zero. As far as we know, there exist no empirical investigations of the volatility risk premium in weather markets. One may suspect that this is zero, but we include $\theta_L$ here for generality. Using $\theta_B=\theta_L=0$ would correspond to $Q=P$ and would be a choice following the indications of the study in Dorfleitner and Wimmer [11].

In the following proposition we compute the CAT futures price.

Proposition 4.1. The CAT futures price at time $t$ for a contract with measurement period $[T_1,T_2]$, $t\le T_1$, is
\[
F_{\mathrm{CAT}}(t,T_1,T_2) = \int_{T_1}^{T_2}\Lambda(u)\,du + \mathbf{a}(T_2-t,T_1-t)\mathbf{X}(t) + \int_t^{T_1}\mathbf{a}(T_2-s,T_1-s)\mathbf{e}_p\theta_B(s)\,ds + \int_{T_1}^{T_2}\mathbf{a}(T_2-s,0)\mathbf{e}_p\theta_B(s)\,ds, \tag{4.8}
\]
where
\[
\mathbf{a}(u,v) = \mathbf{e}_1'A^{-1}\left(\exp(Au) - \exp(Av)\right). \tag{4.9}
\]

Proof. We know that the $Q$-dynamics of $\mathbf{X}(t)$ becomes
\[
d\mathbf{X}(t) = \left(A\mathbf{X}(t) + \theta_B(t)\mathbf{e}_p\right)dt + \mathbf{e}_p\phi(t)\,dW(t), \tag{4.10}
\]
for the $Q$-Brownian motion $W$. Thus, from the multidimensional Itô formula, it holds for $u\ge t$ that
\[
\mathbf{X}(u) = \exp(A(u-t))\mathbf{X}(t) + \int_t^u\exp(A(u-s))\mathbf{e}_p\theta_B(s)\,ds + \int_t^u\exp(A(u-s))\mathbf{e}_p\phi(s)\,dW(s). \tag{4.11}
\]
It follows that
\[
\mathbb{E}_Q\left[T(u)\mid\mathcal{F}_t\right] = \Lambda(u) + \mathbb{E}_Q\left[\mathbf{e}_1'\mathbf{X}(u)\mid\mathcal{F}_t\right] = \Lambda(u) + \mathbf{e}_1'\exp(A(u-t))\mathbf{X}(t) + \int_t^u\mathbf{e}_1'\exp(A(u-s))\mathbf{e}_p\theta_B(s)\,ds + \mathbb{E}_Q\left[\int_t^u\mathbf{e}_1'\exp(A(u-s))\mathbf{e}_p\zeta(s)\sigma(s)\,dW(s)\,\Big|\,\mathcal{F}_t\right]. \tag{4.12}
\]
By conditioning on the $\sigma$-algebra generated by the paths of $\sigma(s)$, $0\le s\le u$, and $\mathcal{F}_t$, and using the tower property of conditional expectation, we find that the last term above is equal to zero from the independent increment property of $W$. Finally, integrating with respect to $u$ yields the desired result.

Observe that the market price of volatility risk does not enter the CAT futures price explicitly, only the market price of risk 𝜃𝐵. We derive the dynamics of the CAT futures price in the following proposition.

Proposition 4.2. The $Q$-dynamics of the CAT futures price is
\[
dF_{\mathrm{CAT}}(t,T_1,T_2) = \mathbf{a}(T_2-t,T_1-t)\mathbf{e}_p\phi(t)\,dW(t), \tag{4.13}
\]
whereas the $P$-dynamics becomes
\[
dF_{\mathrm{CAT}}(t,T_1,T_2) = \mathbf{a}(T_2-t,T_1-t)\mathbf{e}_p\theta_B(t)\,dt + \mathbf{a}(T_2-t,T_1-t)\mathbf{e}_p\phi(t)\,dB(t), \tag{4.14}
\]
with $\mathbf{a}(u,v)$ defined in Proposition 4.1.

Proof. This follows by a straightforward application of the multidimensional Itô formula, where the observation that $F_{\mathrm{CAT}}(t,T_1,T_2)$ is a $Q$-martingale simplifies the derivations considerably.

As is evident from this dynamics, the CAT futures price will have a seasonal stochastic volatility $\phi(t)=\zeta(t)\sigma(t)$, dampened by the factor $\mathbf{a}(T_2-t,T_1-t)\mathbf{e}_p$. This factor can be viewed as the integral of $\mathbf{e}_1'\exp(A(s-t))\mathbf{e}_p$ over the measurement period $[T_1,T_2]$. Observe that for the case of a CAR(1) process, which is nothing but an ordinary Ornstein-Uhlenbeck process, this integrand collapses into $\exp(-\alpha_1(s-t))$, an exponential damping of the volatility of temperature as a function of the "time to maturity" $s-t$. This is known as the Samuelson effect in commodity markets. In the current context, the term $\mathbf{a}(T_2-t,T_1-t)\mathbf{e}_p$ can be interpreted as the aggregation of the Samuelson effect over the measurement period $[T_1,T_2]$ (see Benth et al. [3] for a thorough analysis of the Samuelson effect for higher-order autoregressive models). The drift in the $P$-dynamics of $F_{\mathrm{CAT}}(t,T_1,T_2)$ will depend explicitly on the parameter $\theta_B(t)$, justifying the name market price of risk.
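
The following sketch illustrates this aggregated Samuelson effect numerically by evaluating the factor $\mathbf{a}(T_2-t,T_1-t)\mathbf{e}_p$ for a CAR(3) model at several times before the measurement period; the coefficient values and dates are placeholders, not estimates from the paper.

```python
# Minimal sketch: the CAT futures volatility factor a(T2 - t, T1 - t) e_p of Proposition 4.2,
# evaluated for decreasing time to the start of measurement.
import numpy as np
from scipy.linalg import expm

alphas = np.array([2.0, 1.3, 0.2])              # placeholder CAR(3) coefficients
p = alphas.size
A = np.zeros((p, p)); A[:-1, 1:] = np.eye(p - 1); A[-1, :] = -alphas[::-1]
e1, ep = np.eye(p)[:, 0], np.eye(p)[:, -1]

def a_vec(u, v):
    """a(u, v) = e_1' A^{-1} (exp(A u) - exp(A v)), cf. (4.9)."""
    return e1 @ np.linalg.solve(A, expm(A * u) - expm(A * v))

T1, T2 = 151.0, 181.0
for t in (60.0, 120.0, 144.0, 150.0):
    print(t, a_vec(T2 - t, T1 - t) @ ep)        # grows as t approaches the measurement period
```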

We next move our attention to CDD futures, which will become nonlinearly dependent on the temperature. Thus, we will not get as explicit results on their prices as for the CAT futures. It turns out to be useful to apply Fourier transform techniques (see Carr and Madan [26]).

Recall from Folland [27] that for a function $f\in L^1(\mathbb{R})$, we have
\[
f(x) = \frac{1}{2\pi}\int_{\mathbb{R}}\widehat{f}(y)\mathrm{e}^{\mathrm{i}xy}\,dy, \tag{4.15}
\]
where $\widehat{f}$ is the Fourier transform of $f$ defined by
\[
\widehat{f}(y) = \int_{\mathbb{R}}f(x)\mathrm{e}^{-\mathrm{i}xy}\,dx. \tag{4.16}
\]
Note the signs in the exponentials here, which do not follow the standard definition.

Relevant to the analysis of CDD futures, consider the function $f(x)=\max(x,0)$, which is not in $L^1(\mathbb{R})$, but we can dampen it by an exponential function and get the following lemma.

Lemma 4.3. For any $\xi>0$, the Fourier transform of the function
\[
f_\xi(x) = \mathrm{e}^{-\xi x}\max(x,0) \tag{4.17}
\]
is
\[
\widehat{f}_\xi(y) = \frac{1}{(\xi+\mathrm{i}y)^2}. \tag{4.18}
\]

Proof. We have
\[
\widehat{f}_\xi(y) = \int_{\mathbb{R}}f_\xi(x)\mathrm{e}^{-\mathrm{i}xy}\,dx = \int_0^\infty x\mathrm{e}^{-(\xi+\mathrm{i}y)x}\,dx. \tag{4.19}
\]
The result follows by straightforward integration.

We apply this in our further analysis of the CDD futures price.

Proposition 4.4. The CDD futures price at time $t$ for a contract with measurement period $[T_1,T_2]$, $t\le T_1$, is
\[
F_{\mathrm{CDD}}(t,T_1,T_2) = \frac{1}{2\pi}\int_{\mathbb{R}}\widehat{f}_\xi(y)\int_{T_1}^{T_2}\Psi\left(t,s,\mathbf{X}(t),\sigma^2(t),y\right)ds\,dy, \tag{4.20}
\]
with $\widehat{f}_\xi(y)$ given in Lemma 4.3 and $\Psi(t,s,\mathbf{x},\sigma^2,y)$ given by
\[
\ln\Psi(t,s,\mathbf{x},\sigma^2,y) = (\xi+\mathrm{i}y)m(t,s,\mathbf{x}) + \frac12(\xi+\mathrm{i}y)^2\int_t^s\left(\mathbf{e}_1'\exp(A(s-u))\mathbf{e}_p\right)^2\zeta^2(u)\mathrm{e}^{-\lambda(u-t)}\,du\,\sigma^2
\]
\[
\qquad + \int_t^s\left[\psi_L\left(\frac12(\xi+\mathrm{i}y)^2\int_v^s\left(\mathbf{e}_1'\exp(A(s-u))\mathbf{e}_p\right)^2\zeta^2(u)\mathrm{e}^{-\lambda(u-v)}\,du + \theta_L(v)\right) - \psi_L(\theta_L(v))\right]dv, \tag{4.21}
\]
where
\[
m(t,s,\mathbf{x}) = \Lambda(s) + \mathbf{e}_1'\exp(A(s-t))\mathbf{x} + \int_t^s\mathbf{e}_1'\exp(A(s-u))\mathbf{e}_p\theta_B(u)\,du - c. \tag{4.22}
\]
The constant $\xi>0$ can be arbitrarily chosen.

Proof. First, from the definition of the CDD index and the $Q$-dynamics of $\mathbf{X}(t)$ we find
\[
\max(T(s)-c,0) = \max\left(\Lambda(s)+\mathbf{e}_1'\mathbf{X}(s)-c,\,0\right) = \max\left(m(t,s,\mathbf{X}(t)) + \int_t^s g(s-u)\phi(u)\,dW(u),\,0\right), \tag{4.23}
\]
where we have introduced the short-hand notation $g(u)=\mathbf{e}_1'\exp(Au)\mathbf{e}_p$. From Lemma 4.3 we find (after commuting integration and expectation by appealing to the Fubini-Tonelli theorem)
\[
\mathbb{E}_Q\left[\max(T(s)-c,0)\mid\mathcal{F}_t\right] = \frac{1}{2\pi}\int_{\mathbb{R}}\widehat{f}_\xi(y)\,\mathbb{E}_Q\left[\exp\left((\xi+\mathrm{i}y)\Big(m(t,s,\mathbf{X}(t)) + \int_t^s g(s-u)\zeta(u)\sigma(u)\,dW(u)\Big)\right)\Big|\,\mathcal{F}_t\right]dy
\]
\[
= \frac{1}{2\pi}\int_{\mathbb{R}}\widehat{f}_\xi(y)\exp\left((\xi+\mathrm{i}y)m(t,s,\mathbf{X}(t))\right)\mathbb{E}_Q\left[\exp\left((\xi+\mathrm{i}y)\int_t^s g(s-u)\zeta(u)\sigma(u)\,dW(u)\right)\Big|\,\mathcal{F}_t\right]dy. \tag{4.24}
\]
The last equality follows from the $\mathcal{F}_t$-adaptedness of $\mathbf{X}(t)$. To compute the conditional expectation in the last expression, we apply the tower property of conditional expectations with respect to the $\sigma$-algebra $\mathcal{F}^\sigma$ generated by the paths of $\sigma(u)$, $0\le u\le\tau$, and $\mathcal{F}_t$. This yields, from the independence between $\sigma(t)$ and $B(t)$ and the independent increment property of Brownian motion,
\[
\mathbb{E}_Q\left[\exp\left((\xi+\mathrm{i}y)\int_t^s g(s-u)\zeta(u)\sigma(u)\,dW(u)\right)\Big|\,\mathcal{F}_t\right] = \mathbb{E}_Q\left[\mathbb{E}_Q\left[\exp\left((\xi+\mathrm{i}y)\int_t^s g(s-u)\zeta(u)\sigma(u)\,dW(u)\right)\Big|\,\mathcal{F}^\sigma\vee\mathcal{F}_t\right]\Big|\,\mathcal{F}_t\right]
\]
\[
= \mathbb{E}_{Q_L}\left[\exp\left(\frac12(\xi+\mathrm{i}y)^2\int_t^s g^2(s-u)\zeta^2(u)\sigma^2(u)\,du\right)\Big|\,\mathcal{F}_t\right]. \tag{4.25}
\]
We have
\[
\sigma^2(u) = V(u) = \mathrm{e}^{-\lambda(u-t)}\sigma^2(t) + \int_t^u\mathrm{e}^{-\lambda(u-v)}\,dL(v). \tag{4.26}
\]
Hence, by using the adaptedness of $\sigma^2(t)$ to $\mathcal{F}_t$ and the independent increment property of $L$ under $Q_L$,
\[
\mathbb{E}_{Q_L}\left[\exp\left(\frac12(\xi+\mathrm{i}y)^2\int_t^s g^2(s-u)\zeta^2(u)\sigma^2(u)\,du\right)\Big|\,\mathcal{F}_t\right] = \exp\left(\frac12(\xi+\mathrm{i}y)^2\int_t^s g^2(s-u)\zeta^2(u)\mathrm{e}^{-\lambda(u-t)}\,du\,\sigma^2(t)\right)
\]
\[
\qquad\times\,\mathbb{E}_{Q_L}\left[\exp\left(\frac12(\xi+\mathrm{i}y)^2\int_t^s g^2(s-u)\zeta^2(u)\int_t^u\mathrm{e}^{-\lambda(u-v)}\,dL(v)\,du\right)\right]. \tag{4.27}
\]
Invoking the stochastic Fubini theorem (see Protter [28]) gives
\[
\int_t^s g^2(s-u)\zeta^2(u)\int_t^u\mathrm{e}^{-\lambda(u-v)}\,dL(v)\,du = \int_t^s\int_v^s g^2(s-u)\zeta^2(u)\mathrm{e}^{-\lambda(u-v)}\,du\,dL(v). \tag{4.28}
\]
Finally, we compute the expectation by using the Radon-Nikodym derivative of $Q_L$ with respect to $P$ and the log-moment generating function of $L$. This proves the proposition.

Although seemingly very complex, the price dynamics of a CDD futures can be simulated by using the fast Fourier transform (FFT) technique (see Carr and Madan [26] for a discussion of this in the context of options). In fact, given 𝐗(𝑡) and 𝜎2(𝑡), one uses FFT to compute 𝐹CDD(𝑡,𝑇1,𝑇2). This procedure will involve numerical integration over the measurement period and in the expression for Ψ(𝑡,𝑠,𝐗(𝑡),𝜎2(𝑡),𝑦). Since FFT is rather efficient, this seems to be preferable to a direct Monte Carlo simulation of 𝐹CDD(𝑡,𝑇1,𝑇2).
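
As a sanity check of the damping-and-inversion idea underlying Lemma 4.3 and Proposition 4.4 (a sketch only, not the full CDD pricing formula), the following code compares a numerically inverted Fourier representation of $\mathbb{E}[\max(Z,0)]$, for a Gaussian random variable $Z$ playing the role of $T(s)-c$, with the known closed-form expression; the mean, standard deviation, and damping parameter are assumed values.

```python
# Minimal sketch: E[max(Z, 0)] via the damped Fourier representation vs the closed form.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

xi = 1.0                        # damping parameter, any positive constant
m, sd = 1.5, 2.0                # assumed mean and standard deviation of Z

def f_hat(y):
    return 1.0 / (xi + 1j * y) ** 2                      # Fourier transform from Lemma 4.3

def mgf(z):
    return np.exp(z * m + 0.5 * z ** 2 * sd ** 2)        # E[exp(z Z)] for Gaussian Z

y = np.linspace(-50.0, 50.0, 20001)
integrand = f_hat(y) * mgf(xi + 1j * y)
fourier_value = trapezoid(integrand.real, y) / (2.0 * np.pi)

closed_form = m * norm.cdf(m / sd) + sd * norm.pdf(m / sd)
print(fourier_value, closed_form)                        # the two numbers should agree closely
```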

The CDD futures price depends explicitly on the current states of the volatility $\sigma^2(t)$ and the CAR process $\mathbf{X}(t)$. Since we define the deseasonalized temperature as the first coordinate of $\mathbf{X}(t)$, this is observable from today's (time $t$) temperature after removing the value of the seasonal function $\Lambda(t)$. However, the remaining $p-1$ states of $\mathbf{X}(t)$ must be filtered from deseasonalized temperature observations. One may appeal to the Kalman filter for doing this, although the seasonality and stochastic volatility complicate this procedure. A tempting alternative is to use an Euler discretization of the CAR dynamics as in Benth et al. [3] to relate the coordinates of $\mathbf{X}(t)$ to lagged observations of temperature. (In Benth et al. [3], the Euler discretization is performed for a time-continuous, but deterministic, volatility. Note that there is no change in the steps for developing the same identification when having a stochastic volatility.) It is easily seen from the definition of $\mathbf{X}(t)$ that the quadratic variation process of the last coordinate, $X_p(t)=\mathbf{e}_p'\mathbf{X}(t)$, is equal to
\[
d\langle X_p\rangle_t = \phi^2(t)\,dt, \tag{4.29}
\]
and thus we may apply this to recover $\sigma^2(t)$ from observing the path of $X_p(t)$.
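
A small sketch of this last idea is given below: with (here hypothetical) high-frequency observations of $X_p$ over one day, the realized variance of its increments estimates the integrated $\phi^2$, from which $\sigma^2$ follows after dividing by $\zeta^2$. The simulated increments, the known value of $\zeta^2$ on that day, and the neglect of the drift of $X_p$ are all assumptions for illustration.

```python
# Minimal sketch: recovering phi^2 (and hence sigma^2) from the realized quadratic variation of X_p.
import numpy as np

rng = np.random.default_rng(3)
dt = 1.0 / 288.0                      # hypothetical 5-minute grid within one day
true_phi2 = 1.7                       # phi^2 held constant over the day for the sketch
increments = np.sqrt(true_phi2 * dt) * rng.standard_normal(288)   # dX_p ~ phi dB (drift ignored)

realized_var = np.sum(increments ** 2)        # estimates the integral of phi^2 over the day
zeta2_today = 1.2                             # assumed known value of zeta^2(t) on this day
sigma2_hat = realized_var / zeta2_today
print(realized_var, sigma2_hat)
```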

Since 𝐹CDD(𝑡,𝑇1,𝑇2) depends explicitly on 𝜎2(𝑡), we get the interesting consequence that even though the temperature dynamics has continuous paths, the CDD futures dynamics will jump when 𝜎2(𝑡) jumps. Thus, the futures price dynamics will have discontinuous paths. See Benth [29] for a similar result in the context of commodity and power markets using the BNS stochastic volatility model. Note also that the market price of volatility risk appears explicitly in the price, contrary to CAT futures.

The HDD futures price dynamics $F_{\mathrm{HDD}}(t,T_1,T_2)$ can be computed analogously to the $F_{\mathrm{CDD}}(t,T_1,T_2)$ price. However, we may also resort to the so-called HDD-CDD parity, which we collect from Benth et al. [1, Proposition 10.1]. It holds that
\[
F_{\mathrm{HDD}}(t,T_1,T_2) = F_{\mathrm{CDD}}(t,T_1,T_2) + c(T_2-T_1) - F_{\mathrm{CAT}}(t,T_1,T_2). \tag{4.30}
\]
We have formulas for both the CDD and CAT futures prices, and thus $F_{\mathrm{HDD}}(t,T_1,T_2)$ readily follows from (4.30).

4.1. A Discussion on Pricing of Options on Futures

The technique using Fourier transform is well adapted for option pricing as well, and we will briefly discuss this in connection with European call and put options written on CAT and CDD futures.

We start with a call option on a CAT futures, with exercise time $\tau$ and strike price $K$, on a futures with measurement period $[T_1,T_2]$, $\tau\le T_1$. The price of this call option at time $t\le\tau$ is given by the no-arbitrage theory as
\[
C_{\mathrm{CAT}}(t,\tau,T_1,T_2) = \mathrm{e}^{-r(\tau-t)}\,\mathbb{E}_Q\left[\max\left(F_{\mathrm{CAT}}(\tau,T_1,T_2)-K,\,0\right)\mid\mathcal{F}_t\right]. \tag{4.31}
\]
As it turns out, the derivation of $C_{\mathrm{CAT}}$ using Fourier techniques follows more or less exactly the same steps as computing the CDD futures price. In fact, we observe from Proposition 4.1 that
\[
F_{\mathrm{CAT}}(\tau,T_1,T_2) = F_{\mathrm{CAT}}(t,T_1,T_2) + \int_t^\tau\mathbf{a}(T_2-s,T_1-s)\mathbf{e}_p\phi(s)\,dW(s). \tag{4.32}
\]
Moreover, the payoff function of the call option is the same as the CDD index on a given day. By conditioning on the paths of $\sigma(t)$, we can compute the option price step by step as in the proof of Proposition 4.4. This results in the following proposition.

Proposition 4.5. The price at time $t$ of a call option on a CAT futures with measurement period $[T_1,T_2]$, exercise time $\tau\ge t$, $\tau\le T_1$, and strike $K$, is
\[
C_{\mathrm{CAT}}(t,\tau,T_1,T_2) = \mathrm{e}^{-r(\tau-t)}\,\frac{1}{2\pi}\int_{\mathbb{R}}\widehat{f}_\xi(y)\,\Phi\left(t,\tau,F_{\mathrm{CAT}}(t,T_1,T_2),\sigma^2(t),y\right)dy, \tag{4.33}
\]
with $\widehat{f}_\xi(y)$ given in Lemma 4.3 and
\[
\ln\Phi(t,\tau,F,\sigma^2,y) = (\xi+\mathrm{i}y)(F-K) + \frac12(\xi+\mathrm{i}y)^2\int_t^\tau\left(\mathbf{a}(T_2-s,T_1-s)\mathbf{e}_p\right)^2\zeta^2(s)\mathrm{e}^{-\lambda(s-t)}\,ds\,\sigma^2
\]
\[
\qquad + \int_t^\tau\left[\psi_L\left(\frac12(\xi+\mathrm{i}y)^2\int_u^\tau\left(\mathbf{a}(T_2-s,T_1-s)\mathbf{e}_p\right)^2\zeta^2(s)\mathrm{e}^{-\lambda(s-u)}\,ds + \theta_L(u)\right) - \psi_L(\theta_L(u))\right]du, \tag{4.34}
\]
and $\mathbf{a}(u,v)$ defined in Proposition 4.1.

Options on CDD futures are far more technically complicated to value. However, we may here as well resort to Fourier techniques and in principle obtain transparent price formulas. Considering a call option at time $t$, with strike price $K$ and exercise time $\tau\ge t$, we have the price as in (4.31), except that we substitute "CAT" with "CDD". Recalling Proposition 4.4, we have
\[
F_{\mathrm{CDD}}(\tau,T_1,T_2) = \frac{1}{2\pi}\int_{\mathbb{R}}\widehat{f}_\xi(y)\int_{T_1}^{T_2}\Psi\left(\tau,s,\mathbf{X}(\tau),\sigma^2(\tau),y\right)ds\,dy. \tag{4.35}
\]
Furthermore, from the dynamics of $\mathbf{X}(s)$ and $\sigma^2(s)$, we can express both in terms of their current states $\mathbf{X}(t)$ and $\sigma^2(t)$, in addition to integrals with respect to $W$ and $L$ from $t$ to $\tau$. Similar considerations as for the CAT futures option can then be made, however, leading to a technically complex formula. We leave the details to the reader.

5. Some Empirical Considerations

We refer to an empirical estimation which can be found in Benth et al. [3] as a starting point for our discussion. There, a data set of daily average temperatures measured in degrees Celsius in Stockholm, Sweden, is studied. The data are recorded from January 1, 1961 to May 25, 2006, altogether 16,570 data points after observations on February 29 in every leap year were removed. The parameters of the seasonal function $\Lambda(t)$ defined in (3.2) were estimated by a least squares approach to be $a=6.37$, $b=0.0001$, $c=10.44$, and $d=161.17$. This means that there is a small increase in the average temperature over the considered measurement period. The overall average temperature is around 6.3°C. The amplitude is $\pm 10.44$, which means that on average winter temperatures go down to around $-4$°C, while summer temperatures reach above 16.7°C. After removing the seasonal function from the data, one finds, based on the autocorrelation structure of the residuals, that a CAR(3) process fits the deseasonalized data very well. The estimated parameters in the $A$ matrix are $\alpha_1=2.04$, $\alpha_2=1.34$, and $\alpha_3=0.18$, leading to eigenvalues with negative real part and thus to a stationary CAR dynamics.
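
The stationarity claim can be checked directly from the reported estimates, as in the following sketch.

```python
# Quick check: eigenvalues of A for the Stockholm CAR(3) estimates should have negative real part.
import numpy as np

alphas = [2.04, 1.34, 0.18]
A = np.zeros((3, 3)); A[:-1, 1:] = np.eye(2); A[-1, :] = -np.asarray(alphas)[::-1]
eigvals = np.linalg.eigvals(A)
print(eigvals)                       # all real parts should be negative
print(np.all(eigvals.real < 0))      # True indicates a stationary CAR dynamics
```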

Our main concern in this paper is the precise modelling of the residuals. We have proposed a model where the volatility 𝜙(𝑡) is defined as the product of a deterministic seasonal function 𝜁(𝑡) and a stochastic volatility process 𝜎(𝑡). The natural path to follow in estimating this is to first estimate the seasonal function. We use the approach suggested in Benth and Šaltytė Benth [5, 17], where a Fourier series of order four is chosen for 𝜁2(𝑡), and estimated on daily empirical variances. Next, the residual data are divided by the estimated 𝜁(𝑡) function, in order to deseasonalize them.
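
The following sketch illustrates this two-step procedure on synthetic residuals: empirical variances are computed for each day of the year, and a truncated Fourier series of order four is fitted for $\zeta^2(t)$ by ordinary least squares. The residual series, and the assumption that it comes in whole blocks of 365 days (February 29 removed), are illustrative only.

```python
# Minimal sketch: fitting a truncated Fourier series of order four to daily empirical variances.
import numpy as np

def fit_seasonal_variance(resid, order=4):
    resid = np.asarray(resid, dtype=float)
    n_years = resid.size // 365
    daily_var = resid[: n_years * 365].reshape(n_years, 365).var(axis=0)  # variance per calendar day
    t = np.arange(365)
    cols = [np.ones(365)]
    for k in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * k * t / 365.0))
        cols.append(np.cos(2 * np.pi * k * t / 365.0))
    Xmat = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(Xmat, daily_var, rcond=None)
    return coef, Xmat @ coef                 # coefficients and fitted zeta^2 over one year

# Hypothetical residuals for illustration only.
rng = np.random.default_rng(4)
t_all = np.arange(10 * 365)
resid = np.sqrt(1.0 + 0.5 * np.sin(2 * np.pi * t_all / 365.0)) * rng.standard_normal(t_all.size)
coef, zeta2 = fit_seasonal_variance(resid)
print(coef[:3], zeta2.min(), zeta2.max())
```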

The residuals, after removing the effect of the function $\zeta(t)$ and the autoregressive effects, are close to normally distributed (see Figure 10.11 in Benth et al. [3]). However, there are signs of heteroskedasticity; in particular, a normality plot shows signs of a heavy left tail in the distribution (see Figure 10.12 in Benth et al. [3]). Also, the $P$-value of 0.002 of the Kolmogorov-Smirnov test suggests rejecting the normality hypothesis for the residuals. We remark in passing that recently Härdle et al. [21] have proposed a local adaptive approach to find an optimal smoothing parameter to locally estimate the seasonality and volatility. The approach aims at correcting the nonnormalities in the residuals. Their approach improves the normal distribution fit to the residuals for many cities, but there are still signs of heavy tails (see in particular the QQ-plot of Berlin data in Figure 11 of Härdle et al. [21]). We also remark that in Benth and Šaltytė Benth [5], the empirical seasonal volatility was used as the estimate of $\zeta(t)$, but a clear sign of stochastic volatility was still present when analysing the autocorrelation function of the squared residuals.

In Figure 1 we show the autocorrelation function of the squares of the deseasonalized residuals on a log scale. Characteristically, the autocorrelation function decays with the lags, and eventually wiggles around zero. As mentioned earlier, the stationary autocorrelation function of $\sigma^2(t)=V(t)$ is an exponential function with decay rate equal to the speed of mean reversion $\lambda$ of $V(t)$, meaning a linearly decaying autocorrelation function on a log scale. We fit a linear function to the first 10 lags, obtaining the estimate $\widehat{\lambda}=0.21$. In Figure 1 the estimated line is shown together with the empirical logarithmic autocorrelation function.
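
A sketch of this estimation step is given below: the autocorrelation function of the squared residuals is computed, its logarithm is fitted linearly over the first 10 lags, and the negative slope is taken as the estimate of $\lambda$. The simulated residuals, driven by a simple jump-perturbed variance, are an assumption used only to make the example runnable.

```python
# Minimal sketch: estimating lambda from the log-autocorrelation of squared residuals.
import numpy as np

def acf(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[:-k] * x[k:]) / denom for k in range(1, max_lag + 1)])

def estimate_lambda(resid, max_lag=10):
    rho = acf(np.asarray(resid) ** 2, max_lag)
    lags = np.arange(1, max_lag + 1)
    mask = rho > 0                                   # only positive values can be log-transformed
    slope, intercept = np.polyfit(lags[mask], np.log(rho[mask]), 1)
    return -slope, intercept

# Placeholder residuals with a stochastic variance, so that the fit has something to detect.
rng = np.random.default_rng(5)
n, lam_true = 16570, 0.2
V = np.empty(n); V[0] = 1.0
for i in range(1, n):
    V[i] = V[i - 1] * np.exp(-lam_true) + rng.exponential(0.2) * (rng.random() < 0.1)
resid = np.sqrt(V) * rng.standard_normal(n)

lam_hat, _ = estimate_lambda(resid)
print(lam_hat)          # should be of the same order as lam_true
```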

Admittedly, the linear fit to the log-autocorrelation function in Figure 1 is very rough. Since the line does not intersect at zero, it cannot be related directly to an exponential autocorrelation function. One may mend this deficiency by considering a sum of exponentials. From Figure 1 we may assume a very steeply decaying autocorrelation function for small lags (up to 2), and thereafter a decay according to the fitted line shown in the plot. Even more, after approximately 10 lags the autocorrelation function seems to flatten out, and a new line with a different slope would fit better. This could easily be incorporated into our framework by considering a superposition of three processes of the type $V(t)$ modelling the volatility. In that case, we would get a theoretical autocorrelation function being the sum of three exponentials, which would be approximately represented as a piecewise linear function on a log scale if the mean reversion speeds $\lambda_1$, $\lambda_2$, and $\lambda_3$ are sufficiently distinct. This could be attributed to a fast reversion of big volatility changes, whereas the medium to smaller variations in volatility are slowly reverting, say. The estimated $\widehat{\lambda}$ would then approximately correspond to the mean reversion of medium volatility changes. In order to estimate such a superposition, one must resort to filtering techniques, which are discussed in detail in Barndorff-Nielsen and Shephard [6].

The final step in estimating our temperature model is to fit a subordinator process $L(t)$. This is done by appealing to the ingenious parametrization of $L(t)$ by Barndorff-Nielsen and Shephard [6]. By using $L(t)=U(\lambda t)$ for a subordinator $U$ in the model for $V(t)=\sigma^2(t)$, they show that the stationary distribution of $V(t)$ is independent of $\lambda$. Thus, the model for the residuals becomes a variance mixture model, being conditionally normally distributed with mean zero and variance given by the stationary distribution of $\sigma^2(t)$. For example, choosing the stationary distribution of $\sigma^2(t)$ to be generalized inverse Gaussian, the variance mixture model of the residuals becomes generalized hyperbolic (see Barndorff-Nielsen and Shephard [6] for a discussion of this).

Based on this, we first select and fit a distribution $F$ to the deseasonalized residuals, and secondly construct the Lévy process $L$ such that the volatility process has the required stationary distribution $G$. The process $L$ is called the background driving Lévy process, and Barndorff-Nielsen and Shephard [6] show that the distribution $G$ must be self-decomposable in order for $L$ to exist. In Benth and Šaltytė Benth [5], a generalized hyperbolic distribution was successfully fitted to the deseasonalized temperature residuals for data collected in several Norwegian cities. In this case, there exists a subordinator $L$ such that the stationary distribution of $\sigma^2(t)$ becomes generalized inverse Gaussian. The Lévy measure of $L$ can be explicitly characterised in terms of the selected distribution for the residuals.

One may ask whether some or all of the parameters in the specified temperature model should be time dependent. In our empirical analysis of the Stockholm data, we have not detected any structural changes in the seasonality function $\Lambda(t)$. Furthermore, in an AR(1)-specification of the temperature model for Stockholm, we have investigated whether there is any seasonality in the speed of mean reversion (study not reported). Looking at the estimates for particular months over the year, we did not find any pattern supporting a seasonality in the speed of mean reversion (or, in our context, in the specification of the AR-matrix $A$). However, we refer to the paper by Zapranis and Alexandridis [30] for an analysis of temperature data in Paris, where the authors detect and model a seasonal mean reversion. We have not investigated whether any of the other parameters in our model may vary with time.

To understand how fast the temperature dynamics reverts back to its long-term average $\Lambda(t)$, we discuss the notion of half-life of the stochastic process $Y(t)$. In Clewlow and Strickland [31] the half-life is defined as the expected time it takes before the process has reverted half way back to its mean from any given position. We express this mathematically as the smallest time $\tau>0$ such that
\[
\mathbb{E}\left[Y(\tau)\right] = \frac12\,Y(0). \tag{5.1}
\]
For an Ornstein-Uhlenbeck process (where $p=1$ and $\sigma(t)=1$), Clewlow and Strickland [31] derive that $\tau=\ln 2/\alpha$. In the next lemma we derive the half-life of $Y(t)$.

Lemma 5.1. The half-life of $Y(t)$ defined in (3.9) is given by the solution of the equation
\[
\mathbf{e}_1'\exp(A\tau)\mathbf{e}_1 = \frac12, \tag{5.2}
\]
when assuming that $\mathbf{e}_k'\mathbf{X}(0)=0$ for $k=2,\ldots,p$.

Proof. First, after using the tower property of conditional expectations, we have that
\[
\mathbb{E}\left[Y(\tau)\right] = \mathbf{e}_1'\exp(A\tau)\mathbf{X}(0). \tag{5.3}
\]
Setting this equal to $Y(0)/2$ yields
\[
\mathbf{e}_1'\exp(A\tau)\mathbf{X}(0) = \frac12\,\mathbf{e}_1'\mathbf{X}(0). \tag{5.4}
\]
Assuming $\mathbf{e}_k'\mathbf{X}(0)=0$ for $k=2,\ldots,p$, we find that $\mathbf{X}(0)=\mathbf{e}_1 Y(0)$, and hence the result follows.

Note that the first coordinate of 𝐗(0) is equal to 𝑌(0) by definition. For convenience, we let the other coordinates be equal to zero in the above result. In reality, we should estimate the other states of 𝐗(0) by filtering the temperature data series in order to reveal their value at time zero. This means that the half-life becomes state dependent for higher-order autoregressive models.

For a temperature dynamics based on the Stockholm estimates referred to above, we find the solution to the half-life equation in Lemma 5.1 to be 𝜏=5.94, that is, it takes on average slightly less than six days for today's temperature to revert half-way back to its long-term level.
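
The half-life equation of Lemma 5.1 can be solved numerically, for instance as in the following sketch; with the Stockholm estimates the root should lie close to the value of about 5.94 days reported above.

```python
# Minimal sketch: solving e_1' exp(A tau) e_1 = 1/2 for the Stockholm CAR(3) estimates.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

alphas = [2.04, 1.34, 0.18]
A = np.zeros((3, 3)); A[:-1, 1:] = np.eye(2); A[-1, :] = -np.asarray(alphas)[::-1]

def first_component(tau):
    return expm(A * tau)[0, 0]           # e_1' exp(A tau) e_1

half_life = brentq(lambda tau: first_component(tau) - 0.5, 1e-6, 60.0)
print(half_life)
```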

Let us investigate the contribution from the various terms in the CAT futures price dynamics in order to get a feeling for their relative importance in the context of weather derivatives. For illustrative purposes, we choose a June contract, which starts measurement at time $T_1=151$ and ends at time $T_2=181$ (supposing time is measured from January 1). Recall the CAT futures price in Proposition 4.1. The first term with the seasonal function is equal to 191.5. Suppose now that the current time is one week prior to June 1, that is, $t=144$, and let $\mathbf{e}_k'\mathbf{X}(t)=0$ for $k=2,3$, while recalling that $\mathbf{e}_1'\mathbf{X}(t)=Y(t)$. We set $Y(t)=5$, meaning that the temperature is 5°C above the long-term mean level for that particular day. This leads to a value of 11.8 for the second term, which is around 5% of the value of the mean. By moving time $t$ to the start of measurement, $t=151$, we get the value 37.6 for the second term instead, which is close to 20% of the contribution of the long-term seasonal function to the CAT futures price. This shows that the mean reversion of the temperature dynamics towards the seasonal mean function kills the effect of daily temperature variations quickly, and when we are far from the start of measurement the CAT futures price will be essentially constant (and equal to the seasonal mean for the measurement period). However, as we get closer to the start of measurement, the effect of the temperature variations becomes gradually more important and will impact the CAT futures price significantly. In Dorfleitner and Wimmer [11], an example of a seasonal futures contract (a Chicago HDD winter contract) is presented, where the price is constant except during the last week prior to measurement. Then the observed futures price starts to wiggle. The seasonal function plays a dominating role in setting the level of the price. It is worthwhile to emphasize that the length of the measurement period is of importance here. The shorter the measurement period, the smaller the damping factor $\mathbf{a}(T_2-t,T_1-t)$ becomes, and the more sensitive the CAT futures price becomes to variations in temperature. This effect is even more emphasized by the fact that the seasonal term becomes smaller with a shorter measurement period, increasing the relative importance of the second term. With the introduction of weekly futures contracts at the CME, this is an important observation. Obviously, the two last terms with the market price of risk can contribute arbitrarily much or little to the CAT futures price, by adjusting the level of $\theta_B$. In a concrete application, one would estimate $\theta_B$ from historical CAT futures prices. Referring to the studies in Benth et al. [25] and Härdle and Lopez Cabrera [20], we find that a $\theta_B$ of size 0.2 is reasonable. Thus, the terms involving $\theta_B$ will contribute very little compared to the seasonal function and the second term.
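
The second-term contribution discussed above can be reproduced numerically, as in the following sketch; it uses the Stockholm CAR(3) estimates and $Y(t)=5$, and the printed values at $t=144$ and $t=151$ should be close to the figures 11.8 and 37.6 quoted in the text.

```python
# Minimal sketch: a(T2 - t, T1 - t) X(t) for the June contract with X(t) = (5, 0, 0)'.
import numpy as np
from scipy.linalg import expm

alphas = [2.04, 1.34, 0.18]
A = np.zeros((3, 3)); A[:-1, 1:] = np.eye(2); A[-1, :] = -np.asarray(alphas)[::-1]
e1 = np.eye(3)[:, 0]

def a_vec(u, v):
    return e1 @ np.linalg.solve(A, expm(A * u) - expm(A * v))   # a(u, v), cf. (4.9)

T1, T2 = 151.0, 181.0
X_t = np.array([5.0, 0.0, 0.0])    # temperature 5 degrees Celsius above the seasonal mean today
for t in (144.0, 151.0):
    print(t, a_vec(T2 - t, T1 - t) @ X_t)
```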

Let us investigate the stochastic volatility contribution to the CDD price. As we see from the pricing formula in Proposition 4.4, the influence of the stochastic volatility $\sigma(t)$ appears in the expression for $\Psi(t,s,\mathbf{x},\sigma^2,y)$, both explicitly through $\sigma^2(t)$ and implicitly via the characteristic function of the subordinator $L$ driving the dynamics of the volatility process. If we consider a model with no stochastic volatility, it is natural to suppose that $\sigma^2(t)=1$. The function $\Psi$ then takes the form
\[
\ln\Psi_{\mathrm{const}}(t,s,\mathbf{x},y) = (\xi+\mathrm{i}y)m(t,s,\mathbf{x}) + \frac12(\xi+\mathrm{i}y)^2\int_t^s\left(\mathbf{e}_1'\exp(A(s-u))\mathbf{e}_p\right)^2\zeta^2(u)\,du, \tag{5.5}
\]
with $m(t,s,\mathbf{x})$ defined in Proposition 4.4. Then, we find the difference between the stochastic volatility and constant volatility cases as
\[
\ln\Psi(t,s,\mathbf{x},\sigma^2,y) - \ln\Psi_{\mathrm{const}}(t,s,\mathbf{x},y) = \frac12(\xi+\mathrm{i}y)^2\int_t^s\left(\mathbf{e}_1'\exp(A(s-u))\mathbf{e}_p\right)^2\zeta^2(u)\,du\,\left(\sigma^2(t)-1\right)
\]
\[
\qquad + \int_t^s\left[\psi_L\left(\frac12(\xi+\mathrm{i}y)^2\int_v^s\left(\mathbf{e}_1'\exp(A(s-u))\mathbf{e}_p\right)^2\zeta^2(u)\mathrm{e}^{-\lambda(u-v)}\,du + \theta_L(v)\right) - \psi_L(\theta_L(v))\right]dv. \tag{5.6}
\]
The first term is stochastically varying with the volatility around one, while the second term is determined by the market price of volatility risk through the characteristic function of $L$. If $\sigma^2(t)$ is slowly varying around its mean, which naturally should be equal to one, then this first term will not contribute significantly. But due to potential jumps in the volatility, we may experience values of $\sigma(t)$ significantly larger than one, and thus impacting the futures price. The contribution of the second term depends on how big the market price of volatility risk is.

We are not aware of any empirical studies of the potential existence of a market price of volatility risk. One may estimate $\theta_L$ in the same way as $\theta_B$, by, for example, inferring it from minimizing the distance between theoretical and observed futures prices. Motivated by the studies of the market price of risk mentioned above (see Cao and Wei [14], Dorfleitner and Wimmer [11], Hamisultane [15], and Härdle and Lopez Cabrera [20]), one may suspect that the market price of volatility risk will be small and maybe not significant. It is to be noted, however, that even small values of $\theta_L$ may be big relative to the volatility, and thus significantly modify the effect of stochastic volatility viewed under the pricing measure $Q$.

In view of the existence of European-style call and put options written on the temperature futures, precise knowledge of the volatility is important. The volatility of the underlying temperature futures will determine the price of the option and play a crucial role in a hedging strategy of the option. In particular, the Samuelson effect of the volatility of the temperature futures makes the options particularly sensitive to the volatility close to measurement. Hence, a precise model for the stochastic volatility is important. The nonlinearity in the payoff of options also makes second-order effects in the underlying dynamics more pronounced, and thus even small stochastic volatility variations may become significant in the option dynamics.

Acknowledgments

The authors are grateful for the comments and criticism from two anonymous referees, which led to a significant improvement of the original paper. Fred Espen Benth acknowledges financial support from the project Energy markets: modelling, optimization and simulation (emmos), funded by the Norwegian Research Council under Grant 205328/v30.