#### Abstract

We propose a continuous-time autoregressive model for temperature dynamics in which the volatility is the product of a seasonal function and a stochastic process. We use the Barndorff-Nielsen and Shephard model for the stochastic volatility. The proposed temperature dynamics is flexible enough to model temperature data accurately while remaining analytically tractable. We compute futures prices for commonly traded contracts at the Chicago Mercantile Exchange on indices such as cooling- and heating-degree days and cumulative average temperature, as well as prices of options written on these futures.

#### 1. Introduction

Protection against undesirable weather events, such as hurricanes and droughts, has traditionally been offered by insurance companies. In recent decades, such weather insurance has been securitized, and nowadays one can trade weather derivative contracts at the Chicago Mercantile Exchange (CME), or more specialized weather contracts in the OTC market. At the CME, one finds derivatives written on temperature and snowfall indices measured at different locations worldwide. The market for weather derivatives is emerging and gaining importance as a tool for hedging weather risk.

In this paper we propose a new model for the dynamics of temperature which forms the basis for pricing weather derivatives. The model generalizes the continuous-time autoregressive models suggested in Benth et al. [1] (see also Dornier and Querel [2] and Benth et al. [3]) to allow for stochastic volatility effects. Signs of stochastic volatility in temperature variations have been detected in many data studies, for example, in an extensive study of daily US temperature series by Campbell and Diebold [4] and in Norwegian data by Benth and Šaltytė-Benth [5]. Campbell and Diebold suggested a seasonal generalized autoregressive conditional heteroskedasticity (GARCH) model to explain their observations, whereas in this paper we propose modelling the variations by the stochastic volatility model of Barndorff-Nielsen and Shephard [6].

As it turns out, one may derive reasonably explicit expressions for futures prices of commonly traded contracts at the CME. Our proposed model is flexible in capturing the stylized features of temperature data, as well as being analytically tractable. To account for seasonality, we introduce a multiplicative structure, which is much simpler to analyse than the complex model suggested by Campbell and Diebold [4], where the seasonality enters additively in the GARCH dynamics. One of the main advantages of our model is that it is set in a continuous-time framework, allowing a simple application of the theory of derivatives pricing. For example, we find that the volatility of futures prices on the cumulative average temperature is given by the temperature volatility, modified by a Samuelson effect inherited from the autoregressive structure of the temperature dynamics.

The risk premium in derivatives prices is parametrized by a time-dependent market price of risk. In addition, we include a market price of volatility risk. Thus, the risk loading in derivatives prices, interpreted as the risk premium in the financial setting, includes both temperature risk and volatility risk. In mathematical terms, the pricing measure is obtained by a combination of a Girsanov and an Esscher transform.

We discuss the estimation of the proposed temperature model on data from Stockholm, Sweden. The purpose of the study is not to give a full-blown fitting of the model, but to outline the steps and to justify the model. Next, we discuss some issues related to the mean reversion of the model, where in particular we derive the so-called *half-life* of our temperature model. The importance of the various terms in the model is discussed.

We present our findings as follows. In the next section we review the basics of the CME market for weather derivatives. Next, in Section 3, our temperature model with seasonal stochastic volatility is defined and analysed. Section 4 contains results on the futures price dynamics for some commonly traded products at the CME. Some empirical considerations of our model and its applications to weather derivatives are presented in Section 5.

#### 2. The Market for Temperature Derivatives

The organized market for temperature derivatives at the CME offers trade in futures contracts "delivering" various temperature indices measured at different locations in the world, including the US, Canada, Australia, Japan, and some countries in Europe. In addition, there are contracts written on snowfall in New York and frost conditions at Schiphol airport in Amsterdam. On the exchange one may also buy plain vanilla European call and put options on the futures.

The main temperature indices are the so-called *cooling-degree days* (CDDs), *heating-degree days* (HDDs), and *cumulative average temperature* (CAT). The CAT index is used mainly for European cities. For a measurement period [τ₁, τ₂], the CDD index measures the demand for cooling and is defined as the cumulative amount of degrees by which the daily average temperature exceeds a threshold c. Mathematically, we express it as

CDD(τ₁, τ₂) = Σ_{t=τ₁}^{τ₂} max(T(t) − c, 0),

with T(t) being the daily average temperature on day t, where the average is computed as the mean of the daily maximum and minimum observations. The threshold c is set by the market at 18°C (65°F) and is the trigger point for when air-conditioning is switched on. The measurement periods are set to weeks, months, or seasons consisting of two or more months, within the warmer parts of the year. A futures contract written on the CDD pays the buyer an amount of money proportional to the index, in US dollars for the American market and in Euro for the European one (except London, where the currency is British Pounds). By entering such a contract, one essentially exchanges a fixed CDD payment against a floating one, and one may thereby view these futures as swaps.

An HDD index is defined similarly to the CDD as the cumulative amount of degrees by which the temperature falls below the threshold c over a measurement period [τ₁, τ₂], that is,

HDD(τ₁, τ₂) = Σ_{t=τ₁}^{τ₂} max(c − T(t), 0).

The index reflects the demand for heating at a certain location, and measurement periods are typically weeks, months, or seasons in the cold period (winter).
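In discrete, market-quoted form, the CDD and HDD sums above can be computed directly from a series of daily average temperatures. A minimal sketch (the temperature values are made up for illustration):

```python
# Discrete CDD/HDD computation from a series of daily average
# temperatures (deg C); the temperature values are made up.
def cdd_index(temps, threshold=18.0):
    """Cumulative degrees above the threshold: sum of max(T - c, 0)."""
    return sum(max(t - threshold, 0.0) for t in temps)

def hdd_index(temps, threshold=18.0):
    """Cumulative degrees below the threshold: sum of max(c - T, 0)."""
    return sum(max(threshold - t, 0.0) for t in temps)

week = [20.5, 17.0, 22.0, 18.0, 15.5, 19.0, 21.0]
print(cdd_index(week))  # 2.5 + 4.0 + 1.0 + 3.0 = 10.5
print(hdd_index(week))  # 1.0 + 2.5 = 3.5
```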

The CAT index is simply the accumulated average temperature over the measurement period, defined as

CAT(τ₁, τ₂) = Σ_{t=τ₁}^{τ₂} T(t).

This index is used for European and Canadian cities at the CME. The CME also operates with a daily average index, called the Pacific Rim, measured in several Japanese cities. The Pacific Rim index is simply the average of daily temperatures over a measurement period, with the *daily temperature* defined as the mean of the hourly observed temperatures.

Temperature futures may be used by energy producers to hedge their demand risk (see Perez-Gonzales and Yun [7] for a study of weather hedging of US energy utilities). Other typical users of such futures contracts for hedging purposes are electricity retailers, holiday resorts, producers of soft drinks and beer, or even organisers of outdoor sport events, and fairs. Also investors have seen the potential in weather derivatives as a tool for portfolio diversification, since the derivatives are not expected to correlate significantly with the financial markets (see Brockett et al. [8]).

For mathematical convenience, we will use integration rather than summation and define the three indices CDD, HDD, and CAT over a measurement period [τ₁, τ₂] by

CDD(τ₁, τ₂) = ∫_{τ₁}^{τ₂} max(T(s) − c, 0) ds,
HDD(τ₁, τ₂) = ∫_{τ₁}^{τ₂} max(c − T(s), 0) ds,
CAT(τ₁, τ₂) = ∫_{τ₁}^{τ₂} T(s) ds,

respectively.

In this paper we are concerned with modelling the temperature dynamics accurately in continuous time. Moreover, we aim at deriving futures prices based on our proposed dynamics. In the literature, one finds several streams for pricing weather derivatives. The main approach is to model the stochastic temperature evolution and price by conditional risk-adjusted expectation (see Benth et al. [3]). This is motivated by fixed-income theory, where zero-coupon bonds are priced using a model for the short rate of interest and conditional risk-adjusted expectation (see, e.g., Duffie [9]). Another popular way to price derivatives is the so-called burn analysis (see Jewson and Brix [10]), where one computes the historical average of the index in question and prices by this figure, possibly adding a risk loading. A more sophisticated pricing method is the so-called index modelling approach (see Jewson and Brix [10]), where one models the historical index value by a probability distribution. Index modelling was applied by Dorfleitner and Wimmer [11] in an extensive empirical study of US futures prices at the CME. The drawback of these two approaches is that they do not yield any dynamics for the futures price. If one wants to derive option prices, one must have access to the volatility of the futures price as well as the price itself. Although burn analysis (index modelling) gives an accurate estimate of the historical index value (distribution), a precise model of the temperature dynamics is expected to explain the historical average (distribution) nearly as well. In addition, one obtains the stochasticity of the temperature evolution, which enables us to find the futures price dynamics and to price options on such futures.

Brockett et al. [8] apply the indifference pricing approach in a mean-variance setting to value weather derivatives. Davis [12] introduces the marginal utility approach to pricing weather derivatives, based on hedging in correlated assets and modelling the index as a geometric Brownian motion. Platen and West [13] suggest using a world index as a benchmark for weather derivatives pricing. Equilibrium theory has also been a popular approach to pricing weather derivatives. Cao and Wei [14] apply it to analyse US temperature futures (see also Hamisultane [15] for a more recent study).

#### 3. A Model for the Temperature Dynamics with Seasonal Stochastic Volatility

Let (Ω, F, {F_t}_{0 ≤ t ≤ T}, P) be a complete filtered probability space, with T < ∞ a finite time horizon for which all the relevant financial assets in question exist.

We suppose that the temperature T(t) at time t is defined as

T(t) = Λ(t) + X(t),

where Λ(t) is the deterministic seasonal mean function and X(t) is a stochastic process modelling the random fluctuations around the mean. In other words, X(t) is the *deseasonalized* temperature.

The seasonal mean function Λ is assumed to be a real-valued continuous function. To capture the seasonal variations due to cold and warm periods of the year, and a possible increase in temperatures due to global warming and urbanization, say, a possible structure of Λ is

Λ(t) = a₀ + a₁t + a₂ sin(2π(t − a₃)/365),   (3.2)

for constants a₀, a₁, a₂, and a₃, and with time measured in days. Such a specification of the seasonality function was suggested by Alaton et al. [16] in their analysis of temperatures observed in Bromma, Sweden. Benth and Šaltytė Benth [5] applied this seasonality function when modelling Norwegian temperature data and later used it for data measured in Lithuania and Sweden in the papers [1, 17, 18]. Mraoua and Bari [19] apply such a seasonal function to Moroccan data. Other, more sophisticated seasonal functions may of course be used (see Härdle and Lopez Cabrera [20] and Härdle et al. [21]).
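A least-squares fit of such a seasonal mean function can be sketched as follows. Expanding the phase-shifted sine makes the problem linear in the coefficients; all numbers below are synthetic, not the estimates from any of the cited studies.

```python
import numpy as np

# Least-squares fit of a seasonal mean of the form
# Lambda(t) = a + b*t + c*sin(2*pi*(t - d)/365).  Writing
# c*sin(2*pi*(t - d)/365) = p*sin(2*pi*t/365) + q*cos(2*pi*t/365)
# makes the problem linear in (a, b, p, q); the amplitude is hypot(p, q).
def fit_seasonal_mean(t, y):
    w = 2.0 * np.pi / 365.0
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Ten years of synthetic daily data with known level, trend, and amplitude.
t = np.arange(3650.0)
rng = np.random.default_rng(0)
y = 6.3 + 0.0001 * t + 10.4 * np.sin(2.0 * np.pi * (t - 150.0) / 365.0) \
    + rng.normal(0.0, 2.0, t.size)
a, b, p, q = fit_seasonal_mean(t, y)
print(round(a, 1), round(float(np.hypot(p, q)), 1))
```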

Empirical analyses of temperature data in Norway, Sweden, and Lithuania (see Benth et al. [1, 5, 17, 18]), and in US and Asian cities (see Cao and Wei [14] and Härdle and Lopez Cabrera [20]), suggest that the deseasonalized temperatures have an autoregressive structure on a daily scale. This motivates the use of a continuous-time autoregressive (CAR) model for X (see Brockwell [22] for a general account of such processes).

For p ≥ 1, a CAR(p)-dynamics is defined as follows. Define the p×p matrix A with zeros everywhere except for ones on the superdiagonal and the row vector (−α_p, −α_{p−1}, …, −α_1) in the last row, with α_k being positive constants for k = 1, …, p. Observe that the determinant of A is equal to (−1)^p α_p, and hence A is invertible. For a one-dimensional Brownian motion B, we define the p-dimensional Ornstein-Uhlenbeck process X as

dX(t) = A X(t) dt + e_p σ dB(t),   (3.4)

where e_p is the p-th canonical unit vector in R^p and σ > 0 is constant. A CAR(p) process is then defined as X₁(t) = e₁ᵀ X(t), with xᵀ being the transpose of the vector x. It seems that a CAR(3) is the most reasonable choice from an empirical point of view (see, e.g., Benth et al. [1] or Härdle and Lopez Cabrera [20], where the volatility is allowed to be time-dependent to allow for seasonality effects). In passing we note that a similar model has been used for wind speed modelling for New York and Lithuania, resulting in a CAR process for the deseasonalized data as the statistically optimal choice (see Šaltytė Benth and Benth [23] and Šaltytė Benth and Šaltytė [24]).
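The CAR(p) construction above can be illustrated with a small Euler simulation for p = 3. The α-values below are hypothetical, chosen only so that the stationarity condition (eigenvalues of A with negative real part) holds.

```python
import numpy as np

# Euler simulation of a CAR(3) process dX(t) = A X(t) dt + e_3 sigma dB(t):
# A has ones on the superdiagonal and (-alpha_3, -alpha_2, -alpha_1) in the
# last row.  The alpha values are hypothetical, chosen only so that all
# eigenvalues of A have negative real part (stationarity).
alpha1, alpha2, alpha3 = 2.0, 1.3, 0.2
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-alpha3, -alpha2, -alpha1]])
assert np.all(np.linalg.eigvals(A).real < 0)   # stationarity check

def simulate_car3(A, sigma, n_steps, dt, rng):
    x = np.zeros(3)
    e3 = np.array([0.0, 0.0, 1.0])
    path = np.empty(n_steps)
    for i in range(n_steps):
        x = x + A @ x * dt + e3 * sigma * np.sqrt(dt) * rng.normal()
        path[i] = x[0]            # the CAR(3) process is X_1(t) = e_1' X(t)
    return path

rng = np.random.default_rng(1)
path = simulate_car3(A, sigma=1.0, n_steps=5000, dt=0.1, rng=rng)
print(path[-5:])
```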

In this paper we are concerned with an extension of the model defined in (3.4) that explains seasonality and stochastic volatility effects in the temperature variations. In Benth and Šaltytė Benth [5], it was observed (from a study of Norwegian temperatures) that the residuals of temperature, after explaining the autoregressive effects, had a clear seasonality and signs of stochastic volatility. This seasonal structure can be partly explained by a seasonal variance. However, one is still left with signs of stochastic volatility, observed as an exponentially decaying autocorrelation function for squared residuals. Similar observations were made by Campbell and Diebold [4] on US data. They suggest a GARCH volatility structure with an added seasonal level component to model this. However, from a modelling and empirical point of view, it seems much more reasonable to consider a multiplicative structure between the seasonality and the stochasticity of the volatility. It is a rather standard approach in time series analysis to apply multiplicative models when dealing with phenomena taking positive values. Using a multiplicative structure makes it easy to preserve the natural condition of positivity of the overall volatility. Also, empirically it becomes simple to estimate the seasonality and the stochastic volatility contributions in a stepwise procedure. We remark in passing that the seasonal GARCH model of Campbell and Diebold [4] may lead to negative values of the volatility. Our proposed stochastic volatility model will lend itself to analytical tractability when it comes to pricing of weather derivatives, which seems to become very complicated when using the seasonal GARCH model of Campbell and Diebold [4].

Hence, we propose the following model with seasonal stochastic volatility. Suppose that X follows the dynamics

dX(t) = A X(t) dt + e_p ς(t) σ(t) dB(t),   (3.5)

where the volatility is the product of a bounded positive continuous function ς and a stochastic process σ assumed to be positive. We assume that σ² is strictly bounded away from zero; that is, there exists a constant c > 0 such that σ²(t) ≥ c for all t.

In Benth et al. [1], the model (3.5) was considered with σ(t) ≡ 1, that is, with no stochastic volatility effects. The seasonal volatility function was estimated on data from Stockholm, Sweden, to be a truncated Fourier series of order four with a yearly seasonality, with time measured in days. This seasonal volatility has been observed and modelled for data in Sweden and Lithuania (see Benth et al. [1, 18]), and for US and Asian cities (see Cao and Wei [14], Härdle and Lopez Cabrera [20], and Benth et al. [25]). Recently, Härdle et al. [21] proposed and applied a powerful local adaptive modelling approach to optimally estimate the volatility.

A class of stochastic volatility processes providing a great deal of flexibility in precise modelling of residual characteristics is given by the Barndorff-Nielsen and Shephard [6] (BNS) model. In mathematical terms, we have

dσ²(t) = −λσ²(t) dt + dU(t),   (3.7)

with

U(t) = L(λt).   (3.8)

Here, λ > 0 is a constant measuring the speed of mean reversion of the volatility process σ², which reverts to zero. The process L is assumed to be a *subordinator* independent of the Brownian motion B, meaning a Lévy process with increasing paths. In this way one is ensured that σ²(t) is positive.

In the BNS model, a dependency on λ in the subordinator is introduced: one chooses a subordinator L and defines U(t) = L(λt), which then again becomes a subordinator. This is convenient when estimating such a stochastic volatility model. In fact, one may obtain relatively explicit distributions for the "deseasonalized" residuals, which become conditionally normally distributed with mean zero and variance σ²(t). In stationarity of σ², this distribution becomes independent of λ, and therefore one may separate the modelling of the distribution of these residuals from the dependency structure in the paths. For this stochastic volatility model, the squared residuals will have an exponentially decaying autocorrelation function with decay rate λ. By superposition of such σ²'s, the autocorrelation function may decay as a sum of exponentials. We refer to Barndorff-Nielsen and Shephard [6] for an extensive analysis of this class of stochastic volatility models. We remark that the BNS stochastic volatility does not become a GARCH dynamics in discrete time, but shares some similar properties.
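A minimal Euler sketch of the BNS variance process with a compound Poisson subordinator (one simple choice of increasing Lévy process; all parameters are illustrative) shows the exponentially decaying autocorrelation mentioned above:

```python
import numpy as np

# Euler scheme for the BNS variance dsigma^2(t) = -lam*sigma^2(t) dt + dU(t),
# U(t) = L(lam*t), with L a compound Poisson subordinator with exponential
# jumps (one simple choice; parameters are illustrative).  Small-intensity
# jumps are drawn as Bernoulli arrivals per step.
rng = np.random.default_rng(2)
lam, dt, n = 0.1, 0.1, 200_000
jump_rate = 1.0                                  # jumps per unit of L-time
arrivals = rng.random(n) < jump_rate * lam * dt  # time change t -> lam*t
jumps = rng.exponential(1.0, n) * arrivals
v = np.empty(n)
v[0] = 1.0                                       # start at the stationary mean
for i in range(1, n):
    v[i] = (1.0 - lam * dt) * v[i - 1] + jumps[i]

# The stationary autocorrelation of sigma^2 at time lag h is exp(-lam*h);
# compare the empirical lag-50 (h = 5 time units) value against it.
h = 50
ac = np.corrcoef(v[:-h], v[h:])[0, 1]
print(ac, np.exp(-lam * h * dt))
```

The linear recursion makes the autocorrelation at lag h equal to (1 − λ dt)^{h} regardless of the jump distribution, which converges to exp(−λh dt) as dt shrinks.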

In the rest of this paper we suppose that the deseasonalized temperature is given by X₁(t) = e₁ᵀ X(t), where X is defined in (3.5) and σ² in (3.7) and (3.8).

#### 4. Temperature Futures Pricing

In this section we derive the futures price dynamics based on the CDD/HDD and CAT indices. This will be done with respect to a pricing measure Q specified in a moment.

We use the generic notation TI(τ₁, τ₂) for a temperature index over the measurement period [τ₁, τ₂], where TI is HDD, CDD, or CAT. Let F_TI(t; τ₁, τ₂) be the futures price at time t ≤ τ₂ for a contract returning the index TI(τ₁, τ₂). The futures contracts are settled at the end of the measurement period, time τ₂, and the value at time t of a long position in the contract is

e^{−r(τ₂−t)} E_Q[TI(τ₁, τ₂) − F_TI(t; τ₁, τ₂) | F_t],

where the constant r > 0 is the risk-free interest rate. Thus, if Q is a probability equivalent to P, we can define the futures price as

F_TI(t; τ₁, τ₂) = E_Q[TI(τ₁, τ₂) | F_t],   (4.2)

by using that the contract is costless to enter and assuming that the futures price is adapted to the information F_t at time t.

One may ask why we do not consider stochastic interest rates in this setup. In fact, it would not be any problem to do so, by, for example, assuming a stochastic risk-free spot rate r(t). However, there is no reason why the spot rate should depend on the temperature, and a reasonable model would state independence between r and T. Thus, from the properties of conditional expectations, we would end up with the same pricing relation as in (4.2).

Note that the pricing measure Q is *only* equivalent to P and is not assumed to be a martingale measure. If temperature were a tradeable asset, the no-arbitrage theory of asset pricing (see, e.g., Duffie [9]) would demand that Q be an equivalent martingale measure, that is, a probability equivalent to P under which the discounted temperature is a Q-martingale. Temperature is obviously not a tradeable asset, and the condition of Q being a martingale measure therefore does not apply.

To restrict the class of pricing probabilities, we consider a deterministic combination of a Girsanov and an Esscher transform. The Girsanov transform introduces a market price of risk θ with respect to the Brownian motion, while the Esscher transform models a market price of volatility risk η. In mathematical terms, we define the probability Q through this combined change of measure, with θ(t) and η(t), t ∈ [0, T], being two bounded continuous functions. Here, φ is the log-moment generating function of L(1), that is, φ(x) = ln E[exp(x L(1))]. Note that the Novikov condition is satisfied for the Girsanov change of measure since σ²(t) ≥ c > 0 and θ is bounded. Furthermore, we assume exponential integrability of L; that is, there exists a constant k > 0 such that φ(k) < ∞. Then the Esscher transform is well defined for all functions η such that sup_{0 ≤ t ≤ T} η(t) ≤ k. We restrict our attention to this class of market prices of volatility risk. We find that W, the Brownian motion B with the Girsanov drift removed, is a Q-Brownian motion independent of L. Moreover, following the analysis in Benth et al. [3], L is an independent increment process under Q and, in the case where η is a constant, a Lévy process.
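For concreteness, one standard normalization of such a combined Girsanov-Esscher density is sketched below, writing ς(t)σ(t) for the seasonal-times-stochastic volatility, θ for the market price of risk, and η for the market price of volatility risk. The Girsanov kernel θ(s)/(ς(s)σ(s)) is an assumption chosen so that the Novikov condition reduces to the stated boundedness conditions; the paper's exact specification may differ.

```latex
\left.\frac{dQ}{dP}\right|_{\mathcal{F}_t}
 = \exp\!\left(\int_0^t \frac{\theta(s)}{\varsigma(s)\sigma(s)}\,\mathrm{d}B(s)
     - \frac{1}{2}\int_0^t \frac{\theta^2(s)}{\varsigma^2(s)\sigma^2(s)}\,\mathrm{d}s\right)
   \exp\!\left(\int_0^t \eta(s)\,\mathrm{d}L(\lambda s)
     - \lambda\int_0^t \phi(\eta(s))\,\mathrm{d}s\right).
```

Under such a change of measure, $W(t) = B(t) - \int_0^t \theta(s)/(\varsigma(s)\sigma(s))\,\mathrm{d}s$ is a $Q$-Brownian motion, and the compensated Esscher factor leaves $L$ with independent increments whenever $\eta$ is deterministic.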

There has been some work trying to reveal the existence and structure of the risk premium, or the market price of risk, for temperature derivatives. The theoretical benchmark approach by Platen and West [13] strongly argues against the existence of a risk premium in weather markets. The econometric study of US futures prices at the CME performed by Dorfleitner and Wimmer [11] shows that the index modelling approach gives a very good prediction of prices, pointing towards a zero market price of risk. The results of Cao and Wei [14] are less conclusive, as they are able to detect a significant (although small) market price of risk based on their equilibrium pricing approach. More recently, Hamisultane [15] showed, using New York data, that the estimates become very unstable within this pricing framework. Härdle and Lopez Cabrera [20] analyse various specifications of the market price of risk in an extensive empirical study of German futures prices. They find signs of a significant market price of risk for these contracts. Based on the abovementioned studies, one may choose the market price of risk as zero. As far as we know, there exist no empirical investigations of the volatility risk premium in weather markets. One may suspect that it is zero, but we include it here for generality. Using θ = η = 0 would correspond to Q = P and would be a choice following the indications of the study in Dorfleitner and Wimmer [11].

In the following proposition we compute the CAT futures price.

Proposition 4.1. *The CAT futures price at time t for a contract with measurement period [τ₁, τ₂], t ≤ τ₁, is*

*where*

*Proof.* We know that the Q-dynamics of X becomes
for the Q-Brownian motion W. Thus, from the multidimensional Itô formula, it holds that
It follows that
By conditioning on the σ-algebra generated by the paths of the volatility and the subordinator, and using the tower property of conditional expectation, we find that the last term above is equal to zero by the independent increment property of the Q-Brownian motion. Finally, integrating over the measurement period yields the desired result.

Observe that the market price of volatility risk η does not enter the CAT futures price explicitly; only the market price of risk θ does. We derive the dynamics of the CAT futures price in the following proposition.

Proposition 4.2. *The Q-dynamics of the CAT futures price is*

*whereas the P-dynamics becomes*

*with the quantities defined in Proposition 4.1.*

*Proof.* This follows by a straightforward application of the multidimensional Itô formula, where the observation that the futures price is a Q-martingale simplifies the derivations considerably.

As is evident from this dynamics, the CAT futures price will have a seasonal stochastic volatility, dampened by a deterministic factor that can be viewed as an integral over the measurement period [τ₁, τ₂]. Observe that in the case of a CAR(1) process, which is nothing but an ordinary Ornstein-Uhlenbeck process, this term collapses into an exponential damping of the volatility of temperature as a function of the time to maturity τ₁ − t. This is known as the Samuelson effect in commodity markets. In the current context, we have a term which can be interpreted as the aggregation of the Samuelson effect over the measurement period (see Benth et al. [3] for a thorough analysis of the Samuelson effect for higher-order autoregressive models). The drift in the P-dynamics of the futures price will depend explicitly on the parameter θ, justifying the name market price of risk.
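For the CAR(1) case mentioned above, the aggregated Samuelson factor can be written down and checked numerically; the parameter values below are illustrative.

```python
import numpy as np

# Aggregated Samuelson factor for the CAR(1) case: the integral of
# exp(-alpha*(s - t)) over the measurement period [tau1, tau2], which in
# closed form is (exp(-alpha*(tau1 - t)) - exp(-alpha*(tau2 - t))) / alpha.
# Parameter values are illustrative.
def cat_damping(alpha, t, tau1, tau2):
    return (np.exp(-alpha * (tau1 - t)) - np.exp(-alpha * (tau2 - t))) / alpha

alpha, tau1, tau2 = 0.2, 30.0, 60.0
# Cross-check against trapezoidal quadrature of the integrand at t = 0.
s = np.linspace(tau1, tau2, 100_001)
f = np.exp(-alpha * s)
quad = ((f[1:] + f[:-1]) * np.diff(s)).sum() / 2.0
print(abs(cat_damping(alpha, 0.0, tau1, tau2) - quad))   # ~0
# The factor shrinks as time to maturity grows (Samuelson effect).
print(cat_damping(alpha, 0.0, tau1, tau2) < cat_damping(alpha, 25.0, tau1, tau2))
```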

We next turn our attention to CDD futures, which depend nonlinearly on the temperature. Thus, we will not obtain as explicit results for their prices as for the CAT futures. It turns out to be useful to apply Fourier transform techniques (see Carr and Madan [26]).

Recall from Folland [27] that for a function f ∈ L¹(R) with integrable Fourier transform, we have the inversion formula

f(x) = (1/2π) ∫_R f̂(y) e^{−ixy} dy,

where f̂ is the Fourier transform of f defined by

f̂(y) = ∫_R f(x) e^{ixy} dx.

Note the signs in the exponentials here, which do not follow the standard definition.

Relevant to the analysis of CDD futures is the function x ↦ max(x, 0), which is not in L¹(R), but we can dampen it by an exponential function and obtain the following lemma.

Lemma 4.3. *For any a > 0, the Fourier transform of the function*

g_a(x) = max(x, 0) e^{−ax}

*is*

ĝ_a(y) = 1/(a − iy)².

*Proof.* We have

ĝ_a(y) = ∫₀^∞ x e^{−(a−iy)x} dx.

The result follows by straightforward integration.
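The lemma is easy to verify numerically with the flipped-sign convention noted above; the values of a and y below are arbitrary test points.

```python
import numpy as np

# Numerical check of the transform: with the flipped-sign convention
# hat{g}(y) = int g(x) exp(i*x*y) dx, the transform of
# g_a(x) = max(x, 0)*exp(-a*x) is 1/(a - i*y)^2.
a, y = 1.5, 0.7
x = np.linspace(0.0, 60.0, 600_001)         # truncation of [0, infinity)
f = x * np.exp(-(a - 1j * y) * x)           # x*exp(-a*x)*exp(i*x*y) for x >= 0
numeric = ((f[1:] + f[:-1]) * np.diff(x)).sum() / 2.0   # trapezoid rule
closed = 1.0 / (a - 1j * y) ** 2
print(abs(numeric - closed))                # ~0
```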

We apply this in our further analysis of the CDD futures price.

Proposition 4.4. *The CDD futures price at time t for a contract with measurement period [τ₁, τ₂], t ≤ τ₁, is*

*with ĝ_a given in Lemma 4.3 and with*

*where*

*The constant a > 0 can be chosen arbitrarily.*

*Proof.* First, from the definition of the CDD index and the Q-dynamics of X, we find
where we have introduced a shorthand notation. From Lemma 4.3 we find (after commuting integration and expectation by appealing to the Fubini-Tonelli theorem)
The last equality follows from the adaptedness of X(t). To compute the conditional expectation in the last expression, we apply the tower property of conditional expectations with respect to the filtration generated by the paths of the volatility and the subordinator. From the independence between B and L and the independent increment property of Brownian motion, this yields
We have
Hence, by using the adaptedness of σ² and the independent increment property of L under Q,
Invoking the stochastic Fubini theorem (see Protter [28]) gives
Finally, we compute the expectation by using the Radon-Nikodym derivative of Q with respect to P and the log-moment generating function φ of L. This proves the proposition.

Although seemingly very complex, the price dynamics of a CDD futures can be simulated by using the fast Fourier transform (FFT) technique (see Carr and Madan [26] for a discussion of this in the context of options). In fact, given the current states of the temperature and volatility processes, one uses the FFT to compute the Fourier integral in Proposition 4.4. This procedure involves numerical integration over the measurement period. Since the FFT is rather efficient, this seems preferable to a direct Monte Carlo simulation of the CDD futures price.
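The Fourier-pricing recipe can be sketched in a stripped-down form: a Gaussian stand-in for the conditional law of the index (an illustrative assumption, not the paper's model) makes the answer available in closed form for comparison, and plain trapezoidal quadrature stands in for the FFT step.

```python
import numpy as np
from math import erf, exp, pi, sqrt

# Fourier-pricing sketch: write max(z, 0) = exp(a*z) * g_a(z), replace the
# damped payoff g_a by its transform 1/(a - i*y)^2 (Lemma 4.3), and
# integrate against the moment generating function of the index.  The
# Gaussian law for the index is an illustrative assumption.
def expected_positive_part(mu, sig, a=1.0, y_max=80.0, n=400_001):
    y = np.linspace(-y_max, y_max, n)
    u = a - 1j * y
    ghat = 1.0 / u ** 2                              # transform of g_a
    mgf = np.exp(mu * u + 0.5 * sig ** 2 * u ** 2)   # E[exp(u*Z)], Z ~ N(mu, sig^2)
    f = ghat * mgf
    integral = ((f[1:] + f[:-1]) * np.diff(y)).sum() / 2.0   # trapezoid rule
    return integral.real / (2.0 * pi)

mu, sig = 0.3, 1.2
fourier = expected_positive_part(mu, sig)
# Closed form for comparison: E[max(Z, 0)] = mu*Phi(mu/sig) + sig*phi(mu/sig).
closed = mu * 0.5 * (1.0 + erf(mu / (sig * sqrt(2.0)))) \
    + sig * exp(-mu ** 2 / (2.0 * sig ** 2)) / sqrt(2.0 * pi)
print(abs(fourier - closed))                         # ~0
```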

The CDD futures price depends explicitly on the current states of the volatility σ²(t) and the process X(t). Since we define the deseasonalized temperature as the first coordinate of X, this coordinate is observable from today's (time t) temperature after removing the value of the seasonal function Λ(t). However, the remaining states of X must be filtered from deseasonalized temperature observations. One may appeal to the Kalman filter for doing this, although the seasonality and stochastic volatility complicate the procedure. A tempting alternative is to use an Euler discretization of the dynamics, as in Benth et al. [3], to relate the coordinates of X to lagged observations of temperature. (In Benth et al. [3], the Euler discretization is performed for a time-continuous but deterministic volatility. Note that there is no change in the steps for developing the same identification when the volatility is stochastic.) It is easily seen from the definition of X that the quadratic variation process of the last coordinate, X_p, is equal to the time integral of the squared volatility, and thus we may apply this to recover σ² from observing the path of X_p.
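The realized-variance recovery of the volatility from the last coordinate can be sketched as follows; the example uses a constant volatility and ignores the drift, which contributes only at order dt.

```python
import numpy as np

# Recovering sigma^2 from the last CAR coordinate: its quadratic variation
# over [0, t] equals the integrated squared volatility, so a realized-
# variance estimate on a fine grid recovers it.  Constant volatility here;
# the CAR drift contributes only at order dt and is neglected.
rng = np.random.default_rng(3)
sigma_true, dt, n = 1.7, 1e-4, 100_000
xp = np.cumsum(sigma_true * np.sqrt(dt) * rng.normal(size=n))  # last coordinate
realized_var = np.sum(np.diff(xp) ** 2) / ((n - 1) * dt)
print(realized_var)    # close to sigma_true**2 = 2.89
```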

Since the CDD futures price depends explicitly on σ²(t), we get the interesting consequence that even though the temperature dynamics has continuous paths, the CDD futures price will jump whenever σ² jumps. Thus, the futures price dynamics has discontinuous paths. See Benth [29] for a similar result in the context of commodity and power markets using the BNS stochastic volatility model. Note also that the market price of volatility risk η appears explicitly in the price, contrary to the case of CAT futures.

The HDD futures price dynamics can be computed analogously to the CDD price. However, we may also resort to the so-called HDD-CDD parity, which we collect from Benth et al. [1, Proposition 10.1]. It holds that

F_HDD(t; τ₁, τ₂) = c(τ₂ − τ₁) − F_CAT(t; τ₁, τ₂) + F_CDD(t; τ₁, τ₂).   (4.30)

We have formulas for both the CDD and CAT futures prices, and thus the HDD futures price readily follows from (4.30).

##### 4.1. A Discussion on Pricing of Options on Futures

The Fourier transform technique is well adapted to option pricing as well, and we will briefly discuss this in connection with European call and put options written on CAT and CDD futures.

We start with a call option on a CAT futures, with exercise time τ ≤ τ₁ and strike price K, written on a futures with measurement period [τ₁, τ₂]. The price of this call option at time t ≤ τ is given by the no-arbitrage theory as

C(t) = e^{−r(τ−t)} E_Q[max(F_CAT(τ; τ₁, τ₂) − K, 0) | F_t].   (4.31)

As it turns out, the derivation of the option price using Fourier techniques follows more or less exactly the same steps as computing the CDD futures price. In fact, from Proposition 4.1 the CAT futures price at exercise time is expressed in terms of X(τ) and deterministic quantities. Moreover, the payoff function of the call option is of the same form as the CDD payoff on a given day. By conditioning on the paths of σ², we can compute the option price step by step as in the proof of Proposition 4.4. This results in the following proposition.

Proposition 4.5. *The price at time t of a call option on a CAT futures with measurement period [τ₁, τ₂], exercise time τ ≤ τ₁, and strike K, is*

*with ĝ_a given in Lemma 4.3 and*

*and the remaining quantities defined in Proposition 4.1.*

Options on CDD futures are far more technically complicated to value. However, we may here as well resort to Fourier techniques and in principle obtain transparent price formulas. Considering a call option at time t, with strike price K and exercise time τ, the price is as in (4.31), except that we substitute "CAT" with "CDD", recalling the CDD futures price from Proposition 4.4. Furthermore, from the dynamics of X and σ², we can express both in terms of their states at time t, in addition to integrals with respect to B and L from t to τ. Similar considerations as for the CAT futures option can then be made, however leading to a technically complex formula. We leave the details to the reader.

#### 5. Some Empirical Considerations

We refer to an empirical estimation found in Benth et al. [3] as a starting point for our discussion. There, a data set of daily average temperatures measured in degrees Celsius in Stockholm, Sweden, is studied. The data are recorded from January 1, 1961 to May 25, 2006, altogether 16,570 data points after observations on February 29 in every leap year were removed. The parameters of the seasonal function defined in (3.2) were estimated by a least squares approach. The estimated slope implies a small increase in average temperature over the considered measurement period. The overall average temperature is around 6.3°C, and the amplitude is ±10.44°C, which means that on average winter temperatures go down to around −4°C, while summer temperatures reach above 16.7°C. After removing the seasonal function from the data, one finds, based on the autocorrelation structure of the residuals, that a CAR(3) process fits the deseasonalized data very well. The estimated parameters in the matrix A yield eigenvalues with negative real parts, which leads to a stationary dynamics.

Our main concern in this paper is the precise modelling of the residuals. We have proposed a model where the volatility is defined as the product of a deterministic seasonal function and a stochastic volatility process. The natural path to follow in estimating this is to first estimate the seasonal function. We use the approach suggested in Benth and Šaltytė Benth [5, 17], where a Fourier series of order four is chosen for the seasonal volatility function and estimated on daily empirical variances. Next, the residual data are divided by the estimated function in order to deseasonalize them.

The residuals, after removing the effect of the seasonal volatility function and the autoregressive effects, are close to normally distributed (see Figure 10.11 in Benth et al. [3]). However, there are signs of heteroskedasticity; in particular, a normality plot shows signs of a heavy left tail in the distribution (see Figure 10.12 in Benth et al. [3]). Also, the p-value of 0.002 in the Kolmogorov-Smirnov test suggests rejecting the normality hypothesis for the residuals. We remark in passing that recently Härdle et al. [21] have proposed a local adaptive approach to find an optimal smoothing parameter to locally estimate the seasonality and volatility. The approach aims at correcting the nonnormality of the residuals. Their approach improves the normal distribution fit to residuals for many cities, but there are still signs of heavy tails (see in particular the QQ-plot of Berlin data in Figure 11 of Härdle et al. [21]). We also remark that in Benth and Šaltytė Benth [5], the empirical seasonal volatility was used as the estimate of the seasonal volatility function, but a clear sign of stochastic volatility was still present when analysing the autocorrelation function of the squared residuals.

In Figure 1 we show the autocorrelation function of the squared deseasonalized residuals on a log scale. Characteristically, the autocorrelation function decays with the lags and eventually wiggles around zero. As mentioned earlier, the stationary autocorrelation function of σ² is an exponential function with decay rate equal to the speed of mean reversion λ of σ², which means a linearly decaying autocorrelation function on a log scale. We fit a linear function to the first 10 lags to obtain an estimate of λ. In Figure 1 the estimated line is shown together with the empirical logarithmic autocorrelation function.

Admittedly, the linear fit to the log-autocorrelation function in Figure 1 is very rough. Since the line does not pass through the origin, it cannot be related directly to an exponential autocorrelation function. One may mend this deficiency by considering a sum of exponentials. From Figure 1 we may assume a very steeply decaying autocorrelation function for small lags (up to 2), thereafter decaying according to the fitted line shown in the plot. Moreover, after approximately 10 lags the autocorrelation function seems to flatten out, and a new line with a different slope would fit better. This could easily be incorporated into our framework by considering a superposition of three volatility processes of the given type. In that case, we would obtain a theoretical autocorrelation function equal to a sum of three exponentials, which would be approximately represented as a piecewise linear function on the log scale if the three mean reversion speeds are sufficiently distinct. This could be attributed to a fast reversion of big volatility changes, whereas the medium to smaller variations in volatility are slowly reverting, say. The estimated slope would then approximately correspond to the mean reversion of the medium volatility changes. In order to estimate such a superposition, one must resort to filtering techniques, which are discussed in detail in Barndorff-Nielsen and Shephard [6].
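A crude alternative to full filtering is a direct nonlinear least-squares fit of a three-exponential autocorrelation. The sketch below uses synthetic data and starting values of our own choosing; on real data, the ill-conditioning of sums of exponentials makes the result sensitive to the starting guess.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_exp(h, w1, l1, w2, l2, w3, l3):
    """Superposition of three exponentials: the theoretical ACF of a sum of
    three volatility processes with distinct mean reversion speeds."""
    return w1 * np.exp(-l1 * h) + w2 * np.exp(-l2 * h) + w3 * np.exp(-l3 * h)

# Synthetic ACF with fast, medium, and slow mean-reverting components.
h = np.arange(1, 31, dtype=float)
acf = 0.5 * np.exp(-1.2 * h) + 0.3 * np.exp(-0.3 * h) + 0.1 * np.exp(-0.03 * h)

p0 = [0.4, 1.0, 0.25, 0.25, 0.1, 0.05]  # rough starting guesses
popt, _ = curve_fit(three_exp, h, acf, p0=p0, bounds=(0.0, np.inf), maxfev=20000)
```

On a log scale, such a fit produces the approximately piecewise linear shape described above whenever the three decay rates are well separated.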

The final step in estimating our temperature model is to fit the subordinator process driving the volatility. This is done by appealing to the ingenious parametrization of Barndorff-Nielsen and Shephard [6]: by using $L(\lambda t)$ as the subordinator in the model for the volatility $\sigma^2(t)$, they show that the stationary distribution of $\sigma^2(t)$ is independent of the mean reversion speed $\lambda$. Thus, the model for the residuals becomes a variance mixture model, being conditionally normally distributed with mean zero and variance given by the stationary distribution of $\sigma^2(t)$. For example, choosing the stationary distribution of $\sigma^2(t)$ to be generalized inverse Gaussian, the variance mixture model of the residuals becomes generalized hyperbolic (see Barndorff-Nielsen and Shephard [6] for a discussion of this).

Based on this, we *first* select and fit a distribution to the deseasonalized residuals, and *second* construct the Lévy process $L$ which has the required stationary distribution. The process $L$ is called *the background driving Lévy process*, and Barndorff-Nielsen and Shephard [6] show that the chosen distribution must be self-decomposable in order for $L$ to exist. In Benth and Šaltytė Benth [5], a generalized hyperbolic distribution was successfully fitted to the deseasonalized temperature residuals for data collected in several Norwegian cities. In this case, there exists a subordinator $L$ such that the stationary distribution of $\sigma^2(t)$ becomes generalized inverse Gaussian. The Lévy measure of $L$ can be explicitly characterised in terms of the selected distribution for the residuals.
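To make the time-change construction concrete, the sketch below simulates the volatility under the assumption (ours, for illustration only) that $L$ is a compound-Poisson subordinator with exponential jumps, which gives a Gamma stationary law rather than the generalized inverse Gaussian case discussed above. The parameters are illustrative.

```python
import numpy as np

def simulate_bns_volatility(lam=0.05, jump_rate=0.1, jump_mean=1.0,
                            n_days=1000, seed=1):
    """Euler-type scheme for d(sig2) = -lam * sig2 dt + dL(lam * t), with L a
    compound-Poisson subordinator with exponentially distributed jumps.

    The stationary mean is jump_rate * jump_mean, independent of lam, which
    illustrates the Barndorff-Nielsen--Shephard parametrization."""
    rng = np.random.default_rng(seed)
    sig2 = np.empty(n_days)
    sig2[0] = jump_rate * jump_mean  # start at the stationary mean
    for i in range(1, n_days):
        # increment of the time-changed subordinator L(lam * t) over one day
        n_jumps = rng.poisson(jump_rate * lam)
        jumps = rng.exponential(jump_mean, size=n_jumps).sum()
        sig2[i] = sig2[i - 1] * np.exp(-lam) + jumps
    return sig2
```

Because the jumps are nonnegative and the decay is multiplicative, the simulated volatility stays nonnegative, mean-reverts at rate `lam`, and spikes at jump times, which is the qualitative behaviour the residual analysis above calls for.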

One may ask whether some or all of the parameters in the specified temperature model should be time dependent. In our empirical analysis of Stockholm data, we have not detected any structural changes in the seasonality function. Furthermore, in an AR(1)-specification of the temperature model for Stockholm, we have investigated whether there is any seasonality in the speed of mean reversion (study not reported). Looking at the estimates for particular months over the year, we did not find any pattern supporting a seasonality in the speed of mean reversion (or, in the specification of the AR-matrix in our context). However, we refer to the paper by Zapranis and Alexandridis [30] for an analysis of temperature data in Paris, where the authors detect and model a seasonal mean reversion. We have not investigated whether any of the other parameters in our model may vary with time.

To understand how fast the temperature dynamics reverts back to its long-term average, we discuss the notion of *half-life* of the stochastic process $X$. In Clewlow and Strickland [31] the half-life is defined as the expected time it takes before the process returns half way back to its mean from any position. We express this mathematically as the smallest time $\hat t$ such that
$$\mathbb{E}\left[e_1'X(\hat t)\,\middle|\,X(0)\right] = \tfrac{1}{2}\,e_1'X(0).$$
For an Ornstein-Uhlenbeck process with speed of mean reversion $\alpha$ (the case $p = 1$), Clewlow and Strickland [31] derive that $\hat t = \ln 2/\alpha$. In the next lemma we derive the half-life of $X$.

Lemma 5.1. *The half-life $\hat t$ of $X$ defined in (3.9) is given by the solution of the equation*
$$e_1'\exp(A\hat t)\,e_1 = \frac{1}{2},$$
*when assuming that $X_k(0) = 0$ for $k = 2, \ldots, p$.*

*Proof.* First, using the tower property of conditional expectations, we have
$$\mathbb{E}\left[e_1'X(\hat t)\,\middle|\,X(0)\right] = e_1'\exp(A\hat t)\,X(0).$$
Putting this equal to $\tfrac{1}{2}e_1'X(0)$ yields
$$e_1'\exp(A\hat t)\,X(0) = \frac{1}{2}\,e_1'X(0).$$
Assuming $X_k(0) = 0$ for $k = 2, \ldots, p$, we find that $X(0) = X_1(0)\,e_1$, and hence the result follows.

Note that the first coordinate of $X$ is by definition equal to the deseasonalized temperature. For convenience, we let the other coordinates be equal to zero in the above result. In reality, we should estimate the other states of $X$ by filtering the temperature data series in order to reveal their values at time zero. This means that the half-life becomes state dependent for higher-order autoregressive models.

For a temperature dynamics based on the Stockholm estimates referred to above, we find that the solution to the half-life equation in Lemma 5.1 is slightly less than six days; that is, it takes on average slightly less than six days for today's temperature to revert half way back to its long-term level.
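The half-life equation of Lemma 5.1 has no closed-form solution for $p > 1$, but it is easily solved numerically with a matrix exponential and a bracketing root finder. The CAR(3) coefficients below are illustrative placeholders, not the paper's Stockholm estimates.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

# Illustrative CAR(3) coefficients (assumed values, not estimates from the paper)
a1, a2, a3 = 2.043, 1.339, 0.177
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a3, -a2, -a1]])
e1 = np.array([1.0, 0.0, 0.0])

def half_life_gap(t):
    """First component of exp(A t) e1, minus one half (zero at the half-life)."""
    return e1 @ expm(A * t) @ e1 - 0.5

half_life = brentq(half_life_gap, 1e-6, 60.0)  # root bracketed in (0, 60) days
```

Since `half_life_gap` is approximately 0.5 at zero and tends to -0.5 for large times when $A$ is stable, the bracket is valid and `brentq` converges to the half-life in days.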

Let us investigate the contributions from the various terms in the CAT futures price dynamics in order to get a feeling for their relative importance in the context of weather derivatives. For illustrative purposes, we choose a June contract, with measurement starting June 1 and ending June 30 (time being measured from January 1). Recall the CAT futures price in Proposition 4.1. The first term, involving the seasonal function, is equal to 191.5. Suppose now that the current time is one week prior to June 1, and let all coordinates of the state vector except the first be equal to zero, with the first coordinate set to 5, meaning that the temperature is 5°C above the long-term mean level for that particular day. This leads to a value of 11.8 for the second term, which is around 5% of the value of the mean. By moving the current time to the start of measurement, we get the value 37.6 for the second term instead, which is close to 20% of the contribution of the long-term function to the CAT futures price. This shows that the mean reversion of the temperature dynamics towards the seasonal mean function kills the effect of daily temperature variations quickly: far from the start of measurement, the CAT futures price will be essentially constant (and equal to the seasonal mean for the measurement period). However, as we get closer to the start of measurement, the effect of the temperature variations becomes gradually more important and will impact the CAT futures price significantly. In Dorfleitner and Wimmer [11], an example of a seasonal futures contract (a Chicago HDD winter contract) is presented where the price is constant except for the last week prior to measurement, when the observed futures price starts to wiggle. The seasonal function plays the dominating role in setting the level of the price. It is worthwhile to emphasize that the length of the measurement period is important here. The shorter the measurement period, the smaller the damping factor becomes, and the more sensitive the CAT futures price will be to variations in temperature. This effect is further emphasized by the fact that the seasonal term becomes smaller with a shorter measurement period, increasing the relative importance of the second term. With the introduction of weekly futures contracts at the CME, this is an important observation. Obviously, the two last terms, involving the market price of risk, can contribute arbitrarily much or little to the CAT futures price, depending on the level of the market price of risk. In a concrete application, one would estimate it from historical CAT futures prices. Referring to the studies in Benth et al. [25] and Härdle and Lopez Cabrera [20], we find that a market price of risk of size 0.2 is reasonable. Thus, the terms involving it will contribute very little compared to the seasonal function and the second term.
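The damping just described can be reproduced numerically. In the CAR framework, the weight multiplying the state vector $X(t)$ in the CAT futures price is $e_1'A^{-1}(\exp(A(\tau_2 - t)) - \exp(A(\tau_1 - t)))$ for a measurement period $[\tau_1, \tau_2]$; the sketch below evaluates it with illustrative CAR(3) coefficients (assumptions on our part, not the paper's estimates).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative CAR(3) dynamics; coefficients are assumed, not taken from the paper
a1, a2, a3 = 2.043, 1.339, 0.177
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a3, -a2, -a1]])
e1 = np.array([1.0, 0.0, 0.0])
A_inv = np.linalg.inv(A)

def cat_state_weight(t, tau1, tau2):
    """Row vector multiplying the state X(t) in the CAT futures price:
    e1' A^{-1} (exp(A (tau2 - t)) - exp(A (tau1 - t)))."""
    return e1 @ A_inv @ (expm(A * (tau2 - t)) - expm(A * (tau1 - t)))

X0 = np.array([5.0, 0.0, 0.0])  # 5 degrees C above the seasonal mean today
tau1, tau2 = 151.0, 181.0       # June measurement period, in days from January 1
week_before = cat_state_weight(tau1 - 7.0, tau1, tau2) @ X0
at_start = cat_state_weight(tau1, tau1, tau2) @ X0
```

With these inputs the contribution one week before measurement is a fraction of the contribution at the start of measurement, mirroring the 11.8 versus 37.6 comparison above.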

Let us investigate the stochastic volatility contribution to the CDD price. As we see from the pricing formula in Proposition 4.4, the influence of the stochastic volatility appears both explicitly, through the current level of the volatility, and implicitly, via the characteristic function of the subordinator driving the dynamics of the volatility process. If we consider a model with no stochastic volatility, it is natural to suppose that the volatility is constant and equal to one, with the relevant function taking the form given in Proposition 4.4. The difference between the stochastic volatility and constant volatility cases then consists of two terms: the first varies stochastically with the volatility around one, while the second is determined by the market price of volatility risk through the characteristic function of the subordinator. If the volatility is slowly varying around its mean, which naturally should be equal to one, then the first term will not contribute significantly. But due to potential jumps in the volatility, we may experience values significantly larger than one, thus impacting the futures price. The contribution of the second term depends on how big the market price of volatility risk is.

We are not aware of any empirical studies of the potential existence of a market price of volatility risk. One may estimate it in the same way as the market price of risk, for example by inferring it from minimizing the distance between theoretical and observed futures prices. Motivated by the studies of the market price of risk mentioned above (see Cao and Wei [14], Dorfleitner and Wimmer [11], Hamisultane [15], and Härdle and Lopez Cabrera [20]), one may suspect that the market price of volatility risk will be small and perhaps not significant. It is to be noted, however, that even small values may be big relative to the volatility, and thus significantly modify the effect of stochastic volatility viewed under the pricing measure.

In view of the existence of European-style call and put options written on temperature futures, precise knowledge of the volatility is important. The volatility of the underlying temperature futures determines the price of the option and plays a crucial role in any hedging strategy for it. In particular, the Samuelson effect in the volatility of temperature futures makes the options particularly sensitive to the volatility close to measurement. Hence, a precise model for the stochastic volatility is important. The nonlinearity in the payoff of options also makes second-order effects in the underlying dynamics more pronounced, so even small stochastic volatility variations may become significant for the option price.

#### Acknowledgments

The authors are grateful for the comments and criticism from two anonymous referees, which led to a significant improvement of the original paper. Fred Espen Benth acknowledges financial support from the project Energy markets: modelling, optimization and simulation (emmos), funded by the Norwegian Research Council under Grant 205328/v30.