Abstract

We study how to design an optimal contract that provides incentives for the agent to put forth the desired effort in a continuous time dynamic moral hazard model with linear marginal productivity. Using exponential utility and linear production, we consider three information structures in the principal-agent model: full information, hidden actions, and hidden savings. Applying the stochastic maximum principle, we solve the model explicitly, embedding the agent's optimization problem into the principal's problem of choosing an optimal contract. The explicit solutions allow us to analyze the distortion of allocations. The main effect of hidden actions is a reduction in effort, with only a smaller effect on the consumption allocation. In the hidden savings case, the consumption distortion no longer vanishes and the effort distortion is larger. In our setting, the agent's optimal effort also falls as marginal productivity declines.

1. Introduction

Private information is a significant feature of many economic environments, and it raises natural questions about how to provide incentives for the agent in a dynamic setting. The design of optimal employment contracts therefore deserves investigation. However, the analysis rapidly becomes complex once the dynamic model includes hidden information and additional state variables. In this paper, we illustrate that the analysis of models with hidden actions and hidden savings can be simplified by taking advantage of continuous time methods. Under the assumptions of linear production and exponential utility, we study a continuous time contracting model with linear marginal productivity in which the optimal contract can be solved in closed form. Beyond the work of Holmstrom and Milgrom [1], the exponential-linear structure is used in Fudenberg et al. [2], Mitchell and Zhang [3], Cvitanic and Zhang [4], and Williams [5]. The explicit solutions allow us to derive a simple implementable contract and to illustrate the effects of information frictions. In particular, the model studied in this article extends Holmstrom and Milgrom [1] and Williams [6].

We study three different information structures: full information, hidden actions, and hidden savings. As in Wang et al. [7], under full information the principal can observe the agent's effort, consumption, and wealth. Thus the agent's participation is the only constraint that the contract must respect. We then turn to the hidden action case, a classic moral hazard model in which the agent's effort cannot be observed by the principal but the agent's consumption remains observable. Because the principal cannot tell whether low output is due to an adverse shock or to low effort, the contract must provide incentives for the agent to put forth the desired effort. As in Williams [5] and Cvitanic and Zhang [4], we derive our results by applying a stochastic maximum principle. Williams [8] developed the approach independently of Cvitanic and Zhang [4], and the work of Williams [5] builds on Williams [8]. We rely on some results of Bismut [9] and use a change of variables of the kind considered in Bismut [10]. In this environment, the first-order approach is valid for designing an optimal contract. That is, the incentive constraints can be characterized by the first-order conditions for the agent's effort choice when the agent faces a given contract. In a static setting, Rogerson [11], Jewitt [12], and Mirrlees [13] give different conditions that ensure the validity of the first-order approach. Facing a given contract, the agent participates and puts forth the principal's desired effort, and thus the set of implementable contracts can be fully characterized. Implementable contracts are history-dependent, and the first-order conditions are based on the agent's promised utility under the contract. This form of history dependence starts with Abreu et al. [14, 15] and appears in much of the related literature. Sannikov [16] and Meng et al. [17] study related continuous time models, and Sannikov [18] gives an overview of the related literature. As in Williams [6] and Su et al. [19], asset and consumption payments occur continuously throughout the contract. The situation where the agent receives a single payment from the principal is considered by Holmstrom and Milgrom [1], Schattler and Sung [20], and Cvitanic et al. [21]. Although dynamic payments are taken into account by Sannikov [16], there are no state variables other than promised utility in that setting. We then turn to the hidden savings case, in which the agent is able to borrow or save in an account that the principal cannot monitor. In this environment, besides the agent's effort, the agent's consumption and wealth cannot be monitored by the principal. As in Williams [5], when the model includes a hidden state variable, we need an additional state variable that summarizes the "shadow value" of the state in order to capture history dependence in the contract. In the case with hidden savings, the agent's current marginal utility of consumption characterizes the shadow value of additional wealth. We apply a first-order approach to derive an optimal contract, similar to the approach of Werning [22], Abraham and Pavoni [23], and Pavan et al. [24] in discrete time dynamic moral hazard models. Garrett and Pavan [25] capture the dynamics of average distortions under the optimal contract and deal directly with the full program by using an alternative variational approach. Unlike the hidden action case, with hidden savings the validity of the first-order approach cannot be guaranteed by known sufficient conditions.
Therefore, we derive a candidate optimal contract by solving the agent's optimality problem, which provides necessary conditions for optimal contracts, and then verify ex post that the contract is indeed implementable. Using different methods, Mitchell and Zhang [3] and Edmans et al. [26] show that their contracts are incentive compatible in the hidden savings case.

Our contribution is to find explicit solutions of optimal contracts with linear marginal productivity in a fully dynamic environment. We work with a generalized model in which marginal productivity is linear and decreasing, whereas the marginal productivity in the model of Williams [6] is constant. As in Holmstrom and Milgrom [1], the optimal contract is linear in our fully dynamic setting with linear marginal productivity and exponential utility. Facing a given contract, the agent's optimal effort is a constant, and it decreases as marginal productivity falls. The payment is linear in the logarithm of the agent's promised utility and in the effective rate of return under the contract. The principal's consumption is proportional to the assets, the logarithm of promised utility, and some time-dependent functions. After solving for the optimal contracts, we compare the three information structures and study their implications. We also show the impact of information frictions and marginal productivity by analyzing the explicit results.

The rest of this paper proceeds as follows. In Section 2, we introduce a basic model with linear marginal productivity and provide a terminal condition that allows us to pass from a finite to an infinite horizon. We introduce the implementability of contracts and give the details of the change of variables. In Section 3, we consider the optimal contract with full information, which serves as an efficient benchmark against which to compare the private information models. Section 4 derives the solution to the optimal contract with hidden actions by maximizing the agent's expected utility and then the principal's expected utility. Using an additional state variable, which turns out to be redundant, the optimal contract with hidden savings is presented in Section 5. In Section 6, we compare the three cases and illustrate the analytic results in figures. Section 7 gives a brief conclusion, and the Appendix contains details of the change of measure and proofs of our main results.

2. The Model

We consider a model in which a principal hires an agent to manage a risky project, and the principal's output is affected by the agent's effort choice. A related model in which output is i.i.d. and both the payment and consumption occur only at the end of the contract was studied by Holmstrom and Milgrom [1]. As in Williams [6], our model includes intermediate consumption by both the agent and the principal. We also consider the situation of hidden savings, where the agent can borrow or save in a risk-free asset with a constant rate of return. Much of the difficulty of hidden savings comes from the interaction of incentive constraints and wealth effects. Unlike Williams [6], who studies the problem under constant marginal productivity, we work with a marginal productivity that varies linearly with effort. Since the size of the economy (assets) affects marginal productivity, it is natural and realistic to characterize productivity by a nonlinear function of effort. With nonlinear marginal productivity, however, the first-order conditions derived from the Hamilton-Jacobi-Bellman (HJB) equations and the Hamiltonian functions become higher-order equations. Consequently, it is very difficult to find explicit solutions of optimal contracts, even when marginal productivity is a quadratic function of effort. To capture more features of real production while still obtaining explicit solutions, linear marginal productivity is a convenient compromise. It should be pointed out that we obtain results similar to those in Williams [6]. We show that the agent's optimal effort choice changes with the marginal productivity of effort.

2.1. The Model

The environment in our model is a continuous time stochastic setting. Let be an underlying probability space supporting a standard Brownian motion . The evolution of information through time is represented by a filtration , which is generated by the Brownian motion . We consider a finite horizon , which may be arbitrarily large. In order to extend from a finite to an infinite horizon, we will later let tend to infinity.

At time 0, the principal contracts with the agent to manage a risky project, whose cumulative output proceeds evolve on as follows:

where is the agent's effort choice and are constants satisfying

so that the marginal productivity is positive. In addition, is the volatility, which represents an additional shock to output coming from the increment of the Brownian motion . The proceeds of output add to the principal's assets , which earn a risk-free return , and out of which the principal withdraws his own dividend or consumption and pays the agent . Thus the principal's assets evolve as

We do not restrict to be nonnegative when working with exponential-linear models; their interpretation is therefore somewhat strained when they take negative values. We usually refer to as output, since it carries the same information as from the principal's point of view. In addition, the agent has his own wealth , which earns the same rate of return , out of which the agent consumes and to which he receives income flows from his payment . Thus the agent's wealth evolves as

At the terminal time , the agent receives a final payment and chooses his consumption, which depends on his terminal wealth and the payment . We give more details about the terminal date below.
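
To fix ideas, the state dynamics described above can be simulated directly. The sketch below is a minimal Euler-Maruyama discretization in Python; the drift form a*e - 0.5*b*e**2 (so that marginal productivity a - b*e is linear and decreasing), the symbols sigma, r, pay, c_a, c_p, and all parameter values are our own illustrative assumptions, not the paper's exact specification.

    import numpy as np

    # Minimal Euler-Maruyama sketch of the state dynamics (assumed forms):
    # dY = (a*e - 0.5*b*e**2) dt + sigma dW        (cumulative output)
    # dA = (r*A - c_p - pay) dt + dY               (principal's assets)
    # dS = (r*S + pay - c_a) dt                    (agent's wealth)
    rng = np.random.default_rng(0)
    T, n = 1.0, 1_000
    dt = T / n
    a, b, sigma, r = 1.0, 0.5, 0.2, 0.05      # hypothetical parameters
    e, pay, c_a, c_p = 0.8, 0.3, 0.25, 0.1    # constant controls, for illustration only

    A = np.zeros(n + 1)   # principal's assets
    S = np.zeros(n + 1)   # agent's wealth
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dY = (a * e - 0.5 * b * e**2) * dt + sigma * dW   # output increment
        A[k + 1] = A[k] + (r * A[k] - c_p - pay) * dt + dY
        S[k + 1] = S[k] + (r * S[k] + pay - c_a) * dt

    print(A[-1], S[-1])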

Full information is self-explanatory and serves as a benchmark, while in the hidden action environment the principal cannot observe the agent's effort or the shocks but only the assets . A classic moral hazard problem then arises, in which the principal cannot distinguish low output caused by low effort from low output caused by a negative shock. Since the principal cannot detect whether the agent deviates from the desired effort , the payment must be designed to provide incentives for the agent to meet his effort target. Under both full information and hidden actions, the principal knows the agent's wealth and, equivalently, his consumption . Since the allocation is determined only by total assets , the agent's saving is redundant. As in Cole and Kocherlakota [27], without loss of generality, we assume that and the principal does all the saving and . The final information structure is hidden savings, where the principal cannot observe the agent's wealth or consumption for but only knows the agent's initial wealth . In this case, besides the desired effort , the principal sets the targets and , which are not observable. The payment scheme must therefore provide incentives for the agent to put forth the desired effort and not to save or borrow.

As mentioned above, we concentrate on exponential preferences for both the agent and the principal, which allow us to obtain explicit solutions for the optimal contracts. More general specifications typically require numerical solutions. The agent has a flow exponential utility over effort and consumption, where is the coefficient of absolute risk aversion, and a terminal utility over his final wealth and the terminal payment

where represents the agent's discount rate and denotes the expected utility over effort and consumption . The principal has a flow exponential utility over his own consumption or dividend, with the same parameters and as the agent, and a terminal utility over the terminal assets and the terminal payment:

The assumption of common exponential utility implies that both the agent and the principal are risk averse.

2.2. The Terminal Date

In order to extend from a finite to an infinite horizon and to find explicit solutions, it is crucial to make particular assumptions about what happens at the terminal date. At the terminal date , we assume that the principal pays the agent , keeping for himself. From time onward there is no more production, and both the agent and the principal live off their assets, which earn only the constant rate of return for the infinite future. Thus each solves a savings-consumption problem of the following form:

where is a given level of assets and satisfies

For the principal, we note that and . For the agent, we note that and .

According to the principle of optimality in dynamic programming, the HJB equation for the savings-consumption problem (7) is

Through simple calculations, we verify that the solution and the optimal policy are

Therefore, for the principal and the agent, we set the terminal utility functions in the following form:
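
As a concrete illustration of this savings-consumption problem, suppose (purely as an assumption of ours; the paper's exact normalization is not reproduced in this version) that flow utility is CARA, $u(c)=-e^{-\gamma c}/\gamma$, the discount rate is $\rho$, and the risk-free return is $r$. Then the infinite-horizon HJB equation and its closed-form solution take the familiar form
\[
\rho V(W)=\max_{c}\Big\{-\tfrac{1}{\gamma}e^{-\gamma c}+V'(W)\,(rW-c)\Big\},
\qquad
V(W)=-\frac{1}{r\gamma}\exp\!\Big(-\gamma\Big(rW+\frac{\rho-r}{r\gamma}\Big)\Big),
\qquad
c^{*}(W)=rW+\frac{\rho-r}{r\gamma},
\]
so that the value of terminal wealth is itself exponential, which is the kind of structure that allows the terminal utility functions above to be written in exponential form.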

2.3. Contracts and Implementability

We now formally introduce contracts and explain the meaning of implementability. Let be the space of continuous functions mapping into . It is convenient to let a bar over a variable denote an entire path on . Under hidden actions or hidden savings, we define the principal's observation path to be the time path of output , which is a random element of . We define the filtration to be the collection of -algebras generated by at each time . A contract must specify a payment and a set of recommended actions for all , which are functions of the relevant history. In the environment of hidden savings, these recommended actions are actually adopted by the agent, while, under full information, we have and , and, under hidden actions, we have . The set of admissible contracts consists of the -predictable functions . The payment and recommendations in the contract depend on the whole past history of the observation up to the present time , but not on the future. Facing a given contract, the agent then makes his own choices of consumption and effort. Under full information, all of the agent's actions are observed by the principal and the agent has no choices to make; under hidden actions the agent chooses effort; and under hidden savings the agent chooses both effort and consumption. Thus the agent's set of admissible controls consists of -predictable functions . To apply the maximum principle results, we suppose that the set can be written as a countable union of compact sets. If the agent accepts the contract at time 0 and always chooses the recommended actions during the contract period, the contract is called implementable. Under full information, the contract is implementable if the participation constraint holds, while, under hidden actions with , those contracts satisfying are implementable.

2.4. A Change of Variables

Under hidden actions or hidden savings, in order to derive the conditions for incentive compatibility, we must consider the agent's decision problem when he faces a given contract. In general, for any time , the agent's payment is a function of the entire past history of output. Because of this history dependence, the entire past history would itself be a state variable, so we cannot solve the agent's problem directly. As in Bismut [10], for convenience, the density of the output process is taken as the key state variable rather than the output process itself. In particular, let be a Brownian motion on , representing the distribution of output resulting from an effort strategy under which output is a martingale. The distribution of output changes with the agent's effort choices. Thus different probability measures over output correspond to different effort choices by the agent, and we take the relative density to be the key state variable. We give the details of the change of measure in Appendix A.1, where it is shown that the density evolves as follows:

with .
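
In standard notation (ours, since the paper's equation (12) is not reproduced in this version), if $f(e_t)$ denotes the output drift induced by effort and $W^{\circ}_t$ is the Brownian motion under the reference measure in which output has zero drift, the relative density would be the exponential martingale
\[
d\Gamma_t=\Gamma_t\,\frac{f(e_t)}{\sigma}\,dW^{\circ}_t,\qquad \Gamma_0=1,
\qquad
\Gamma_t=\exp\!\Big(\int_0^t\frac{f(e_s)}{\sigma}\,dW^{\circ}_s-\frac12\int_0^t\frac{f(e_s)^2}{\sigma^2}\,ds\Big),
\]
which is driftless and has mean one. This is a sketch of the standard Girsanov construction; the paper's exact expression may differ in normalization.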

In the environment of hidden states, the covariance between the unobservable and observable states plays a key role in the model. For convenience, we use the density-weighted wealth as the relevant unobservable state. After simple calculations from (4) and (12), we obtain its evolution:

with .

After changing variables from to , the state evolution is represented by a stochastic differential equation (SDE) with random coefficients. The coefficients of the transformed state evolution depend only on , which is a fixed but random element of the probability space, rather than on a state that depends directly on the entire past history. That is, when analyzing the agent's problem we cannot take directly as the state variable. Instead, we fix an output path and capture the effect of the agent's effort choices on output through its distribution. Changing variables in this way is a useful means of dealing with history dependence, since the state variable that depends on the whole past history of output can be replaced with the current density .

3. The Full Information Problem

We first discuss the full information problem. Since the agent's effort and consumption can be observed by the principal, this problem is relatively easy to analyze, and the principal specifies the payment and effort directly in the contract. At time 0, the only requirement in designing the contract is to ensure that the agent participates, after which both the principal and the agent comply with the contract. We assume that the agent's outside reservation utility is , so the participation constraint on the contract is .

As the participation constraint must be satisfied only at time 0, we could obtain the optimal contract by using standard Lagrangian methods. However, as in Spear and Srivastava [28], under dynamic moral hazard the contract should keep track of the agent's promised utility. It is therefore natural to introduce promised utility, which plays a key role in the private information cases, as a way of imposing the participation constraint. Accordingly, we define the agent's promised utility as the expected discounted utility remaining in the contract from time forward:

Using the martingale representation theorem, we obtain its evolution as follows:

where represents the sensitivity of the agent's promised utility to the external shocks, which is crucial for providing incentives under moral hazard. Since there is no initial condition but only a specified terminal condition, the promised utility follows a backward stochastic differential equation (BSDE). In the full information case, can be chosen freely by the principal in the contract, as long as it is a solution of (15) for which the participation constraint is satisfied.
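
For reference, in the notation common to this literature (the paper's own symbols are elided here), promised utility and its BSDE representation read
\[
W_t=\mathbb{E}_t\Big[\int_t^T e^{-\rho(s-t)}u(c_s,e_s)\,ds+e^{-\rho(T-t)}U_T\Big],
\qquad
dW_t=\big(\rho W_t-u(c_t,e_t)\big)\,dt+y_t\,dZ_t,\qquad W_T=U_T,
\]
where $y_t$ is the sensitivity of promised utility to the shock $dZ_t$. This is a sketch of the standard construction via the martingale representation theorem, not necessarily the exact form of the paper's equation (15).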

Under full information, in order to maximize the principal's utility (6), the optimal contracting problem is to choose for all and the terminal payment , subject to the evolution of assets (3) and of promised utility (15), and subject to the participation constraint. We denote the principal's value function for maximizing (6) by , which captures the principal's expected discounted value when the principal's assets are and the agent's promised utility is . Using the principle of optimality in dynamic programming, we obtain the HJB equation for :

with the terminal condition and . After some calculations, we derive the following first-order conditions for :

Combining the conditions for , we have . Thus the optimal effort required by the principal in the contract is a constant, equal to the marginal productivity. This is the standard efficiency condition that the agent's marginal rate of substitution between consumption and effort equals the marginal productivity of effort. Furthermore, from the expression for effort, effort increases with and .

Due to the advantages of exponential preferences and linear evolution, we obtain an explicit solution for the optimal contract. Using the terminal condition and the specification of terminal preferences (10), we have

where . Therefore we obtain

The solution of (16) for any time can be written as follows:

for a function . Substituting (20) into (16), we obtain the following ODE:

where

and . The solution of (21) is

Therefore, the optimal policies are

The principal and the agent share the production risk, as both of them are risk averse. Rather than depending directly on output, the agent's consumption is linear in the logarithm of the utility process and varies with the values of and . With different parameters and , the agent's optimal effort differs. The principal's consumption is proportional to the current output and to the logarithm of the agent's utility process. Only the principal's consumption involves a time-dependent function, which captures the finite-horizon feature of the problem.
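
The time-dependent coefficient enters only through an ODE of the kind in (21), integrated backward from a terminal condition. The sketch below solves a generic linear stand-in k'(t) = alpha*k(t) + beta with k(T) = kT numerically and checks it against the closed form; alpha, beta, and kT are hypothetical placeholders, not the coefficients given in (22).

    import numpy as np
    from scipy.integrate import solve_ivp

    # Generic stand-in for the time-dependent ODE: k'(t) = alpha*k(t) + beta,
    # integrated backward from the terminal condition k(T) = kT.
    alpha, beta, kT, T = 0.3, 0.1, 0.0, 10.0

    sol = solve_ivp(lambda t, k: alpha * k + beta, t_span=(T, 0.0),
                    y0=[kT], dense_output=True, rtol=1e-8, atol=1e-10)

    def k_closed(t):
        # closed form of the same linear ODE, for comparison
        return (kT + beta / alpha) * np.exp(alpha * (t - T)) - beta / alpha

    for t in (0.0, 5.0, 10.0):
        print(t, float(sol.sol(t)[0]), k_closed(t))

With alpha > 0 in this stand-in, the backward solution converges as T grows, mirroring the passage to the infinite horizon discussed in the next paragraph.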

In order to facilitate comparison among the different cases, we extend the contract from a finite to an infinite horizon. Under full information, we can solve the infinite horizon problem directly. In the private information models, however, our results apply only to finite horizon cases, so we take the limit of those solutions as . Through calculation we get , and one can verify that is the solution of the infinite horizon problem. The state variables for the infinite horizon evolve as follows:

The principal's assets follow an arithmetic Brownian motion with constant drift and volatility, while the agent's promised utility follows a geometric Brownian motion, the same result as in Williams [6]. The expected growth rate of the agent's promised utility equals the difference between the discount rate and the rate of return .

4. The Hidden Action Case

We now consider the private information model in which the agent's effort and the shocks are unobservable but the assets can be observed by the principal. We assume that the principal can observe the agent's consumption and wealth, and thus we set and . In order to make the agent put forth the desired amount of effort, the principal must provide incentives. When the agent faces a given history-dependent contract, we derive the agent's optimality conditions by solving his dynamic moral hazard problem, and we then show that the agent's first-order conditions are sufficient to ensure implementability. This yields an explicit solution for the optimal contract.

4.1. The Agent’s Problem

It is crucial to determine what effort level the agent would choose when the principal designs an incentive compatible contract. As discussed in Section 2.4, the change of variables is key to handling contracts that depend on the history of output, so we take the density process (12) to be the relevant state variable. Using , the agent's preferences can be written as follows:

The first equality expresses expected discounted utility under the measure over output induced by the agent's effort policy , as discussed in Appendix A.1. Using the density process defined above, the second equality shows that the history-dependent variables inherited from the contract are replaced with the relevant state variables. The agent's problem is to solve

subject to (12) for a given .

After changing variables, the agent's problem becomes a control problem with random coefficients. Following the method of Bismut [9], we apply the stochastic maximum principle to characterize the agent's optimality conditions, as in Williams [5]. As with the deterministic Pontryagin maximum principle, using the stochastic maximum principle requires defining a Hamiltonian, expressing the optimality conditions by differentiating it, and deriving adjoint or "costate" variables. Since the state variable is stochastic, the adjoint consists of a pair of processes, one multiplying the diffusion of the state and the other the drift. The pair of adjoint processes solves a BSDE. With state and adjoint , we define the agent's Hamiltonian as

Since there is no drift in , the Hamiltonian does not include the level of the adjoint . Furthermore, since , instead of the Hamiltonian we state the conditions in terms of the reduced Hamiltonian .
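
To make the construction concrete under our assumed notation (flow utility $u(c,e)$, output drift $f(e)$, density state $\Gamma_t$, and adjoint diffusion loading $q_t$; none of these symbols is taken from the paper, and discount factors are suppressed), the Hamiltonian and the reduced Hamiltonian would look like
\[
H(\Gamma,c,e,q)=\Gamma\,u(c,e)+q\,\Gamma\,\frac{f(e)}{\sigma},
\qquad
\hat H(c,e,q)=\frac{H}{\Gamma}=u(c,e)+q\,\frac{f(e)}{\sigma},
\]
so that the level of the adjoint never appears (the density has no drift) and every condition can be stated per unit of density. This is only a sketch of the structure; the paper's exact expressions are elided in this version.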

As in deterministic optimal control theory, the optimal control maximizes the Hamiltonian, and the derivatives of the Hamiltonian govern the evolution of the adjoint or costate variables. In particular, the drift of the costate consists of and a term reflecting discounting. Adding a diffusion term, the evolution of the adjoint variable can be written as

Here, through the change of measure, we obtain the second equality, which shows that the costate associated with the relative density is the promised utility in (15). In the hidden action case, using the relative density is not merely a convenience as it was under full information. Here it becomes an element of the agent's optimality conditions, capturing the shadow value of changes in the likelihood of different output paths.

In what follows, we require a process if . We now give necessary conditions for optimality; detailed proofs of all results are in Appendix A.2.

Proposition 1. Let be an optimal control-state pair. Then there exists an -adapted process in that satisfies (29) with . Moreover, for almost every , the optimal choice satisfies

If is convex, then, for all , an optimal control satisfies

Although the results are similar to those of Williams [5], our reduced Hamiltonian differs from that of Williams [6]. Since the result gives only necessary conditions for the agent's problem, the set of contracts characterized by the agent's first-order conditions alone may be larger than the set of implementable contracts. In the next subsection we show that the first-order approach is nevertheless valid for defining the optimal contract.

4.2. Implementable Contracts

In order to induce the agent to provide an interior target effort , we build in the incentive constraints by using the first-order condition (31), which holds with equality at :

Using (32), we define the target volatility , which can be expressed in terms of the target effort and the consumption. Since , promised utility rises after a positive shock, that is, after a positive increment of the Brownian motion.
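
Under the reduced Hamiltonian sketched in Section 4.1 (again in our assumed notation), setting $\partial\hat H/\partial e=0$ at the target effort $\hat e_t$ gives a target volatility of the form
\[
q_t=-\,\sigma\,\frac{u_e(c_t,\hat e_t)}{f'(\hat e_t)}>0,
\]
since effort is costly ($u_e<0$) and marginal productivity is positive ($f'>0$). This is a sketch of how (32) ties the volatility to the target effort and consumption, up to the normalization elided above, and it is consistent with the statement that promised utility rises after a positive shock.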

A contract is called locally incentive compatible if it satisfies the first-order condition (32). We now characterize the set of implementable contracts. When the agent faces a contract with consumption , target effort , and associated volatility , condition (32) ensures that the agent's optimal response is to provide the target effort, which implies

In the hidden action case, the reduced Hamiltonian is concave in , so condition (32) is necessary and sufficient for local optimality. We call a contract promise-keeping if it implies a solution to (29). Since the terminal condition may not be satisfied, not all contracts meet this requirement. As in the full information case, if this solution has , we say that the contract satisfies the participation constraint. The set of implementable contracts can then be characterized by these conditions, as in the next result.

Proposition 2. A contract is implementable in the hidden action case if and only if it satisfies (i) the participation, (ii) promise-keeping, and (iii) locally incentive compatible constraints.

The proof of Proposition 2 is in Appendix A.2. As discussed above, this establishes the validity of the first-order approach: in the setting of this paper, the global optimality conditions are equivalent to the local first-order condition.

4.3. The Optimal Contract

We now show how to obtain an explicit solution for the contract, which determines the principal's choices. As above, we define the value function as . The contract offered by the principal must satisfy the participation, promise-keeping, and incentive constraints. Unlike the full information case, where the volatility variable could be chosen freely, here the volatility is a function of consumption and effort. The principal's HJB equation for can be written as

with the terminal condition and . Using the form of in (32) and suppressing function arguments, the first-order conditions for are as follows:

In the exponential-linear environment, the value function and the optimal policies under hidden actions have the same form as under full information, up to different constants. In particular, the value function takes the same form

for some function of . After some calculations, the optimal policies are

where both and are constants. Using the fact that and substituting (36) into the first-order conditions for , we see that and satisfy

Solving (38), we have

Consequently, we obtain an implicit expression for by substituting (39) into the first equation in (38).

Substituting the optimal policies into (34), we find that the function satisfies an ODE of the same form as in the full information case:

where

and the terminal condition is . The solution of (40) is

While the policy functions in the full information and hidden action cases have the same form, their constants differ. The values of the constants can be found through a simple numerical task, but explicit analytic expressions are not available. When , the policies in the hidden action case collapse to the full information solution with and , which is natural since there is no private information when output is unaffected by shocks. For small , we obtain and . Therefore , which implies that the agent's utility under moral hazard is more responsive to incoming information than under full information.
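
Because (38) only pins the constants down implicitly, they must be found numerically. The sketch below illustrates the numerical task with a hypothetical fixed-point condition g(e) = 0 for the effort constant; g, its parameters, and the variance correction are placeholders of ours, chosen only so that g reduces to the full-information condition (marginal product equals marginal effort cost) when sigma = 0 and so that effort falls as sigma grows, in line with the approximations below.

    import numpy as np
    from scipy.optimize import brentq

    a, b, sigma, gamma, r = 1.0, 0.5, 0.2, 2.0, 0.05   # hypothetical parameters

    def g(e):
        # hypothetical fixed-point condition: linear marginal product a - b*e
        # equals marginal effort cost e, distorted by a variance term
        return (a - b * e) - e * (1.0 + gamma * r * sigma**2)

    e_star = brentq(g, 0.0, a / b)   # g changes sign on (0, a/b)
    print(e_star)                    # equals a / (1 + b) when sigma = 0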

To better understand the optimal contract, we expand and when is near zero. From (38) and (39), we have the following approximations:

Substituting into , we have

Thus the information frictions reduce the amount of effort provided by the agent but leave consumption almost unchanged. In the contract, is the agent's effective rate of return, which can be regarded as an after-tax return on savings. From the approximation of , this return decreases as the volatility grows. Furthermore, effort varies with the parameters in a simple way: a greater risk aversion parameter or return parameter , or smaller parameters and , leads to larger reductions in effort. Below we show that the exact solutions for the model with specific parameters accord with these approximations. In sum, the information friction has little effect on consumption but leads to a reduction in effort.

As in the full information case, these results apply only to finite horizons, but the limit of the finite horizon solution is well defined. As , we have . In the infinite horizon limit, the evolution of the state variables can be written as

Compared with full information, the expected growth rate of promised utility under moral hazard is larger, since for small , and the volatility of promised utility is also larger, since .

5. The Hidden Savings Case

We now consider the case in which the principal cannot monitor the account in which the agent is able to save and borrow. As before, the agent's wealth satisfies (4), but the principal can no longer guarantee that the agent's consumption equals his payment. In order to eliminate savings and achieve the targets and , the principal must design a contract under which the agent has no incentive to save. As in the hidden action case, we derive the agent's optimality conditions when he faces a given history-dependent contract. In the hidden savings case, however, the set of implementable contracts can no longer be characterized by the agent's first-order optimality conditions alone. Using the necessary optimality conditions, we obtain a candidate optimal contract and then demonstrate that this contract is indeed incentive compatible and implementable. The same approach is used in Farhi and Werning [29]. In general, designing the contract requires an additional endogenous state variable to capture the shadow value (in terms of the agent's marginal utility) of the hidden state. Below we show that this additional variable is redundant in our setting.

5.1. The Agent’s Problem

As in the hidden action case, we derive the agent's first-order optimality conditions. The agent now chooses his consumption as well as his effort. Thus the problem is to solve

subject to (12) and (13) for a given . According to the stochastic maximum principle, the additional state variable requires an additional adjoint, denoted by . On the basis of the states and the adjoints and , we define the agent's Hamiltonian as , with

where we use (13). We obtain the evolution of the costates by differentiating the Hamiltonian. The promised utility , with terminal condition , follows (15), and its sensitivity is . The sensitivity of the costate variable associated with wealth is ; thus the evolution of the adjoint can be written as

As in the hidden action case, (48) is derived by differentiating with respect to and changing measure. Using the stochastic maximum principle, we obtain necessary conditions for the agent's optimal choices.

Proposition 3. Let be an optimal control-state pair. Then there exist -adapted processes and in that satisfy (15) and (48) with . Moreover, for almost every , the optimal control satisfies

Assume that is convex; then, for all , a pair of optimal controls satisfies

We provide the proof of Proposition 3 in Appendix A.2. Differentiating the reduced Hamiltonian, we obtain the first-order conditions:

It is clear that equals the agent's marginal utility of consumption, so a small change in wealth is valued at the current marginal utility of consumption. The solution of (48) can be written in the following form:

By the law of iterated expectations, for , the standard consumption Euler equation holds:

Thus the agent's optimality conditions can be expressed in terms of the expectation of the marginal utility of consumption, and the volatility of marginal utility is shaped by the contract.
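
Written out under our assumed notation (discount rate $\rho$, risk-free return $r$, marginal utility $u_c$), the statement is the standard continuous time consumption Euler equation
\[
u_c(c_t)=\mathbb{E}_t\!\left[e^{(r-\rho)(s-t)}\,u_c(c_s)\right],\qquad s\ge t,
\]
equivalently, marginal utility discounted at $\rho$ and compounded at the risk-free rate, $e^{(r-\rho)t}u_c(c_t)$, must be a martingale. This is a sketch of the familiar no-saving condition rather than the paper's exact equation.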

5.2. Necessary Conditions for Implementability

In order to achieve the target policies and , both marginal utility and promised utility are crucial for an implementable contract. In our setting, since utility and marginal utility are both exponential, the additional costate variable is redundant, as we now verify. At the terminal time , we have

so is proportional to . Combining and (52), we obtain

Therefore, for all , and the additional costate variable is redundant: it provides no more information than the promised utility .

Substituting and into the first-order conditions, we get

As in the hidden action case, a contract is called locally incentive compatible if these conditions are satisfied. In other words, the target controls would satisfy the agent's optimality conditions if the agent were to hold no wealth . However, when the agent chooses a different amount of wealth and different actions, the contract may not be fully incentive compatible, so we must rule these cases out. As discussed in Williams [8], concavity of the Hamiltonian in is a sufficient condition for implementability; this is similar to the sufficiency condition for the maximum principle in Zhou [30]. Unfortunately, it is difficult to verify this concavity assumption in our model. Instead of establishing implementability directly, we find a candidate optimal contract by using the necessary conditions for implementability and then verify that the contract is indeed incentive compatible.

Proposition 4. An implementable contract in the hidden savings case with target satisfies (i) the participation, (ii) promise-keeping, and (iii) locally incentive compatible constraints.

We give the proof of Proposition 4 in Appendix A.2.

5.3. The Optimal Contract

In order to choose a contract that satisfies the necessary conditions, we define the value function as . In the hidden savings case, given the consumption and the volatility , the principal's HJB equation for is

where the terminal condition is and . The first-order conditions for are

Up to different constants, the optimal policies and the value function have the same form as in the previous cases. In fact, the value function takes the form

for some function . The optimal policies are

where is a constant. The agent's optimality conditions determine the policies and when . Substituting (59) into (57), we obtain an ODE for :

where

and the terminal condition is . The solution of (61) is

Similar to the optimal effort choice in the hidden action case, we have

All of these results in the hidden savings case agree with those in the hidden action case if . When the agent has access to an asset that cannot be monitored by the principal and that yields the risk-free return , the principal cannot alter the agent's effective return by distorting the allocation intertemporally. Thus hidden savings limit the principal's ability to provide intertemporal incentives.

In the infinite horizon limit, the evolution of the state variables can be written as

The evolutions of output and promised utility under hidden savings are the same as those under hidden actions if . Since the principal is unable to affect the agent's intertemporal incentives in the hidden savings case, the expected growth rate of promised utility under hidden savings is smaller than under hidden actions and equals that under full information.

5.4. Verifying Incentive Compatible

Using the agent's necessary optimality conditions, we have obtained a candidate optimal contract that may not be incentive compatible. In order to show that the contract is indeed implementable, we now explicitly solve the agent's problem when he faces this contract.

Under the contract, the agent's payment depends on the promised utility . From the agent's viewpoint, evolves as

with , where the Brownian motion represents the principal's information set under the optimal contract, is the actual effort provided by the agent, and is the target effort specified in the contract. The expected growth rate of promised utility increases if , which the principal interprets as a negative shock. From the terminal condition , the terminal payment is given by the inverse function of . Without loss of generality, we assume that the agent's initial wealth is , since initial wealth is observable and can be taxed away by the principal. The evolution of wealth can be written as

As above, we define the agent's value function as . Then the HJB equation is

with the terminal condition

We can verify that is the agent's value function for all . Substituting into (68), we obtain the first-order conditions for :

Simple calculations show that these two first-order conditions give , so the target effort level is achieved. No incentive problems arise from effort deviations or hidden savings here, since effort is independent of . The optimality condition for is

Combining (67) and (71) gives . Under the assumption , the agent always keeps his wealth at zero and consumes the amount specified by the contract. Therefore the candidate optimal contract is indeed implementable.

6. Comparing the Different Cases

Table 1 summarizes the optimal policies of the contracts under full information, hidden actions, and hidden savings. The policy functions take similar forms. In each case, the agent's effort is constant, the agent's consumption is proportional to , and the principal's dividend is proportional to and to the current assets . Both consumption functions are also negatively related to the agent's effective rate of return on assets. The effective rate of return is under full information and hidden savings, and under hidden actions. Compared with full information, the main effect of hidden actions is the reduction of the agent's effort. The effective rate of return falls to and , which at least partially offsets the effect of the fall in effort; thus there is almost no effect on consumption. The main difference between the hidden action and hidden savings cases comes from the agent's effective rate of return on savings. The optimal policies in the hidden action case would coincide with those in the hidden savings case if the effective return on savings were . Under hidden savings, however, the principal's ability to provide intertemporal incentives is limited. From the derivative of effort with respect to , effort is decreasing in if . As discussed in the hidden action case, this certainly holds for small shocks, when is just below . Since both and fall as and grow, the agent provides less effort and consumes less in the hidden savings case than in the hidden action case.

In the full information, hidden action, and hidden savings cases, we set , , , , , , , and the resulting effort and consumption functions are shown in Figure 1. The left panel plots the effort functions for varying , while the right panel plots the consumption functions for varying . Since the information friction vanishes as , the results in all cases coincide there. Compared with full information, effort in both the hidden action and hidden savings cases falls more rapidly as grows. Moreover, for small , consumption under hidden savings falls sharply while consumption under hidden actions is relatively unaffected, and both effort and consumption fall further under hidden savings than under hidden actions. As already noted, all of these effort functions are monotone in .

Since different agents typically differ in their ability to manage risky assets, it is meaningful for the principal to consider the effect of marginal productivity. In our model, marginal productivity is described by the parameters and . Figure 2 shows how the effort functions in the full information, hidden action, and hidden savings cases change with and . In the left panel, we set , , and plot effort against . In the right panel, we set , , and plot effort against . In all cases the agent's effort increases with both and . Effort under full information is linear in , while under hidden actions and hidden savings it increases with but is not linear. Since the size of the effort gap between the cases is determined by the shock , the difference between the cases remains constant in the right panel, which is unaffected by . As in the previous results, effort under hidden actions is larger than under hidden savings but smaller than under full information. If the shock were , the effort levels for the parameterizations in Figure 2 would coincide.

Finally, in order to measure the cost of information frictions, we examine the reductions in the principal's consumption. In the three information structures, the principal's dividend involves a function of time . That is, for each fixed level of assets and promised utility , at each date the difference in the principal's consumption between two informational assumptions is a constant amount depending on the information structure. In the infinite horizon limit, the reductions in the principal's consumption in the hidden action and hidden savings cases, relative to the full information case, are shown in Figure 3. With and , the largest cost of information frictions is relatively low: approximately 2.8% under hidden savings and 2.4% under hidden actions. Since the agent's marginal productivity falls and so does the agent's effort, the cost in our setting is smaller than that in Williams [30]. Of course, with lower levels of promised utility or greater output , the proportional reduction in the principal's dividend is much lower.

7. Conclusion

In this paper, we find explicit solutions for optimal contracts under full information, hidden actions, and hidden savings in a dynamic principal-agent model. In a continuous time setting, we can conveniently apply several powerful results from stochastic control. Under the assumptions of exponential utility and linear production, we solve the optimal contracts explicitly and give their expressions.

We show that in a fully dynamic setting with exponential utility the optimal contract is linear. The optimal effort in every case is constant and decreases as marginal productivity declines. The agent's payment or consumption is proportional to the logarithm of promised utility and to the effective rate of return under the optimal contract. The principal's consumption is also proportional to the assets , the logarithm of promised utility, and time-dependent functions. Moreover, the main effect of hidden actions is a reduction in effort, with a smaller effect on consumption, since the agent's implicit rate of return is only slightly affected in this case. Under hidden savings, the reductions in both effort and consumption are larger than under hidden actions.

In real economic activity, it is reasonable and practical to describe economic features with nonlinear relations. A natural extension of the current model is to consider nonlinear productivity, which we leave for future work.

Appendix

A. The Change of Measure and Some Proofs

A.1. Details of the Change of Measure

Here we give the technical details of the change of measure in Section 2.4. We take the underlying probability space to be the space of continuous functions on which the induced distribution is defined. We let represent the family of coordinate functions, let the sample space be , and let represent the filtration generated by . We let denote the Wiener measure on , and the filtration includes and the null sets of . On the filtered probability space defined by these statements, we define the Brownian motion . It is clear that is continuous, invertible, and bounded, as is constant. Therefore, the regularity conditions required by Elliott [10] are satisfied, and these conditions guarantee that there exists a unique strong solution to the following stochastic differential equation:

with initial condition . This evolution of output is obtained under an effort policy that makes the drift of output zero at each time . Different effort choices then lead to different evolutions of output through changes in the distribution over outcomes in . For a given contract with the principal's consumption , we define the drift of output as

Since the drift is linear in , the predictability, continuity, and linear growth conditions in Elliott [10] are satisfied. We define the family of -predictable processes for :

where for all and is a martingale. By the Girsanov theorem, we define a new measure by

and the process is defined by

where is a Brownian motion under . Combining the new process with equation (A.1), the output follows the SDE

Hence different effort choices induce different Brownian motions . Moreover, defined above is the relative density process for the change of measure and satisfies .
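
As a quick numerical sanity check of the construction above (with our own placeholder value for the drift-to-volatility ratio), the relative density built from the exponential martingale should have mean one at every date, which a short Monte Carlo simulation confirms.

    import numpy as np

    # Monte Carlo check that Gamma_T = exp(int theta dW - 0.5 int theta^2 dt)
    # has mean one, as required for the Girsanov change of measure;
    # theta = f/sigma is a hypothetical constant here.
    rng = np.random.default_rng(1)
    T, n, paths = 1.0, 200, 100_000
    dt = T / n
    theta = 0.5   # placeholder drift-to-volatility ratio f(e)/sigma

    dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
    log_gamma_T = np.sum(theta * dW - 0.5 * theta**2 * dt, axis=1)
    gamma_T = np.exp(log_gamma_T)

    print(gamma_T.mean())   # close to 1.0 up to Monte Carlo error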

A.2. Proofs of Results

Proof of Proposition 1. In order to apply the results in Bismut [9, 10] to our problem, some basic regularity conditions on the fundamentals must be satisfied. As discussed above, is continuous, invertible, and bounded since is a constant. The drift of , denoted , is linear and continuous and thus satisfies a linear growth condition. In addition, both the period utility and the terminal utility are continuously differentiable. Therefore, all the required conditions hold. We now derive the Hamiltonian and the evolution of the adjoint equation (29) by applying the maximum principle to the system with as the state variable. Note from (12) that satisfies

As in Bismut [10], the Hamiltonian for the problem is

where

As , we have the same optimality condition (30) with and . The adjoint variable then follows the BSDE

Let be an optimal control-state pair. Carrying out the differentiation and simplifying, we have

Thus there exists an adapted process in that satisfies (29) with .

Proof of Proposition 2. The result is an extension of a theorem in Schattler and Sung [20]. If the contract is implementable, then the participation constraint must be satisfied, and by Proposition 1 the implementable contract must satisfy promise-keeping and be locally incentive compatible; hence the necessity of the conditions holds. To show sufficiency, we need to verify that is an optimal control when the agent faces the contract . In what follows the expected utility is given by , and any contract that satisfies promise-keeping and has sensitivity has the following representation, obtained by integrating (29) forward:

For any , the following facts hold:

The first equality uses (A.12). The second equality follows from the definition of the change of measure between and . The third equality follows from the definition of the Hamiltonian function and simple calculations. The inequality follows from the local incentive constraint. Since the stochastic integral is a martingale, by the square integrability of the process, the final equality holds. As is arbitrary, the utility is the best level the agent can achieve, which is greater than the agent's reservation level by assumption. Thus is an optimal control, and the contract is implementable.

Proof of Proposition 3. We let be the drift of . The basic regularity conditions required in Bismut [9, 10] are clearly satisfied. Applying the maximum principle to the system with state variables and , we show how to obtain the Hamiltonian and the evolutions of the adjoints (29) and (48). Note from (13) that satisfies

As in Bismut [10], the Hamiltonian for the problem is

where . Using the same method as in the proof of Proposition 1, the adjoint variables follow the BSDE

where is the optimal control. Thus we arrive at (29) and (48).

Proof of Proposition 4. The necessity of the conditions follows directly, as in Proposition 2 above, since it is a consequence of Proposition 3.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.