Abstract

This paper is concerned with the nonlinear model predictive control of the harvesting effort of a renewable resource system, namely, sustainable seafood (farmed fish), with a nonlinear state equation. A solution approach is proposed and discussed, and numerical illustrations are provided.

1. Introduction

Many of the resources we use daily cannot be recovered once they are gone. Such resources are called nonrenewable resources. Because the exponential growth of industry has led to the overconsumption of nonrenewable resources, we have to turn to another type of resource, one that can replenish with the passage of time, either through biological reproduction or other naturally recurring processes. Such resources are called renewable resources; examples include biomass resources (fish, trees, food crops, algae, agricultural and forestry byproducts, and even methane fumes from landfills), water, wind, and solar energy. Renewable resources may be the source of many types of power for renewable energy. However, if the exploitation or consumption rate of a renewable resource exceeds its renewal rate, its renewal is no longer ensured. We could then run out of renewable resources, and for this reason, we should turn to sustainable fishing. Solutions to this problem involve several plans of action not only by governments but also by consumers and communities; everyone must play a larger role in trying to achieve better management of renewable resources. Mathematicians can also participate in this effort by modeling the phenomenon mathematically as an optimal control problem and solving it for the optimal harvesting rate that will ensure sustainable fishing.

Traditional optimal control theory has been used extensively to manage fishery resources effectively and efficiently, and a large number of papers are available on the subject. For an account of the theory and its applications, the reader is referred to the books by Clark [1, 2].

Model predictive control (MPC) is usually formulated as the repeated solution of an optimal control problem. Based on measurements obtained at time $t$, the controller determines the input such that a predetermined objective function is minimized over a control horizon of length $T$. The optimal input is implemented only until the next sampling instant $t + \delta$. Using the new system state at the new sampling time, the procedure is repeated. Allgöwer et al. [3] summarize a standard NMPC scheme as follows: (1) obtain estimates of the states of the system; (2) calculate an optimal input minimizing the desired cost function over the prediction horizon, using the system model for prediction; (3) implement the first part of the optimal input until the next sampling instant; (4) continue with (2).
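To make this scheme concrete, the following minimal Python sketch implements steps (1)-(4) as a generic receding-horizon loop; the toy dynamics, cost, horizon, and optimizer are illustrative choices of ours and are not taken from [3] or from the model studied below.

```python
# Minimal receding-horizon (NMPC) loop. The scalar toy model, costs, and horizon
# below are illustrative placeholders, not the model studied in this paper.
import numpy as np
from scipy.optimize import minimize

def simulate(x, u_seq, f, dt):
    """Predict the state trajectory with explicit Euler steps."""
    traj = [x]
    for u in u_seq:
        x = x + dt * f(x, u)
        traj.append(x)
    return np.array(traj)

def nmpc_step(x, f, stage_cost, terminal_cost, dt, u_guess):
    """Steps (1)-(3): from the current state estimate x, compute the input sequence
    minimizing the predicted cost and return its first element."""
    def cost(u_seq):
        traj = simulate(x, u_seq, f, dt)
        running = sum(stage_cost(xi, ui) for xi, ui in zip(traj[:-1], u_seq))
        return dt * running + terminal_cost(traj[-1])
    res = minimize(cost, u_guess, method="Nelder-Mead")
    return res.x[0], res.x  # apply only the first input (receding horizon)

# Step (4): repeat at every sampling instant.
f = lambda x, u: -0.5 * x + u           # toy dynamics (placeholder)
stage = lambda x, u: x**2 + 0.1 * u**2  # toy running cost (placeholder)
term = lambda x: 5.0 * x**2             # toy terminal penalty (placeholder)
x, dt, m = 2.0, 0.1, 10                 # state, sampling step, horizon length
u_guess = np.zeros(m)
for k in range(30):
    u0, u_guess = nmpc_step(x, f, stage, term, dt, u_guess)
    x = x + dt * f(x, u0)               # the plant advances one sampling interval
```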

It is customary to distinguish between linear MPC (LMPC), used when the system dynamics are described by linear models, and its extension, nonlinear MPC (NMPC), used when the system dynamics are described by nonlinear models.

MPC has received great interest during the last decades in the process industries, especially in chemical processes. Goodwin et al. [4] reported more than 2,000 applications of MPC. There are many excellent reviews of MPC; we cite the most recent one, Qin and Badgwell [5], which cites the others and reports more than 4,500 applications.

In the present paper, we aim at using nonlinear model predictive control, an optimal control approach that we have not seen applied in the context of renewable resources. We consider a resource biomass subject to harvesting. The evolution of the resource biomass is described by a nonlinear differential equation. The optimal harvesting effort is obtained in closed form. Since the model considered is nonlinear, NMPC is used to study a tracking problem. We hope to contribute to the knowledge about NMPC and to stimulate new research in this promising field.

We formulate the model in Section 2 and develop the solution procedure in Section 3. Simulation results are presented in Section 4. Section 5 discusses the importance of our results, and Section 6 concludes the paper.

2. Model Formulation

We consider a resource biomass of density $x(t)$ and denote by $E(t)$ the harvesting effort at time $t$. Denoting by $q$ the catchability coefficient, by $r$ the growth rate of the resource, and by $K$ its carrying capacity, the dynamics of the resource during the planning horizon are governed by the usual ordinary differential equation (see, for instance, Dubey et al. [6] and the references therein):
$$\frac{dx}{dt} = r\,x(t)\left(1 - \frac{x(t)}{K}\right) - q\,E(t)\,x(t). \tag{1}$$
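For reference in the simulations below, the right-hand side of (1) can be coded directly; in the following Python sketch, the notation follows the paper, while the default parameter values are illustrative placeholders of ours.

```python
# Logistic growth with proportional harvesting, equation (1):
#   dx/dt = r*x*(1 - x/K) - q*E*x
# Default parameter values are illustrative placeholders only.
def biomass_rate(x, E, r=0.3, K=100.0, q=0.05):
    """Time derivative of the resource biomass density x under harvesting effort E."""
    return r * x * (1.0 - x / K) - q * E * x
```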

Define the targets $\bar{x}$ (goal density of the resource biomass) and $\bar{E}$ (goal harvesting effort) and introduce the shifted variables:
$$z(t) = x(t) - \bar{x}, \qquad v(t) = E(t) - \bar{E}. \tag{2}$$

It is desired to have the state variable $x(t)$ converge toward its goal $\bar{x}$ and the control variable $E(t)$ converge toward its goal $\bar{E}$. Note that, in order to have $\bar{x}$ as an equilibrium of (1), one has to choose the goal harvesting effort:
$$\bar{E} = \frac{r}{q}\left(1 - \frac{\bar{x}}{K}\right). \tag{3}$$

Penalties $\rho \ge 0$ and $p \ge 0$ are incurred when the state and control variables, respectively, deviate from their goals, so that at any instant of time $t$ and given a prediction horizon of length $T$ (with $T > 0$), the controller seeks to determine the harvesting effort over $[t, t+T]$ that minimizes the following objective function:
$$J = \frac{\sigma}{2}\,z^{2}(t+T) + \frac{1}{2}\int_{t}^{t+T}\left[\rho\,z^{2}(s) + p\,v^{2}(s)\right]ds, \tag{4}$$
where $\sigma \ge 0$ represents the penalty of the terminal state.
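As a rough illustration, the objective (4) can be evaluated for a candidate effort profile as in the following Python sketch; the penalty symbols rho, p, and sigma follow the notation above, while all numerical values (penalties, model parameters, and goal density) are illustrative placeholders.

```python
# Sketch: evaluate the tracking objective (4) for a candidate effort profile E(s)
# on [t, t+T] by simulating (1) and integrating numerically. The penalty symbols
# rho, p, sigma follow the notation above; all numerical values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def goal_effort(x_bar, r=0.3, K=100.0, q=0.05):
    """Goal effort (3) making x_bar an equilibrium of (1)."""
    return (r / q) * (1.0 - x_bar / K)

def tracking_cost(E_of_s, x_t, t, T, x_bar, rho=1.0, p=0.1, sigma=2.0,
                  r=0.3, K=100.0, q=0.05, n=201):
    E_bar = goal_effort(x_bar, r, K, q)
    rhs = lambda s, x: r * x * (1.0 - x / K) - q * E_of_s(s) * x
    s_grid = np.linspace(t, t + T, n)
    x_path = solve_ivp(rhs, (t, t + T), [x_t], t_eval=s_grid).y[0]
    z = x_path - x_bar                                  # state deviation
    v = np.array([E_of_s(s) for s in s_grid]) - E_bar   # control deviation
    running = 0.5 * trapezoid(rho * z**2 + p * v**2, s_grid)
    return running + 0.5 * sigma * z[-1]**2             # add the terminal penalty

# Example: cost of simply holding the effort at its goal value
J = tracking_cost(lambda s: goal_effort(60.0), x_t=40.0, t=0.0, T=5.0, x_bar=60.0)
```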

3. Model Analysis

It is well known that the main drawback of MPC is the heavy computational burden involved in the optimization stage, which has limited its practical use for nonlinear systems. Various techniques have been used to alleviate this burden. In practice, successive linearizations, which result in a simple quadratic optimization problem, are often used; see, for example, Lawrynczuk [7].

It is widely accepted that approximate solutions are preferable to exact ones in order to remove the computational burden from the online implementation. Our approach consists in using the trapezoidal rule to approximate the definite integral in (4). To this end, put
$$f(s) = \rho\,z^{2}(s) + p\,v^{2}(s), \tag{5}$$
and divide the prediction horizon $[t, t+T]$ into $m$ subintervals of equal length $\delta = T/m$. Using the trapezoid formula for $m$ intervals, the objective function (4) becomes
$$J \approx \frac{\sigma}{2}\,z^{2}(t+T) + \frac{\delta}{2}\left[\frac{f(t) + f(t+T)}{2} + \sum_{i=1}^{m-1} f(t+i\delta)\right], \tag{6}$$
where, as in (5),
$$f(t+i\delta) = \rho\,z^{2}(t+i\delta) + p\,v^{2}(t+i\delta), \qquad i = 0, 1, \ldots, m. \tag{7}$$
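The resulting discretization is simply a weighted finite sum; the following short Python sketch (our own helper, with placeholder sample values) makes the trapezoid weights in (6) and (7) explicit.

```python
# Sketch of the trapezoid discretization (6)-(7): the integral in (4) is replaced by a
# weighted sum of f(t + i*delta) = rho*z^2 + p*v^2 at the m + 1 grid points.
import numpy as np

def discretized_cost(z_samples, v_samples, T, rho=1.0, p=0.1, sigma=2.0):
    """z_samples, v_samples: deviations sampled at t + i*delta, i = 0,...,m, delta = T/m."""
    z = np.asarray(z_samples, dtype=float)
    v = np.asarray(v_samples, dtype=float)
    m = len(z) - 1
    delta = T / m
    f = rho * z**2 + p * v**2                 # integrand values, as in (5) and (7)
    w = np.full(m + 1, delta)                 # trapezoid weights delta*[1/2, 1, ..., 1, 1/2]
    w[0] = w[-1] = delta / 2.0
    return 0.5 * float(w @ f) + 0.5 * sigma * z[-1]**2

# Placeholder samples on a grid of m = 10 subintervals
z = np.linspace(2.0, 0.2, 11)
v = np.linspace(-1.0, 0.0, 11)
J_approx = discretized_cost(z, v, T=5.0)
```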

To deal with the terms $z^{2}(t+i\delta)$ in (7), we write the linear approximation of the state variable as follows:
$$z(t+i\delta) \approx z(t) + i\delta\,\dot{z}(t),$$
where $\dot{z}(t) = \dot{x}(t)$ is given by the state equation (1). Substituting the dynamics makes each predicted deviation $z(t+i\delta)$ an affine function of the control deviation, so that, after taking the square, each term $f(t+i\delta)$ in (7) becomes a quadratic function of the control deviation. Let $\mathbf{v}$ denote the vector of control deviations at the sampling instants of the prediction horizon. After some lengthy computations, we put the objective function (6) under its final form:
$$J = \frac{1}{2}\,\mathbf{v}^{\mathsf{T}} Q\,\mathbf{v} + L^{\mathsf{T}}\mathbf{v} + C, \tag{14}$$
where the diagonal matrix $Q$, the vector $L$, and the scalar $C$ depend on the current state $x(t)$, on the model parameters, and on the penalties.

Note that $C$ is independent of the control and that the diagonal matrix $Q$ is positive definite and invertible provided $p > 0$. We derive the optimal control in the following two cases.

Case 1 (p > 0). Clearly, the minimum of $J$ is reached at $\mathbf{v}^{*}$ such that
$$\mathbf{v}^{*} = -\,Q^{-1} L. \tag{16}$$
As mentioned in the Introduction, in the receding horizon, only the first component $v^{*}(t)$ of the vector $\mathbf{v}^{*}$ is implemented until the next sampling instant, and then the whole process is repeated. Since $Q$ is diagonal, (16) yields $v^{*}(t)$ explicitly, and hence the optimal harvesting effort $E^{*}(t) = \bar{E} + v^{*}(t)$. Since our previous analysis is valid for any $t$, we substitute the expression of $E^{*}(t)$ in the state equation (1) to obtain the following differential equation:
$$\frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right) - q\,E^{*}(t)\,x,$$
which can be solved numerically on the interval $[t, t+\delta]$ with the initial value $x(t)$, for any $t$.
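As an illustration of this receding-horizon step, the Python sketch below assumes a first-order (Euler) prediction of $z(t+i\delta)$ that is affine in the current control deviation $v(t)$, builds the resulting diagonal matrix $Q$ and vector $L$, implements the first component of $\mathbf{v}^{*} = -Q^{-1}L$, and integrates the state equation over one sampling interval. It reproduces the structure of (14) and (16), not necessarily the exact coefficients derived above, and all parameter values are illustrative.

```python
# Sketch of one receding-horizon step for Case 1 (p > 0). The prediction used here,
#   z(t + i*delta) ~ z(t) + i*delta*zdot(t),
# is an assumed first-order (Euler) approximation that is affine in the current
# control deviation v(t); it illustrates the structure of (14) and (16) but is not
# necessarily the paper's exact construction. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def mpc_step_case1(x_t, t, T, m, x_bar, rho=1.0, p=0.1, sigma=2.0,
                   r=0.3, K=100.0, q=0.05):
    delta = T / m
    E_bar = (r / q) * (1.0 - x_bar / K)                  # goal effort, equation (3)
    z = x_t - x_bar
    g = r * x_t * (1.0 - x_t / K) - q * E_bar * x_t      # zdot(t) evaluated at v(t) = 0
    i = np.arange(m + 1)
    alpha = z + i * delta * g                            # constant part of z(t + i*delta)
    beta = -i * delta * q * x_t                          # coefficient of v(t)
    w = np.full(m + 1, delta); w[0] = w[-1] = delta / 2  # trapezoid weights
    Qdiag = w * p                                        # control-penalty part of Q
    Qdiag[0] += rho * np.sum(w * beta**2) + sigma * beta[-1]**2
    Q = np.diag(Qdiag)
    L = np.zeros(m + 1)
    L[0] = rho * np.sum(w * alpha * beta) + sigma * alpha[-1] * beta[-1]
    v_star = -np.linalg.solve(Q, L)                      # minimizer (16)
    E_star = E_bar + v_star[0]                           # implement the first component only
    rhs = lambda s, x: r * x * (1.0 - x / K) - q * E_star * x
    x_next = solve_ivp(rhs, (t, t + delta), [x_t]).y[0, -1]   # closed-loop state update
    return E_star, x_next, Q, L

E_star, x_next, Q, L = mpc_step_case1(x_t=40.0, t=0.0, T=2.0, m=10, x_bar=60.0)
```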

Case 2 (p = 0). In this case, the inverse matrix $Q^{-1}$ does not exist; however, the objective function has a simpler form, and its minimum is reached at the point where its derivative with respect to the control vanishes, which yields, after computations, an explicit expression for the optimal harvesting effort $E^{*}(t)$. Similarly to the previous case, the state equation is used to find the state variable over the interval $[t, t+\delta]$: substituting $E^{*}(t)$ in (1) leads to a differential equation of the form
$$\frac{dx}{ds} = a\,x - b\,x^{2},$$
where $a$ and $b$ are constant over the sampling interval. Observe that the right-hand side is quadratic in $x$, which means that the previous differential equation is a Riccati differential equation and has an explicit solution. To find that solution, we compute the derivative of $1/x$, find that it satisfies a linear first-order differential equation, and conclude that the solution has the form
$$x(s) = \frac{1}{c_{1}\,e^{-a(s-t)} + c_{2}},$$
where $c_{1}$ and $c_{2}$ are constants determined by $a$, $b$, and the initial value $x(t)$.
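For completeness, here is the standard substitution behind this explicit solution, written with the generic coefficients $a$ and $b$ assumed above and with $x(t) = x_{t}$ as the initial value at the start of the sampling interval. Setting $u(s) = 1/x(s)$ gives
$$\frac{du}{ds} = -\frac{\dot{x}}{x^{2}} = -a\,u + b,$$
a linear first-order equation, whose solution is
$$u(s) = \frac{b}{a} + \left(\frac{1}{x_{t}} - \frac{b}{a}\right) e^{-a(s-t)}, \qquad\text{hence}\qquad x(s) = \left[\frac{b}{a} + \left(\frac{1}{x_{t}} - \frac{b}{a}\right) e^{-a(s-t)}\right]^{-1},$$
so that, in the notation above, $c_{2} = b/a$ and $c_{1} = 1/x_{t} - b/a$.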

4. Numerical Simulations

We implement the results obtained in the previous section for the case $p > 0$, in which the differential equations are solved numerically. The case $p = 0$ is much easier due to the explicit form of the solutions of the differential equations.

4.1. Base Case

For a given time horizon, we use in all the following simulations a fixed instant of time $t$ and a prediction horizon of length $T$.

Consider the base-case values of the model parameters. Figures 1 and 2 show that the solutions $x$ (the blue continuous curve) and $E$ (the blue discontinuous curve) converge to their corresponding targets $\bar{x}$ (the multicolored line) and $\bar{E}$ (the golden line) over the interval $[t, t+T]$, respectively. Figure 3 shows that the objective function goes to zero, as we expected, at the end of the prediction interval. We point out that the value of $J$ is computed at any time by substituting the expression of the optimal control (16) in equation (14), and we obtain
$$J^{*} = C - \frac{1}{2}\,L^{\mathsf{T}} Q^{-1} L.$$
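This formula can also be checked numerically: substituting $\mathbf{v}^{*} = -Q^{-1}L$ into (14) must reproduce $C - \frac{1}{2}L^{\mathsf{T}}Q^{-1}L$. The following short Python sketch performs this check with a small illustrative $Q$, $L$, and constant term $C$ (placeholders, not the base-case values).

```python
# Sketch: value of the objective along the optimal control for the quadratic form (14),
# J(v*) = C - 0.5 * L^T Q^{-1} L. The Q, L, and C used below are placeholders.
import numpy as np

def optimal_cost(Q, L, C):
    v_star = -np.linalg.solve(Q, L)                    # minimizer (16)
    direct = 0.5 * v_star @ Q @ v_star + L @ v_star + C
    closed_form = C - 0.5 * L @ np.linalg.solve(Q, L)
    assert np.isclose(direct, closed_form)             # the two evaluations agree
    return closed_form

# Example with a small illustrative diagonal Q and vector L
Q = np.diag([2.0, 0.5, 0.5])
L = np.array([-1.0, 0.0, 0.0])
J_star = optimal_cost(Q, L, C=1.0)
```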

In this case, we found the corresponding optimal value of $J^{*}$.

4.2. Sensitivity Analysis

For the study of the sensitivity of the model, we varied three parameters, one at a time, while the other parameters keep their base-case values; the same study can be conducted for the remaining parameters.

Our numerical results are presented in Figures 4–6. Figure 4 shows that the objective function increases when the parameter $r$ increases from 0.1 to 0.4. Similarly, Figures 5 and 6 show that $J$ increases when the values of the parameter $k$ and of the third varied parameter increase, respectively, from 1 to 20 and from 1 to 60. These results are exactly what we expected.

5. Discussion

The growing human need for more food and energy has led to increased exploitation of several biological resources. Humans today extract and use around 50% more natural resources than only 30 years ago, about 70 billion tons of raw materials a year. At the same time, there is a global concern to protect the ecosystem at large. Sustainable resource use is required: high resource consumption is not a prerequisite for a high quality of life.

To avoid overexploitation, and hence possibly extinction, of a resource, policies have to be devised to restrict the amount of fish caught. Many alternatives have been proposed, e.g., Behringer and Upmann [8]:(i)The fisher undertakes several fishing journeys within the fixed time horizon(ii)The fisher is granted a residual or salvage value for the stock of the resource remaining at the end of the fishing period(iii)The establishment of harvesting quotas or catch shares(iv)The specification of a minimum mesh size

We propose in this paper the alternative of specifying a goal density of the resource biomass $\bar{x}$. This could be, for example, a fraction of the carrying capacity $K$. This goal automatically generates a goal harvesting effort $\bar{E}$ by (3). A regulating agency imposes penalties when either the density of the resource biomass or the harvesting effort deviates from its goal. This approach of specifying a target is widely used in other areas such as engineering (see, for example, Udwadia [9] and the references therein, Çimen and Banks [10], Tan et al. [11], and Hedjar and Bounkhel [12]) and production planning (see Hedjar et al. [13], Hedjar et al. [14], Tadj et al. [15], and Bounkhel et al. [16]).

A novel and effective use of the MPC method is successfully demonstrated in the numerical simulation section. Contrary to the maximum principle, which determines at time zero the optimal harvesting rate for the whole planning interval, MPC achieves a better and more flexible control by determining the optimal harvesting rate at any time $t$ on successive small prediction intervals $[t, t+T]$, until the end of the planning horizon. On each small interval, the state and control variables are tracked toward their respective goals.

6. Summary and Future Research

We have considered in this paper a resource biomass subject to harvesting. It is sought to maintain the density of the resource biomass and the harvesting effort at some desired levels. Since the dynamics of the system are nonlinear, a nonlinear model predictive control approach is applied. The optimal harvesting effort is obtained in closed form. When the penalty for the harvesting effort to deviate from the desired harvesting rate is zero, the optimal density of the resource biomass is obtained explicitly too. When this penalty is not zero, the optimal density of the resource biomass is obtained as the numerical solution of a differential equation. In this case, we have provided numerical examples to show how the optimal state and control variables are obtained. We have also calculated the optimal objective function value and numerically shown the effect of the penalties on this optimal cost.

This first application of nonlinear model predictive control to renewable resources can be further investigated in different ways. We propose the following possible extensions that may be worth investigating: (i) The catchability coefficient $q$ is defined as the average proportion of a stock that is taken by each unit of fishing effort. We have assumed here that it is constant. However, $q$ may be dynamic (increase or decrease over time) due to many factors, such as changes in fishing methods or technology or a gain of experience and skill over time. Alternatively, $q$ may be state dependent instead of time dependent. (ii) When writing the dynamics of the model, we have assumed in (1) that the harvesting rate at any time $t$ is given by $q\,E(t)\,x(t)$. Other harvesting rate functions, involving positive constants $a$ and $b$, have been adopted by other researchers. (iii) We have mentioned in this paper that the goal density of the resource biomass $\bar{x}$ could be a fraction of the carrying capacity $K$. An optimal estimation of $\bar{x}$ could be an interesting research question.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through the research group Project no. RGP-024.