Research Article  Open Access
Xiangyong Chen, Jianlong Qiu, "Differential Game for a Class of Warfare Dynamic Systems with Reinforcement Based on Lanchester Equation", Abstract and Applied Analysis, vol. 2014, Article ID 837431, 8 pages, 2014. https://doi.org/10.1155/2014/837431
Differential Game for a Class of Warfare Dynamic Systems with Reinforcement Based on Lanchester Equation
Abstract
This paper concerns the optimal reinforcement game problem between two opposing forces in military conflicts. Under some moderate assumptions, we employ the Lanchester equation and differential game theory to develop a corresponding optimization game model. We then establish the optimum condition for the differential game problem and give an algorithm for obtaining the optimal reinforcement strategies. Furthermore, we discuss the convergence of the algorithm. Finally, a numerical example illustrates the effectiveness of the presented optimal schemes. The proposed results provide a theoretical guide both for making warfare command decisions and for assessing military actions.
1. Introduction
The reinforcement problem in military conflicts is a complex issue in military systems science and engineering [1]. It plays a key role in tactical decision making for prolonged battles and multi-battlefield campaigns, as the analysis of the capture of Iwo Jima verifies [2]. The design of reinforcement schemes has therefore become a very interesting topic, since it is important for drawing up and evaluating warfare plans and for the decision making of military actions.
In 1954, Engel first presented a generalized Lanchester model with reinforcement rates [3], which he applied successfully to analyze the battle of Iwo Jima. Since then, many researchers have used the Lanchester equation [4–11] to determine optimal reinforcement schemes. Helmbold [12] discussed the direct and inverse solutions of the Lanchester square-law equations with general reinforcement schedules; his work provided a reliable mathematical basis for designing optimal reinforcement schemes for military actions. Sha and Zeng [13] and Zeng and Sha [14] presented the basic framework and a series of key technologies for solving the optimal reinforcement problem on multiple battlefields based on optimal control theory and the Lanchester equation; their work validated the reasonableness and adaptability of the optimal reinforcement problem. Chen [15, 16] investigated the optimal control problem for the Lanchester model with reinforcement by utilizing the iterative regularization method. Chen et al. [17, 18] studied the optimal reinforcement problem for winning military conflicts based on the Lanchester equation and nonlinear optimization techniques; their work provides a basis for solving the optimal reinforcement problem. In short, the Lanchester equation has become a powerful quantitative analysis tool for the reinforcement problem.
However, most existing work discusses the optimal reinforcement problem of a unilateral decision maker and analyzes the influence of reinforcement on the outcome of the battle. Very little effort has been devoted to determining the optimal reinforcement of two opposing sides in military conflicts as a dynamic game problem [19, 20]. Since the battle process is a countermeasure process between two belligerent parties, it is necessary to investigate the game problem of warfare dynamic systems with reinforcement.
Li et al. [21, 22] studied the optimal reinforcement of the two fighting parties. They presented a support differential game optimization model and gave the solution of this game problem; however, they mainly discussed the case in which the attrition coefficients are the same. The authors of [23, 24] discussed the reinforcement game problem in which the attrition coefficients differ. However, they did not consider the influence of the reinforcement rates of both sides, and the objective function they used was relatively simple.
Motivated by the aforementioned discussions, the main aim of this paper is to investigate the optimal reinforcement problem by constructing a corresponding differential game model based on Lanchester equation. We mainly focus on analyzing the optimum conditions and designing a solving algorithm. Meanwhile, the convergence analysis for the algorithm is discussed.
The rest of this paper is organized as follows. In Section 2, the differential game model in determining the optimal reinforcement is presented with some moderate assumptions. In Section 3, the optimum conditions are established, and an algorithm is developed to obtain the optimal reinforcement strategies. Furthermore, the convergence of this method is also discussed. In Section 4, an example is provided to illustrate our theoretical results. Finally, Section 5 presents some concluding remarks.
2. Description of the Game Model with the Reinforcement
In this section, we consider the warfare dynamic system model with reinforcement rates, which (in the classical square-law form with reinforcement) is described by
\[
\dot{x}(t) = -a\,y(t) + u(t), \qquad \dot{y}(t) = -b\,x(t) + v(t), \qquad x(0) = x_0, \; y(0) = y_0, \tag{1}
\]
where \(x(t)\) and \(y(t)\) are the strengths of the two opposing forces surviving at time \(t\) in the conflict, \(a > 0\) and \(b > 0\) are the constant attrition coefficients that reflect the effectiveness of the forces per unit time, and \(u(t)\) and \(v(t)\) are the reinforcement rates.
We associate (1) with the objective function (2), where \(x(T)\) and \(y(T)\) are the residual strengths of the two sides at the terminal time \(T\).
Now, we consider a military conflict between two opposing forces; let \(X\) be the attacking side and \(Y\) the defending side. In the game, \(X\) selects the optimal strategy \(u^*\) to maximize the objective function \(J\), and \(Y\) selects the optimal strategy \(v^*\) to minimize \(J\). That is, if there exist \(u^*\) and \(v^*\) such that, for any admissible \(u\) and \(v\), the values of the objective function (2) satisfy
\[
J(u, v^*) \le J(u^*, v^*) \le J(u^*, v), \tag{3}
\]
then \(u^*\) and \(v^*\) are the optimal strategies of the differential game.
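To make the dynamics concrete, the following sketch numerically integrates a Lanchester square-law system with reinforcement of the classical Engel form \(\dot{x} = -a\,y + u\), \(\dot{y} = -b\,x + v\). The forward-Euler scheme, the function name `simulate`, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Forward-Euler integration of a Lanchester square-law system with
# reinforcement (a sketch under the assumed form x' = -a*y + u, y' = -b*x + v).

def simulate(x0, y0, a, b, u, v, T, dt=0.001):
    """Integrate the two-force system from t = 0 to t = T.

    u and v are callables giving the reinforcement rates at time t.
    Returns the residual strengths (x(T), y(T)).
    """
    x, y, t = x0, y0, 0.0
    for _ in range(int(T / dt)):
        dx = -a * y + u(t)   # attrition of x by y, plus x's reinforcement
        dy = -b * x + v(t)   # attrition of y by x, plus y's reinforcement
        x += dt * dx
        y += dt * dy
        t += dt
    return x, y

# Hypothetical comparison: with no reinforcement the side facing the larger
# attrition coefficient declines faster; a constant reinforcement rate for
# that side slows its decline and accelerates the opponent's attrition.
xT_no, yT_no = simulate(100.0, 100.0, a=0.05, b=0.03,
                        u=lambda t: 0.0, v=lambda t: 0.0, T=10.0)
xT_re, yT_re = simulate(100.0, 100.0, a=0.05, b=0.03,
                        u=lambda t: 2.0, v=lambda t: 0.0, T=10.0)
```

Because the system is linear in the states and controls, any constant or bang-bang reinforcement schedule can be explored this way before the game-theoretic machinery below is applied.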
Moreover, we present some assumptions which will be used in this paper.
(A1) The reinforcement rates satisfy \(0 \le u(t) \le u_m\) and \(0 \le v(t) \le v_m\), where \(u_m\) and \(v_m\) are positive constants.
(A2) Denote by \(U_0\) and \(V_0\) the total amounts of reinforcement available to the two opposing sides. The reinforcement expended from the initial time to the terminal time \(T\) then satisfies \(\int_0^T u(t)\,dt \le U_0\) and \(\int_0^T v(t)\,dt \le V_0\).
(A3) The reinforcement rates cannot keep the maximum values \(u_m\) and \(v_m\) during the whole battle process; that is, \(U_0 < u_m T\) and \(V_0 < v_m T\).
Remark 1. Assumption (A2) means that the total amount of reinforcement committed by each side cannot exceed its available total. Assumption (A3) represents the limit on replenishing the force strengths of both sides. Without condition (A3), the optimal strategies of the game problem would simply be to reinforce at the maximum rates throughout, and the game problem considered in this paper would be trivial.
Having given the above auxiliary statements, we are now in a position to investigate the condition for the existence of the optimal reinforcement strategies and to develop a procedure for designing the optimal strategies of the differential game.
3. Optimum Condition and the Solutions of the Game Problem
Because of the integral inequalities (4) and (5), we cannot directly apply the classical solution theory of differential games to the reinforcement game problem. Motivated by [25, 26], a new solution theory and algorithm for this reinforcement game problem are investigated.
We first write the solutions of the warfare system (1) explicitly. Substituting them into the objective function and collecting the terms that depend on the controls, we obtain the representation (7), in which the initial strengths, the attrition coefficients, and the terminal time enter only as known constants. With (7), we declare that if there exist \(u^*\) and \(v^*\) satisfying (11), then (3) is satisfied, and \(u^*\) and \(v^*\) are the optimal strategies of the differential game (1) and (2).
By constructing the Lagrange function (12), we obtain the following theorem on the optimum condition for the existence of the optimal reinforcement strategies of the differential game.
Theorem 2. If there exist constants and reinforcement rates \(u^*\) and \(v^*\) satisfying the stated conditions such that the corresponding inequality holds, then the inequalities (3) and (11) hold, and \(u^*\) and \(v^*\) are the optimal reinforcement strategies of the differential game (1) and (2).
Proof. We rewrite (12) as
With (14) and (15), we have
Then, we conclude that the following results about the optimal strategies hold.
(1) If , the optimal reinforcement rate is the upper bound of . Otherwise, we assume that another optimal strategy exists and obtain
So
However, we know that
It follows that
From (16) and (20), it is easy to verify that (17) does not hold. That is, when , we get that and
(2) Similarly, if , the optimal reinforcement rate is the lower bound of ; that is, and (21) holds.
On the other hand, for any reinforcement strategy , it follows from (14) that
Similarly, we get the following results about the optimal strategies .
(3) If , the optimal reinforcement rate is the upper bound of . That is, and
(4) If , the optimal reinforcement rate is the lower bound of . That is, and (23) holds.
With the above preparation, we integrate both sides of (21) and (23) from 0 to the terminal time and get
From (13), we have
With the help of the above inequalities, we conclude that the inequalities (3) and (11) hold. That is, \(u^*\) and \(v^*\) are the optimal reinforcement strategies of the differential game (1) and (2). This completes the proof.
Because the objective function is separable and linear in the controls, its partial derivatives with respect to the controls never vanish, so the partial derivative technique cannot be used to obtain the optimal strategies. According to Theorem 2, the optimal strategies \(u^*\) and \(v^*\) are instead obtained as
Introducing suitable shorthand notation, it is easy to obtain (27). We rewrite (26) in the following form:
Remark 3. The tactical significance of (28) is that side \(X\) employs the maximum reinforcement rate to support its troops whenever its switching condition in (28) holds and the minimum reinforcement rate otherwise. Likewise, side \(Y\) employs the maximum reinforcement rate whenever its switching condition in (28) holds and the minimum rate otherwise.
Since the multiplier parameters are unknown, the switching conditions cannot be evaluated directly, and hence the optimal strategies \(u^*\) and \(v^*\) cannot be determined in closed form. We therefore present the following optimization algorithm for determining the optimal reinforcement strategies.
Step 1. Initialize the iteration counter and choose the initial values such that
Step 2. From (26) and (28), we get , , , , and
Step 3. Set the confidence intervals of the multipliers to be (31). The calculation errors of the totals of the reinforcements then satisfy (32). If the stopping criterion holds, then \(u^*\) and \(v^*\) are the optimal reinforcement strategies; otherwise go to Step 4.
Step 4. Adjust the current values by the step length to obtain new iterates, and go back to Step 2.
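As a minimal sketch of how Steps 1–4 operate for one side, assume (consistently with Remark 3) that the optimal strategy is bang-bang: reinforce at the maximum rate up to a switching time and at the minimum rate afterwards, so the total spent equals the maximum rate times the switching time. The loop below adjusts the switching time by a step length until the total matches the available amount within a prescribed error, halving the step on overshoot; the names `t1`, `h`, `eps`, `u_m`, and `U0` are illustrative and not the paper's notation.

```python
# Sketch of the Step 1-4 iteration for one side, under the bang-bang
# assumption: reinforcement rate u_m on [0, t1] and 0 on (t1, T], so the
# total reinforcement spent is u_m * t1.

def find_switch_time(u_m, U0, T, t1=0.0, h=0.01, eps=1e-6):
    """Adjust the switching time t1 until u_m * t1 matches the budget U0."""
    # Assumption (A3): the maximum rate cannot be sustained on all of [0, T].
    assert U0 < u_m * T
    while abs(u_m * t1 - U0) > eps:       # cf. the stopping test in Step 3
        if u_m * t1 < U0:
            t1 += h                       # total too small: reinforce longer
        else:
            t1 -= h                       # overshot: step back ...
            h /= 2.0                      # ... and refine the step (Step 4)
    return t1
```

For example, with a maximum rate of 2 and a budget of 1 over a unit horizon, the iteration settles at a switching time of 0.5. The full algorithm in the paper adjusts both sides' unknowns simultaneously in the same adjust-and-recheck manner.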
Remark 4. Using the above algorithm, we obtain a feasible solution of the reinforcement problem. Since the calculation cost depends largely on the choice of initial values, selecting the initial values properly decreases the calculation cost and yields a more accurate solution.
In the rest of this section, we analyze the convergence of the above algorithm. According to (A2), (31), and (32), the desired optimal reinforcement rates \(u^*\) and \(v^*\) satisfy
So, the desired values and are
Meanwhile, the corresponding sequences of functions are obtained from the computation process of the above steps.
Next, we prove that these sequences converge to the desired values in finitely many steps.
Theorem 5. Choose an initial value and a step length as above; then the sequence of functions converges to its limit value after finitely many steps.
Proof. It is easy to get Setting , we rewrite (36) as For all and the step satisfies we have where , , and . That is, the sequence converges to the limit value after finitely many steps.
Remark 6. We note that the step number in Theorem 5 is not given explicitly, and it should be selected as follows: where
Theorem 7. Choose an initial value and a step length as above; then the sequence of functions converges to its limit value after finitely many steps.
Proof. Since the proof is similar to that of Theorem 5, we omit it.
Remark 8. In Theorem 7, is selected as where
4. Numerical Example
The Lanchester equation is a powerful tool for analyzing real wars quantitatively and for determining tactics in combat simulations, and it produces reasonably good predictions. We believe that the presented game model based on the Lanchester equation is useful for coping with specific practical military problems once the corresponding parameter values of the military conflict are obtained.
Thus, in this section, we present a numerical example to illustrate the effectiveness of our theoretical results. We first fix the initial force strengths, the total reinforcements, and the battle terminal time, and from (A1) we choose the maximum reinforcement rates. In the proposed solving algorithm, we set the initial values and the step length, and we let the confidence intervals and the calculation errors of the total reinforcements satisfy (45). Solving the differential game problem with the MATLAB toolbox yields the feasible solutions; from (28), we then get the optimal reinforcement rates in (46), the total reinforcement on both sides, and the optimal objective function value.
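Since the example's numerical values are not reproduced here, the following stand-in uses hypothetical strengths, attrition coefficients, and budget, and assumes the square-law dynamics \(\dot{x} = -a\,y + u\), \(\dot{y} = -b\,x + v\) with terminal payoff \(x(T) - y(T)\) for the maximizing side. It illustrates the kind of comparison shown in Figures 2 and 3: for a fixed reinforcement budget, concentrating it early yields a higher terminal payoff than spreading it thinly, matching the maximum-rate-then-switch character of (28).

```python
# Hypothetical stand-in for the numerical example: simulate the assumed
# dynamics x' = -a*y + u, y' = -b*x + v by forward Euler and compare the
# attacker's terminal payoff x(T) - y(T) for two ways of spending the same
# reinforcement budget U0.  All values below are illustrative.

def payoff(t1, rate, x0=100.0, y0=90.0, a=0.04, b=0.05,
           v_rate=1.0, T=5.0, dt=0.001):
    """x(T) - y(T) when X reinforces at `rate` on [0, t1] and 0 afterwards."""
    x, y, t = x0, y0, 0.0
    for _ in range(int(T / dt)):
        u = rate if t < t1 else 0.0     # bang-bang reinforcement for X
        x += dt * (-a * y + u)
        y += dt * (-b * x + v_rate)     # Y reinforces at a constant rate
        t += dt
    return x - y

U0 = 10.0                                # X's total reinforcement budget
early = payoff(t1=2.0, rate=U0 / 2.0)    # spend the whole budget on [0, 2]
late = payoff(t1=5.0, rate=U0 / 5.0)     # spread it evenly over [0, 5]
```

Because strength injected earlier has longer to attrite the opponent, the concentrated early schedule gives the larger payoff in this linear model, which is consistent with the bang-bang structure of the optimal strategies.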
Figure 1 shows the time behavior of the state variables of the warfare dynamic system under the optimal strategies \(u^*\) and \(v^*\). We note that the state trajectories change at the switching times of the strategies. Figures 2 and 3 show how the objective function changes under different strategies of the two sides. It is clear that (3) holds, which implies that (46) gives the optimal reinforcement strategies of the game problem in this example.
We also analyze the feasible solutions obtained under different parameter choices. Table 1 gives the feasible solutions of the game problem for different values of some parameters while keeping the others fixed, and Table 2 shows the computation results for the complementary choice. It can be seen from Tables 1 and 2 that whenever (45) is satisfied, the solving algorithm proposed in this paper is practicable and valid.


5. Conclusions
This paper discusses a differential game problem for warfare dynamic systems with reinforcement. An optimization game model is established based on the Lanchester equation and differential game theory. Then the optimum condition and the solution method for the game problem are given. Simulation results illustrate the effectiveness of the proposed optimal strategies, which is of great significance for quantitatively analyzing military actions. Employing advanced control techniques [24, 27] to investigate warfare command game problems is one of our future research directions.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to express their deepest gratitude to Dr. Jianwei Zhou for his helpful suggestions on the English writing of the revised version. This work was supported in part by the National Natural Science Foundation of China, under Grants 61273012, 11301252, and 11201212, a Project of Shandong Province Higher Educational Science and Technology Program under Grants J13LI11 and J12LI58, and the Applied Mathematics Enhancement Program (AMEP) of Linyi University.
References
 D. H. Wagner, M. W. Charles, and T. J. Sanders, Naval Operations Analysis, Naval Institute Press, Annapolis, Md, USA, 1999.
 R. Andrew, Battle Story: Iwo Jima 1945, The History Press, Stroud, UK, 2012.
 J. H. Engel, “A verification of Lanchester's law,” Journal of the Operations Research Society of America, vol. 2, no. 2, pp. 163–171, 1954.
 F. W. Lanchester, Aircraft in Warfare: The Dawn of the Fourth Arm, Constable, London, UK, 1916; reprinted by Lanchester Press, 1999.
 J. G. Taylor, Lanchester Models of Warfare, Volume 2, Operations Research Society of America, Military Applications Section, Arlington, Va, USA, 1983.
 J. C. Sha, Mathematic Tactics, Science Press, Beijing, China, 2003.
 J. G. Taylor, “Lanchester-type models of warfare and optimal control,” Naval Research Logistics, vol. 21, no. 1, pp. 79–106, 1974.
 T. Keane, “Combat modelling with partial differential equations,” Applied Mathematical Modelling, vol. 35, no. 6, pp. 2723–2735, 2011.
 E. González and M. Villena, “Spatial Lanchester models,” European Journal of Operational Research, vol. 210, no. 3, pp. 706–715, 2011.
 I. R. Johnson and N. J. MacKay, “Lanchester models and the Battle of Britain,” Naval Research Logistics, vol. 58, no. 3, pp. 210–222, 2011.
 J. Zhang, C. Xu, L. Gong, and D. Yuan, “The mathematical model based on the battle of Berlin,” in Proceedings of the 8th International Conference on Fuzzy Systems and Knowledge Discovery, pp. 2133–2136, IEEE Press, July 2011.
 R. L. Helmbold, “Direct and inverse solution of the Lanchester square law with general reinforcement schedules,” European Journal of Operational Research, vol. 77, no. 3, pp. 486–495, 1994.
 J. C. Sha and A. J. Zeng, “Research on the warfare theory of Lanchester and tactics,” in Proceedings of the Control and Decision Conference of China, pp. 1134–1136, Xiamen, China, 1994.
 A. J. Zeng and J. C. Sha, “The optimal reinforcement problem in military conflicts,” in Proceedings of the Control and Decision Conference of China, pp. 1142–1145, Xiamen, China, 1994.
 H. Chen, “An optimal control problem in determining the optimal reinforcement schedules for the Lanchester equations,” Computers and Operations Research, vol. 30, no. 7, pp. 1051–1066, 2003.
 H. Chen, “A nonlinear inverse Lanchester square law problem in estimating the force-dependent attrition coefficients,” European Journal of Operational Research, vol. 182, no. 2, pp. 911–922, 2007.
 X. Chen, Y. Jing, C. Li, and X. Liu, “Optimal strategies for winning in military conflicts based on Lanchester equation,” Control and Decision, vol. 26, no. 6, pp. 946–950, 2011.
 X. Chen, Y. Jing, C. Li, and M. Li, “Warfare command stratagem analysis for winning based on Lanchester attrition models,” Journal of Systems Science and Systems Engineering, vol. 21, no. 1, pp. 94–105, 2012.
 R. Isaacs, Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization, Wiley, New York, NY, USA, 1965.
 J. G. Taylor, “Differential-game examination of optimal time-sequential fire-support strategies,” Naval Research Logistics Quarterly, vol. 25, no. 2, pp. 322–355, 1983.
 D. F. Li, A. S. Tan, and F. Luo, “Optimization model of reinforcements based on differential game and its solving method,” Operations Research and Management Science, vol. 11, no. 4, pp. 16–20, 2002.
 D. F. Li and Q. H. Chen, “Troops support differential game optimization model and solution,” Fire Control and Command Control, vol. 29, no. 1, pp. 41–43, 2004.
 X. Y. Chen, Y. W. Jing, C. J. Li et al., “Differential game model and its solutions for force resource complementary via Lanchester square law equation,” in Proceedings of the 18th IFAC World Congress, pp. 1024–1030, Milano, Italy, 2011.
 X. Y. Chen and A. C. Zhang, “Modeling and optimal control of a class of warfare hybrid dynamic systems based on Lanchester (n, 1) attrition model,” Mathematical Problems in Engineering, vol. 2014, Article ID 481347, 7 pages, 2014.
 Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “A new computational method for a class of free terminal time optimal control problems,” Pacific Journal of Optimization, vol. 7, no. 1, pp. 63–81, 2011.
 Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “Optimal control computation for nonlinear systems with state-dependent stopping criteria,” Automatica, vol. 48, no. 9, pp. 2116–2129, 2012.
 Q. Lin, R. Loxton, and K. L. Teo, “The control parameterization method for nonlinear optimal control: a survey,” Journal of Industrial and Management Optimization, vol. 10, no. 1, pp. 275–309, 2014.
Copyright
Copyright © 2014 Xiangyong Chen and Jianlong Qiu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.