Abstract
To maximize the utilization of limited resources through reasonable deployment, this paper proceeds from two aspects: establishing a mathematical model of maximum spatial detection coverage and improving the harmony search algorithm. The exploration and convergence performance of the harmony search algorithm are analyzed theoretically, and more general formulas for both are proved. Based on this theoretical analysis, an algorithm called opposition-based learning parameter adjusting harmony search is proposed. By testing the algorithm on functions with different properties, the value ranges of its key control parameters are given. The proposed algorithm is then applied to the radar deployment problem, taking a certain area of the Shandong Peninsula as the deployment scope. The simulation results show that the proposed algorithm is effective and practical. Although the amount of calculation is large, the approach provides ideas and methods for other problems, such as the site selection of new observation and communication posts, the deployment of maneuvering radar stations, and the track planning of fleets.
1. Introduction
With the rapid change of the battlefield environment, more and more radars and other equipment are networked to detect targets [1, 2]. Under the threats and challenges of enemy invasion, optimal deployment allows limited radar resources to be allocated reasonably, forming a tight and reliable radar surveillance network and achieving the best combat effectiveness [3, 4]. More and more algorithms have been applied to radar optimal deployment. A radar deployment problem was translated into a graph theory problem by preprocessing operations such as discretization, so that a multiobjective optimization problem for radar deployment could be solved with graph theory [5]. An optimal placement method for perimeter barrier coverage was introduced to solve the placement of multistatic radar on a perimeter curve [6]. In view of existing problems in the deployment of traffic surveillance radars in civil aviation, an adaptive genetic algorithm with elite selection strategies was proposed to optimize the deployment of surveillance radars [7]. An optimal deployment method adopting unequal barrier coverage was proposed to optimize the employment of multistatic radar [8]. Under a certain sea and air battlefield background, the positioning accuracy of a multimachine passive positioning system can be effectively improved by adjusting the layout of each machine in the system, so particle swarm optimization (PSO) was proposed to find the optimal stations of such a system [9]. A mixed cultural algorithm and a genetic algorithm were adopted to establish two spaces in which to find the optimal solution of radar network deployment [10]. Distributed multiple-input multiple-output (MIMO) radar has a wide antenna distribution, and its performance depends largely on the deployment positions of the radar antennas; GA and PSO algorithms were, respectively, proposed to solve this problem for single-objective and multiobjective MIMO radars [11]. It can be seen from the above studies that researchers build different deployment models from different angles and improve various optimization algorithms to solve them, so as to provide reference value for decision-makers when making final decisions.
The harmony search (HS) algorithm not only has the advantages of meta-heuristic algorithms but also its own characteristics [12, 13]: (a) HS integrates all the existing harmony vectors to create a new harmony vector, which differs from GA, which uses two parent vectors, and from PSO, which only considers the individual optimal position and the global optimal position of the whole population; (b) HS independently adjusts each variable via improvisation, unlike PSO, which updates a solution vector by a single rule. Due to these characteristics, HS has been successfully applied to a wide variety of optimization problems, such as river ecosystems [14], selective assembly problems [15], mental health analysis [16], reservoir engineering [17], crack identification in cable tunnels [18], and power distribution network reconstruction [19]. However, HS suffers from becoming trapped in local searches, and a crucial factor in its performance is its key control parameters, such as the harmony memory (HM), harmony memory size (HMS), harmony memory considering rate (HMCR), pitch adjusting rate (PAR), bandwidth (bw), and the number of improvisations (G). Unfortunately, studies on the mathematical theory of the underlying search mechanisms of HS are few in the open literature. S. Das et al. proposed a new algorithm called EHS after discussing the relationship between the explorative power of HS and the control parameters by analyzing the evolution of the population variance of HS [20], but EHS has good exploration and poor exploitation. This paper therefore devotes more attention to the search mechanism of HS and its parameters.
The main contributions of this paper are as follows: (a) analyze the mathematical theory of the underlying search mechanisms of HS from the perspective of balancing exploration and exploitation, establish the iteration equation of HS, and propose an improved HS; (b) establish the mathematical model of radar deployment based on maximum 3D detection coverage; (c) apply the proposed algorithm to optimize the deployment of radars.
The remainder of the paper is organized as follows: Section 2 discusses the search mechanism of HS over a general interval in detail, and a sufficient condition for iterative convergence is studied by balancing exploration and exploitation and constructing an iteration equation. An improved HS is proposed in Section 3. Section 4 presents a large number of experiments investigating the effectiveness and robustness of the proposed algorithm, gives the model of radar deployment, and presents its optimal solution in tables and figures. Finally, Section 5 summarizes this paper.
2. Theoretical Analysis of HS
This section analyzes the performance of the HS algorithm from two aspects: exploration performance and iterative convergence performance.
2.1. Exploration Performance
The HS algorithm explores the solution space by improvisation, and its exploitation is accomplished through the updating operation. In reference [20], the exploration ability of HS is measured by the evolution of the expected population variance with the number of iterations. However, only symmetric intervals are considered there when analyzing the evolution of the population variance of HS and its exploration performance. Here, this paper gives results for asymmetric intervals, as shown in Theorem 1.
Theorem 1. Let the initial population of the HS algorithm and the population obtained after the improvisation step be given. If the range of the decision variable is a general (possibly asymmetric) interval, then the expected population variance satisfies the relation in equation (1).
Proof. In the first case, the result in equation (2) is obtained, which was derived by Das et al. in [20]. In the second case, equations (3) and (4) follow from the proof in [20], where the expectation denotes the mathematical expected value of the corresponding variable. In combination with the random selection operation in improvisation, expressed as equation (5), equations (3) and (4) are used to deduce the required expectations. With the probability density function of R, equation (8) is then obtained. According to equations (5) and (8), the corresponding expectations in equation (9) can be obtained. From equations (3), (4), (8), and (9), equations (10) and (11) can be concluded, and after substitution they become equations (12) and (13). Equation (14) can then be deduced from equation (13), where the variables involved are independent random variables, and equation (15) follows. According to equations (13)–(15), equation (16) can be obtained. This proof ends.
2.2. Convergence Performance
If , then the following equation can be obtained from equation (1).
According to literature [20], is obtained, so
Then, the expected variance of the population variable in the -th generation is
With increasing evolution generations, the increase of the expected population variance gives the algorithm strong exploration ability. However, if only the exploration ability is considered, the HS algorithm will not converge by the final generation; that is, the algorithm keeps searching for new information but does not obtain effective information at the end. Therefore, the convergence of the algorithm is a critical problem. Next, the convergence of the HS algorithm is analyzed from the expectation of the population variance and the expectation of the population mean.
Lemma 1. For any initial vector , the iteration equation converges if and only if , where represents the iteration matrix and is the spectral radius of the iteration matrix in literature [21].
Theorem 2. Under the premise of Theorem 1 and , the sufficient condition for the convergence of the iterative equation is that , where
Proof. If the stated conditions hold, then the following equation is true. It can be obtained from equations (10) and (13) that the relation in equation (22) holds. Under the given conditions, equation (22) becomes the following equation. According to equations (21) and (23), the recurrence relation is obtained, and after the evolution of the g-th generation, the corresponding closed form can be obtained. Thus, the characteristic roots of the iterative equation can be obtained, and it is clear that the spectral radius is less than 1, so the iterative equation converges.
This proof ends.
From the proof of Theorem 2, the following conclusions can be drawn:
(1) The parameter is actually the spectral radius of the iteration matrix, which directly determines the convergence of the iteration equation.
(2) When the one parameter is close to 0 and the other is close to 1, the algorithm has a fast convergence rate, which verifies the rationality of the condition.
(3) To find a balance between higher convergence accuracy and better exploration ability, the parameter can be adjusted to prevent the algorithm from falling into a local optimum.
3. The Opposition-Based Learning Parameter Adjusting Harmony Search (OLPDHS) Algorithm
Through the above analysis and discussion, on the one hand, the term related to the original HM initialization is part of the expected variance expression, so a better initialization can be found to produce an effective offspring population and improve the algorithm's exploration. On the other hand, adjusting the values of the key control parameters is a good choice for obtaining better fine-tuning properties and convergence speed.
The OLPDHS algorithm is therefore proposed. On the one hand, the algorithm improves the initialization method; on the other hand, the fine-tuning properties and convergence speed of HS are improved by adjusting the parameter values. What the OLPDHS algorithm has in common with the classical HS algorithm is that both go through four processes: initializing the parameters involved in the algorithm, initializing the harmony memory, improvising a new harmony, and updating the harmony memory. The differences between the two are the way the population is initialized and the way the control parameters change.
3.1. The Opposition-Based Learning Strategy for Population Initialization
Opposition-based learning (OBL) is widely used to improve optimization algorithms [22]. The main idea of OBL is to obtain a better approximate solution than the current candidate by considering both an estimate and the estimate opposite to it, so OBL expands the solution space and reduces the search time [23, 24].
Definition 1. Let x = (x1, x2, …, xD) be a point in the D-dimensional coordinate system, where xj ∈ [lj, uj]; then the opposite point x̆ = (x̆1, x̆2, …, x̆D) is completely determined by x̆j = lj + uj − xj.
In the original HS algorithm, the initial population is generated randomly. In order to improve the quality of the initial harmony memory, this paper adopts a set of alternative opposition-based learning strategies, which includes five methods based on OBL [25], denoted Method 1 to Method 5 (a sketch of opposition-based initialization is given below).
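As an illustration, the following Python sketch shows opposition-based initialization in the spirit of Definition 1. The expressions of the five methods of [25] are not reproduced here; the sketch uses only the basic opposite point of Definition 1, and all function names are illustrative, not taken from the original paper.

```python
import numpy as np

def opposite_point(x, lower, upper):
    """Basic opposite point of Definition 1: x_j -> l_j + u_j - x_j."""
    return lower + upper - x

def obl_initialize(hms, lower, upper, fitness, rng=None):
    """Generate HMS random harmony vectors, then keep the better of each
    vector and its opposite point (a simplified, single-method OBL variant)."""
    rng = np.random.default_rng(rng)
    dim = lower.size
    hm = lower + rng.random((hms, dim)) * (upper - lower)   # random initial HM
    for i in range(hms):
        opp = opposite_point(hm[i], lower, upper)           # opposite candidate
        if fitness(opp) < fitness(hm[i]):                   # minimization assumed
            hm[i] = opp
    return hm
```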
3.2. Parameter Adjustment Strategy
In order to balance the exploration and exploitation abilities of the HS algorithm, it is necessary to adjust the key control parameter bw dynamically during the search process.
According to Theorem 2, the parameter bw is dynamically adjusted as follows. To describe the adjustment in detail, a more general expression for the population mean is given as the following equation, where the index denotes the dimension of the problem, D is the maximum dimension, and the corresponding term represents the average value of that dimension over the harmony vectors in HM. Combining this with values randomly selected from HM, the adjusted bw can be obtained as the following equation:
From this, the value of the parameter bw is obtained.
It can be seen from equation (26) that the dynamic adjustment of the parameter bw mainly depends on the square root of the mean value of the harmony vectors of the population after each update, which makes the dynamic adjustment easy to realize in practice. When the new solution is fine-tuned according to equation (24) or (25), the information of the current population is carried by the parameter bw, so the fine-tuning of the new solution has global guiding significance, whereas fine-tuning with a fixed value of bw does not consider the current state of the population search. However, an additional calculation is required after each evolution, which increases the computational cost of the algorithm.
3.3. The Flow and Pseudocode of the OLPDHS Algorithm
To sum up, the OLPDHS algorithm proceeds as follows. Firstly, the values of the parameters of the algorithm are determined, such as HMCR, HMS, bw, PAR, and G or another stopping criterion. Secondly, the initial population is randomly generated and put into HM, and the poor harmony vectors in HM are updated using the OBL strategy. The specific steps of updating HM are as follows (see also the sketch after this list):
Step 1: generate the initial harmony vectors randomly, put them into HM, and label each harmony vector 1, 2, 3, …, HMS.
Step 2: use the five OBL strategies to generate the opposite vectors of the initial harmony vectors. Specifically, for harmony vectors whose label number is divisible by 5, use Method 1 to generate the opposite vector; for label numbers congruent to 1 modulo 5, use Method 2; for label numbers congruent to 2 modulo 5, use Method 3; for label numbers congruent to 3 modulo 5, use Method 4; and for label numbers congruent to 4 modulo 5, use Method 5. Figure 1 shows the pseudocode in detail.
Step 3: compare the evaluation value of each harmony vector with that of its opposite vector, keep the better harmony vector, and eliminate the worse one.
Step 4: repeat Steps 2 and 3 until the initial harmony vectors are updated.
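The following Python fragment sketches Steps 1–4 for a minimization problem. The five OBL methods of [25] are not specified in the text above, so they are represented here by hypothetical placeholder functions passed in as `methods`.

```python
import numpy as np

def update_hm_with_obl(hm, fitness, lower, upper, methods):
    """Steps 1-4: for each labeled harmony vector, build an opposite vector
    with the OBL method chosen by (label mod 5) and keep the better of the two.
    `methods` is a list of five functions (hypothetical placeholders for
    Method 1 ... Method 5 of [25]), each mapping (x, lower, upper) -> opposite x."""
    hms = hm.shape[0]
    for label in range(1, hms + 1):                       # labels 1, 2, ..., HMS
        idx = label % 5                                   # 0 -> Method 1, 1 -> Method 2, ...
        opposite = methods[idx](hm[label - 1], lower, upper)
        if fitness(opposite) < fitness(hm[label - 1]):    # minimization assumed
            hm[label - 1] = opposite
    return hm
```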

Then, a new harmony vector is improvised from HM. In this creation process, the only difference from the HS algorithm is that the parameter bw is dynamically adjusted instead of being fixed; Figure 2 shows the pseudocode in detail, and a sketch is given below. The fitness of the new harmony vector is compared with that of the worst harmony vector in HM in order to update HM. Finally, the above two processes are repeated until the stopping condition is reached, and the optimal solution is output.
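For illustration, a minimal Python sketch of the improvisation step follows. It uses the standard HS improvisation rules (memory consideration with probability HMCR, pitch adjustment with probability PAR, random selection otherwise); the dynamic bw rule is only hinted at in the text (it depends on the square root of the mean of the current harmony vectors, equation (26)), so `dynamic_bw` below is an assumed placeholder rather than the exact formula of the paper.

```python
import numpy as np

def dynamic_bw(hm):
    """Assumed placeholder for equation (26): bw derived from the square root
    of the mean of the current harmony vectors (exact formula not shown here)."""
    return np.sqrt(np.abs(hm.mean(axis=0)))

def improvise(hm, lower, upper, hmcr, par, rng=None):
    """Create one new harmony vector with memory consideration, pitch
    adjustment using the dynamically adjusted bw, and random selection."""
    rng = np.random.default_rng(rng)
    hms, dim = hm.shape
    bw = dynamic_bw(hm)
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                      # memory consideration
            new[j] = hm[rng.integers(hms), j]
            if rng.random() < par:                   # pitch adjustment
                new[j] += (2 * rng.random() - 1) * bw[j]
        else:                                        # random selection
            new[j] = lower[j] + rng.random() * (upper[j] - lower[j])
    return np.clip(new, lower, upper)
```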

The specific process of the OLPDHS algorithm is shown in Figure 3.

4. Simulation Analysis
In order to verify the rationality and effectiveness of the OLPDHS algorithm, a large number of simulation experiments are performed to test the influence of the parameters HMCR, HMS, and PAR on the algorithm. MATLAB is then used to compare the OLPDHS algorithm with other HS algorithms from different perspectives. Finally, combined with the actual terrain, the OLPDHS algorithm is applied to radar optimal deployment and verified by simulation.
4.1. Influence of the Parameters HMCR, HMS, and PAR on the OLPDHS Algorithm
4.1.1. Influence of the Parameter HMCR on the OLPDHS Algorithm
The influence of the parameter HMCR on the performance of the OLPDHS algorithm is analyzed by simulation. The unimodal Sphere function and the multimodal Rastrigin function are used as test functions, with HMS = 5, PAR = 0.5, the remaining parameters and the maximum number of evolutions fixed, the dimension of the functions set to 30, and HMCR taking the values 0.1, 0.25, 0.5, 0.75, 0.9, and 0.99. MATLAB is used for simulation, and the relationship between the expected population variance and the number of iterations is obtained, as shown in Figure 4, which reflects how the search capability of the OLPDHS algorithm changes over iterations for the Sphere and Rastrigin functions under different values of HMCR. Figure 5 shows the Sphere and Rastrigin function values obtained by the OLPDHS algorithm under different HMCR values as the number of evaluations changes, reflecting the influence of different HMCR values on the convergence performance of the OLPDHS algorithm.

As can be seen from Figures 4 and 5, the value of HMCR has a great impact on the expected population variance and the convergence of the OLPDHS algorithm. As the value of HMCR increases, the expected population variance becomes smaller and the convergence rate of the OLPDHS algorithm improves.
In addition, when the HMCR values are 0.01, 0.25, 0.5, 0.75, 0.9, 0.95, and 0.99, the mean and variance results for the two test functions are shown in Tables 1 and 2, respectively. For the Sphere function, Table 1 shows that the smaller the value of HMCR, the worse the performance of the OLPDHS algorithm. When HMCR < 0.9, the results obtained by the OLPDHS algorithm deviate far from the global optimal solution; when HMCR > 0.9, the OLPDHS algorithm quickly converges to the global optimal solution or its vicinity. For the multimodal Rastrigin function, Table 2 shows that the value of HMCR has a similar impact on the performance of the OLPDHS algorithm as for the Sphere function. From Tables 1 and 2, the convergence accuracy improves as the HMCR value increases.
4.1.2. Influence of the Parameter PAR on the OLPDHS Algorithm
The influence of the parameter PAR on the OLPDHS algorithm is analyzed through simulation experiments. Nine functions with different characteristics are selected from the twenty-five functions to study the impact of PAR on the performance of the OLPDHS algorithm.
The dimension of all functions is set to 30, the maximum number of iterations to 20000, and each function is run 30 times independently. Table 3 lists the mean value and standard deviation (SD) of each function over the 30 independent runs under different values of the parameter PAR. Figure 6 visually shows the optimization curves of the nine functions under different values of PAR.
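For reproducibility, the testing protocol described above can be organized as in the following sketch, which runs an optimizer 30 times on a 30-dimensional function and records the mean and SD of the best values found; `run_olpdhs` is a placeholder for the algorithm configured with a particular PAR (or HMS) value, not code from the paper.

```python
import numpy as np

def benchmark(run_olpdhs, objective, dim=30, max_iters=20000, runs=30):
    """Run the optimizer independently `runs` times and report the mean and SD
    of the best objective values, as reported in Tables 3 and 4."""
    best_values = [run_olpdhs(objective, dim, max_iters) for _ in range(runs)]
    return np.mean(best_values), np.std(best_values)
```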

As can be seen from Table 3, no single PAR value performs well for all of the test functions. When the value of PAR is in the interval [0.3, 0.7], the mean and SD of all the functions except two are relatively good. For one of these functions, a larger value of PAR results in poor performance of the OLPDHS algorithm; for the other, the value of PAR has little effect on the performance of the OLPDHS algorithm. By observing Figure 6 and comparing the optimization curves, it can be found that when PAR is in the interval [0.3, 0.7], the optimization curves of the OLPDHS algorithm are better. Thus, the value of PAR has an impact on the performance of the OLPDHS algorithm, and its value should be determined based on the analysis of the specific optimization problem.
4.1.3. Influence of the Parameter HMS on the OLPDHS Algorithm
The influence of the parameter HMS on the OLPDHS algorithm is analyzed through simulation experiments. The nine functions with different characteristics presented in Section 4.1.2 are selected to study the impact of HMS on the performance of the OLPDHS algorithm.
Table 4 shows the mean value and SD of each function over 30 independent runs under different values of the parameter HMS. Figure 7 visually shows the optimization curves of the nine functions under different values of HMS.

As can be seen from Table 4, except for one function (for which a larger HMS value gives a better optimization result), the convergence accuracy of the other functions deteriorates as the HMS value increases. As shown in Figure 7, except for that function, the optimization curves obtained for the other functions are better as the HMS value decreases. Through this analysis, it can be concluded that an HMS value in [5, 40] is appropriate.
Finally, the parameter value ranges for which the OLPDHS algorithm obtains better performance can be derived from the above simulation analysis. The specific values of the parameters should be determined by the optimization problem itself and the optimization model, which need to be tested repeatedly in practice.
4.2. Performance Comparison of OLPDHS Algorithm and Other HS Algorithms
In order to prove the efficiency of the OLPDHS algorithm, it is compared with the recently developed HS algorithm and its improved versions (IHS [26], GHS [27], SAHS [28], SGHS [29], IGHS [30], NDHS [31], EHS [20], and ITHS [32]) by testing unconstrained optimization problems. The related parameters are set as follows: (1) HMCR: 0.98 for SGHS; 0.99 for IGHS, NDHS, EHS, and ITHS; 0.9999 for OLPDHS; and 0.9 for the other algorithms. (2) HMS: 50 for EHS, 10 for ITHS, and 5 for the other algorithms. (3) PAR: HS and EHS are set to 0.33; GHS, IHS, IGHS, and OLPDHS are set to 0.1 (minimum) and 0.99 (maximum); SAHS and ITHS are set to 0 (minimum) and 1 (maximum); NDHS is set to 0.01 (minimum) and 0.99 (maximum); and SGHS is set to 0.9. (4) bw: HS is 0.01; SGHS is 0.0005 (minimum); IHS is 0.0001 (minimum); IGHS is 0.0001 (minimum); the corresponding maximum values and the EHS setting are given by their respective expressions.
Among the 25 benchmark functions, some are high-dimensional functions, while others are low-dimensional functions with many local optima. Some are unimodal, while the remainder are multimodal, where the number of local optima increases with the function dimension.
Tables 5 and 6 give the comparison results of the 10 algorithms for the functions with a symmetric initial range. The global optimal values of six functions can be obtained by the OLPDHS algorithm, while for two functions its results are worse than those of the ITHS algorithm but better than those of the other eight algorithms, because these eight algorithms almost always become trapped in local optima. The performance of the OLPDHS algorithm is the best for four functions but worse than that of the IGHS algorithm, which yields the best solution for three functions.
Table 7 shows the comparison results of the 10 algorithms for the remaining functions. Except for two functions, the performance of the OLPDHS algorithm is better than that of the others. Furthermore, solutions close to the global optima of two functions are achieved by the OLPDHS algorithm. In short, the OLPDHS algorithm strengthens the searching ability and speeds up convergence.
The convergence curves of the 10 algorithms for the functions with 30 dimensions are presented in Figure 8. The OLPDHS algorithm does not show good convergence performance for three functions, for which SGHS, NGHS, and EHS are better. Moreover, the convergence performance of the OLPDHS algorithm is worse than that of the ITHS algorithm for two functions but better than that of the others. The OLPDHS algorithm outperforms the comparison algorithms in terms of convergence for over 10 out of 15 functions. For six functions, the OLPDHS algorithm obtains the best final accuracy and shows better convergence speed.

The statistical significance of the difference between the means of OLPDHS and each of the other algorithms is calculated. Table 8 shows the t-test results (D = 10), where '+' indicates that the t value is significant at the 5% level and 'NA' stands for not applicable, covering cases in which the two algorithms achieve the same accuracy.
The OLPDHS algorithm performs better than all other algorithms in most cases in Table 8. OLPDHS is significantly better than HS, GHS, and IGHS in 13 functions. OLPDHS is significantly better than IHS in 12 functions, with three exceptions. OLPDHS is significantly better than SAHS, IGHS, and EHS in 11 functions, with four exceptions, and significantly better than NDHS in 11 functions, with four exceptions. OLPDHS performs significantly better than ITHS except on six functions. In short, the OLPDHS algorithm has good performance.
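As a side note, a two-sample significance test of this kind can be computed as in the following sketch, assuming 30 independent final results per algorithm; the sample arrays are illustrative placeholders, not the data behind Table 8.

```python
from scipy import stats
import numpy as np

# Placeholder samples: 30 independent final objective values per algorithm
# (illustrative random data, not the results reported in Table 8).
rng = np.random.default_rng(0)
olpdhs_results = rng.normal(loc=1e-8, scale=1e-9, size=30)
other_results = rng.normal(loc=1e-3, scale=1e-4, size=30)

# Two-sample t-test on the means; mark '+' if significant at the 5% level.
t_stat, p_value = stats.ttest_ind(olpdhs_results, other_results, equal_var=False)
mark = '+' if p_value < 0.05 else ''
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant: {mark}")
```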
4.3. Simulation Analysis of Radar Optimal Deployment Based on OLPDHS Algorithm
4.3.1. The Mathematical Model of Radar Optimal Deployment
Assuming that the defense responsibility area is known, the space covered by radar is divided into warning space and core space according to the importance of the detection space. Warning space refers to the space in which the detection probability of the radar is not less than a constant (e.g., 0.5). Core space refers to the space in which the detection probability of the radar is not less than a larger constant (e.g., 0.8). Obviously, the core space is contained in the warning space. There are N available radars, and the detection range of each radar is different. Under the condition of satisfying the continuity and tightness of the radar coverage space, how to deploy the radars reasonably and maximize the warning space and core space detected by the radars is the problem to be solved in radar optimal deployment.
(1) The Principle of Optimal Warning Space Coverage. Optimal warning space coverage means that the ratio between the detection space of a radar system with detection probability not less than a constant (e.g., 0.5) and the required warning space reaches its maximum, which is usually expressed by the warning space coverage rate in the following equation, where the terms are the detection space of the i-th radar with detection probability not less than the constant (e.g., 0.5), the given warning space, the detection space of all radars with detection probability not less than the constant (e.g., 0.5) within the given warning space, and the volume of a space region. The larger the detected volume is, the larger the warning space coverage rate is.
(2) The Principle of Optimal Core Space Coverage. Optimal core space coverage means that the ratio between the detection space of a radar system with detection probability not less than a constant (e.g., 0.8) and the required core space reaches its maximum, which is usually expressed by the core space coverage rate in the following equation, where the terms are the detection space of the i-th radar with detection probability not less than the constant (e.g., 0.8), the given core space, and the detection space of all radars with detection probability not less than the constant (e.g., 0.8) within the given core space. The larger the detected volume is, the larger the core space coverage rate is.
Obviously, radar optimal deployment is a multiobjective optimization problem; that is, among numerous radar deployment schemes, an optimal scheme is sought that optimizes the coverage of the warning space and the core space simultaneously. However, under the condition of limited radar resources, it is impossible to maximize the warning space coverage rate and the core space coverage rate at the same time, so a way must be found to balance the two coverage rates such that the detected warning space covers the given warning space as much as possible and the detected core space covers the given core space as much as possible. Generally, the multiobjective optimization problem of radar deployment can be transformed into a single-objective optimization problem by the weighting method. The problem of radar optimal deployment can then be expressed as follows: for a radar system composed of N radars, the given warning space is A, the core space and the deployable regions of the radars are given, and the optimal deployment scheme can be expressed by the following mathematical model of radar optimal deployment:
To the best of our knowledge, this is the first time that such a mathematical model has been established from the perspective of spatial volume, instead of superimposing several altitude layers with weight coefficients [10].
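To make the weighted single-objective model above concrete, the following Python sketch evaluates the two coverage rates on a voxel grid and combines them with weights. The detection probability function, the weights w1 and w2, and all names are illustrative assumptions, not the exact formulation of the paper's model equations.

```python
import numpy as np

def coverage_fitness(radar_positions, detect_prob, warning_mask, core_mask,
                     w1=0.5, w2=0.5, p_warn=0.5, p_core=0.8):
    """Weighted single-objective fitness combining warning and core coverage
    rates on a voxel grid. `detect_prob(positions)` is assumed to return the
    per-voxel detection probability of the whole radar network; `warning_mask`
    and `core_mask` are boolean arrays marking the given warning space A and
    the given core space. Weights and thresholds are illustrative."""
    prob = detect_prob(radar_positions)                       # per-voxel probability
    warn_cov = np.logical_and(prob >= p_warn, warning_mask).sum() / warning_mask.sum()
    core_cov = np.logical_and(prob >= p_core, core_mask).sum() / core_mask.sum()
    return w1 * warn_cov + w2 * core_cov                      # value to maximize
```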
4.3.2. Simulation Settings
The OLPDHS algorithm is analyzed by simulation to solve the problem of radar optimal deployment. In order to verify that the proposed OLPDHS algorithm can effectively solve this problem, the simulation area is an area of the Shandong Peninsula, with a longitude interval of E120°4′–E121°25′ and a latitude interval of N36°7′–N37°42′; the topographic data are downloaded from the Geospatial Data Cloud website as an elevation packet with 90 m accuracy. The data packet is converted into point cloud data in XYZ format with an accuracy of 100 m using Global Mapper 14 software, and its range is 108.2 km × 81 km. To further simplify the calculation, this region is gridded with a cell size of 200 m × 200 m. The vertex in the lower left corner of the region is set as the origin of coordinates, and the X, Y, and Z axes constitute a right-handed coordinate system. The warning space region and the core space region are specified within this area. Five radars, denoted R1, R2, R3, R4, and R5, are to be optimally deployed; their parameter settings are shown in Tables 9 to 13, respectively. RTP is the radar transmitting power, NB the noise bandwidth, BC the Boltzmann constant, AMLDG the antenna main lobe direction gain, MDSNR the minimum detectable signal-to-noise ratio, MBEA the main beam elevation angle, TN the thermal noise, RCS the radar cross section, and HWW the half-wave width. The parameters of OLPDHS are set as follows: HMS = 20, HMCR = 0.99, PAR = 0.5, and G = 600. The test is performed on a computer with an Intel(R) Core(TM) processor at 2.60 GHz, 8 GB memory, and a 64-bit operating system.
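The grid construction described above might look like the following sketch, assuming the point cloud has already been loaded as an array of (x, y, z) samples; the per-cell aggregation choice is an illustrative assumption, not the paper's exact preprocessing.

```python
import numpy as np

# Region of 108.2 km x 81 km gridded with 200 m x 200 m cells; the lower-left
# vertex is the coordinate origin of a right-handed XYZ system.
cell = 200.0
x_edges = np.arange(0.0, 108200.0 + cell, cell)
y_edges = np.arange(0.0, 81000.0 + cell, cell)

def grid_elevation(points):
    """Bin an XYZ point cloud (N x 3 array) into the 200 m grid and take the
    maximum elevation per cell as a conservative terrain height (illustrative)."""
    grid = np.zeros((y_edges.size - 1, x_edges.size - 1))
    ix = np.clip(np.digitize(points[:, 0], x_edges) - 1, 0, grid.shape[1] - 1)
    iy = np.clip(np.digitize(points[:, 1], y_edges) - 1, 0, grid.shape[0] - 1)
    np.maximum.at(grid, (iy, ix), points[:, 2])
    return grid
```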
4.3.3. Simulation Analysis
The OLPDHS algorithm is used to place the five radar stations. Twenty positions are randomly generated to form a set, and each position contains the positions of the five radars. Method 1 of the opposition-based learning strategy is used to generate the opposite of the first position. In this case, the objective function depends on the five radar parameter sets and the five radar positions, and its value at these two positions can be obtained using the method for calculating the radar detection range volume under occlusion and interference described above. The position with the larger function value is put into HM; then Method 2 of the opposition-based learning strategies is used to produce the opposite of the next position, the fitness of equation (28) at the two positions is compared, and the position with the larger fitness value is put into HM. The 20 positions are updated in turn, so that finally there are 20 positions in HM. A new position is then obtained by harmony memory consideration and pitch adjustment within the internal search space of HM with a probability of 0.99, or by random selection in the external search space with a probability of 0.01. If the value of equation (30) at the new position is greater than that at the original position, the new position is put into HM and the original position is deleted from HM. In this way, the 20 positions in HM are updated. The above process is repeated 600 times to obtain the best solution. The run took 34446.308 seconds and invoked the fitness function 1021 times. The calculation results are shown in Table 14.
From Table 14, the optimal fitness value is 0.4293, and the corresponding radar coordinates are (54200, 30400), (34200, 30800), (38000, 48600), (63400, 50600), and (67200, 33600). It can be seen that the radar optimal deployment algorithm based on OLPDHS can obtain a relatively optimal radar deployment scheme under the premise of a known terrain range, core area, warning area, and radar types, although it is not necessarily the globally optimal scheme. In the 600 iterations, the fitness value is updated 13 times. In the initial iterations, the updates are fast and the optimal value changes greatly, which reflects the strong searching ability of the OLPDHS algorithm; in the later iterations, the updates are slow and the optimal value tends to flatten out, indicating that the position searched by the OLPDHS algorithm is near the optimal solution. The relationship between the number of iterations and the optimal fitness value is shown in Figure 9, where the abscissa represents the number of iterations (unit: generation) and the ordinate represents the corresponding optimal fitness value (dimensionless). The relationship between the optimal fitness value and the calculation time is shown in Figure 10, where the abscissa represents the optimal fitness value (dimensionless) and the ordinate represents the time at which that fitness value was reached (unit: second).


As can be seen from Figure 9, over the whole iteration process, the speed of approaching the optimal value is fast in the early stage and gradually decreases in the later stage until it flattens out. The algorithm approaches the optimal value in about 560 iterations, which is consistent with the convergence process analyzed above. It can be seen from Figure 10 that the optimal value changes significantly in the early stage; after about 32,000 seconds, the optimal fitness value no longer changes with time, indicating that the algorithm can find the optimal value under the current operating environment and parameter settings.
As shown in Table 14, the results of the 49th and 50th iterations differ greatly, reflecting the strong searching ability of the harmony algorithm in the early stage, which is related to the population adopting the opposition-based learning strategy. From the 180th generation to the 293rd generation, the optimized results remain the same, which shows that the algorithm falls into a local optimum. In the 293rd iteration, however, it jumps out of the local optimum to a new search area, which is mainly associated with the parameter adjustment strategy of the OLPDHS algorithm. In the subsequent evolution process, the fitness value is updated roughly every 20 generations until the 423rd generation. The update rate is slow in the late stage of the algorithm, which is close to the optimal value.
In order to observe the optimization process of the OLPDHS algorithm more intuitively, the radar deployment renderings under the different iterations in Table 14 are drawn, as shown in Figures 11 to 17. As can be seen from these figures, the positions of the radars change from dispersed to aggregated, the radar deployment positions change from being seriously affected by terrain occlusion to being only generally affected, and the emphasis of the deployment effect shifts from the warning area to the core area, with each iteration getting closer to the target to be achieved.

Over the 600 iterations, the effect diagram of the optimal radar deployment is shown in Figure 18. From Figure 18, R1, R3, R4, and R5 are deployed within the core region, while R1, R2, R3, R4, and R5 are all arranged within the warning region. It can also be seen from the figure that each radar is blocked by the terrain to some extent, reflecting the effect of the radars complementing each other's blind zones and the intention of key defense in the core region.

The disadvantages are as follows: first, the calculation time of the algorithm is long, which is related to the cell size of the terrain grid; the smaller the grid, the greater the calculation amount, and vice versa. Second, the calculation accuracy is also related to the size of the terrain grid cells; the larger the grid, the lower the calculation accuracy, and relatively optimal position points are easily missed when searching for the optimal radar deployment position. In addition, when a small mountain is encountered, skipping may occur when calculating the mountain occlusion effect, so the occlusion effect of the terrain cannot be fully reflected.
5. Conclusion
In this paper, the characteristics of the harmony search algorithm are analyzed in comparison with other heuristic algorithms. Then, the exploration performance of the HS algorithm is analyzed, and the relationship between exploration and the key control parameters over a general interval is deduced from a mathematical point of view. Furthermore, sufficient conditions for the convergence of the algorithm are proved. On the basis of this analysis and derivation, the OLPDHS algorithm is proposed. A large number of simulation experiments are carried out to test the performance of the OLPDHS algorithm, and the selection ranges of its related parameters are given by testing functions with different properties and analyzing the results. Finally, the OLPDHS algorithm is applied to multiradar optimal placement based on maximal spatial detection coverage, which provides a good way for decision makers to choose among schemes and solutions. Meanwhile, it also provides a basis for decisions in control systems and broadens the design ideas of radar network systems and wireless communication. However, because this example involves terrain data, powerful hardware is required to process the terrain data and calculate the volume of the detection space at the possible deployment points of each radar, so the algorithm takes a long time under the hardware conditions of this paper. How to quickly and accurately calculate the detection space volume of a radar at a deployment position will be studied in future work.
Data Availability
The data used to support the study are included in the paper.
Conflicts of Interest
The authors declare that there are no conflicts of interest.