Research Article  Open Access
Orthogonal Genetic Algorithm for Planar Thinned Array Designs
Abstract
An orthogonal genetic algorithm (OGA) is applied to optimize planar thinned arrays for a minimum peak sidelobe level. The method is a genetic algorithm based on orthogonal design: a crossover operator formed by an orthogonal array and factor analysis is employed to enhance the genetic algorithm for optimization. To evaluate the performance of the OGA, 20×10-element planar thinned arrays have been designed to minimize the peak sidelobe level. The optimization results obtained by the OGA are better than previously published results.
1. Introduction
Aperiodic thinned arrays are of great practical importance because they reduce array cost, weight, and power consumption, so they are a very active topic in the electromagnetics (EM) community [1–7]. However, such advantages usually come at the cost of higher sidelobe levels and lower gain compared with the filled array. In the past, several methods [1, 2] were employed to overcome these drawbacks; however, the synthesis problem is complex and cannot be well solved with them. In the 1990s, cyclic difference sets (CDS) were applied to design massively thinned arrays [3]. In recent years, many global optimization algorithms have been introduced to design thinned arrays, such as the genetic algorithm (GA) [4, 5], a hybrid approach based on the GA and difference sets [6], and ant colony optimization (ACO) [7]. These evolutionary algorithms have been used to arrive at thinned arrays featuring a specified peak sidelobe level (PSLL).
Orthogonal design, an experimental design method, has been applied to enhance evolutionary algorithms for optimization [8–11]. Zhang and Leung [8] proposed an orthogonal genetic algorithm by applying orthogonal design to enhance the crossover operator of the genetic algorithm for multimedia multicast routing problems. To solve multidimensional knapsack problems, Li et al. [11] presented a new genetic algorithm in which orthogonal design with factor analysis is incorporated into the genetic algorithm. Numerical results in [8, 11] demonstrate that these methods are efficient and effective for solving 0-1 integer programming problems.
In this paper, an orthogonal genetic algorithm (OGA) is utilized to optimize the planar thinned array with a minimum PSLL. The method is a genetic algorithm based on orthogonal design: a crossover operator formed by an orthogonal array and factor analysis is employed to enhance the genetic algorithm for optimization. The optimization of a thinned array can be formulated as a 0-1 integer optimization problem, and the OGA adopts a binary code, so it is very well suited to solving 0-1 integer programming problems. 20×10-element planar thinned arrays are designed to demonstrate the effectiveness of the OGA. Numerical results show that the OGA yields better results than those published previously.
2. Orthogonal Experiment Design with Factor Analysis
In this section, we briefly introduce the concept of experimental design methods. For more details, the readers can refer to [8, 9]. To aid explanation, we consider a simple example. The yield of a chemical product depends on three variables: temperature, time, and amount of catalyst at the values shown in Table 1. The temperature, time, and amount of catalyst are called the factors of the experiment. Each factor has two possible values, and we say that each factor has two levels.

To find the best combination of levels for maximum yield, we can do one experiment for every combination and then select the best one. In the above example, there are 2 × 2 × 2 = 8 combinations, and hence 8 experiments. In general, when there are N factors at Q levels, the number of combinations is Q^N. When N and Q are small, it may be feasible to test all the combinations; when N and Q are large, it may not be possible. Therefore, it is desirable to sample a small but representative set of combinations for experimentation. The orthogonal design was developed for this purpose.
The orthogonal design provides a series of orthogonal arrays for different N and Q. We let L_M(Q^N) denote an orthogonal array for N factors at Q levels, where L stands for Latin square and M is the number of combinations of levels. For convenience, we write L_M(Q^N) = [a_ij] (M rows, N columns), where the j-th factor in the i-th combination has level a_ij and a_ij ∈ {1, 2, …, Q}. The following is an orthogonal array L_4(2^3):

L_4(2^3) =
  1 1 1
  1 2 2
  2 1 2
  2 2 1    (1)
In L_4(2^3), there are three factors, two levels per factor, and four combinations of levels. In the first combination, the three factors have respective levels 1, 1, 1; in the second combination, respective levels 1, 2, 2; and so forth. Based on L_4(2^3), we thus obtain a sample of 4 representative combinations out of the 2^3 = 8 possible combinations of levels; these four combinations to be tested are shown in Table 2.

Two-level orthogonal arrays are used in this study. There are N factors, where N is the number of design variables, and each factor has two levels. To establish an orthogonal array of N factors with two levels, let L_M(2^(M−1)) denote an array with M − 1 columns and M rows (individual experiments), where L stands for Latin square, M = 2^J for a positive integer J, and M − 1 ≥ N. More details about generating a two-level orthogonal array are available in [8, 9].
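As a concrete sketch of this construction (not the authors' code; the function name and the row ordering are our own choices, and a valid array may list its rows in a different order than the tables), a two-level orthogonal array can be generated with the standard parity construction:

```python
import numpy as np

def two_level_oa(n_factors):
    """Build a two-level orthogonal array L_M(2^(M-1)) with M = 2^J rows,
    where J is the smallest integer such that M - 1 >= n_factors.
    Levels are coded 1 and 2, as in the text."""
    J = 1
    while 2 ** J - 1 < n_factors:
        J += 1
    M = 2 ** J
    # Column c (c = 1 .. M-1) holds the parity of (row index AND c);
    # distinct non-zero masks give mutually orthogonal, balanced columns.
    oa = np.array([[bin(i & c).count("1") % 2 for c in range(1, M)]
                   for i in range(M)])
    return oa[:, :n_factors] + 1  # keep the first n_factors columns

oa4 = two_level_oa(3)   # an L_4(2^3): 4 rows, 3 factors
oa8 = two_level_oa(7)   # an L_8(2^7): 8 rows, 7 factors
```

In every pair of columns, each of the four level pairs (1,1), (1,2), (2,1), (2,2) occurs equally often, which is the defining orthogonality property.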
After evaluation of the M combinations, the summarized data are analyzed using factor analysis. Factor analysis evaluates the effects of individual factors on the objective (or fitness) function, ranks the most effective factors, and determines the better level of each factor so that the function is optimized. In this study, the factor analysis in [10] is used. Let y_i denote the function value of combination i, where i = 1, 2, …, M. Define the main effect of factor j with level k as S_jk, where j = 1, …, N and k = 1, 2:

S_jk = Σ_{i=1}^{M} y_i · z_ijk,    (2)

where z_ijk = 1 if the level of factor j in combination i is k, and z_ijk = 0 otherwise. When the objective is to be minimized, level 1 of factor j makes a better contribution to the function than level 2 does if S_j1 < S_j2; if S_j1 > S_j2, level 2 is better. On the contrary, if the objective is to be maximized, level 1 (level 2) is better when S_j1 > S_j2 (S_j1 < S_j2). If S_j1 = S_j2, the two levels contribute equally. After the better of the two levels of each factor is determined, a reasoned combination consisting of the factors at their better levels is easily derived.
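The main-effect computation and level selection just described can be sketched as follows (a minimal illustration; the helper names are ours):

```python
import numpy as np

def main_effects(oa, y):
    """S[j][k-1] is the main effect of factor j at level k: the sum of the
    objective values y_i over all combinations i in which factor j is at
    level k (levels coded 1 and 2 in the array oa)."""
    M, N = oa.shape
    S = np.zeros((N, 2))
    for j in range(N):
        for k in (1, 2):
            S[j, k - 1] = y[oa[:, j] == k].sum()
    return S

def better_levels(S, minimize=True):
    """For minimization the smaller main effect wins; for maximization
    the larger. A tie means the two levels contribute equally."""
    if minimize:
        return np.where(S[:, 0] <= S[:, 1], 1, 2)
    return np.where(S[:, 0] >= S[:, 1], 1, 2)
```

For example, with the L_4(2^3) above and objective values y = (1, 2, 3, 4), factor 1 has main effects S_11 = 1 + 2 = 3 (rows where it is at level 1) and S_12 = 3 + 4 = 7, so level 1 is better for minimization.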
3. Orthogonal Genetic Algorithm
In this section, the orthogonal genetic algorithm is introduced.
A crossover operator formed by the orthogonal array and the factor analysis is employed to enhance the performance of the genetic algorithm. The application of orthogonal experimental design and factor analysis in the crossover operator is first illustrated by a simple example with a test function f(x) of seven binary variables x = (x_1, …, x_7), which is to be minimized.
First, two individuals x_1 and x_2, each consisting of seven factors corresponding to the variables of f, are randomly chosen to execute the matrix experiments of an orthogonal array. The values of the seven factors in individual x_1, regarded as level 1, are 1, 1, 1, 1, 0, 0, and 0; those in individual x_2, regarded as level 2, are 0, 0, 0, 0, 1, 1, and 1. Therefore, x_1 = (1, 1, 1, 1, 0, 0, 0) and x_2 = (0, 0, 0, 0, 1, 1, 1). The orthogonal array L_8(2^7) is chosen because each individual has seven variables (factors). The seven variables in individuals x_1 and x_2 corresponding to the factors A, B, C, D, E, F, and G, respectively, are defined in Table 3.

Next, as shown in Table 4, the values of level 1 and level 2 are assigned to the level cells of the orthogonal array L_8(2^7). The function value of each row (offspring individual) is then calculated, and formula (2) is applied in Table 4 to determine the optimum level for each factor. The function value of the optimum individual found in this way is zero, which is much better than those of its parent individuals x_1 and x_2. Evidently, instead of evaluating all 2^7 = 128 combinations of factor levels, the orthogonal design with factor analysis offers an efficient way to find the optimum individual by executing only eight experiments.
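The whole crossover example can be reproduced in a few lines. The paper's actual test function is not reproduced above, so we substitute a simple stand-in, f(x) = x_1 + … + x_7, whose minimum over {0, 1}^7 is zero, consistent with the optimum value reported; all names are ours, and the row order of the array may differ from Table 4.

```python
import numpy as np

# Stand-in objective (the paper's test function is not reproduced here):
# f(x) = sum of bits, minimized at the all-zeros vector with f = 0.
f = lambda x: int(np.sum(x))

# Parents: level 1 takes its value from x1, level 2 from x2.
x1 = np.array([1, 1, 1, 1, 0, 0, 0])
x2 = np.array([0, 0, 0, 0, 1, 1, 1])

# An L_8(2^7) via the standard parity construction.
oa = np.array([[bin(i & c).count("1") % 2 for c in range(1, 8)]
               for i in range(8)]) + 1

# Eight offspring: pick each factor from x1 (level 1) or x2 (level 2).
offspring = np.where(oa == 1, x1, x2)
y = np.array([f(row) for row in offspring])

# Factor analysis: main effect of each factor at each level; keep the
# level with the smaller effect, since we are minimizing.
S1 = np.array([y[oa[:, j] == 1].sum() for j in range(7)])
S2 = np.array([y[oa[:, j] == 2].sum() for j in range(7)])
best = np.where(S1 <= S2, x1, x2)
print(best, f(best))  # -> [0 0 0 0 0 0 0] 0
```

Under this stand-in objective the reasoned individual is the all-zeros vector with f = 0, found after only eight evaluations.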

The pseudocode of the OGA is given in Algorithm 1.
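Since Algorithm 1 itself is not reproduced here, the following is one plausible sketch of the OGA loop, combining standard tournament selection, elitism, and bit-flip mutation with the orthogonal crossover described above; the parameter defaults and helper names are our assumptions, not the paper's settings.

```python
import numpy as np

def oga(fitness, n_bits, pop_size=30, p_c=0.8, p_m=0.02, max_gen=100, rng=None):
    """Sketch of an orthogonal GA for 0-1 strings (minimization)."""
    rng = np.random.default_rng(rng)
    # Two-level orthogonal array with n_bits factors (parity construction).
    J = 1
    while 2 ** J - 1 < n_bits:
        J += 1
    M = 2 ** J
    oa = np.array([[bin(i & c).count("1") % 2 for c in range(1, M)]
                   for i in range(M)])[:, :n_bits] + 1

    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(max_gen):
        fit = np.array([fitness(x) for x in pop])
        new = [pop[fit.argmin()].copy()]           # elitism: keep the best
        while len(new) < pop_size:
            # binary tournament selection of two parents
            i = rng.integers(0, pop_size, 2)
            j = rng.integers(0, pop_size, 2)
            p1 = pop[i[fit[i].argmin()]]
            p2 = pop[j[fit[j].argmin()]]
            if rng.random() < p_c:                 # orthogonal crossover
                off = np.where(oa == 1, p1, p2)
                y = np.array([fitness(r) for r in off])
                S1 = np.array([y[oa[:, k] == 1].sum() for k in range(n_bits)])
                S2 = np.array([y[oa[:, k] == 2].sum() for k in range(n_bits)])
                child = np.where(S1 <= S2, p1, p2)
            else:
                child = p1.copy()
            flip = rng.random(n_bits) < p_m        # bit-flip mutation
            child = np.where(flip, 1 - child, child)
            new.append(child)
        pop = np.array(new)
    fit = np.array([fitness(x) for x in pop])
    return pop[fit.argmin()], fit.min()
```

On the stand-in objective f(x) = Σ x_i, this sketch quickly drives the population toward the all-zeros optimum.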

4. Problem Formulation
Consider a planar thinned array consisting of N_x × N_y identical, half-wavelength-spaced elements, symmetric about the x-axis and the y-axis, and assume that the array elements are all isotropic. The radiation pattern of the array is calculated by the standard expression of the array factor

AF(u, v) = 4 Σ_{m=1}^{N_x/2} Σ_{n=1}^{N_y/2} a_mn cos[(2m − 1)(π/2)u] cos[(2n − 1)(π/2)v],    (3)

where u = sin θ cos φ and v = sin θ sin φ are the direction parameters, and a_mn = 1 or a_mn = 0 if the element at location (m, n) is "ON" or "OFF," respectively.
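The array factor can be evaluated directly. The sketch below assumes the symmetric cosine form given above, with the 0/1 weights of one quadrant mirrored about both axes; the function name is our own:

```python
import numpy as np

def array_factor(a, u, v):
    """Array factor of a planar array symmetric about both axes with
    half-wavelength spacing. `a` holds the 0/1 weights of one quadrant
    (shape Nx/2 x Ny/2); u = sin(theta)cos(phi), v = sin(theta)sin(phi)."""
    m = np.arange(1, a.shape[0] + 1)             # 1 .. Nx/2
    n = np.arange(1, a.shape[1] + 1)             # 1 .. Ny/2
    cu = np.cos((2 * m - 1) * np.pi * u / 2)     # x-direction phase terms
    cv = np.cos((2 * n - 1) * np.pi * v / 2)     # y-direction phase terms
    return 4 * cu @ a @ cv                       # factor 4 from mirroring

# Broadside (u = v = 0): every ON element adds coherently.
a = np.ones((10, 5))                # fully filled 20 x 10 array, one quadrant
print(array_factor(a, 0.0, 0.0))    # -> 200.0, the total element count
```

At broadside every cosine term equals one, so the array factor reduces to the number of ON elements, a quick sanity check of the implementation.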
The number of radiating elements in a thinned array depends on the array fill factor, which represents the fraction of turned-ON elements relative to the total number of elements; its value ranges between zero and one. The array fill factor is expressed as f = N_ON/N, where N is the total number of elements. Therefore, the array thinning factor (i.e., the fraction of turned-OFF elements relative to the total number of elements) can be written as

t = 1 − f = N_OFF/N.    (4)
The peak sidelobe level of the planar thinned array can be formulated as

PSLL = max_{(u,v) ∈ S} 20 log10( |AF(u, v)| / |AF_max| ),    (5)

where S denotes the sidelobe region and AF_max is the peak of the main beam.
To suppress the PSLL in all φ planes, the fitness function can be defined as

Fitness = PSLL,    (6)

with the maximum in (5) taken over the sidelobe region of the entire visible space.
In [4], the fitness function is the sum of the maximum PSLLs in the φ = 0° and φ = 90° planes, that is,

Fitness = PSLL(φ = 0°) + PSLL(φ = 90°).    (7)
According to [7], the fitness function is written as

Fitness = max{PSLL(φ = 0°), PSLL(φ = 90°)},    (8)

where PSLL(φ) denotes the peak sidelobe level of the pattern cut at azimuth φ.
Hence, planar thinned array synthesis can be modeled as the optimization problem

minimize Fitness subject to a_mn ∈ {0, 1},    (9)

which is solved by means of the OGA to find the optimal array geometry, that is, the optimal amplitude weights a_mn.
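As a rough sketch of how the PSLL and the plane-cut fitness values of (7) and (8) might be computed numerically (the sampling grid, the crude fixed main-lobe exclusion, and all names are our assumptions, not the paper's implementation):

```python
import numpy as np

def array_factor(a, u, v):
    # Symmetric planar array, half-wavelength spacing, one quadrant in `a`.
    m = np.arange(1, a.shape[0] + 1)
    n = np.arange(1, a.shape[1] + 1)
    return 4 * np.cos((2*m - 1)*np.pi*u/2) @ a @ np.cos((2*n - 1)*np.pi*v/2)

def psll_cut(a, phi_deg, n_samples=801, exclusion=0.2):
    """PSLL (dB) of the pattern cut at azimuth phi. The sidelobe region is
    taken, crudely, as |sin(theta)| > exclusion -- a simplification of the
    sidelobe region S in the text."""
    phi = np.radians(phi_deg)
    s = np.linspace(-1.0, 1.0, n_samples)        # sin(theta) samples
    af = np.abs([array_factor(a, si*np.cos(phi), si*np.sin(phi)) for si in s])
    return 20 * np.log10(af[np.abs(s) > exclusion].max() / af.max())

a = np.ones((10, 5))                            # filled 20 x 10 (one quadrant)
F7 = psll_cut(a, 0.0) + psll_cut(a, 90.0)       # Eq. (7)-style fitness
F8 = max(psll_cut(a, 0.0), psll_cut(a, 90.0))   # Eq. (8)-style fitness
```

For the filled array, the shorter 10-element cut (φ = 90°) has higher sidelobes than the 20-element cut (φ = 0°), as expected from uniform-array theory.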
5. Numerical Results and Comparisons
In order to assess the capabilities of the OGA, comparisons with the GA [4], MGA [5], and ACO [7] in thinning 20×10-element planar arrays are performed in this section. The same population size, crossover probability, mutation probability, and maximum number of generations are used throughout this paper. Owing to the stochastic nature of the algorithm, it is run 20 times independently for each problem, and the best results are presented in this section.
In the first case, the planar thinned array was optimized by the GA [4] and the MGA [5], with equation (7) as the fitness function. The fitness values of the optimal solutions found by the GA [4] and the MGA [5] were −39.83 dB and −45.456 dB, respectively. Here, the OGA is employed to optimize the same planar thinned array. To maintain a fair comparison with the published results [4, 5], the array is constrained to be 54% filled, and the aperture size is the same as in [4, 5]. Figure 1 shows the radiation patterns and the corresponding geometry of the array optimized by the OGA. The best fitness value is −51.18 dB (PSLL = −26.09 dB in the φ = 0° plane and PSLL = −25.09 dB in the φ = 90° plane). As shown in Table 5, the optimal result by the OGA is 11.35 dB and 5.724 dB lower than that of the best array in [4] and [5], respectively. The optimal array by the OGA has a directivity of 24.66 dB; in comparison, the array optimized by the GA has a directivity of 24.64 dB and the array optimized by the MGA a directivity of 25.07 dB, so the directivity of the OGA array is 0.41 dB lower than that of the MGA array. The convergence curve of the algorithm is plotted in Figure 2, and Figure 3 reports the resulting fitness values of the 20 runs. With the same maximum number of iterations, the fitness values obtained by the OGA vary from −47.93 dB to −51.18 dB; even the worst-case fitness value found by the OGA is lower than those found by the GA [4] and the MGA [5].
Another optimization problem is the reduction of the PSLL in all planes. To maintain a fair comparison with the published result in [5], an array constrained to be 50% filled is optimized by the OGA, with equation (6) as the fitness function. Figure 4 shows the three-dimensional radiation pattern of the optimal array, and Figure 5 shows the radiation-pattern cuts in three planes together with the corresponding geometry. The array pattern in Figure 4 has a directivity of 24.82 dB with a maximum PSLL 19.44 dB below the main beam; compared with the results in [5], the peak PSLL is reduced by 0.6 dB.
The second thinned planar array with published results has the same geometry as before but a different fitness function, equation (8). In order to maintain a fair comparison with the published result [7], the thinning percentage of the array is not constrained. Figure 6 shows the radiation patterns and the corresponding geometry of the optimized array. The best OGA-synthesized array yields PSLL = −28.34 dB in the φ = 0° plane and PSLL = −26.59 dB in the φ = 90° plane, improving on the best results in the literature (PSLL = −25.76 dB and PSLL = −25.67 dB in the respective planes, reported in [7]) by about 2.58 dB and 0.92 dB. The thinning percentage of the optimal array is 42%, better than the 32% of the optimal array found by the ACO [7]. The optimal array by the OGA has a directivity of 25.44 dB, 0.55 dB lower than that of the array optimized by the ACO. As shown in Table 6, the results by the OGA are better than those by the ACO [7] in terms of both thinning percentage and PSLL. Figure 7 shows that the algorithm converges very quickly (by about the 11th iteration). Moreover, Figure 8 reveals that in 14 of the 20 independent trials the OGA finds the best result, and even the worst result found by the OGA equals the best result of the ACO [7].

6. Conclusion
In this paper, the orthogonal genetic algorithm is employed to design thinned arrays with a minimum PSLL. The method is a genetic algorithm based on orthogonal design. The performance of the OGA has been evaluated by synthesizing 20×10-element planar arrays. The experimental results show that the OGA yields lower PSLLs than the results reported in the literature.
Acknowledgment
This work was supported by the Fundamental Research Funds for the Central Universities (K50511020007).
References
[1] M. Skolnik, J. Sherman III, and F. Ogg, "Statistically designed density-tapered arrays," IEEE Transactions on Antennas and Propagation, vol. 12, pp. 408–417, 1964.
[2] A. Ishimaru and Y. S. Chen, "Thinning and broadbanding antenna arrays by unequal spacings," IEEE Transactions on Antennas and Propagation, vol. 13, no. 1, pp. 34–42, 1965.
[3] D. G. Leeper, "Isophoric arrays—massively thinned phased arrays with well-controlled sidelobes," IEEE Transactions on Antennas and Propagation, vol. 47, no. 4, pp. 1825–1835, 1999.
[4] R. L. Haupt, "Thinned arrays using genetic algorithms," IEEE Transactions on Antennas and Propagation, vol. 42, no. 7, pp. 993–999, 1994.
[5] K. Chen, X. Yun, Z. He, and C. Han, "Synthesis of sparse planar arrays using modified real genetic algorithm," IEEE Transactions on Antennas and Propagation, vol. 55, no. 4, pp. 1067–1073, 2007.
[6] S. Caorsi, A. Lommi, A. Massa, and M. Pastorino, "Peak sidelobe level reduction with a hybrid approach based on GAs and difference sets," IEEE Transactions on Antennas and Propagation, vol. 52, no. 4, pp. 1116–1121, 2004.
[7] Ó. Quevedo-Teruel and E. Rajo-Iglesias, "Ant colony optimization in thinned array synthesis with minimum sidelobe level," IEEE Antennas and Wireless Propagation Letters, vol. 5, no. 1, pp. 349–352, 2006.
[8] Q. F. Zhang and Y. W. Leung, "An orthogonal genetic algorithm for multimedia multicast routing," IEEE Transactions on Evolutionary Computation, vol. 3, no. 1, pp. 53–62, 1999.
[9] Y. W. Leung and Y. Wang, "An orthogonal genetic algorithm with quantization for global numerical optimization," IEEE Transactions on Evolutionary Computation, vol. 5, no. 1, pp. 41–53, 2001.
[10] S. Y. Ho, L. S. Shu, and J. H. Chen, "Intelligent evolutionary algorithms for large parameter optimization problems," IEEE Transactions on Evolutionary Computation, vol. 8, no. 6, pp. 522–541, 2004.
[11] H. Li, Y. C. Jiao, L. Zhang, and Z. W. Gu, "Genetic algorithm based on the orthogonal design for multidimensional knapsack problems," in Proceedings of the 2nd International Conference on Natural Computation (ICNC '06), vol. 4221 of Lecture Notes in Computer Science, pp. 696–705, Springer, New York, NY, USA, 2006.
Copyright
Copyright © 2012 Li Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.