Abstract

In this paper, a Taguchi method based on fitting and prediction is proposed to accelerate the optimization process in antenna array synthesis. The implementation procedure combines the normal Taguchi method and the curve fitting technique. A possible solution is determined by prediction based on fitting curves. Specifically, the fitting curves are obtained by using the dynamic points that are calculated and updated as the Taguchi method progresses and recorded in the response table necessarily produced in the procedure. Test functions are used to conduct some confirmation experiments, and the results verify the validity of the proposed method. To illustrate its good practicability, two linear antenna arrays, one with a null controlled pattern and one with a flat top pattern, are successfully optimized by using both the normal Taguchi method and the proposed one. Some comparisons and discussions of their results are given in the paper, which prove that the proposed method has better practicability, not only because it inherits the global optimization characteristics of the normal Taguchi method but also because it accelerates the convergence process.

1. Introduction

The Taguchi method, also called Taguchi's method, was proposed by Professor Taguchi in 1967. As a means of product quality control and optimization, it soon spread in its early years [1–5]. This method is a feasible way to avoid the full factorial experiment of testing all combinations of parameters, which means that excellent and stable products can be obtained with as few tests as possible, while the yield of good products is improved at the same time.

So far, many electromagnetic problems can also be solved with the Taguchi method. In 2007, Weng et al. detailed the flow of the Taguchi method in electromagnetic optimization and elaborated the mathematical principle of each part. They also presented examples of linear antenna array synthesis and proved the feasibility of the method [6]. Since then, the Taguchi method has been widely used in the electromagnetics community, such as for null realization in specific directions of array synthesis [7, 8], sidelobe optimization in circular phased arrays [9], array synthesis of conformal antennas [10], and performance optimization of elements and arrays [11–15]. We know that the pattern synthesis of linear arrays can also be achieved by several other optimization methods such as particle swarm optimization (PSO) [16], the genetic algorithm (GA) [17], convex optimization (CO) [18], and so on; therefore, comparisons between the Taguchi method and some other optimization algorithms are necessary. The comparisons in the literature indicate that the Taguchi method has a better performance than PSO and GA in some applications of pattern synthesis [6, 19]. The same conclusion can be drawn in [6] for antenna design and in [20] for the optimization of a magnetic hysteresis model. Some hybrid algorithms combined with the Taguchi method have also been studied [19, 21]. Even though the normal Taguchi algorithm performs well in many global optimization problems, we hope to further improve its convergence speed and make it more efficient.

In this paper, the Taguchi method is developed by combining curve fitting and optimal value prediction. Each parameter keeps changing during the iteration process, and the corresponding fitness values form a certain trajectory as the iteration proceeds. The trajectory of an undetermined parameter provides data support for the optimal value prediction. The predicted data can then accelerate the convergence if the value is better than that obtained by the normal Taguchi method. The details of the proposed method are introduced in Section 2, and some examples are presented and discussed in Section 3.

2. Fitting and Prediction-Based Taguchi Method

2.1. Main Flow of Taguchi Method

The Taguchi method is an iterative method, which means that every iteration has the same procedure and differs only in its initial data. At the beginning of the iteration, the electromagnetic problem must be formulated and the required quantities initialized. Meanwhile, an appropriate orthogonal array (OA) should be selected according to the number of parameters. The OA, which plays an essential role in the Taguchi method, is used to determine the control parameters so as to achieve the best results with only a few experimental tests. When an OA, also written as OA(N, k, s, t), is selected, it means that the optimization has at most k parameters (columns of the OA) with s levels each, and there are N experiments (rows of the OA) in each generation of the iteration. A strength of t = 2 means that when any 2 columns are picked from the OA, each different combination of levels occurs the same number of times [22]. When any k′ (k′ ≤ k) columns of the OA are selected to form a new subarray, it still has the property of t-tuple equal occurrence, which ensures a balanced and fair comparison during the experiments. This is another useful property of OAs for the Taguchi method, as mentioned in [6]. It indicates that different OAs still have the ability to perform global optimization. Therefore, the final inputs obtained with different OAs are, most of the time, points very close to each other in the solution space, with a few accidental exceptions that are left for further research. In this paper, unless otherwise stated, every comparison of curves is made under the basic condition of using the same OA, which avoids the effects caused by different OAs.
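As a small illustration (not taken from the paper), the following Python sketch checks the strength-2 property on the compact OA(9, 4, 3, 2); the paper itself uses larger arrays such as OA(27, 10, 3, 2) and OA(81, 40, 3, 2):

# Check the strength-2 property: every pair of columns contains each of the 3 x 3
# level combinations equally often (here: exactly once), so any column subset of an
# orthogonal array is again an orthogonal array of the same strength.
from itertools import combinations, product

OA_9_4_3_2 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def is_strength_2(oa, levels=3):
    for c1, c2 in combinations(range(len(oa[0])), 2):
        pairs = [(row[c1], row[c2]) for row in oa]
        expected = len(oa) // levels**2          # equal occurrence count of each pair
        if any(pairs.count(p) != expected for p in product(range(levels), repeat=2)):
            return False
    return True

print(is_strength_2(OA_9_4_3_2))                 # True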

After that, the levels of the parameters in the OA are transformed into actual values according to the upper and lower limits of each parameter, and the intervals are narrowed as the iteration progresses. Owing to this optimization mechanism, the optimization results are determined once the OA is selected and the upper and lower limits of the parameters are set, which means the Taguchi method has a nonstochastic property [23]. Because no random variable is employed in the optimization process, the results of the experiments are determined only by their initial inputs. That is to say, the process of the normal Taguchi method itself is nonstochastic and the results are repeatable with exactly the same values. However, if the experimental results are obtained by random processes, the optimization results become changeable.
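The following Python sketch shows one way the level-to-value mapping could be implemented, following the procedure described in [6]; the initial level difference of (max − min)/(s + 1), the midpoint as the first-iteration centre, and the shrinking of the interval by the converged factor are assumptions taken from that reference rather than details given here:

import numpy as np

def level_values(center, lo, hi, ld):
    # Map the three OA levels {1, 2, 3} to actual values around the current centre,
    # spaced by the level difference ld and clipped to the allowed range.
    return np.clip(center + np.array([-1.0, 0.0, 1.0]) * ld, lo, hi)

lo, hi, rr = -5.0, 5.0, 0.75        # search range and converged factor (RR in [6])
ld = (hi - lo) / (3 + 1)            # assumed initial level difference for s = 3 levels
center = (hi + lo) / 2.0            # first-iteration centre: midpoint of the range
for it in range(1, 6):
    values = level_values(center, lo, hi, ld)
    print(it, values)
    center = values[0]              # placeholder for the optimal value found in this iteration
    ld *= rr                        # the interval is narrowed as the iteration progresses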

After conducting all the tests in a certain iteration step, the Taguchi method can build a response table, which connects the parameter levels with the results of the experiments. A group of optimal level values can be identified from that response table and confirmed by an additional experiment. Then, the upper and lower limits of all parameters are adjusted, taking the latest optimal parameter values as the new reference, according to the convergence factor initialized at the beginning of the whole process. By updating these data, a new group of test parameters is prepared for the next iteration. Finally, the iteration terminates when the optimization goal is reached or the iteration arrives at its maximum step.
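A minimal Python sketch of the response-table step is given below; averaging the SNR per (parameter, level) pair and picking the level with the largest average SNR follow the procedure of [6], while the small OA(9, 4, 3, 2) and the fitness values are only illustrative so that the snippet runs:

import numpy as np

OA = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

def response_table(oa, snr, s=3):
    # Entry (l, k): average SNR (dB) of the experiments in which parameter k was at level l.
    return np.array([[snr[oa[:, k] == l].mean() for k in range(oa.shape[1])]
                     for l in range(s)])

fitness = np.array([3.2, 1.1, 2.5, 0.9, 2.0, 1.7, 2.8, 1.3, 0.6])  # illustrative values
snr = -20 * np.log10(fitness)           # assumed SNR convention, cf. equation (1)
table = response_table(OA, snr)
optimal_levels = table.argmax(axis=0)   # larger average SNR means a better level
print(optimal_levels)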

The main flow of the normal Taguchi method is exhibited in Figure 1(a). More details about the principle and data processing of the method can be found in [6]. The flow chart of the proposed method is shown in Figure 1(b). When the fitting and prediction mechanisms are activated, flow path 1 in the second chart ceases to work.

2.2. Strategy of Fitting and Prediction

There is an additional operation in our proposed Taguchi method relative to the normal one. After a response table is built, we need to construct a fitting curve with the data recorded during the construction of the response table and then forecast the next outcome with a better performance. Before this, we need to discuss some details of the signal-to-noise ratio (SNR) in the response table.

After all the tests in a certain iteration, the fitness values can be obtained. Then, as in [6], the SNR can be calculated by

η = −20 log10(Fitness) (dB).  (1)

There are N/s experiments in which a specified parameter is set to a specified value level, where N and s are as defined above for the OA(N, k, s, t). It indicates that the other parameters are in a dynamic state while the value of the specified parameter is fixed. That means the fitness value associated with the specified parameter value can be updated to a better one by using equation (1), under the condition that the influence of all the other parameters is taken into account as much as possible. All the updated fitness values are recorded, and their bottom outline can be easily extracted and recorded as well. Hence, when we use the recorded data for spline-based fitting operations, the results are worth considering because the parameters have been taken into account comprehensively. The predicted value of a parameter is located at the minimum position of its fitting curve.
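One possible way to record these dynamic points is sketched below in Python; the dictionary-based bookkeeping (keyed by the tested parameter values and keeping the smallest fitness seen so far) is an assumed implementation for illustration, not the paper's code:

best_fitness = {}   # tested parameter value -> smallest fitness observed so far

def update_outline(value, fitness):
    # Call once per experiment with the observed parameter's actual value; only the
    # best fitness for each tested value is kept, forming the bottom outline.
    key = round(value, 6)                      # merge numerically identical values
    best_fitness[key] = min(best_fitness.get(key, float("inf")), fitness)

def bottom_outline():
    # Return the recorded points sorted by parameter value, ready for the fitting step.
    return sorted(best_fitness.items())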

Let us consider a function of polynomial accumulation as an example (as shown in equation (2)), which aims to illustrate the process of the fitting operations:

f(x) = (1/2) Σ_{i=1}^{10} (x_i^4 − 16 x_i^2 + 5 x_i),  (2)

where x_i is the variable to be solved and i represents the serial number of the variables, which runs from 1 to 10. Through several steps of mathematical derivation, we can easily obtain that equation (2) has an approximate minimum value of −391.662 only when the solution, rounded to four decimal places, equals x_i = −2.9035 for every i.
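As a quick numerical cross-check of the polynomial-sum form written in (2), the following Python lines locate the stationary points of the one-dimensional term and recover the quoted minimum; the per-variable separability used here follows directly from that form:

import numpy as np

term = [0.5, 0.0, -8.0, 2.5, 0.0]               # coefficients of (x^4 - 16 x^2 + 5 x) / 2
roots = np.roots([4.0, 0.0, -32.0, 5.0]).real   # stationary points: 4 x^3 - 32 x + 5 = 0
values = np.polyval(term, roots)                # one-dimensional term at the stationary points
x_star = roots[np.argmin(values)]
print(round(float(x_star), 4), round(float(10 * values.min()), 3))   # ~ -2.9035  -391.662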

The advantage of the proposed method is its fast convergence rate, which benefits from the implementation of curve fitting and extreme point prediction. According to the optimization mechanism of the proposed method, as shown in Figure 1(b), two groups of results can be obtained in each iteration of the optimization. One is obtained by the routine optimal-seeking operation (called the routine results), that is, by making the optimal judgment through comparing the average signal-to-noise ratios. The other is obtained by prediction, i.e., the proposed operation. One parameter is selected for observation, and some concrete illustrations are shown in Figure 2. When the 7th iteration is completed, the updated fitness values of this parameter and the corresponding fitting curve are shown in Figure 2(a). The fitting curve is formed by the spline fit method using the smallest 9 points of the bottom outline, and the position of the smallest value is selected as the prediction point (marked with a star). We can see that the prediction point does not fall in the target interval around the optimal solution, and the error is even a little large. That is a failed prediction caused by the insufficient number of data points: with limited data there is no clear trend toward the target. In Figure 2(b), the fitting curve of the 8th iteration result is presented. It is obvious that the prediction point starts to aim at a new target that is close to −3 (approaching −2.9035). As the iteration proceeds, the predicted value gradually approaches the optimal value, as seen in Figures 2(c) and 2(d) referring to the 16th and 25th iterations, respectively. During the iterations, a prediction is regarded as successful when the recorded predicted value is better than the routine result in a certain iteration step; the optimization process is then accelerated, and the predicted result is used in the next iteration. Conversely, the prediction fails when the predicted value is not as good as the routine one. In this case, the routine result is used as the final optimal result of the current iteration to carry out the next generation of experiments. In summary, we always pick the relatively better of the two results so as to increase the convergence rate. When the prediction point does not fall in the target interval around the optimal solution, we keep the algorithm consistent with the operation flow of the normal Taguchi method to ensure that the optimal results can still be found and are at least similar to the routine ones.
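A compact Python sketch of the prediction step is given below; the CubicSpline fit, the dense grid search for the spline minimum, and the final comparison against the routine result are an assumed implementation of the mechanism described above, not the paper's code:

import numpy as np
from scipy.interpolate import CubicSpline

def predict(outline, n_points=9):
    # outline: list of (parameter value, best fitness) pairs with unique values.
    pts = sorted(outline, key=lambda p: p[1])[:n_points]   # smallest points of the outline
    pts = sorted(pts)                                      # spline needs increasing abscissae
    x = np.array([p[0] for p in pts])
    y = np.array([p[1] for p in pts])
    spline = CubicSpline(x, y)
    grid = np.linspace(x.min(), x.max(), 1001)
    return grid[np.argmin(spline(grid))]                   # abscissa of the spline minimum

def pick(fitness, routine_value, predicted_value):
    # Keep whichever candidate yields the smaller fitness, falling back to the routine one.
    return predicted_value if fitness(predicted_value) < fitness(routine_value) else routine_value

The extra fitness evaluation in pick corresponds to the one additional experiment per iteration that is used to verify the predicted value, as counted later in Section 3.2.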

The convergence curve of the fitness value is presented in Figure 3. Here, we focus on the convergence by comparing the two optimization processes of the normal Taguchi method and the proposed one. Throughout the optimization process, the proposed method always reaches a relatively better fitness value first. It means that the Taguchi method based on fitting and prediction can obtain satisfactory results in fewer iteration steps.

2.3. More Optimization Examples of Functions

Several other examples of seeking the optimal points of test functions are presented to prove the effectiveness of the proposed algorithm. The basic characteristics of the test functions are listed in Table 1, which gives the extreme value of each objective function and the optimum solution corresponding to that extreme value. Figure 4 depicts the comparisons of convergence between the normal Taguchi method and the proposed one based on fitting and prediction. Although the functions are of different types and have different search intervals, the optimization of all of them follows the same implementation procedure, which demonstrates the validity and robustness of the Taguchi method, including our proposed one. The comparison results also lead to the conclusion that the proposed Taguchi method has a better convergence in those optimization problems.

3. Design of Linear Antenna Arrays

Antenna pattern synthesis has received great attention in the electromagnetics community, and it is widely used in many applications of antenna arrays. In this section, two examples of antenna pattern synthesis of different categories are optimized and discussed. The former requires nulls in desired directions, which is necessary and beneficial for some smart antenna systems to shield interference signals. The latter has a flat top beam and allows only a minor fluctuation within its beam coverage. Such synthesized patterns are usually accomplished by using mathematical methods, such as the Schelkunoff polynomial method, the Fourier transform, and the Woodward–Lawson method. In this section, both the normal and the proposed Taguchi methods are used to optimize the two symmetrical linear arrays with 20 elements, and the comparisons are also given.

3.1. Synthesis of Null Controlled Pattern

Firstly, we draw a target pattern constraint with controlled nulls so as to shape the pattern line in the optimization, as shown with the blue dashed line in Figure 5(a). There are two nulls in the pattern with a low amplitude of −55 dB, ranging from 50° to 60° and from 120° to 130°. The required amplitude in the other directions is less than −40 dB, except for the main lobe within plus and minus 10° (centred at 90°). The main lobe also needs a half-power beam width (HPBW) of 7.4°. Besides, the OA(27, 10, 3, 2) used in this example is the same as that in [6], and it can also be found in the online library [24]. The converged value is set to 0.002, and the converged factor (RR in [6]) is set to 0.75. The fitness functions of the two methods are both set to be

Fitness = Σ_{m=0}^{N} W(θ_m),  (3)

where m is an integer in the interval [0, N] and represents the discrete number of the direction angle θ_m, which ranges from 0° to 180° in this example. The cost W(θ_m) is expressed as

W(θ_m) = 0, if L(θ_m) ≤ F(θ_m) ≤ U(θ_m);
W(θ_m) = |F(θ_m) − U(θ_m)|, if F(θ_m) > U(θ_m);
W(θ_m) = |F(θ_m) − L(θ_m)|, if F(θ_m) < L(θ_m),  (4)

where U(θ_m) and L(θ_m) denote the upper limit and the lower limit of the desired pattern at the direction θ_m, respectively, and F(θ_m) is the iterative pattern. W(θ_m) is a difference value that represents the cost at each direction. When the iterative pattern falls in the interval between L(θ_m) and U(θ_m), its cost is set to 0, because the current value at θ_m satisfies the requirement of the desired pattern. Otherwise, the cost is defined as the absolute value of the difference between the violated limit of the desired pattern and the iterative pattern, as shown in (4).
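A Python sketch of this fitness evaluation for the null-controlled example is given below; the half-wavelength element spacing, the amplitude-only symmetric excitation, and the 0.5° sampling step are assumptions made so that the snippet is self-contained, and the HPBW requirement is omitted for brevity:

import numpy as np

deg = np.arange(0.0, 180.5, 0.5)                      # sampled direction angles (degrees)
theta = np.radians(deg)

def pattern_db(amps):
    # Normalized power pattern (dB) of a symmetric 2 x 10-element array with d = lambda / 2.
    n = np.arange(1, 11)[:, None]
    af = np.sum(amps[:, None] * np.cos((2 * n - 1) * (np.pi / 2) * np.cos(theta)), axis=0)
    return 20 * np.log10(np.abs(af) / np.abs(af).max() + 1e-12)

upper = np.full_like(deg, -40.0)                      # sidelobe limit U(theta_m)
upper[(deg >= 80.0) & (deg <= 100.0)] = 0.0           # main lobe region is unconstrained above
upper[((deg >= 50.0) & (deg <= 60.0)) | ((deg >= 120.0) & (deg <= 130.0))] = -55.0  # nulls
lower = np.full_like(deg, -400.0)                     # no effective lower limit in this sketch

def fitness(amps):
    f = pattern_db(np.asarray(amps, dtype=float))
    cost = np.where(f > upper, f - upper, 0.0) + np.where(f < lower, lower - f, 0.0)
    return float(cost.sum())

print(fitness(np.ones(10)))                           # uniform excitation as a quick check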

The whole optimization process takes 23 iterations. The comparison of patterns in Figure 5(a) shows that the proposed method obtains a relatively better result, especially in the zoomed-in view of the null intervals. It can be clearly seen that our results within the null regions are obviously better than those of the original Taguchi method. The other indicators are similar: the HPBW values of the two methods are both 7.4°. The −40 dB beam widths are finally identified as 20.90° and 21.0°, and the maximum sidelobe levels are identified as −39.6 dB and −39.5 dB, corresponding to the normal Taguchi method and our proposed one, respectively.

Secondly, we take the convergence curves into consideration, as shown in Figure 5(b). Although the two curves almost coincide from a macroscopic perspective, the enlarged view of the marked region clearly shows that our scheme converges faster and works better. Table 2 lists the amplitudes of the optimal results. All the results demonstrate that the proposed method improves the efficiency of the optimization.

3.2. Synthesis of Flat Top Pattern

The other example, of a 20-element linear antenna array, aims to form a main lobe with a flat top which ranges from 78° to 102° and requires ripples smaller than 0.5 dB within the top region. Besides, the sidelobe regions between 0° and 70° and between 110° and 180° are limited to below −25 dB. The requirements are shown in Figure 6 using blue dashed lines. We set the converged factor (RR in [6]) to 0.75 and the number of iteration steps to 60 in this example so that the converged value can be reduced to less than 0.002. The OA(81, 20, 3, 2) is formed by extracting the first 20 columns of the OA(81, 40, 3, 2), which is obtained from [24]. The fitness function can also be described by (3) and (4).
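A small Python sketch of the column extraction is shown below; the OA(81, 40, 3, 2) is generated here by a standard linear construction over GF(3), which may differ from the exact array in [24] by row and column ordering, and is used only to make the slicing step concrete:

import numpy as np
from itertools import product

# Linear construction: rows are the 81 points of GF(3)^4, columns are the 40 direction
# vectors (one representative per 1-D subspace), and each entry is the dot product mod 3.
points = np.array(list(product(range(3), repeat=4)))                  # 81 x 4
dirs = np.array([v for v in product(range(3), repeat=4)
                 if any(v) and next(x for x in v if x) == 1])          # 40 x 4
oa_81_40 = (points @ dirs.T) % 3                                       # OA(81, 40, 3, 2)
oa_81_20 = oa_81_40[:, :20]                                            # keep the first 20 columns
assert oa_81_20.shape == (81, 20)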

After optimization, the pattern results of both schemes basically meet the aims, as shown in Figure 6. The ripples in the top region are optimized to 0.5 dB and 0.48 dB, respectively. The sidelobe regions are optimized to below −25 dB, with maximum sidelobe levels of −25.53 dB and −25.57 dB for the normal method and the proposed one, respectively. The patterns have −25 dB beam widths of 40.43° and 40.07°, respectively, and the proposed one is even better than the result in [6]. Furthermore, the convergence curves are compared in Figure 7. The total convergence process (Figure 7(a)) and the zoomed-in views of two intervals (Figures 7(b) and 7(c)) are given to prove that the proposed optimization method achieves a better convergence result. Focusing on iteration steps 10 to 30, the advantage is even more obvious than in the first example, since the quick convergence of the proposed Taguchi method is exhibited throughout the iteration. The optimized result indicates that the fitting and prediction operations effectively improve the optimization efficiency of the original Taguchi method. Table 3 lists the amplitudes and phases of the final optimal results.

An important comparison of the cost is listed in Table 4. We define the cost as the number of experiments that have been conducted when the fitness value achieves the goal. The table shows that the proposed method can reach a fitness value even less than 10⁻⁴, whereas the normal one can only reach the level of 10⁻². If the target of the fitness value is set to less than 10⁰, the normal Taguchi method needs 30 iterations to achieve the goal while the proposed method needs only 26 iterations. That means 4 generations of iterations do not need to be executed, which represents the exemption of 320 experiments (81 experiments per iteration, with the one additional experiment per iteration used to verify the predicted value in the improved method deducted). Similarly, no matter whether the fitness target is less than 10⁻¹ or 10⁻², the proposed method achieves the goal faster than the conventional Taguchi algorithm. The time costs are also counted in Table 4. Comparing the two methods, the proposed Taguchi method has a lower time cost at each fitness level. When the 51st iteration is finished, the fitness of the normal optimization can reach 10⁻², while at almost the same time, the proposed optimization is approaching 10⁻³. The percentage reductions of the time costs are 14.52%, 10.80%, and 16.93%, corresponding to the fitness values below 10⁰, 10⁻¹, and 10⁻², respectively. In order to eliminate the influence of the different average single-experiment time on the reduction of the total calculation time, a percentage index of efficiency enhancement is defined as the ratio of the number of iterations reduced by the proposed method to the total number of iterations using the normal method. The efficiency enhancement indexes are 13.33%, 8.57%, and 13.73%, also corresponding to the fitness values below 10⁰, 10⁻¹, and 10⁻², respectively. The comparison results further show that the improved method achieves an obvious advantage over the normal one.
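For instance, for the fitness target below 10⁰ in Table 4, the normal method needs 30 iterations and the proposed one needs 26, so the efficiency enhancement index works out as (30 − 26)/30 ≈ 13.33%, which matches the first value quoted above.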

4. Conclusions

An improved Taguchi method based on fitting and prediction has been developed in this paper. The implementation procedure of fitting and prediction has been presented, and some test functions have been used to conduct the confirmation experiments. Their contrastive convergence curves preliminarily prove the validity of the proposed method. Furthermore, two linear antenna arrays from the reference, viz., one with a null controlled pattern and one with a flat top pattern, have been synthesized with both the normal Taguchi method and the proposed one. Some detailed comparisons have been carried out and discussed. It is clearly found that the proposed Taguchi method is conducive to accelerating the convergence of the optimization process, compared with the normal one. In a word, the proposed Taguchi method has advantages in both global optimization characteristics and convergence rate, which means it has better practicability.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 61771407).