Abstract

Evolutionary algorithms play an important role in the synthesis of linear antenna arrays. In this paper, the authors compare the quantum particle swarm optimization (QPSO) and backtracking search algorithm (BSA) for failure correction of linear antenna arrays constructed from half-wavelength, uniformly spaced dipoles. The QPSO algorithm combines classical PSO with quantum mechanics principles to enhance the performance of PSO, while BSA can be considered a modernized PSO that uses a historical population. These two algorithms are applied to obtain the voltage excitations of the nondefective elements in the failed antenna array such that the necessary objectives, namely, the minimization of the side lobe level (SLL) and voltage standing wave ratio (VSWR), are achieved and their values closely match the desired ones. The results of both algorithms are compared in terms of the parameters used in the objective function along with their statistical measures. Moreover, in order to reduce the processing time, the inverse fast Fourier transform (IFFT) is used to obtain the array factor. An example is presented in which the two algorithms are applied to a linear array of 30 parallel half-wavelength dipole antennas with 4 failed elements; the results clearly show the effectiveness of both QPSO and BSA in terms of the optimized parameters, statistical values, and processing time.

1. Introduction

Any failure in the elements of an antenna array [1] corrupts the radiation pattern and causes deviation of the radiation pattern parameters, namely, side lobe level, first null beam width (FNBW), half power beam width (HPBW), and so forth, thus degrading the performance of the communication system. Research in the past has clearly depicted various ways of recovery [2–4]. A simple way to recover the radiation pattern is to manually replace the failed antenna elements, which is not a practical solution when the failure occurs in applications like satellites, radar, and so forth. A literature survey reveals that the idea of adjusting the beam weights of the remaining nondefective elements in the array, so that the resulting radiation pattern approaches the original pattern in terms of the radiation pattern parameters, is extensively used. Mailloux [2] utilized the idea of replacing the signals in a digitally beamformed array in a multiple-signal environment without placing any restrictions on the correlation properties of the signals. A method [3] based on an evolutionary algorithm, namely, the genetic algorithm [3–5], has been utilized for failure correction by treating the beam weights as a vector of complex numbers. This algorithm has also been used successfully by Rodriguez et al. [4] in such a way that adjustments of the excitations of only a minimum number of elements were needed to recover the radiation pattern. Thus, the genetic algorithm established itself as one of the earliest algorithms applied to this problem, primarily because of its different mating schemes, decimal crossover principles, and so forth. Following it, particle swarm optimization [6] came into the picture, modeling the intelligence and cooperation rules followed by flocks of birds. Since then, evolutionary algorithms have occupied the prime position in providing adjustment of beam weights for the correction of radiation patterns. The beam weights can refer to amplitude-only or both amplitude and phase excitations of the individual elements. Amplitude-only control avoids the unnecessary use of phase shifters, thus reducing the overall complexity of the feed circuitry.

Research has also shown the validity of certain algorithms [7] superseding others in terms of the values of the optimized outputs, convergence time, and statistical measures like the mean of the fitness value, standard deviation, and so forth. Moreover, the literature also reveals that the inverse fast Fourier transform [5, 8, 9] can be used to speed up the whole process of failure correction. Here, the IFFT relationship between the current excitations of the antenna elements and the array factor is utilized; it has also been used for the synthesis of low side lobe radiation patterns [9]. This relationship is used here for the sole purpose of reducing the overall computing time. Wang et al. [5] incorporated the principles of the IFFT to compute the array factor in a short time and successfully verified the proposed idea for both linear and planar arrays.

In this paper, mutual coupling [8, 10–12] between the elements is taken into consideration because, in practice, mutual coupling deteriorates not only the antenna radiation pattern but also the matching characteristics. In other words, it reduces the efficiency and performance of the antenna in both transmit and receive modes.

Here, the problem is to correct the radiation pattern of a 30-element linear antenna array with 4 failed elements, with the objective of making the SLL and VSWR approach the expected values. Two algorithms, namely, QPSO [13, 14] and BSA [15, 16], have been used for this purpose. The SLL refers to the ratio of the amplitude of the highest side lobe to that of the main lobe of a radiation pattern and is usually expressed in decibels. The second parameter used in this paper for correction, namely, the VSWR, is a dimensionless measure of the impedance mismatch, and hence of the reflected power, at the antenna terminals. It describes how well the antenna is matched to the transmission line to which it is connected.

The rest of the paper is organized as follows. Section 2 covers the methodology used, including the radiation pattern equation of a linear antenna array with mutual coupling effects and the fitness function used in this problem. Sections 3 and 4 detail the principles of the quantum particle swarm optimization and backtracking search optimization algorithms. Section 5 presents the simulated results, followed by the conclusions in Section 6 and the references.

2. Methodology

The radiation pattern in the x-y (azimuth) plane of a linear array [1, 10] of N parallel dipole antennas uniformly spaced at a distance d apart along the x-axis is given by

  E(φ) = EP(φ) · AF(φ),  AF(φ) = Σ_{n=1}^{N} Iₙ e^{j(n−1)kd cos φ},  (1)

where AF(φ) is the array factor, EP(φ) is the element pattern of a vertical half-wavelength dipole antenna, which is assumed to be omnidirectional in this plane, N is the total number of elements, n is the element number, d is the spacing between individual elements in the array, k = 2π/λ is the wave number, λ is the wavelength of operation, φ is the azimuth angle as shown in Figure 1, and Iₙ refers to the amplitude of the current of the nth dipole element. Substituting u = cos φ in (1) leads to the far field pattern in the u-domain.

Moreover, the current amplitudes are obtained from the input voltage excitations of the array through the mutual impedance matrix [Z] [10, 11], that is, [I] = [Z]⁻¹[V]. The terms in this matrix, namely, the self-impedances Zₙₙ and the mutual impedances Zₙₘ, are calculated considering the current distributions on the dipoles to be sinusoidal. The array pattern has been directly computed through a 4096-point IFFT [5, 8, 9] operation on the current excitation amplitudes, which drastically reduces the time taken for computation of the array factor.
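As an illustration of how a zero-padded IFFT can replace the direct summation in (1), the following minimal MATLAB sketch (not the authors' code; the element count, spacing, and uniform currents are illustrative assumptions) computes the array factor both ways:

% Minimal sketch: array factor of a uniform linear array computed
% (a) by direct summation of (1) and (b) via a zero-padded 4096-point IFFT.
N = 30;                                  % number of elements (illustrative)
d = 0.5;                                 % element spacing in wavelengths (assumed)
I = ones(1, N);                          % current amplitudes (uniform, for illustration)
NFFT = 4096;                             % IFFT size used in the paper

% (a) direct summation over a u = cos(phi) grid
u = linspace(-1, 1, 721);
AFdirect = zeros(size(u));
for n = 1:N
    AFdirect = AFdirect + I(n) * exp(1j * 2*pi * (n-1) * d * u);
end

% (b) zero-padded IFFT; samples fall on u = m/(NFFT*d), wrapped by the period 1/d
AFifft = NFFT * ifft(I, NFFT);           % one IFFT call replaces the summation
uifft  = (0:NFFT-1) / (NFFT * d);
uifft(uifft >= 1) = uifft(uifft >= 1) - 2;   % wrap into [-1, 1) for d = 0.5

AFdB = 20*log10(abs(AFifft) / max(abs(AFifft)));   % normalized pattern in dB

Both computations sample the same function AF(u); the IFFT evaluates all 4096 samples with a single call, which is where the reduction in processing time comes from.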

The voltage [10, 11] across the nth dipole can be written as

  Vₙ = Σ_{m=1}^{N} Zₙₘ Iₘ,  (2)

where Zₙₙ is the self-impedance of dipole n and Zₙₘ is the mutual impedance between dipoles n and m. The active impedance [10, 11] of dipole n is

  Zₙᵃ = Vₙ / Iₙ,  (3)

and the voltage standing wave ratio of the nth dipole is

  VSWRₙ = (1 + |Γₙ|) / (1 − |Γₙ|),  (4)

where Γₙ = (Zₙᵃ − Z₀)/(Zₙᵃ + Z₀) is the reflection coefficient across the nth dipole and Z₀ refers to the characteristic impedance, whose value used in this paper is 50 Ω. The smaller the value of VSWRₙ, the better the matching condition of the antenna; in other words, a low VSWR indicates a good match between the antenna and the feed line connected to it. Failed elements are not considered in the calculation of VSWR as their active impedance is zero. Moreover, failure of a dipole element refers to a zero value of the voltage excitation at its input. However, the current flowing through such a faulty element is not zero because of mutual coupling; the faulty element acts as a parasitic radiator. The fitness function is formulated so as to obtain new excitation voltage amplitudes for the remaining unfailed elements (excluding the failed elements). The algorithms used here, QPSO and BSA, minimize a fitness function consisting of two weighted terms, one penalizing the deviation of the obtained side lobe level from the desired one and one penalizing the deviation of the VSWR from its desired value. The coefficients c₁ and c₂ refer to the weights given to their corresponding terms and are both made equal to unity here. SLL_d and SLL_o are the desired (−25 dB) and obtained values of SLL, respectively.
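The exact functional form of the fitness is not reproduced here; the following MATLAB sketch shows one plausible two-term realization of the kind described above (the function name, the absolute SLL error, and the one-sided VSWR penalty are assumptions, not the authors' formulation):

% Hypothetical sketch of a two-term fitness of the kind described above.
function f = fitness(SLLo, VSWR, SLLd, VSWRd, c1, c2)
    % SLLo  - obtained side lobe level in dB (e.g., -22)
    % VSWR  - vector of VSWR values of the unfailed elements
    % SLLd  - desired side lobe level in dB (-25 in this paper)
    % VSWRd - desired VSWR (1.4 in this paper)
    % c1,c2 - term weights (both set to unity in this paper)
    sllTerm  = abs(SLLd - SLLo);              % deviation from the desired SLL
    vswrTerm = sum(max(VSWR - VSWRd, 0));     % penalize VSWR values above the target
    f = c1 * sllTerm + c2 * vswrTerm;
end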

3. Quantum Particle Swarm Optimization

Various versions of the classical PSO [6], based on Newtonian rules and introduced by Kennedy and Eberhart in 1995, have proved superior to other algorithms such as the genetic algorithm in the past. These versions included provisions for adding or tuning parameters to guarantee convergence, to improve the convergence rate, and to increase the success rate of the algorithm. This led to the introduction of QPSO [13, 14] in 2004, which incorporates quantum mechanics principles; it was found to behave similarly to PSO while using fewer control parameters, and it proved its superiority over the classical algorithm on numerous standard benchmark functions. The other main reason for its success is its fast global convergence characteristic.

In classical PSO, convergence to a particular location is made possible by careful tuning of the social and cognitive factors of the algorithm, and, in order to confine all the particles in the population within the space or boundary required by the objective function, a proper choice of the maximum velocity in each dimension is required. Moreover, the state or condition of any particle in the population is defined by its velocity and position. An additional parameter is also introduced to raise the convergence speed. Still, global convergence is not guaranteed.

Unlike the above, in QPSO the particle's state is defined by a wave function instead of position and velocity. There are no cognitive and social factors in the algorithm, which reduces the total number of control parameters. The steps involved in this algorithm [14] are summarized as follows (a minimal sketch of these steps is given after the list).

(i) Based on the dynamic range of the variables, randomly initialize the positions of all particles in the swarm.

(ii) Evaluate the particles: the personal best (pbest) of each particle is compared with its current fitness value obtained from the objective function, and the stored pbest is replaced with the current position if the current value is better.

(iii) The overall mean best (mbest) position of all M particles is obtained using

  mbest = (1/M) Σ_{i=1}^{M} pbestᵢ.  (6)

(iv) The above procedure is adopted for the whole population, and the current best value is checked again against the best fitness found so far; it is designated as the global best (gbest) if it is found to be better than the original fitness value.

(v) The next step is to obtain the particle's local attractor pᵢ using (7) as

  pᵢ = φ · pbestᵢ + (1 − φ) · gbest.  (7)

(vi) Update the position of the jth dimension of the ith particle using (8) and repeat steps (ii) to (vi) till the global best is obtained, which is regarded as the final optimized solution. One has

  xᵢ,ⱼ = pᵢ,ⱼ + β · |mbestⱼ − xᵢ,ⱼ| · ln(1/u)  if k ≥ 0.5,
  xᵢ,ⱼ = pᵢ,ⱼ − β · |mbestⱼ − xᵢ,ⱼ| · ln(1/u)  if k < 0.5.  (8)

If xᵢ,ⱼ > x_max, then xᵢ,ⱼ = x_max;  (9)

if xᵢ,ⱼ < x_min, then xᵢ,ⱼ = x_min,  (10)

where φ, u, and k denote uniform random numbers between 0 and 1, x_min and x_max are the desired minimum and maximum limits, pᵢ refers to the particle's local attractor, and β is the coefficient dealing with the convergence. Equations (9) and (10) are used to keep the positions within the specified limits, thus avoiding explosion of the particles.
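The following minimal MATLAB sketch of steps (i)-(vi) is given for illustration only; the fitness handle fitnessFcn, population size, dimensionality, bounds, coefficient beta, and iteration count are assumed values, not the settings used by the authors:

% Minimal QPSO sketch following steps (i)-(vi) and equations (6)-(10).
function [gbest, gbestVal] = qpso_sketch(fitnessFcn, M, D, xmin, xmax, beta, maxIter)
    x = xmin + (xmax - xmin) .* rand(M, D);                 % (i) random initialization
    pbest = x;
    pbestVal = arrayfun(@(i) fitnessFcn(x(i,:)), (1:M)');
    [gbestVal, g] = min(pbestVal);  gbest = pbest(g, :);
    for it = 1:maxIter
        mbest = mean(pbest, 1);                             % (iii) mean best position, (6)
        for i = 1:M
            phi = rand(1, D);
            p = phi .* pbest(i,:) + (1 - phi) .* gbest;     % (v) local attractor, (7)
            u = rand(1, D);  k = rand(1, D);
            step = beta .* abs(mbest - x(i,:)) .* log(1 ./ u);
            x(i,:) = p + sign(k - 0.5) .* step;             % (vi) position update, (8)
            x(i,:) = min(max(x(i,:), xmin), xmax);          % limit positions, (9)-(10)
            f = fitnessFcn(x(i,:));                         % (ii) evaluate
            if f < pbestVal(i), pbestVal(i) = f; pbest(i,:) = x(i,:); end
            if f < gbestVal,    gbestVal = f;    gbest = x(i,:);      end
        end
    end
end

The sign of the update in (8) is chosen here through sign(k − 0.5), which selects the + branch when k ≥ 0.5, as stated above.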

4. Backtracking Search Algorithm

The backtracking search algorithm (BSA) is an evolutionary algorithm proposed by Civicioglu [15, 16]. It is a population-based iterative stochastic search algorithm that is widely used to solve nonlinear, nondifferentiable, and complex numerical optimization problems.

Unlike many search algorithms, BSA has a single control parameter called dim_rate. Moreover, BSA’s problem-solving performance is not oversensitive to the initial value of this parameter.

BSA uses three basic genetic operators, namely, selection, mutation, and crossover, to generate trial individuals. BSA uses a nonuniform crossover strategy that is more complex than the crossover strategies used in many genetic algorithms. BSA has a random mutation strategy that uses only one direction individual for each target individual.

Step 1. The algorithm is run for a fixed number of runs. At the start of every run, an initial population of N individuals, each with D dimensions, is randomly generated within the variable constraint range. Another population of the same size, called the historical population, is also generated within the variable constraint range. The fitness value of the initial population is then calculated. The initial population (IP) and historical population (HP) are different in every run. Consider

  Pᵢ,ⱼ = lowⱼ + rand · (upⱼ − lowⱼ),  oldPᵢ,ⱼ = lowⱼ + rand · (upⱼ − lowⱼ),

where rand is a uniform random number between 0 and 1 and lowⱼ and upⱼ are the lower and upper bounds of the jth variable.
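A minimal MATLAB sketch of this initialization step is given below; the sizes, bounds, and the placeholder objective are illustrative assumptions only:

% Sketch of Step 1: random generation of the initial and historical populations.
N = 60;  D = 26;                         % illustrative population size and dimension
low = 0;  up = 1;                        % illustrative bounds on the voltage amplitudes
fitness_of = @(v) sum(v.^2);             % placeholder objective, for illustration only
P    = low + (up - low) .* rand(N, D);   % initial population (IP)
oldP = low + (up - low) .* rand(N, D);   % historical population (HP)
fitP = arrayfun(@(i) fitness_of(P(i,:)), (1:N)');   % fitness of the initial population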

Step 2. Start the iteration loop. The number of iterations is fixed for every run. A provision is also available for making the historical population equal to the initial population [15]. The order of the individuals in the historical population (HP) is then randomly changed through a random shuffling function.
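Using the names from the Step 1 sketch above, this step could be written as follows (the rand < rand redefinition rule follows the original BSA description [15]):

if rand < rand
    oldP = P;                      % optionally make HP equal to the current population
end
oldP = oldP(randperm(N), :);       % randomly shuffle the individuals in HP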

Step 3. New offspring are generated from the combination of the initial and historical populations through the mutation and crossover strategy as follows:

  Tᵢ,ⱼ = Pᵢ,ⱼ + mapᵢ,ⱼ · F · (oldPᵢ,ⱼ − Pᵢ,ⱼ),

where map is a binary integer-valued matrix of size N × D that controls the number of elements of individuals that will mutate in a trial, and F is a scale factor equal to 3 · randn, with randn drawn from the standard normal distribution.
Two predefined strategies are randomly used to define BSA's map. The first strategy uses the dim_rate parameter, and the second strategy allows only one randomly chosen element of an individual to mutate in each trial. BSA's crossover process is more complex than the processes used in differential evolution.
As shown in Algorithm 1, the function u = randperm(D) randomly permutes all integer numbers between 1 and D.
Some offspring obtained at the end of BSA's mutation and crossover process can be out of bound and therefore outside the allowed search-space limits. The individuals beyond the search-space limits are brought back to lie within the upper and lower bound of every element of the individual (a combined sketch of mutation, crossover, and boundary control is given after Algorithm 1 below).

Algorithm 1 (generation of BSA's binary map):

map(1:N, 1:D) = 0;
if rand < rand,
  for i = 1:N, u = randperm(D); map(i, u(1:ceil(dim_rate * rand * D))) = 1; end
else
  for i = 1:N, map(i, randi(D)) = 1; end
end
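Putting Step 3 together, the following MATLAB sketch (continuing the names from the Step 1 sketch; not the authors' code) generates one trial population, including one possible boundary-control choice in which out-of-range values are regenerated randomly within the bounds, since the paper only states that such values are brought back inside the limits:

% Sketch of Step 3: mutation, map-based crossover, and boundary control.
[N, D] = size(P);
F = 3 * randn;                                % scale factor from the standard normal
dim_rate = 1.0;                               % illustrative value of the control parameter
map = zeros(N, D);                            % binary map, as in Algorithm 1
if rand < rand
    for i = 1:N, u = randperm(D); map(i, u(1:ceil(dim_rate*rand*D))) = 1; end
else
    for i = 1:N, map(i, randi(D)) = 1; end
end
T = P + (map .* F) .* (oldP - P);             % mutation and crossover: trial population

out = (T < low) | (T > up);                   % boundary control (one possible choice)
T(out) = low + rand(nnz(out), 1) .* (up - low);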

Step 4. Calculate the fitness value of offspring.
Compare the fitness value of every offspring with the fitness value of the corresponding individual of the initial population. If the fitness value of an offspring is better, assign the fitness value of the offspring to the initial population and assign the current coordinates of the offspring to the initial population coordinates. Thus, a new set of population is generated, and this new set becomes the initial population in the next iteration. One has Pᵢ := Tᵢ and fit(Pᵢ) := fit(Tᵢ) whenever fit(Tᵢ) < fit(Pᵢ).
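A minimal sketch of this greedy selection, continuing the naming of the earlier sketches (fitP holds the parent fitness values and fitness_of is the placeholder objective), could read:

fitT = arrayfun(@(i) fitness_of(T(i,:)), (1:N)');   % Step 4: evaluate the offspring
better = fitT < fitP;                               % offspring better than their parents
P(better, :) = T(better, :);                        % replace coordinates
fitP(better) = fitT(better);                        % replace fitness values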

Step 5. Determine the current best fitness value in the whole newly generated population and its coordinates. This gives the global best (gbest) value and its coordinates in the current iteration.

Step 6. Repeat Steps 2–5 until a stopping criterion, such as a maximum number of iterations being completed, is satisfied. The best scoring individual in the population is stored at the end of the maximum number of iterations in every run. This completes the iteration loop of one run.

Step 7. Repeat Steps 1–6 until a stopping criterion, such as a sufficiently good solution being discovered or a maximum number of runs being completed, is satisfied. The best scoring individual among the gbest individuals (one gbest individual in every run), considering all the runs, is taken as the final answer. The mean and standard deviation of all gbest values are then calculated.
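The run structure of Steps 6-7 can be sketched as follows; bsa_run is a hypothetical helper standing for one complete run (Steps 1-6), and the number of runs is illustrative:

runs = 5;                                    % illustrative number of runs
gbestVal = zeros(runs, 1);
for r = 1:runs
    gbestVal(r) = bsa_run();                 % hypothetical helper: one run, returns its gbest
end
[finalBest, bestRun] = min(gbestVal);        % Step 7: best gbest over all runs
fprintf('best = %.4f (run %d), mean = %.4f, std = %.4f\n', ...
    finalBest, bestRun, mean(gbestVal), std(gbestVal));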

5. Results

In this paper, the authors used a linear antenna array of 30 half-wavelength parallel dipoles placed along the x-axis with uniform spacing between the individual elements. The requirement is a broadside pattern in the azimuth plane with an SLL of −25 dB and a VSWR of 1.4. The radius of the dipole elements is kept fixed. Four elements, namely, 3, 6, 20, and 28, are chosen randomly to be the failed elements.

Three cases are discussed here.

Case 1. For generation of original pattern without any failures, QPSO is run for 100 iterations with a population of 60. A set of excitation voltage amplitudes is obtained.

Case 2. In this case, the damaged pattern is obtained by considering the dipole elements 3, 6, 20, and 28 to be the failed ones (randomly chosen). This is done by setting the voltage excitations of elements 3, 6, 20, and 28 obtained in Case 1 equal to zero.

Case 3. For obtaining the corrected pattern, QPSO is run 5 times, each time for 1000 iterations. A set of new excitation voltage amplitudes is obtained for the nondefective elements at the end of the five runs.
The best fitness value among the five runs is taken as the final answer and regarded as the global best value.
The above three cases are repeated for the BSA algorithm also.

A MATLAB program was written and run for the above cases on a PC with an Intel Core Duo CPU E8400 at 2.99 GHz and 1.94 GB of RAM. Table 1 shows the voltage excitations for the original, damaged, and corrected patterns (using QPSO and BSA), and Table 2 summarizes the comparative results of both algorithms. Figure 2 shows the fitness value versus the number of iterations for both algorithms, and Figures 3 and 4 show the radiation patterns obtained.

6. Conclusions

This paper presented a comparative analysis between the QPSO and BSA algorithms in terms of the expected design parameters as well as statistical ones. The tables depicting the results show that, even though both algorithms are well suited for correction of the damaged pattern, the processing time taken by QPSO is slightly more than that of BSA. The obtained values of the antenna parameters, namely, SLL and VSWR, match the expected values very closely for both algorithms, with QPSO slightly edging out BSA in providing a better value. The same is the case with the statistical parameters, where both algorithms match each other very closely, with QPSO showing slightly better results than BSA in the global best fitness value and mean. The processing time is lower for BSA. This comparative analysis can be extended to other antenna array configurations as well.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.