Abstract

Evolutionary algorithms, in particular particle swarm optimization (PSO), have recently received much attention. PSO has successfully been applied to a wide range of technical optimization problems, including channel estimation. However, most publications in the area of digital communications ignore improvements developed by the PSO community. In this paper, an overview of the original PSO is given as well as improvements that are generally applicable. An extension of PSO termed cooperative PSO (CPSO) is applied for MIMO channel estimation, providing faster convergence and, thus, lower overall complexity. Instead of determining the average iterations needed empirically, a method to calculate the maximum number of iterations is developed, which enables the evaluation of the complexity for a wide range of parameters. Knowledge of the required number of iterations is essential for a practical receiver design. A detailed discussion about the complexity of the PSO algorithm and a comparison to a conventional minimum mean squared error (MMSE) estimator are given. Furthermore, Monte Carlo simulations are provided to illustrate the MSE performance compared to an MMSE estimator.

1. Introduction

Multiple-input multiple-output (MIMO) transmission is considered a key technology for reaching the challenging goals of upcoming wireless standards, such as long-term evolution advanced (LTE-A) [1]. A wide range of MIMO detectors is known in the literature, offering performance close to the channel capacity. Precise channel state information (CSI) is required to obtain this performance; correspondingly, the performance of the detection algorithms depends strongly on the accuracy of the CSI. Minimum mean squared error- (MMSE-) based channel estimation reaches optimum performance, but at a computational cost that may become infeasible for practical implementation.

Advanced iterative receivers, which jointly detect the data symbols and estimate the channel, offer a close-to-optimum performance at often reduced computational complexity. However, the majority of joint receivers need good initial channel estimates to reach their ultimate performance. The space alternating generalized expectation (SAGE) algorithm in [2] and the graph-based iterative receiver in [3] are examples of iterative receivers which need proper initialization.

Generally, channel estimation can be seen as an optimization problem, that is, to minimize the Euclidean distance between the estimated and the true channel coefficients. The straightforward solution to this problem incorporates matrix inversion and leads to the well-known least-squares (LS) and/or MMSE estimator.

Heuristic, nature-inspired algorithms, such as particle swarm optimization (PSO) [4, 5] or genetic algorithms (GA) [6, 7], are attractive low-complexity solutions to facilitate MIMO channel estimation. PSO is a population-based heuristic global optimization algorithm, which originated in modeling the social behavior of bird flocks and fish schools. It has been applied to a variety of technical optimization problems, including channel and parameter estimation [8–13] as well as data detection [14] and multiuser detection [15]. Unfortunately, a fair evaluation of PSO is rather difficult due to the wide range of available modifications and the fact that the algorithm is often tuned to optimum performance for a specific optimization problem by empirical measures.

Genetic algorithms are inspired by natural evolution. Accordingly, population members are termed chromosomes. Based on an optimization metric, a subset of chromosomes is selected as parents, which then breed a new generation by means of crossover and/or mutation.

PSO and GA share many similarities, as both start with a randomly initialized population and both use a fitness value to evaluate their population members. The main difference lies in the selection of leaders (in terms of PSO) or parents (in terms of GA) as well as in the update of positions and/or the generation of new members, respectively. Population members within PSO are updated iteratively and are directly influenced by their own personal best position. On the contrary, population members in GA pass characteristic information to their children.

It is difficult to compare the performance of PSO and GA in general as both depend on the specific optimization problem. Additionally, a similar variety of possible implementations exists also for GA. However, several publications in the field of digital communications come to the conclusion that PSO is advantageous compared to GA in terms of computational complexity, convergence speed, and accuracy [16–18]. Additionally, fewer parameters need to be set for the PSO algorithm.

PSO is a viable alternative to the closed-form solution of standard LS/MMSE estimators if it can provide similar performance at lower complexity. Nevertheless, the overwhelming variety of implementations makes a performance/complexity analysis cumbersome. This paper evaluates the applicability of PSO to MIMO channel estimation with respect to mean squared error (MSE) performance and computational complexity. The paper comprises the following central points.

(i) An overview of the original PSO is given as well as general improvements which lead to a modified update function. Although PSO is widely adopted in many fields, mainly the original version of PSO is applied. Nevertheless, in many cases the performance can be improved and/or the required number of iterations can be reduced by more advanced versions. The focus is hereby on strategies which are generally applicable and do not have to be tuned for a specific optimization problem.

(ii) While PSO has already been applied to MIMO channel estimation in the literature (e.g., [9]), an extension known as cooperative PSO (CPSO) [19] is introduced in this paper. Although PSO is directly applicable in some cases, advanced strategies, such as CPSO, are necessary to provide optimum performance and/or lower complexity.

(iii) The performance of PSO and CPSO is compared with a conventional channel estimation algorithm, namely, the optimum MMSE estimator. Additionally, PSO/CPSO and an MMSE estimator are used to provide initial channel estimates for a graph-based iterative receiver. BER results illustrate the advantage of CPSO over PSO.

(iv) A general rule to determine the maximum number of iterations is developed on the basis of the generalized extreme value distribution. This approach allows the actual computation of the number of iterations required to reach a predefined target. To the authors' best knowledge, a comparable method to predict the required number of iterations does not yet exist, although it is essential for the evaluation of the complexity.

(v) The complexity of PSO/CPSO is discussed and compared to the conventional MMSE estimator. The complexity of PSO/CPSO per iteration is known to be low; however, depending on the optimization problem, several hundred iterations may be required until convergence is achieved. Utilizing the proposed criterion to determine the maximum number of iterations, the complexity of PSO/CPSO can be evaluated for a wide range of parameter settings, and an optimum tradeoff between iterations and complexity per iteration can be determined.

The remainder of this paper is organized as follows: PSO and its extension to cooperative PSO are elaborated in Section 2. The application to MIMO channel estimation is described in Section 3. A performance as well as a complexity comparison of PSO/CPSO with an MMSE estimator is given in Sections 4 and 5, respectively. The extension to multiple objectives is discussed in Section 6. Finally, Section 7 draws the conclusion.

Throughout the paper, the following notation conventions are adopted. Boldface capital and lowercase letters stand for matrices and vectors of appropriate dimensions, respectively. $\mathbf{I}_{N_T}$ denotes an $N_T \times N_T$ identity matrix. Furthermore, $(\cdot)^\dagger$ represents the Hermitian operator.

2. Particle Swarm Optimization

PSO is a population-based, heuristic, iterative optimization algorithm. Due to the heuristic approach, no gradient information is required to converge to the global optimum. Hence, it can easily be applied to a wide range of technical optimization problems.

In the following, a general overview of PSO is given in Section 2.1. An extension termed cooperative PSO is introduced and applied to MIMO channel estimation in Section 2.2.

2.1. Standard PSO

The standard PSO is described by Algorithm 1. Initially, all $N_p$ particles of a swarm are randomly placed throughout the feasible search region $[\mathbf{S}_{\min}, \mathbf{S}_{\max}]$, where $\mathbf{S} \in \mathbb{R}^D$. The particles of a swarm "fly" through a $D$-dimensional search space, which is gradually explored by adjusting the trajectory of each particle at each iteration. Within each iteration, the current position of a particle $\mathbf{p}_i = [p_1, \ldots, p_D]$ is used as a candidate solution for the optimization metric termed fitness function. The fitness value of a particle is distributed to all particles within the swarm. The previously best position of a particle is termed personal best $\mathbf{p}_i^{\mathrm{PB}}$, whereas the previously best position of the swarm is called global best $\mathbf{p}^{\mathrm{GB}}$. The velocity vector of a particle $i$ is updated according to [5, 20]

$\mathbf{v}'_i = \omega \mathbf{v}_i + c_1 \varepsilon_1 \circ (\mathbf{p}_i^{\mathrm{PB}} - \mathbf{p}_i) + c_2 \varepsilon_2 \circ (\mathbf{p}^{\mathrm{GB}} - \mathbf{p}_i),$ (1)

where $\circ$ denotes the entrywise product. The variables $\varepsilon_1$ and $\varepsilon_2$ denote random numbers in the range $[0, 1]$. The inertia weight $\omega$ typically decreases from 0.9 to 0.4 over the course of the iterations. The cognitive and social parameters $c_1$ and $c_2$ are acceleration coefficients towards the personal and global best positions, respectively. The velocity vector of a particle is, similar to the search space, restricted within certain boundaries $[\mathbf{v}_{\min}, \mathbf{v}_{\max}]$. Particles which exceed the boundaries of the search space or the velocity are reset to the respective boundary limits.

Initialize swarm
Locate leader
i = 1
while i < i_max and not converged do
 for each particle do
  Update position using (1)/(2), (4)
  Evaluation using (5)
  Update pBest
 end for
 Update leader
 i++
end while

Algorithm 1: Pseudocode of the standard PSO.

The update function (1) was published in 1998 as part of an already revised version of the PSO algorithm. The original update function of PSO published in 1995 [4] included neither the inertia weight nor the cognitive and social parameters. Since then, an overwhelming number of variations has been proposed. However, no standard algorithm or set of parameters has yet emerged which delivers optimum performance independent of the optimization problem. Hence, parameters are tuned for each specific problem, and settings obtained by empirical measures are often applied. The authors of [21] propose a so-called standardized version of PSO which incorporates several generally applicable improvements, that is, bound handling, swarm size, and an update equation replacing the inertia weight with a constriction factor. The standardized version improves the performance for most optimization problems compared to the original version. We restrict ourselves to generally applicable improvements of PSO; adaptive versions [22] are also able to improve the performance of the standard PSO, but their parameters may have to be optimized for each optimization problem.

The update rule based on the constriction factor is given by

$\mathbf{v}'_i = \chi \{\mathbf{v}_i + c_1 \varepsilon_1 \circ (\mathbf{p}_i^{\mathrm{PB}} - \mathbf{p}_i) + c_2 \varepsilon_2 \circ (\mathbf{p}^{\mathrm{GB}} - \mathbf{p}_i)\},$ (2)

with

$\chi = \dfrac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|},$ (3)

where $\varphi = c_1 + c_2$, $\varphi > 4$. The factors $c_1$ and $c_2$ constrain the velocity towards the personal and the global best position, respectively. According to [23], suitable values for a wide range of test functions are $c_1 = 2.8$ and $c_2 = 1.3$, which results in $\chi \approx 0.7298$. The standardized update function (2) as well as the above-mentioned parameters are applied throughout all simulations. The position of a particle is subsequently updated according to

$\mathbf{p}'_i = \mathbf{p}_i + \mathbf{v}'_i.$ (4)

The updated velocity vector $\mathbf{v}'_i$ is added to the current position $\mathbf{p}_i$ of a particle. The new position $\mathbf{p}'_i$ is used as a candidate solution for the optimization metric. The optimization performed by PSO is described by

$p^{\mathrm{OPT}} = \arg\min_{\mathbf{p}_i} f(\mathbf{p}_i).$ (5)

The so-far emerged personal and global best positions $\mathbf{p}_i^{\mathrm{PB}}$ and $\mathbf{p}^{\mathrm{GB}}$, respectively, are replaced by the updated position $\mathbf{p}'_i$ if the fitness value $p^{\mathrm{OPT}}$ is improved compared to the values of the personal and the global best position. This procedure is repeated until PSO has converged or the maximum number of iterations $i_{\max}$ is reached. $i_{\max}$ is chosen sufficiently large to prevent the algorithm from being stopped before the global optimum can be found. Frequently, the optimum solution is found with just a fraction of $i_{\max}$ iterations. Therefore, a stopping criterion is necessary to reduce the average number of iterations needed for convergence. An overview of suitable stopping criteria is given in [24]. In this paper, PSO is said to have converged if $p^{\mathrm{OPT}}$ stays below a certain threshold $th$ for $\gamma$ iterations.

In case PSO converges, all particles of the swarm are located at the same position, minimizing (5). Without loss of generality, only minimization problems are considered.
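To make the update rules concrete, a minimal Python sketch of the standard PSO loop with the constriction-factor update (2) and position update (4) is given below. The function and variable names are illustrative assumptions; velocity clamping is omitted for brevity, and out-of-bound positions are simply clipped to the search region, as described above.

import numpy as np

def pso(fitness, D, s_min, s_max, n_p=15, i_max=1000, th=1e-6, gamma=10):
    # Constriction-factor parameters from [23]: c1 = 2.8, c2 = 1.3, phi > 4.
    c1, c2 = 2.8, 1.3
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # ~0.7298, cf. (3)
    p = np.random.uniform(s_min, s_max, (n_p, D))   # random initial positions
    v = np.zeros((n_p, D))                          # initial velocities
    pb = p.copy()                                   # personal bests
    pb_fit = np.array([fitness(x) for x in p])
    gb = pb[np.argmin(pb_fit)].copy()               # global best (leader)
    hits = 0
    for _ in range(i_max):
        e1, e2 = np.random.rand(n_p, D), np.random.rand(n_p, D)
        v = chi * (v + c1 * e1 * (pb - p) + c2 * e2 * (gb - p))  # update (2)
        p = np.clip(p + v, s_min, s_max)            # update (4) + bound handling
        fit = np.array([fitness(x) for x in p])
        better = fit < pb_fit                       # update personal bests
        pb[better], pb_fit[better] = p[better], fit[better]
        gb = pb[np.argmin(pb_fit)].copy()           # update leader
        # stopping criterion: fitness below th for gamma consecutive iterations
        hits = hits + 1 if pb_fit.min() < th else 0
        if hits >= gamma:
            break
    return gb

For instance, pso(lambda x: np.sum(x**2), D=4, s_min=-5.0, s_max=5.0) would minimize a simple quadratic bowl.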

2.2. Cooperative PSO

In general, population-based optimization algorithms search for a region of small, specified volume in a $D$-dimensional search space, surrounding the global optimum. In order to converge to the global optimum, an optimization algorithm needs to create a sample within this region. The probability of generating a sample within the region is the volume of the region divided by the volume of the search space [19]. This probability decreases exponentially with increasing dimensionality of the search space, an effect often termed "curse of dimensionality."

Separating the high-dimensional search space into sets of smaller dimension improves the performance significantly. PSO is known to perform rather poorly for high-dimensional problems, and a large variety of solutions has been proposed to mitigate this. In [25], the update function (1) is modified to take adaptive parameters into account. These parameters are changed over the course of the iterations and improve the convergence behavior of the PSO algorithm. However, the optimum set of parameters remains problem dependent. An alternative way to improve the performance of the original PSO algorithm is the so-called cooperative approach to particle swarm optimization (CPSO) presented in [19]. The CPSO approach relies on the original update equation and is described in the following. The pseudocode describing CPSO is given by Algorithm 2. The $N_p$ particles of the PSO swarm are now separated into $N_s$ swarms with $N'_p$ particles each. The number of particles for both PSO and CPSO should be chosen within a certain range: too few particles ($N_p, N'_p < 5$) lead to a deteriorated performance, while beyond a certain point additional particles are not able to increase the performance ($N_p, N'_p > 100$). About 15 particles is a good tradeoff between complexity and performance [23].

Initialize N_s swarms with N'_p particles
Locate leader
i = 1
while i < i_max and not converged do
 for each swarm do
  for each particle do
   Update position using (1)/(2), (4)
   Evaluation using (6)
   Update pBest
  end for
 end for
 Update leader
 i++
end while

Algorithm 2: Pseudocode of CPSO.

Accordingly, the $D$-dimensional problem is split into $N_s$ subsets, each optimized separately by an individual swarm of particles, $\mathbf{s} = [s_1, \ldots, s_{D/\delta}]$, where $\delta$ is the number of dimensions per swarm. The position of a particle $i$ of swarm $s$ is given by $\boldsymbol{\rho}_{s,i} = [\rho_1, \ldots, \rho_\delta]$. The separation of the dimensions mitigates a drawback of the standard PSO algorithm: since the standard PSO considers the full-dimensional vector in the update function, it allows some dimensions to move further away from the solution as long as the overall fitness value is improved. On the contrary, cooperative PSO evaluates subsets of the $D$-dimensional vector. The probability that single components are deteriorated in favor of other dimensions is thus reduced.

For $N_s = 1$ swarm, CPSO is equivalent to PSO since all dimensions are optimized by one swarm. In case of $N_s > 1$, the evaluation of the optimization metric is not directly possible since a particle represents only a subset of the dimensions of the optimization problem. Consequently, a context vector $\boldsymbol{\phi}_{s,i}$ is necessary. In order to construct a $D$-dimensional vector, the $D - \delta$ missing dimensions are replaced by the global best positions of the remaining swarms: $\boldsymbol{\phi}_{s,i} = [\boldsymbol{\rho}_1^{\mathrm{GB}} \ldots \boldsymbol{\rho}_{s,i} \ldots \boldsymbol{\rho}_{D/\delta}^{\mathrm{GB}}]$. The optimization function (5) is changed accordingly:

$p^{\mathrm{OPT}} = \arg\min_{\boldsymbol{\phi}_{s,i}} f(\boldsymbol{\phi}_{s,i}).$ (6)
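As an illustration of the context-vector evaluation (6), the following hypothetical helper evaluates a candidate segment of one subswarm within the global bests of the others; the list-based bookkeeping is an assumption of this sketch, not the reference implementation of [19].

import numpy as np

def evaluate_in_context(fitness, gb_parts, s, rho_si):
    # gb_parts: list of per-swarm global best segments, each of length delta.
    # The candidate segment rho_si of swarm s temporarily replaces its own
    # global best segment; all other segments keep their global best values.
    context = [seg.copy() for seg in gb_parts]
    context[s] = rho_si
    phi = np.concatenate(context)      # D-dimensional context vector
    return fitness(phi)                # evaluation according to (6)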

3. Application to MIMO Channel Estimation

We consider a MIMO system with $N_T$ transmit and $N_R$ receive antennas. The received signal vector at time index $k$, $\mathbf{y}[k] \in \mathbb{C}^{N_R \times 1}$, is modeled as

$\mathbf{y}[k] = \mathbf{H}\mathbf{x}[k] + \mathbf{n}[k],$ (7)

where $\mathbf{x}[k] \in \mathbb{C}^{N_T \times 1}$ is the transmitted signal vector at time index $k$. The entries of the channel matrix $\mathbf{H} \in \mathbb{C}^{N_R \times N_T}$ are assumed to be independent and identically distributed (i.i.d.) according to $\mathcal{CN}(0, 1)$. We consider a quasi-time-invariant channel for the numerical analysis of PSO and CPSO; the application to time-varying channels is covered briefly by the description of a multiple-objective PSO in Section 6. Furthermore, only a memoryless channel is considered in the numerical results in order to simplify the optimization metric and the discussion of the results. The complexity analysis is directly applicable to frequency-selective channels as well, since only the dimensionality of the problem is discussed here. The dimensionality can be increased by the number of transmit and receive antennas and/or the channel memory length $L$. The MSE performance for a channel memory length $L > 0$ will follow the MSE performance of a least-squares estimator. For detailed information about the application of PSO to channels with memory, interested readers are referred to [12]. Finally, $\mathbf{n}[k]$ denotes the noise vector at the receiver, whose entries are i.i.d. according to $\mathcal{CN}(0, \sigma_n^2)$.

Training symbols are transmitted to support pilot-aided channel estimation (PACE). Stacked in a matrix, the transmit vectors $\mathbf{x}[k]$ can be written as $\mathbf{X} \in \mathbb{C}^{N_T \times N_T}$. A minimum of $N_T$ training symbols is transmitted to ensure full rank. The training matrix consists of orthogonal sequences subject to $\mathbf{X}\mathbf{X}^\dagger = \mu\mathbf{I}_{N_T}$, where $\mu$ is related to the signal power assigned to the training matrix [26].
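One concrete construction that satisfies the orthogonality condition $\mathbf{X}\mathbf{X}^\dagger = \mu\mathbf{I}_{N_T}$ is a scaled DFT matrix; the sketch below is an illustrative choice, not necessarily the training design used for the simulations in this paper.

import numpy as np

def dft_training_matrix(n_t, power=1.0):
    # Rows of the DFT matrix are mutually orthogonal with squared norm n_t,
    # hence X @ X.conj().T = (power * n_t) * I, i.e., mu = power * n_t.
    k = np.arange(n_t)
    F = np.exp(-2j * np.pi * np.outer(k, k) / n_t)
    return np.sqrt(power) * F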

In the following, we assume that the transmit vector $\mathbf{x}[k]$ of length $N_T$ consists of training symbols only.

In case of a quasi-time-invariant (block-fading) channel, the maximum-likelihood metric (fitness function) for PSO can be written as

$f(\mathbf{P}_i) = \sum_{k=1}^{N_T} \left\| \mathbf{y}[k] - \mathbf{P}_i \mathbf{x}[k] \right\|^2.$ (8)

The position of the $i$th particle, $\mathbf{P}_i$, is used as a potential solution for the metric. For a consistent notation in line with (7), the previously used vector notation for the position of a particle is changed here to a matrix notation with $\mathbf{P}_i \in \mathbb{C}^{N_R \times N_T}$. Thus, the position of a particle represents a hypothesis of the channel matrix $\mathbf{H}$. It is important to note that each dimension of a particle is real valued. As a particle needs to estimate $N_R \times N_T$ complex-valued channel coefficients, the dimension of the search space results in $D = 2 \cdot N_T \cdot N_R$.
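A possible NumPy realization of the fitness metric (8) is sketched below. The mapping from the real-valued particle vector to the complex channel hypothesis (real parts first, imaginary parts second) is one plausible convention assumed here.

import numpy as np

def particle_to_channel(p, n_r, n_t):
    # Assumed layout: first n_r*n_t entries are real parts, the rest imaginary.
    half = n_r * n_t
    return (p[:half] + 1j * p[half:]).reshape(n_r, n_t)

def channel_fitness(p, Y, X):
    # Y: received pilot block (N_R x N_T), X: training matrix (N_T x N_T).
    n_r, n_t = Y.shape[0], X.shape[0]
    P = particle_to_channel(p, n_r, n_t)
    return np.sum(np.abs(Y - P @ X) ** 2)   # metric (8)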

The maximum-likelihood metric for CPSO is very similar to the PSO metric. Instead of the position of a particle, a context matrix $\boldsymbol{\phi}_{s,i}$ is utilized, since a position of a particle of a subswarm represents only a fraction of the channel matrix:

$f(\boldsymbol{\phi}_{s,i}) = \sum_{k=1}^{N_T} \left\| \mathbf{y}[k] - \boldsymbol{\phi}_{s,i} \mathbf{x}[k] \right\|^2.$ (9)

In case of MIMO channel estimation, $N_R \cdot N_T$ channel coefficients are estimated assuming a flat-fading, time-invariant channel. As mentioned before, the performance of PSO degrades with increasing dimensionality. The dimensionality of the optimization problem is determined not only by the number of transmit and/or receive antennas but also by the channel memory length. Thus, channel estimation for a realistic MIMO system may result in a high-dimensional optimization problem.

Figure 1 illustrates the difference between PSO and CPSO for channel estimation of a $2 \times 2$ MIMO system. PSO optimizes all channel coefficients with one swarm. CPSO is able to separate the $D = 2 \cdot N_R \cdot N_T$-dimensional problem into subsets and to optimize each subset with a separate swarm. In this example, two swarms are shown; in general, the number of possible subswarms is in the range $[1, D]$. In the case of $D$ subswarms, a single swarm would optimize either the real or the imaginary part of one channel coefficient. While the number of subswarms $N_s$ is directly related to the number of dimensions, there is no such relation for the number of particles. A minimum number of particles needs to be assigned to each subswarm in order to allow convergence. Additionally, the performance of CPSO cannot be improved by increasing the number of particles once a threshold is reached. The appropriate number of particles again depends on the optimization problem; a good tradeoff between complexity per iteration and performance is $N_p = N'_p = 15$ [23].

4. Performance Comparison

A performance comparison of PSO and CPSO with an optimum MMSE channel estimator in terms of mean squared error (MSE) is given in Figure 2. The simulation setup consists of a MIMO system and a quasi-time-invariant Rayleigh fading channel. Pilot-aided channel estimation is conducted with orthogonal training sequences. The number of particles is fixed to $N_p = 60$ for the PSO algorithm. The CPSO algorithm consists of $N_s = 4$ subswarms with $N'_p = 15$ particles per swarm; the overall complexity of CPSO per iteration is thus the same as that of PSO. The MMSE estimate of $\mathbf{H}$ for a time-invariant channel is given by

$\hat{\mathbf{H}} = \mathbf{Y}\mathbf{X}^\dagger \left[ \sigma_n^2 \mathbf{I}_{N_T} + \mathbf{X}\mathbf{X}^\dagger \right]^{-1}.$ (10)

PSO exhibits an inferior performance compared to CPSO and MMSE estimation with increasing SNR and dimensionality $D$, as illustrated in Figure 2. In this case, the dimensionality is increased by increasing the number of transmit antennas with a constant number of receive antennas. The larger the dimensionality, the earlier PSO converges to an error floor. Large dimensions ($D > 32$) can easily be reached with settings defined in upcoming wireless standards, that is, long-term evolution advanced (LTE-A) [1]. For example, for an $8 \times 8$ MIMO-OFDM system with a channel memory length of $L = 0$, the dimensionality results in $D = 8 \cdot 8 \cdot 2 = 128$. Only the dimensionality is studied here to focus the reader on the problem of large dimensions for PSO. The performance of the original PSO can be further improved by adapting the variables in (1) and/or (2) over the course of the iterations and/or by applying more advanced bound-handling mechanisms. These optimization methods for PSO have to be tuned to the specific optimization metric. On the other hand, the performance of CPSO reaches the optimum MMSE performance with general settings.
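For reference, the MMSE estimate (10) translates directly into a few lines of NumPy; shapes follow the notation of Section 3.

import numpy as np

def mmse_estimate(Y, X, sigma_n2):
    # Y: received pilot block (N_R x N_T), X: training matrix (N_T x N_T).
    n_t = X.shape[0]
    G = sigma_n2 * np.eye(n_t) + X @ X.conj().T
    return Y @ X.conj().T @ np.linalg.inv(G)   # estimate (10)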

It is worth mentioning that, although GA does not suffer from the curse of dimensionality and is thus able to converge in general, the required number of iterations increases, which renders a practical implementation infeasible.

Besides reaching the optimum performance, another strength of the PSO algorithm is its fast convergence to a "reasonable" MSE. The fast-converging nature of PSO/CPSO can be seen in Figure 3, where a PSO algorithm with $N_p = 60$ particles as well as a CPSO algorithm with $N_s = 4$ and $N'_p = 15$ is used to estimate a $4 \times 4$ MIMO channel. Depending on the SNR, the MSE of CPSO improves significantly within the first 20, 60, and 120 iterations; the subsequent iterations are needed for convergence to the optimum performance. Interestingly, PSO converges to the optimum performance earlier in case of an SNR of 20 dB. The CPSO algorithm is attracted to local optima which are created by the separation of the dimensions [19]; the convergence speed is slower in this case. The fast convergence is especially beneficial when PSO/CPSO is used as an initialization algorithm, where only a good starting value is needed instead of the optimum value.

In order to further illustrate the advantages of CPSO over PSO, the graph-based soft iterative receiver (GSIR) of [3] is initialized by PSO, CPSO, and the MMSE estimator in the following. An $8 \times 8$ MIMO system is considered with QPSK modulation and a rate-1/2 repetition code. A fixed number of 10 iterations is used for the GSIR. A training preamble with orthogonal properties, as described in Section 3, is applied. A data sequence of length $K_D = 100$ is transmitted subsequently; thus, the complete transmitted sequence can be represented as $\mathbf{X} = [\mathbf{X}_T \, \mathbf{X}_D]$. PSO, CPSO, and the MMSE estimator are applied once, utilizing only the training symbols given in $\mathbf{X}_T$, and provide the initial channel coefficient estimates used by the GSIR. The maximum number of iterations $i_{\max}$ for PSO/CPSO is hereby restricted to keep the computational overhead of the initialization at a minimum. The BER results are shown in Figure 4. It is obvious that CPSO outperforms PSO in terms of convergence speed: a maximum of only 30 iterations is required for CPSO to provide initial coefficients that are sufficiently good for the GSIR to converge to the same performance achieved with an MMSE initialization. As suggested by the results given in Figure 3, the performance of PSO/CPSO depends on the number of particles and subswarms. The relation between the number of particles/subswarms and the number of iterations is investigated in the following section; in order to keep the computational overhead at a minimum, the maximum number of PSO/CPSO iterations should be set as low as possible.

5. Complexity Analysis

The complexity of PSO/CPSO is determined by the number of particles, subswarms, and dimensions as well as by the number of iterations required for convergence. The number of particles and subswarms is a design parameter of the algorithm and is commonly chosen to achieve a good performance in terms of MSE for channel estimation. The number of dimensions is a fixed parameter depending on the optimization problem (e.g., the number of transmit and receive antennas and/or the channel memory length). In each iteration, all $N'_p$ particles of all $N_s$ subswarms have to evaluate their current position and compare their current fitness value with their personal best as well as the global best, which results in a complexity of order

$\mathcal{O}(\mathrm{CPSO}^{(\mathrm{it})}) = \mathcal{O}(N'_p \cdot N_s \cdot D)$ (11)

per iteration.

The overall number of particles influences the number of iterations needed to converge. Using only one particle maximizes the required number of iterations until convergence while minimizing the computational complexity per iteration; using an infinite number of particles, on the other hand, minimizes the number of iterations and maximizes the computational complexity per iteration. With an infinite number of particles, PSO is equivalent to an exhaustive search. Hence, a tradeoff between the overall size of PSO/CPSO and the number of iterations has to be found. Furthermore, the required minimum number of iterations depends on the optimization metric as well: in general, the more complex (higher dimensional) the optimization problem, the more iterations are needed, and vice versa.

A strategy often used to determine the maximum number of iterations $i_{\max}$ is to find the minimum number of iterations at which the optimum MSE performance is reached. This approach requires extensive simulations over a variety of parameters in order to determine the optimum tradeoff between complexity and iterations.

In the following, a general criterion to determine the maximum number of iterations is presented, based on the probability distribution of the number of iterations required by PSO/CPSO for convergence. The advantage of this strategy is that only a fraction of the parameter combinations needs to be simulated, while missing parameters can be reconstructed by means of interpolation. PSO/CPSO is said to have reached convergence if the fitness value $p^{\mathrm{OPT}}$ of (8)/(9) stays below a certain threshold $th$ for $\gamma$ iterations. In this case, the threshold is set to $th = 10^{-6}$ with $\gamma = 10$.

Monte Carlo simulations with a fixed parameter set for CPSO and a varying number of transmit antennas are conducted, and the iteration at which the stopping criterion is fulfilled is recorded. A histogram of the iterations fulfilling the stopping criterion for different dimensions is shown in Figure 5. Each histogram can be approximated by a generalized extreme value distribution. The characteristic shape of the function lies in the steep slope once a certain value is exceeded and a slow decline after the maximum is reached. The probability density function (pdf) of the generalized extreme value distribution is given by (12). The distribution is characterized by three parameters, namely, the shape parameter $k$, the scale parameter $\sigma$, and the location parameter $\mu$:

$p(x; k, \mu, \sigma) = \dfrac{1}{\sigma} \exp\left( -\left( 1 + \dfrac{k(x - \mu)}{\sigma} \right)^{-1/k} \right) \left( 1 + \dfrac{k(x - \mu)}{\sigma} \right)^{-1 - 1/k}.$ (12)

Given the pdf for a certain parameter set, the maximum number of iterations $i_{\max}$ can be defined to cover a certain percentage of the pdf. The portion of the pdf that is covered defines the tradeoff between performance and complexity: setting the maximum number of iterations too low reduces the complexity of the algorithm but also implies a performance loss due to a premature stop of the algorithm; vice versa, setting the maximum number of iterations too large increases complexity without a gain in performance. In case of $D = 4$ (cf. Figure 5), the location parameter results in $\mu = 45$, which resembles the most likely iteration at which the algorithm converges. In order to cover at least 90% of the required iterations, the maximum number of iterations should be set to $i_{\max} \geq 105$.
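The fit-and-quantile step can be reproduced, for example, with SciPy's genextreme distribution; note that SciPy's shape parameter c corresponds to $-k$ in the convention of (12). The samples are assumed to be the recorded convergence iterations from Monte Carlo runs such as those behind Figure 5.

import numpy as np
from scipy.stats import genextreme

def i_max_from_samples(convergence_iters, coverage=0.9):
    # Fit a generalized extreme value distribution to the iterations at
    # which the stopping criterion was fulfilled (SciPy: c = -k of (12)).
    c, loc, scale = genextreme.fit(convergence_iters)
    # Smallest iteration count covering the requested fraction of the pdf;
    # coverage = 0.9 reproduces the 90% rule used above.
    return int(np.ceil(genextreme.ppf(coverage, c, loc=loc, scale=scale)))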

The aforementioned tradeoff between the number of particles/subswarms and the number of iterations is evaluated in the following. The maximum number of iterations is defined to cover 90% of the pdf.

The number of iterations required by PSO/CPSO until convergence depends on the number of dimensions of the optimization problem and on the allocated number of swarms and, inherently, particles. In Figure 6, the required number of iterations is given as a function of the dimensionality of the optimization problem for different swarm sizes. With a constant swarm size, the number of iterations increases quadratically with the dimensionality. On the contrary, with increasing swarm sizes, the required number of iterations is nearly constant over the dimensions, as can be seen from the similar starting points of the curves. The number of iterations required by PSO ($N_s = 1$) for convergence exceeds 8000 at 20 dimensions. Since the three parameters of the extreme value distribution are correlated over the number of particles and subswarms, not all swarm sizes need to be simulated; missing values can be calculated by means of interpolation. The optimum tradeoff between swarm size and iterations can thus be determined with a minimum amount of simulations.

The overall complexity of CPSO depends on the complexity per iteration and on the number of iterations:

$\mathcal{O}(\mathrm{CPSO}^{(\mathrm{total})}) = \mathcal{O}(\mathrm{CPSO}^{(\mathrm{it})}) \cdot \mathcal{O}(I).$ (13)

The number of iterations is taken into account here, as it is influenced by the dimensionality of the optimization problem. The complexity of the MMSE estimator is dominated by the matrix inversion, which is of order

$\mathcal{O}(\mathrm{MMSE}) = \mathcal{O}(N_T^3).$ (14)

A fair comparison of the complexity is not straightforward when, for example, the number of complex multiplications is considered, as different implementations and optimization techniques for both algorithms will lead to different results. On the basis of the $\mathcal{O}$-notation, CPSO and MMSE are compared to give an insight into the influence of the different parameters of CPSO. The total complexity of CPSO is separable into two parts, as can be seen from (13), namely, the complexity per iteration and the complexity introduced by the number of iterations. The complexity of CPSO per iteration increases linearly with the number of transmit antennas. It is obvious that the complexity of the MMSE estimator will eventually exceed the complexity of CPSO per iteration with an increasing number of transmit antennas.

However, the total complexity of CPSO will only be lower than that of the MMSE estimator for a given number of transmit antennas if the number of iterations required by CPSO for convergence is small, an assumption that is fulfilled when CPSO is used for initialization.

With an increasing number of transmit antennas, a larger number of subswarms/particles can be supported at a complexity lower than that of the MMSE estimator. Using a criterion targeting optimum performance instead of fast convergence (initialization) further increases the complexity of CPSO. Hence, optimum performance at a complexity lower than MMSE is only reached for larger MIMO systems.

The following conclusions can be drawn from this result.

(i) CPSO is suitable for medium- to large-sized MIMO systems when used for initialization.

(ii) Channel estimation with PSO/CPSO targeting optimum performance is computationally feasible for MIMO systems with several dozens of transmit antennas. Such scenarios are often referred to as large-MIMO systems [27].

6. Multiobjective PSO

High-dimensional optimization problems can be solved efficiently by the aforementioned cooperative particle swarm optimization algorithm. However, PSO as well as CPSO is limited to single-objective problems, a constraint that is fulfilled for channel estimation given quasi-time-invariant channels. With time-varying channels, the need for multiobjective optimization arises, since the optimization metric (8) and/or (9) can no longer be minimized by a single constant matrix. Specifically, this means that the position of a particle, which is used as a candidate solution, may be exact for one time index but not for another.

Considering the coefficients over time as additional dimensions is infeasible due to the inherent increase in iterations and complexity. However, due to the correlation of adjacent channel coefficients in the time domain, a multiobjective particle swarm optimization (MOPSO) can be applied. The fitness function is changed for MOPSO to minimize $K$ objectives simultaneously:

$f(\mathbf{p}_i[k]) = \left\| \mathbf{y}[k] - \mathbf{p}_i[k] \mathbf{x}[k] \right\|^2, \quad 1 \leq k \leq K,$ (15)

where $K$ represents the number of training symbols in case of pilot-aided channel estimation of a time-varying channel. Obviously, the candidate solution $\mathbf{p}_i[k]$ may minimize the $k$th objective but is not necessarily optimal for the $(k+1)$th objective. This means that one objective cannot be optimized without neglecting the performance of at least one other objective. The previous concept of one global best position $\mathbf{p}^{\mathrm{GB}}$ is replaced by an archive $\mathbf{F}^\star$ containing the so-called dominant solutions for all objectives. A particle $\mathbf{p}_i$ is said to dominate another particle $\mathbf{p}'_i$, denoted as $\mathbf{p}_i \succ \mathbf{p}'_i$, if and only if

(1) $\forall \lambda \in \{1, \ldots, \Lambda\}: f_\lambda(\mathbf{p}_i) \leq f_\lambda(\mathbf{p}'_i)$,
(2) $\exists \lambda' \in \{1, \ldots, \Lambda\}: f_{\lambda'}(\mathbf{p}_i) < f_{\lambda'}(\mathbf{p}'_i)$.

In each iteration, an updated candidate solution is compared to the solutions stored in the external archive. A candidate solution is added to the archive if it improves at least one objective compared to the already existing solutions; it replaces an existing solution if it improves all objectives of that solution. The set of solutions contained in the archive is termed the Pareto set:

$\mathbf{F}^\star \triangleq \left\{ \mathbf{p}_i \in \mathbb{R}^D \mid \nexists \, \mathbf{p}'_i \in \mathbb{R}^D : \mathbf{p}'_i \succ \mathbf{p}_i \right\}.$ (16)

It is important that all objectives are optimized equally well by the particles of a swarm. Without an additional control mechanism ensuring a certain degree of diversity (quality) within the archive, all particles may concentrate on one objective, equivalent to the single-objective PSO. A diverse solution set can be found by applying the so-called sigma method [28]. The idea of the sigma method is that each point of the $D$-dimensional search space is assigned a sigma value. The Euclidean distance between the sigma value of the current particle and the sigma values of the archive members is calculated, and the leader of the current particle is the archive member with the smallest such distance.
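A minimal sketch of the dominance test and the archive update described above is given below; the archive is kept as a plain list of (position, objective vector) pairs, and diversity maintenance such as the sigma method is deliberately omitted.

import numpy as np

def dominates(f_a, f_b):
    # f_a dominates f_b if it is no worse in every objective and strictly
    # better in at least one (minimization of all objectives).
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def update_archive(archive, p_new, f_new):
    # Remove archive members dominated by the candidate ...
    archive = [(p, f) for (p, f) in archive if not dominates(f_new, f)]
    # ... and add the candidate if no remaining member dominates it.
    if not any(dominates(f, f_new) for _, f in archive):
        archive.append((p_new, f_new))
    return archive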

The principle of MOPSO is described by Algorithm 3. The update function is hereby unchanged compared to PSO/CPSO. An additional mutation operator is recommended, as the MOPSO algorithm occasionally converges prematurely. The mutation operator is used at certain iteration intervals and increases the velocity of a particle and/or reinitializes a particle. The archive maintenance and the additional mutation operator contribute to an increased complexity of the algorithm. Nevertheless, a multiobjective PSO (MOPSO) is successfully used in [29] to initialize a graph-based iterative receiver, requiring only 5 MOPSO iterations to achieve the optimum performance with the subsequent graph-based receiver.

Initialize swarm
Locate leader in an external archive
i = 1
while i < i_max and not converged do
 for each particle do
  Select leader from archive
  Update position using (1)/(2), (4)
  Mutation
  Evaluation using (15)
  Update pBest
 end for
 Update leaders in the external archive
 Quality (leaders)
 i++
end while

Algorithm 3: Pseudocode of MOPSO.

7. Conclusion

An overview of particle swarm optimization for MIMO channel estimation has been given. Generally applicable improvements for MIMO channel estimation have been presented, and the achievable performance of the algorithms has been evaluated by means of Monte Carlo simulations. Furthermore, an analysis of the complexity based on the distribution of the number of iterations required until convergence has been introduced. The proposed method allows the calculation of a maximum number of iterations with a minimum of simulation overhead, since missing parameters can be reconstructed by means of interpolation.

It has been shown that cooperative PSO is able to approach the optimum MMSE estimator. Thus, for a potential implementation, the required number of iterations is of utmost importance. The presented MSE and BER results further illustrate that CPSO is also able to converge quickly to a "reasonable" MSE, which allows an iterative receiver to converge to the same performance achieved with an MMSE-based initialization with just a minimum of CPSO iterations. The advantage of CPSO over MMSE lies in the flexible tradeoff between complexity per iteration and required number of iterations, which also makes it ideally suited for parallelization. Furthermore, the parameters of CPSO do not need to be tuned by empirical measures, so the algorithm can be applied directly for MIMO channel estimation.

Although PSO/CPSO can be used to estimate time-varying channels by utilizing the extension to multiple objectives, the strength of PSO/CPSO lies in the estimation of time-invariant channels.