Research Article  Open Access
Yunyi Yan, Yujie He, Yingying Hu, Baolong Guo, "Video Superresolution via Parameter-Optimized Particle Swarm Optimization", Mathematical Problems in Engineering, vol. 2014, Article ID 373425, 13 pages, 2014. https://doi.org/10.1155/2014/373425
Video Superresolution via Parameter-Optimized Particle Swarm Optimization
Abstract
Video superresolution (VSR) aims to reconstruct a high-resolution video sequence from a low-resolution one. We propose a novel particle swarm optimization algorithm named parameter-optimized multiple swarms PSO (POMSPSO) and assess its optimization performance on four standard benchmark functions. To reconstruct high-resolution video, we build an imaging degradation model and, from an optimization viewpoint, convert VSR into an optimization problem. We then use POMSPSO to solve the VSR problem, which avoids the poor reconstruction quality, low accuracy, and large computational cost of other VSR algorithms. The proposed VSR method requires neither exact motion estimation nor the computation of motion vectors. In terms of peak signal-to-noise ratio (PSNR), sharpness, and entropy, the proposed POMSPSO-based VSR method showed better objective performance. Beyond these objective measures, experimental results also showed that the proposed method can reconstruct high-resolution video sequences with better subjective quality.
1. Introduction
Extended from the definition of still-image superresolution reconstruction proposed by Harris [1] and Goodman [2] in the 1960s, video superresolution (VSR) has been investigated intensively in recent years [3–7]. The two common classes of VSR algorithms are based on multiframe complementary information [8] and on motion estimation [9, 10]. The former exploits the redundant information shared by different frames to reconstruct a high-resolution video sequence, so it is ineffective when objects move fast or no redundant information is available. The latter describes the movement of moving objects by motion vectors, so its performance depends on the accuracy of motion estimation. Unfortunately, that accuracy is often modest, and very low in some cases, while the estimation process itself is computationally expensive [11].
Particle swarm optimization (PSO) is an established method for optimization with the advantage of simplicity of implementation [12, 13] and it has been applied in many fields including scientific research and engineering problems [12, 14–18]. Compared to other heuristic methods, like genetic algorithm (GA) and ant colony optimization (ACO), PSO is proven to have higher computational accuracy [19] and lower time cost [20] in many complicated applications.
However, parameter selection deeply affects PSO's performance [21], and no selection strategy has been proven to work well for all problems [21–24]. In many cases, the parameters of PSO must be adjusted several times to obtain satisfactory precision. The addition of subswarms with different evolutionary strategies has been shown to improve search quality both intensively and extensively [25–27]. In this paper, we introduce one swarm to determine the configuration parameters of PSO and propose the parameter-optimized multiple swarms PSO (POMSPSO), which avoids the limitation of fixed parameters and adjusts them dynamically throughout the whole optimization process.
The goal of video superresolution reconstruction is to obtain a video with higher resolution that meets certain required specifications, for example, sharpness, temporal continuity, and detail. Hence, from an optimization viewpoint, VSR can be regarded as an optimization procedure that searches for the result best satisfying objective and subjective quality requirements. Once an imaging degeneration model is built, an optimization technique can search for the global minimum of a proper fitness function.
In this paper, we propose a novel VSR scheme based on solving an image degeneration model via POMSPSO. The paper is organized as follows. In Section 2, the 3D video degeneration model and its solution are introduced. In Section 3, the parameter-optimized multiple swarms PSO and its performance are presented. In Section 4, we implement video reconstruction using the POMSPSO algorithm and evaluate the result by several criteria. Finally, the conclusion is given in Section 5.
2. Video Superresolution Model
2.1. 3D Video Degeneration Model
A space-time dynamic scene can be represented in a four-coordinate system $(x, y, z, t)$. In most real imaging cases, however, it can be represented in a three-coordinate system $(x, y, t)$ under suitable conditions, namely, (1) the scene lies in one plane and the movement takes place within this plane; and (2) the distance between the cameras is much smaller than the distance between the cameras and the scene, that is, there is almost no parallax between two cameras. In this situation, all elements are arranged into a vector in lexicographical order. To simplify the calculations, only the gray value is used to characterize the pixels.
The value at a point $p$ in a low-resolution sequence can be regarded as a mapping of the corresponding values in a high-resolution sequence:

$$l(p) = \int_{q} h^{p}(q)\, s(q)\, dq + n(p), \tag{1}$$

where $l$ is the low-resolution sequence, $p$ is a point in the low-resolution sequence, $s$ is the corresponding high-resolution sequence evaluated at points $q$, and $h^{p}$ is the space-time position-dependent blur kernel in high-resolution coordinates. The kernel combines the spatial point-spread effect, which can be simulated by a Gaussian model, and the temporal blur caused by the integration effect of the exposure time during imaging; $n$ is white noise following a Gaussian distribution.
Obviously, the support of the space-time blur kernel determines the space-time resolution relationship between the high-resolution and low-resolution sequences. It determines whether the reconstruction occurs in the time domain and/or the spatial domain and the amplification factor used during reconstruction.
Equation (1) is defined in continuous space and describes the mapping between high-resolution scenes and low-resolution ones in the real world. In digital processing systems such as computers or digital signal processors (DSPs), however, all data must be stored and processed in digital form, so the continuous space must be converted into a discrete one.
The unknown continuous scene can be treated as an unknown high-resolution sequence, and the relationship between the low-resolution and high-resolution sequences can be expressed in matrix form. Hence, we obtain the discrete form of (1), one large system of linear equations relating the high- and low-resolution sequences:

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \tag{2}$$

where $\mathbf{y}$ is the vector of all elements of the low-resolution sequence in lexicographical order, $\mathbf{x}$ is the vector of all elements of the high-resolution sequence in lexicographical order, $\mathbf{n}$ is a noise vector, and $\mathbf{H}$ is a sparse Toeplitz matrix corresponding to the space-time blur kernel.
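The discrete model can be illustrated with a small numerical sketch. The following toy example (a 1D signal, an illustrative 3-tap blur, and a 2× downsampling, all chosen for exposition rather than taken from the paper) builds a sparse-structured degradation matrix H and simulates the observation y = Hx + n:

```python
import numpy as np

def build_degradation_matrix(hr_len, factor=2, blur=(0.25, 0.5, 0.25)):
    """Toy 1D degradation matrix H: blur the high-resolution signal
    with a 3-tap kernel, then downsample by `factor` (illustrative)."""
    lr_len = hr_len // factor
    H = np.zeros((lr_len, hr_len))
    k = len(blur) // 2
    for i in range(lr_len):
        center = i * factor                 # sample position in HR grid
        for j, w in enumerate(blur):
            col = center + j - k
            if 0 <= col < hr_len:
                H[i, col] = w               # banded (Toeplitz-like) structure
    return H

rng = np.random.default_rng(0)
x = rng.random(8)                  # unknown high-resolution signal
H = build_degradation_matrix(8)    # blur + downsample operator
n = 0.01 * rng.standard_normal(4)  # additive Gaussian noise
y = H @ x + n                      # observed low-resolution signal
```

In the paper's actual setting, H acts on whole video volumes arranged in lexicographical order, so it is far larger but has the same sparse banded structure.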
2.2. Model Solution
As in other ill-posed problems, in superresolution reconstruction the known elements of the low-resolution sequence are far fewer than the unknown elements of the high-resolution sequence. To solve such problems, proper extra criteria or constraints must be imposed. In this paper, visual criteria are built into the objective function, and the solution of the VSR problem is

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} F(\mathbf{x}), \tag{3}$$

where $\hat{\mathbf{x}}$ is the vector of all elements of the rebuilt high-resolution sequence in lexicographical order and $F(\cdot)$ is the objective function, used as the fitness function in POMSPSO, with visual criteria taken into account:
$$F(\hat{\mathbf{x}}) = \|\mathbf{y} - \mathbf{H}\hat{\mathbf{x}}\|^{2} + E_{\mathrm{smooth}}(\hat{\mathbf{x}}) + E_{\mathrm{detail}}(\hat{\mathbf{x}}). \tag{4}$$

The first term, $\|\mathbf{y} - \mathbf{H}\hat{\mathbf{x}}\|^{2}$, reflects fidelity, and $E_{\mathrm{smooth}}$ reflects smoothness in the directions $d \in \{x, y, t\}$; more specifically,

$$E_{\mathrm{smooth}}(\hat{\mathbf{x}}) = \sum_{d \in \{x, y, t\}} \lambda_{d}\, \|\mathbf{W}_{d} \mathbf{Q}_{d} \hat{\mathbf{x}}\|^{2}, \tag{5}$$

where $\lambda_{d}$ is the image smoothing coefficient in direction $d$, $\mathbf{W}_{d}$ is the weight matrix in direction $d$, and $\mathbf{Q}_{d}$ is the second-order differential operator in direction $d$. The last term, $E_{\mathrm{detail}}$, reflects detail retention and can be computed as

$$E_{\mathrm{detail}}(\hat{\mathbf{x}}) = \sum_{d \in \{x, y, t\}} \frac{1}{\|\nabla_{d} \hat{\mathbf{x}}\|^{2} + \varepsilon}, \tag{6}$$

where $\nabla_{d}$ is the gradient operator in direction $d$ and $\varepsilon$ is a small positive constant that keeps the denominator from vanishing.
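A minimal 1D sketch of such a fitness function follows, combining the fidelity, smoothness, and detail-retention terms described above. The weighting `lam`, the constant `eps`, and the reciprocal-gradient form of the detail term are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def vsr_fitness(x_hat, y, H, lam=0.1, eps=1e-3):
    """Sketch of a VSR fitness: data fidelity + second-difference
    smoothness penalty + a detail-retention term (illustrative)."""
    fidelity = np.sum((y - H @ x_hat) ** 2)      # ||y - H x_hat||^2
    d2 = np.diff(x_hat, n=2)                     # second-order differences
    smoothness = lam * np.sum(d2 ** 2)
    grad = np.diff(x_hat)                        # first-order gradient
    detail = 1.0 / (np.sum(grad ** 2) + eps)     # small when detail retained
    return fidelity + smoothness + detail
```

In the full method, a candidate reconstruction is scored by this kind of function and POMSPSO minimizes it over candidate high-resolution sequences.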
The schematic diagram of the reconstruction procedure is given in Figure 1. The input low-definition video is transformed into a sequence, and all elements are arranged in lexicographical order to compose a vector. Based on the video degeneration model, there is a mapping between the low-resolution sequence and the high-resolution sequence.
In Figure 1, POMSPSO is used to search for the best reconstructed result, represented as $\hat{\mathbf{x}}$ in (3). In fact, we encode a candidate reconstructed high-definition video as a particle of the subswarms in the inner loop of Figure 2, and $F(\cdot)$ in (4) is used as the fitness function during optimization. The best solution to (4) is taken as the reconstructed video. The POMSPSO algorithm thus recasts reconstruction as a search for the solution that best satisfies the video degeneration model.
3. ParameterOptimized Multiple Swarms PSO (POMSPSO)
Particle swarm optimization (PSO) is notable for its ability to handle optimization problems with multiple local optima and for its simplicity of implementation. We combined the concepts of multiple swarms and parameter auto-optimization and propose an improved multi-swarm PSO, the parameter-optimized multiple swarms PSO.
In fact, many improved PSO algorithms [15, 28–30] use fixed parameters [22, 31]. Parameter selection can strongly affect the performance of PSO [21], but no strategy has been proven to work well for all problems [21–24]. In many real engineering problems, the parameters of PSO must be adjusted several times to obtain satisfactory precision. The idea of parameter optimization is to determine the free parameters of the subswarms by a PSO program itself. The concept of multiple subswarms used here is based on an ecological approach [25, 26], in which all particles are divided into two subswarms depending on their fitness. By adjusting the parameters dynamically throughout the optimization process, the method avoids the limitation of fixed parameters; at the same time, the parameters adapt automatically when the problem to be solved changes. By applying different evolutionary strategies to the different subswarms [27], the new framework ensures search quality both intensively and extensively. These advantages over current methods yield better optimization performance than parameter-fixed PSO methods [32, 33].
3.1. Algorithm Introduction
The particles used to search for the global optimum are initialized over the search space as

$$\left\{\mathbf{x}_{i} \in \mathbb{R}^{D},\ i = 1, \ldots, S\right\}, \tag{7}$$

where $S$ and $D$ are the size and the dimension of the particle swarm, respectively, $\mathbf{x}_{i}$ is the position of the $i$th particle, and $\mathbf{v}_{i}$ is its corresponding velocity. Generally speaking, one iteration and update of the PSO algorithm can be written as

$$v_{ij}^{t+1} = \omega v_{ij}^{t} + c_{1} r_{1} \left(p_{ij} - x_{ij}^{t}\right) + c_{2} r_{2} \left(g_{j} - x_{ij}^{t}\right), \qquad x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}, \tag{8}$$

where $j$ indexes the dimensions of the particle's position vector; $i$ indexes the particle; $t$ denotes the iteration number; $c_{1}$ and $c_{2}$ are positive constants called the cognitive and social factors, respectively; $\omega$ is the inertia weight; $r_{1}$ and $r_{2}$ are random numbers; $\mathbf{p}_{i}$ is the best position visited by this particle; and $\mathbf{g}$ is the global best position over the whole swarm. More details on the PSO model and its implementation can be found in [28].
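The update step above can be sketched directly in code. The parameter values below are common constriction-style defaults from the PSO literature, not the paper's own settings, and the velocity clamp `vmax` anticipates the maximum-velocity limits discussed later:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, vmax=1.0):
    """One PSO velocity/position update for a swarm.
    x, v, pbest: arrays of shape (S, D); gbest: shape (D,)."""
    r1 = rng.random(x.shape)                  # per-component random factors
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -vmax, vmax)       # enforce maximum velocity
    return x + v_new, v_new
```

A full optimizer would loop this step, re-evaluating fitness and updating `pbest`/`gbest` each generation.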
We propose a novel optimization method, POMSPSO (parameter-optimized multiple swarms PSO), whose distinguishing feature is that the parameters of PSO are determined and optimized by the method itself. To accomplish this, we introduce two nested loops into the iteration process. The outer loop runs one swarm that optimizes the parameter settings of the two subswarms in the inner loop, while the inner loop searches for the optimal solution to the given problem. The schematic diagram of POMSPSO is shown in Figure 2.
From the viewpoint of particle swarm optimization, a particle is a potential solution to the problem. The PSO algorithm is inspired by the food-seeking behavior of bird or fish swarms in nature: a bird is modelled as a particle, and its position is represented by a vector. In Figure 2, POMSPSO employs two nested loops and three swarms. The swarm in the outer loop is named the C-swarm, and its particles are called C-particles, because a C-particle's value is used to configure the parameter selection of the two subswarms in the inner loop; the inner loop then runs under a given C-particle's configuration to find the best solution to the problem. In POMSPSO, a C-particle represents a parameter setting for the inner loop, while a particle in the inner loop represents a solution to the given problem.
As shown in Figure 2, the first optimization step initializes the particles with random numbers distributed over the search space. The C-swarm's search space differs from that of the r- and K-subswarms: the subswarms search the problem's own space (for the four benchmark functions in Section 3.2, a space larger than that used in [28]), while the C-swarm's search range is set empirically. In the fitness-evaluation step, all particles in the inner loop are evaluated by the given fitness function and sorted in descending order of fitness. The top particles are allocated to the K-subswarm and the rest to the r-subswarm. For example, with 20 particles in the inner loop and the r-proportion set to 0.1, 18 of the 20 particles belong to the K-subswarm and 2 to the r-subswarm. The r- and K-subswarms then perform r- and K-selection strategies, respectively, and update their positions.
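The fitness-based split can be sketched as follows. Since the benchmarks in Section 3.2 are minimized, "high fitness" is taken here to mean a low function value; the function name and the tie-breaking details are illustrative:

```python
import numpy as np

def split_subswarms(fitness, p_r=0.1):
    """Split particle indices into K- and r-subswarms by fitness.
    The fittest (1 - p_r) fraction goes to the K-subswarm, the rest
    to the r-subswarm; p_r = 0.1 matches the paper's example."""
    order = np.argsort(fitness)          # ascending: best (lowest) first
    n = len(fitness)
    n_r = max(1, int(round(p_r * n)))    # keep at least one r-particle
    k_idx = order[: n - n_r]             # high-fitness particles -> K
    r_idx = order[n - n_r:]              # low-fitness particles  -> r
    return k_idx, r_idx

fit = np.arange(20)[::-1].astype(float)  # 20 toy fitness values
k_idx, r_idx = split_subswarms(fit)      # 18 in K-subswarm, 2 in r-subswarm
```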
As in most iterative optimization, the stopping criterion is that the iteration satisfies one of two conditions: (1) the maximum generation is reached, or (2) the error or fitness value is good enough to satisfy the requirement. In our experience, when POMSPSO is used to solve complicated problems (e.g., VSR), the error threshold is difficult to estimate, and we often simply set it to zero. In that case, the inner loop usually stops when the maximum number of generations is reached, unless we are lucky enough to find a solution with zero error.
For every C-particle, each position represents a configuration of the inner loop. The inner loop deploys two subswarms, the r-subswarm and the K-subswarm. In ecology, r-selection describes species that breed many offspring and live in unstable environments, while K-selection describes species that produce few offspring and live in stable environments. R-selection is characterized as quantitative: little parental care, a large growth rate, and rapid development; K-selection is qualitative: much parental care, a small growth rate, and slow development. K-selection is applied to particles with high fitness, so the K-subswarm produces relatively few progenies, but these are nurtured carefully with much parental care. R-selection, on the other hand, is applied to particles with relatively low fitness. With little parental care, the r-subswarm produces a large number of progenies, which must compete with other members of the r-subswarm for survival; only the best ones survive. The main task of the r-subswarm is to explore the search space as thoroughly as possible, while the K-subswarm keeps the current optimal solutions and exploits the space around them.
In fact, (8) is the core update procedure of most PSO algorithms. R-selection and K-selection are defined in ecology to describe the idiosyncrasies of different species; in our POMSPSO algorithm, the two strategies are realized by different parameter settings.
Judging from (8), three parameters affect optimization performance: the inertia weight $\omega$, the cognitive factor $c_{1}$, and the social factor $c_{2}$. Because of their importance to PSO, these three parameters of the r-subswarm, together with those of the K-subswarm, are encoded as part of the C-particle, and POMSPSO optimizes these six parameters.
For the inner loop, we divide the whole swarm into the r-subswarm and the K-subswarm, as shown in Figure 2; that is why we call them subswarms rather than swarms. We must now determine how many particles are assigned to the r-subswarm. In other words, the proportion $p$ of the r-subswarm lies in the range $(0, 1)$ but needs to be determined, and the proportion of the K-subswarm is then $1 - p$.
Since r-selection produces more progenies than K-selection, the fertility rate $m_{r}$ of the r-subswarm is larger than the fertility rate $m_{K}$ of the K-subswarm, so we have

$$m_{r} > m_{K}. \tag{9}$$
Three steps in Figure 2, the r- and K-selection and the particle update, are presented in detail in Figure 3. Suppose the total number of particles in the r- and K-subswarms is $N$, as shown in Figure 3; after one evolutionary generation, the total number of progenies will be

$$N' = p N m_{r} + (1 - p) N m_{K}. \tag{10}$$
Considering (9), we conclude that $N' > N$. To keep the sizes of the r- and K-subswarms stable, only the best $N$ of the $N'$ progenies survive to the next inner-loop iteration, and the others are abandoned. Note that, as in classic PSO, each C-particle has only one progeny.
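The progeny count and the survival step can be sketched as follows. The fertility rates `m_r = 5` and `m_K = 2` are illustrative values satisfying the ordering above; the paper optimizes them rather than fixing them:

```python
import numpy as np

def progeny_count(n, p_r=0.1, m_r=5, m_K=2):
    """Total progenies after one generation, assuming fertility
    rates m_r > m_K (specific values here are illustrative)."""
    n_r = int(round(p_r * n))
    return n_r * m_r + (n - n_r) * m_K

def select_survivors(fitness, n):
    """Keep only the best n progenies (minimization problem)."""
    return np.argsort(fitness)[:n]

total = progeny_count(20)          # 2*5 + 18*2 = 46 progenies
survivors = select_survivors(np.random.default_rng(2).random(total), 20)
```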
Another important difference between the r- and K-subswarms is that the velocity of the r-subswarm may be much higher than that of the K-subswarm, because the r-subswarm is designed to explore the search space as widely as possible. The maximum velocity $v_{\max}^{r}$ of the r-subswarm should therefore be larger than the maximum velocity $v_{\max}^{K}$ of the K-subswarm, and these two parameters are also optimized by the C-swarm. In total, eleven parameters of the r- and K-subswarms must be determined; Table 1 lists these parameters and their meanings. All of them are optimized in the outer loop of POMSPSO, and the optimization procedure runs under their configuration. Although the parameter settings of the r- and K-subswarms are determined by the C-swarm, note that the C-swarm itself iterates with the classic settings given in [28].

Since the inner loop runs under the configuration of a given C-particle, the best fitness obtained by the inner loop is used to evaluate the corresponding C-particle. As shown in Figure 2, we take the best particle of the inner loop as the validation of the C-particle. Through evolution and update in the outer loop, POMSPSO finds the best C-particle, and the r- and K-subswarms of the inner loop can then evolve under the best parameter settings.
3.2. Performance Evaluation of POMSPSO
We assessed the performance of the parameter-optimized PSO method on a suite of benchmark functions and compared the results with those of the standard PSO (SPSO) and the constriction-type PSO (CPSO). The benchmark suite consists of two unimodal functions (Tablet and Quadric) and two multimodal functions (Rastrigin and Schaffer) [28]. All functions except the 2D Schaffer function are optimized in 30-dimensional spaces.
The four functions are as follows.

Schaffer:
$$f_{1}(x, y) = 0.5 + \frac{\sin^{2}\sqrt{x^{2} + y^{2}} - 0.5}{\left(1 + 0.001\left(x^{2} + y^{2}\right)\right)^{2}}.$$

Tablet:
$$f_{2}(\mathbf{x}) = 10^{6} x_{1}^{2} + \sum_{i=2}^{D} x_{i}^{2}.$$

Rastrigin:
$$f_{3}(\mathbf{x}) = \sum_{i=1}^{D} \left(x_{i}^{2} - 10\cos\left(2\pi x_{i}\right) + 10\right).$$

Quadric:
$$f_{4}(\mathbf{x}) = \sum_{i=1}^{D} \left(\sum_{j=1}^{i} x_{j}\right)^{2}.$$

These four functions are often used in optimization performance evaluation [28]. To test the parameter-optimized PSO's ability, we used a search space larger than those in [28]. The Tablet and Quadric functions evolved from the spherical function and have only one minimum in the search space, whereas the Schaffer and Rastrigin functions are multimodal, with multiple local minima. All of the benchmark functions attain their global minimum of 0 at the origin. Both unimodal and multimodal functions should be evaluated for a complete performance assessment; generally speaking, the optimum of a multimodal function is much harder to find.
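The four benchmarks, in the forms commonly used in the PSO literature (which may differ from the paper's exact normalizations), can be implemented compactly:

```python
import numpy as np

def schaffer(x, y):
    """2D Schaffer F6 (multimodal); global minimum 0 at the origin."""
    r2 = x**2 + y**2
    return 0.5 + (np.sin(np.sqrt(r2))**2 - 0.5) / (1 + 0.001 * r2)**2

def tablet(x):
    """Tablet (unimodal): one heavily weighted coordinate."""
    return 1e6 * x[0]**2 + np.sum(x[1:]**2)

def rastrigin(x):
    """Rastrigin (multimodal): many regularly spaced local minima."""
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def quadric(x):
    """Quadric (unimodal): squared cumulative sums."""
    return np.sum(np.cumsum(x)**2)
```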
As mentioned in Section 3.1, POMSPSO employs three swarms in two loops: the r-subswarm and the K-subswarm in the inner loop, and the C-swarm in the outer loop. One task of the parameter-optimized PSO is to find the optimal parameter settings of the r- and K-subswarms, which is performed by the C-swarm. In our experiments, a population of 20 particles was chosen for the r- and K-subswarms and 10 for the C-swarm. The maximum number of iterations is set to 40 for the outer loop and 20 for the inner loop. Table 2 lists the optimal C-particles for the four benchmark functions.

The parameter-optimized PSO performs parameter selection for each benchmark function, so that each function obtains an optimal or near-optimal solution under its own configuration instead of a commonly used setting; it is a problem-oriented PSO solution. In fact, we noticed that even when the outer loop ran only once, the results were already comparable to those of SPSO and CPSO. In Table 2, all eleven parameters are shown for Sch (Schaffer), Tab (Tablet), Ras (Rastrigin), and Qua (Quadric). It is obvious that different functions require different configurations.
The performance comparison between the parameter-optimized PSO, SPSO, and CPSO methods on the four benchmark functions is given in Table 3. The comparison is based on 200 independent trials of each method; the minimum, maximum, mean, and standard deviation over those runs are reported. The population size of all methods is 20. The maximum number of evolution generations of SPSO and CPSO is 800, while that of POMSPSO is also 800 (outer loop: 40, inner loop: 20).

From the values listed in Table 3, we can see that in most cases POMSPSO outperforms the other two methods in minimum, maximum, mean, and standard deviation. The POMSPSO method reduces the errors by about one order of magnitude compared with the other two methods.
As shown in Figure 4, multi-run experiments show that POMSPSO's results vary within much narrower intervals, which means the proposed method yields more stable optima. Given POMSPSO's precision improvement under the multi-subswarm framework, it is a reasonable choice to adopt it for video superresolution reconstruction by solving the video degeneration model in (2), (3), and (4).
(a) Schaffer
(b) Tablet
(c) Rastrigin
(d) Quadric
4. Video Superresolution by POMSPSO
4.1. Reconstruction Quality Evaluation Criteria
4.1.1. SimilarityBased Evaluation Criteria
The PSNR is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. It is most commonly used as a measure of reconstruction quality in image compression and is defined via the mean squared error (MSE). For two $m \times n$ monochrome images $I$ and $K$, where one is considered a noisy approximation of the other,

$$\mathrm{MSE} = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[I(i, j) - K(i, j)\right]^{2}.$$

The PSNR is then defined as

$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{\mathit{MAX}_{I}^{2}}{\mathrm{MSE}}\right),$$

where $\mathit{MAX}_{I}$ is the maximum possible pixel value of the image (255 for 8-bit images).
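The PSNR definition translates directly into code (8-bit images assumed via `max_val=255`):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """PSNR in dB between a reference image and its approximation."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images: infinite PSNR
    return 10 * np.log10(max_val**2 / mse)
```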
4.1.2. DetailBased Evaluation Criteria
Sharpness is a photographic term reflecting an image's detail, an integral part of its appeal. Sharpness is defined by the boundaries between zones of different tones or colors and is often illustrated by a bar pattern of increasing spatial frequency.
As shown in Figure 5, the top portion is sharp and its boundaries are crisp steps, not gradual ones; the bottom portion illustrates how the degraded pattern is blurred.
For one monochrome image $I$, sharpness is most commonly defined as the average gradient

$$\mathrm{Sharpness} = \frac{1}{(m-1)(n-1)} \sum_{i} \sum_{j} \sqrt{\frac{\left(\Delta_{x} I(i,j)\right)^{2} + \left(\Delta_{y} I(i,j)\right)^{2}}{2}},$$

where $\Delta_{x}$ and $\Delta_{y}$ denote horizontal and vertical pixel differences. The sharpness curves report the average gradient of every frame in the video, expressing how strongly the scene varies between adjacent pixels.
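One common average-gradient implementation of this measure is sketched below; the paper's exact normalization may differ:

```python
import numpy as np

def sharpness(img):
    """Average gradient magnitude of a monochrome image, a common
    sharpness measure (normalization is one of several conventions)."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img.astype(float), axis=0)[:, :-1]  # vertical differences
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))
```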
4.1.3. InformationBased Evaluation Criteria
Image entropy reflects the average amount of information in an image; for a grayscale image it measures the information contained in the aggregate gray-level distribution and can be computed as

$$H = -\sum_{i=0}^{L-1} p_{i} \log_{2} p_{i},$$

where $L$ is the number of gray levels and $p_{i}$ is the probability associated with gray level $i$.
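The histogram-based entropy can be computed as follows (256 gray levels assumed):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))
```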
4.2. Reconstruction Simulation and Experimental Results
We assessed the performance of the POMSPSO-based video superresolution process on several low-resolution video sequences and compared the results with those of bilinear interpolation. The resolution of the reconstructed video is twice that of the incoming video in both height and width. To compare the objective criteria, the input video for the superresolution procedure was obtained by 2× downsampling the ground-truth video. To reduce errors and noise effects on the reconstructed sequence, we applied a Gaussian filter.
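The degradation used to generate the test input can be sketched as smoothing followed by 2× downsampling. The 3-tap kernel below is an illustrative Gaussian-like choice; the paper's filter size and sigma are not fixed here:

```python
import numpy as np

def degrade(frame, factor=2):
    """Produce a low-resolution input frame: separable Gaussian-like
    smoothing followed by `factor`x downsampling (illustrative)."""
    k = np.array([1.0, 2.0, 1.0]); k /= k.sum()   # normalized 3-tap blur
    # blur along rows, then along columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'),
                                  1, frame.astype(float))
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'),
                                  0, blurred)
    return blurred[::factor, ::factor]            # keep every factor-th pixel

hr = np.random.default_rng(3).random((16, 16))
lr = degrade(hr)                                  # 8 x 8 input frame
```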
All reconstructed videos are compared with the ground-truth sequence. Taking 10 frames as experimental cases, we obtained the PSNR, sharpness, and entropy results shown in Figure 6. The maximum number of POMSPSO iterations is set to 15, and the whole population contains 10 particles.
(a) PSNR
(b) Sharpness
(c) Entropy
From the PSNR curves in Figure 6, we can see that the POMSPSO algorithm achieved a PSNR of about 47 dB while bilinear interpolation reached only 18 dB; that is, the proposed method has much better objective image quality than bilinear interpolation. A higher sharpness value means the frame shows more detail, and the sharpness values of the POMSPSO algorithm are higher than those of bilinear interpolation by 1.5 on average, and even higher than those of the ground truth by 0.5 on average. The entropy of the proposed method is lower than that of bilinear interpolation but very close to the ground truth. Close entropy means the reconstructed sequence has a pixel-value distribution similar to the ground truth, while the higher entropy of interpolation can be regarded as an artificial effect.
The reconstruction results of bilinear interpolation and POMSPSO are given in Figure 7; the images shown are the 9th and 10th frames of the corresponding reconstructed videos. They show that video reconstruction using POMSPSO obtains high-quality results within a small number of iterations, and the reconstructed video is temporally smooth.
(a) The input
(b) VSR results of bilinear interpolation
(c) VSR results of POMSPSO
(d) Ground truth
For a clearer view of the details, the helicopter and the railings from the 9th and 10th frames are zoomed in Figure 8. The helicopter is the object of greatest interest in the image, and the railings are the region whose fine details are hardest to recognize. Under bilinear interpolation, both the helicopter and the railings are seriously blurred. Judging from Figure 8, the visual quality of the POMSPSO result is much better than that of bilinear interpolation, especially in the regions of interest; in other parts of the images, such as the sky and the trees, VSR by POMSPSO also performs better.
(a) Zoomed helicopter of the 9th frame, bilinear (left), proposed method (middle), and ground truth (right)
(b) Zoomed railings of the 9th frame, bilinear (left), proposed method (middle), and ground truth (right)
(c) Zoomed helicopter of the 10th frame, bilinear (left), proposed method (middle), and ground truth (right)
(d) Zoomed railings of the 10th frame, bilinear (left), proposed method (middle), and ground truth (right)
5. Conclusions and Future Works
In this paper, we proposed POMSPSO, which helps to find better swarm configurations for a given problem. POMSPSO employs two nested loops and three swarms. A performance comparison on four standard benchmark functions showed that POMSPSO achieves higher accuracy on both unimodal and multimodal functions.
The imaging degeneration model is central to reconstruction; to some extent, choosing a proper model is the basis of finding the solution. Based on an imaging degeneration model and POMSPSO, we proposed a novel VSR method from the viewpoint of optimization computation: the higher-resolution video sequence is regarded as the optimal solution found by swarm optimization. Experimental results showed that the proposed method obtains results of high objective and subjective quality.
In the future, more effort should be devoted to building more accurate degeneration models and to fast implementations of multiple-swarm adaptive optimization algorithms.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant no. 61305041 and the Fundamental Research Funds for the Central Universities of China under Grant no. K5051304024.
References
 J. L. Harris, “Diffraction and resolving power,” Journal of the Optical Society of America, vol. 54, no. 7, pp. 931–936, 1964. View at: Publisher Site  Google Scholar
 J. W. Goodman, Introduction to Fourier Optics, McGrawHill, New York, NY, USA, 1968.
 B. R. Hunt, “Superresolution of images: algorithms, principles, performance,” International Journal of Imaging Systems and Technology, vol. 6, no. 4, pp. 297–304, 1995. View at: Publisher Site  Google Scholar
 M. BenEzra, A. Zomet, and S. K. Nayar, “Video superresolution using controlled subpixel detector shifts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 977–987, 2005. View at: Publisher Site  Google Scholar
 C. Liu and D. Sun, “On Bayesian adaptive video super resolution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 2, pp. 346–360, 2014. View at: Google Scholar
 J. Buss, C. Coltharp, and J. Xiao, “Superresolution imaging of the bacterial division machinery,” Journal of Visualized Experiments, no. 71, 2013. View at: Publisher Site  Google Scholar
 H. Su, Y. Wu, and J. Zhou, “Superresolution without dense flow,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1782–1795, 2012. View at: Publisher Site  Google Scholar  MathSciNet
 A. J. Patti, M. I. Sezan, and A. M. Tekalp, “Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time,” IEEE Transactions on Image Processing, vol. 6, no. 8, pp. 1064–1076, 1997. View at: Publisher Site  Google Scholar
 R. R. Schultz and R. L. Stevenson, “Extraction of highresolution frames from video sequences,” IEEE Transactions on Image Processing, vol. 5, no. 6, pp. 996–1011, 1996. View at: Publisher Site  Google Scholar
 B. Bascle, A. Blake, and A. Zisserman, “Motion deblurring and superresolution from an image sequence,” in Proceeding of European Conference on Computer Vision (ECCV '96), pp. 312–320, Springer, 1996. View at: Google Scholar
 M. Irani and S. Peleg, “Motion analysis for image enhancement: resolution, occlusion, and transparency,” Journal of Visual Communication and Image Representation, vol. 4, no. 4, pp. 324–335, 1993. View at: Publisher Site  Google Scholar
 M. Belkacem, T. Bouktir, and K. Srairi, "Strategy based PSO for dynamic control of UPFC to enhance power system security," Journal of Electrical Engineering and Technology, vol. 4, no. 3, pp. 315–322, 2009.
 H.-T. Yau, C.-J. Lin, and Q.-C. Liang, "PSO based PI controller design for a solar charger system," The Scientific World Journal, vol. 2013, Article ID 815280, 13 pages, 2013.
 A. Chander, A. Chatterjee, and P. Siarry, "A new social and momentum component adaptive PSO algorithm for image segmentation," Expert Systems with Applications, vol. 38, no. 5, pp. 4998–5004, 2011.
 Z. Geng and Q. Zhu, "A multi-swarm PSO and its application in operational optimization of ethylene cracking furnace," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 103–106, Chongqing, China, June 2008.
 S. Peters and A. Koenig, "A hybrid texture analysis system based on nonlinear & oriented kernels, particle swarm optimization, and k-NN vs. support vector machines," Neural Network World, vol. 17, no. 6, pp. 507–527, 2007.
 M. Meissner, M. Schmuker, and G. Schneider, "Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training," BMC Bioinformatics, vol. 7, article 125, 2006.
 A. Subasi, "Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders," Computers in Biology and Medicine, vol. 43, no. 5, pp. 576–586, 2013.
 S. Nema, J. Goulermas, G. Sparrow, and P. Cook, "A hybrid particle swarm branch-and-bound (HPB) optimizer for mixed discrete nonlinear programming," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 38, no. 6, pp. 1411–1424, 2008.
 B. Sharma, R. K. Thulasiram, and P. Thulasiraman, "Portfolio management using particle swarm optimization on GPU," in Proceedings of the 10th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA '12), pp. 103–110, July 2012.
 R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation (CEC '00), pp. 84–88, July 2000.
 Y. Shi and R. C. Eberhart, "Parameter selection in particle swarm optimization," in Proceedings of the 7th Annual Conference on Evolutionary Programming, pp. 591–600, New York, NY, USA, 1998.
 A. Carlisle and G. Dozier, "An off-the-shelf PSO," in Proceedings of the Workshop on Particle Swarm Optimization, pp. 1–6, Indianapolis, Ind, USA, 2001.
 A. Lari, A. Khosravi, and F. Rajabi, "Controller design based on mu analysis and PSO algorithm," ISA Transactions, vol. 53, no. 2, pp. 517–523, 2014.
 T. Blackwell and J. Branke, "Multi-swarm optimization in dynamic environments," in Applications of Evolutionary Computing, vol. 3005 of Lecture Notes in Computer Science, pp. 489–500, 2004.
 Y. Y. Yan and B. L. Guo, "Particle swarm optimization inspired by r- and K-selection in ecology," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 1117–1123, Hong Kong, China, June 2008.
 Y. Yan and B. Guo, "Convergence analysis of PSO inspired by r- and K-selection," in Proceedings of the 8th International Conference on Intelligent Systems Design and Applications (ISDA '08), vol. 2, pp. 247–252, Kaohsiung, Taiwan, November 2008.
 M. Clerc and J. Kennedy, "The particle swarm: explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
 K. E. Parsopoulos and M. N. Vrahatis, "Recent approaches to global optimization problems through particle swarm optimization," Natural Computing, vol. 1, no. 2-3, pp. 235–306, 2002.
 N. Zeng, Z. Wang, Y. Li, M. Du, and X. Liu, "A hybrid EKF and switching PSO algorithm for joint state and parameter estimation of lateral flow immunoassay models," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 9, no. 2, pp. 321–329, 2012.
 Y. Yan and B. Guo, "Two image denoising approaches based on wavelet neural network and particle swarm optimization," Chinese Optics Letters, vol. 5, no. 2, pp. 82–85, 2007.
 Y. Yan, B. Guo, Z. Yang, and X. Fu, "Image noise removal via wavelet transform and r/K-PSO," in Proceedings of the 4th International Conference on Natural Computation (ICNC '08), vol. 5, pp. 544–548, Jinan, China, October 2008.
 Y. Y. Yan and B. L. Guo, "r/K-PSO and its convergence speed analysis," in Proceedings of the 8th International Conference on Intelligent Systems Design and Applications (ISDA '08), vol. 2, pp. 247–252, Kaohsiung, Taiwan, November 2008.
Copyright
Copyright © 2014 Yunyi Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.