Xiao Fu, Wangsheng Liu, Bin Zhang, Hua Deng, "Quantum Behaved Particle Swarm Optimization with Neighborhood Search for Numerical Optimization", Mathematical Problems in Engineering, vol. 2013, Article ID 469723, 10 pages, 2013. https://doi.org/10.1155/2013/469723
Quantum Behaved Particle Swarm Optimization with Neighborhood Search for Numerical Optimization
Quantum-behaved particle swarm optimization (QPSO) is a recent PSO variant that outperforms the original PSO in search ability while using fewer control parameters. However, QPSO, like PSO, still suffers from premature convergence when solving complex optimization problems. The main reason is that new particles in QPSO are generated around the weighted attractors of the previous best particles and the global best particle, which may cause attraction toward these positions to occur too quickly. To tackle this problem, this paper proposes a new QPSO algorithm called NQPSO, in which one local and one global neighborhood search strategy are utilized to balance exploitation and exploration. Moreover, the concept of opposition-based learning (OBL) is employed for population initialization. Experimental studies are conducted on a set of well-known benchmark functions including multimodal and rotated problems. Computational results show that our approach outperforms some similar QPSO algorithms and five other state-of-the-art PSO variants.
Many real-world problems can be formulated as optimization problems over continuous or discrete search spaces. As these problems grow increasingly complex, more efficient optimization algorithms are needed. Over the past several years, population-based stochastic optimization techniques have been widely used to solve such problems, including genetic algorithms (GA), evolutionary programming (EP), particle swarm optimization (PSO), differential evolution (DE), ant colony optimization (ACO), and artificial bee colony (ABC). Owing to its simple concept, easy implementation, and effectiveness, PSO has been widely applied to various optimization areas [7–11].
PSO was first introduced by Kennedy and Eberhart in 1995. It is an optimization technique inspired by swarm intelligence. Like GA, PSO is a population-based algorithm, but it does not contain any crossover or mutation operator. In PSO, the movement of particles is determined by their corresponding previous best particles (pbest) and the global best particle (gbest). Due to the attraction of these best particles, PSO shows a fast convergence rate. However, it easily converges to local minima when solving complex problems. The main reason is that the attraction search pattern of PSO greatly depends on pbest and gbest. Once these best particles get stuck, all particles in the swarm quickly converge to the trapped position. To help trapped particles escape, many improved PSO algorithms have been proposed. In , Shi and Eberhart introduced an inertia weight w into the original PSO to achieve a balance between global and local search. Reported results show that a linearly decreasing w is a good choice for the test suite. Van den Bergh and Engelbrecht proposed a cooperative approach for PSO (CPSO-H) for solving multimodal problems. Liang et al. proposed a comprehensive learning PSO (CLPSO), in which each particle can learn from other particles' experiences in different dimensions. Simulation results show that CLPSO outperforms seven other PSO algorithms. Li et al. presented an adaptive learning PSO for function optimization, in which the learning mechanism of each particle is separated into three components: its own historical best position, the closest neighbor, and the global best one. By using this individual-level adaptive technique, a particle can well balance its exploration and exploitation behavior. Zhan et al. presented an adaptive PSO (APSO) employing the following two strategies. The first evaluates the population distribution and particle fitness and identifies the current search status.
The second utilizes an elitist learning strategy to help the global best particle jump out of likely local optima. Wang et al. proposed a new PSO algorithm with generalized opposition-based learning (GOBL) and Cauchy mutation. GOBL is an enhanced form of opposition-based learning (OBL), which helps accelerate the evolution, while the Cauchy mutation focuses on improving the global search ability. In , Wang et al. introduced a diversity-enhanced PSO algorithm (DNSPSO), which employs a diversity-enhancing mechanism and neighborhood search strategies to achieve a tradeoff between exploration and exploitation.
Like other population-based stochastic algorithms, the performance of PSO is greatly influenced by its control parameters, such as the inertia weight (w) and the acceleration coefficients (c1 and c2). Different parameter settings may result in different performance. To minimize the effects of these parameters, some adaptive parameter mechanisms have been designed [15, 16]. Recently, Sun et al. proposed a novel PSO algorithm called quantum-behaved PSO (QPSO), in which a quantum model is used to depict the state of particles. Compared to the original PSO, QPSO eliminates the velocity term and does not contain the parameters w, c1, and c2. In QPSO, new particles are generated around the weighted positions of the previous best particles and the global best particle, which may cause attraction to occur too quickly. To tackle this problem, some improved QPSO algorithms have been proposed [21–24]. Sun et al. proposed a diversity-guided QPSO (DGQPSO), which employs a mutation operator on the global best particle. In , chaotic search is introduced into QPSO to increase the diversity of the swarm in the later period of the search, so as to help the algorithm escape from local minima. Zhao et al. proposed a fuzzy QPSO, in which the center of a potential particle is influenced by more than two particles in the neighborhood and the influence is defined by a fuzzy variable. Wang and Zhou presented a local QPSO (LQPSO) as a generalized local search operator. The LQPSO is incorporated into a main QPSO to construct a hybrid algorithm, QPSO-LQPSO. Results show that QPSO-LQPSO achieves better results than PSO and QPSO.
In this paper, we propose a new QPSO algorithm called NQPSO, in which one local and one global neighborhood search strategy are utilized to balance exploitation and exploration. In addition, the concept of opposition-based learning (OBL) is employed for population initialization. To verify the performance of our approach, twelve well-known benchmark functions, including multimodal and rotated problems, are used in the experiments. Simulation results show that NQPSO achieves better results than some similar QPSO algorithms and other state-of-the-art PSO variants.
The rest of the paper is organized as follows. The original PSO and QPSO are briefly introduced in Sections 2 and 3, respectively. Our approach NQPSO is described in Section 4. Experimental results and discussions are presented in Section 5. Finally, the work is summarized in Section 6.
2. Particle Swarm Optimization
PSO is a population-based optimization technique, which is motivated by the behaviors of fish schooling or birds flocking. In PSO, a population is called a swarm, and each member in the swarm is called a particle which is a potential solution to the optimization task. During the evolution, the search direction of one particle is determined by its own previous best particle and the global best particle found so far by all particles.
Let N be the swarm size. Each particle i has two vectors, a velocity V_i and a position X_i. At each iteration, each particle in the swarm updates its velocity and position as follows:

V_i(t+1) = w·V_i(t) + c1·r1·(pbest_i − X_i(t)) + c2·r2·(gbest − X_i(t)),
X_i(t+1) = X_i(t) + V_i(t+1),

where X_i and V_i are the position and velocity vectors of the ith particle, respectively, pbest_i represents the previous best particle of the ith particle, and gbest is the global best particle found so far by all particles. r1 and r2 are two independently generated random numbers in the range (0, 1). The parameter w is called the inertia weight, and c1 and c2 are known as acceleration coefficients.
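The update rule above can be sketched as follows. This is an illustrative implementation, not the authors' code; the default parameter values are common choices from the PSO literature, not values prescribed by this paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618, rng=None):
    """One PSO velocity/position update for the whole swarm.

    x, v, pbest: arrays of shape (n_particles, dim); gbest: shape (dim,).
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # independent uniform numbers in (0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# tiny usage example: a particle sitting exactly at pbest and gbest feels no pull
rng = np.random.default_rng(0)
x = rng.random((5, 3))
v = np.zeros((5, 3))
x_new, v_new = pso_step(x, v, pbest=x.copy(), gbest=x[0], rng=rng)
```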
3. Quantum-Behaved Particle Swarm Optimization
A recent theoretical study reported that each particle converges to its local attractor p_i, defined as follows:

p_i = φ·pbest_i + (1 − φ)·gbest,

where φ = (c1·r1)/(c1·r1 + c2·r2). It can be seen that p_i is a stochastic attractor of particle i that lies in a hyperrectangle with pbest_i and gbest as two of its vertices.
Based on the above characteristic, Sun et al. proposed a quantum-behaved PSO (QPSO) algorithm. In QPSO, each particle has only a position vector and no velocity vector. During the evolution, each particle updates its position as follows:

X_i(t+1) = p_i ± β·|mbest − X_i(t)|·ln(1/u),  (3)

where the sign is taken with equal probability, and φ and u are two random numbers distributed uniformly in the range (0, 1). The parameter β is called the contraction-expansion coefficient, which can be tuned to control the convergence speed of the algorithm. mbest is the mean best position of the population, calculated by

mbest = (1/N)·Σ_{i=1}^{N} pbest_i,

where N is the population size.
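A minimal sketch of this quantum-model update, using our own notation (beta for the contraction-expansion coefficient, mbest for the mean of all personal best positions):

```python
import numpy as np

def qpso_step(x, pbest, gbest, beta, rng=None):
    """One QPSO position update following the quantum model described above."""
    rng = rng or np.random.default_rng()
    phi = rng.random(x.shape)                 # mixing weight in (0, 1)
    p = phi * pbest + (1.0 - phi) * gbest     # local attractor of each particle
    mbest = pbest.mean(axis=0)                # mean best position
    u = 1.0 - rng.random(x.shape)             # uniform in (0, 1], keeps ln finite
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

# if the whole swarm has collapsed to one point, the update leaves it there
pts = np.ones((4, 2))
new = qpso_step(pts, pbest=pts, gbest=pts[0], beta=0.75)
```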
The main steps of QPSO are described in Algorithm 1, where p_i is the local attractor, FEs is the number of fitness evaluations, and MAX_FEs is the maximum number of FEs. Compared to the original PSO, QPSO has no velocity term and does not use the parameters w, c1, and c2. However, QPSO introduces a new parameter β, which is linearly decreased from 1.0 to 0.5 as reported in some of the literature [20, 21].
4. Proposed Approach
According to (3), new particles are generated around the local attractors p_i, which means that particles move toward the local attractors during the search process. Since the local attractors are weighted positions of pbest and gbest, they lie in the neighborhood of gbest, which indirectly shows that particles move into the neighborhood of gbest. This search mechanism achieves a fast convergence speed by generating new particles in the neighborhood of gbest. However, it may result in premature convergence because of this fast attraction. Figure 1 illustrates the search behavior of QPSO.
To enhance the global search ability and avoid premature convergence, some mutation techniques have been introduced into QPSO algorithm. In , Sun et al. proposed a diversity-guided QPSO, in which a mutation operation is conducted on the gbest if the diversity of swarm is smaller than a predefined value. In , Jamalipour et al. proposed another mutation operator inspired by the mutation scheme of DE, in which each particle updates the position according to the original quantum model or the DE mutation with equal probability.
Although the above mutation techniques can improve the global search ability of QPSO, they provide poor local search. To make a tradeoff between global and local search, this paper proposes one local and one global neighborhood search strategy.
In the local neighborhood search (LNS) strategy, we focus on searching the local neighborhood of the current particle, which helps find more accurate solutions. The local neighborhood search strategy is defined by

LX_i = r1·X_i + r2·pbest_i + r3·(X_c − X_d),

where X_c and X_d are the position vectors of two randomly selected particles (c ≠ d ≠ i), r1, r2, and r3 are three random numbers in the range (0, 1), and r1 + r2 + r3 = 1. Figure 2 illustrates the mechanism of the local neighborhood search strategy. The proposed LNS strategy is similar to the local search operator used in , but they differ: the local search operator in  is based on an assumed ring topology, while our approach is based on the whole population.
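The LNS trial point can be sketched as below. This assumes the DNSPSO-style form LX_i = r1*X_i + r2*pbest_i + r3*(X_c - X_d) with r1 + r2 + r3 = 1, where c and d are two distinct particles drawn from the whole population; the function and argument names are our own.

```python
import numpy as np

def local_neighborhood_search(i, x, pbest, rng=None):
    """Build a local-neighborhood-search trial point for particle i."""
    rng = rng or np.random.default_rng()
    others = [k for k in range(len(x)) if k != i]
    c, d = rng.choice(others, size=2, replace=False)
    r = rng.random(3)
    r = r / r.sum()                 # enforce r1 + r2 + r3 = 1
    return r[0] * x[i] + r[1] * pbest[i] + r[2] * (x[c] - x[d])

# with an entirely collapsed swarm the difference term vanishes
x = np.ones((6, 3))
lx = local_neighborhood_search(0, x, pbest=x)
```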
In the global neighborhood search (GNS) strategy, we concentrate on searching the global neighborhood of the current particle, which enhances the global search and helps avoid premature convergence. The global neighborhood search strategy perturbs the current particle with a random step generated by a Lévy distribution with a parameter λ. The main reason for using Lévy mutation is that the Lévy probability distribution has an infinite second moment and is therefore more likely to generate a new particle that is farther away from its parent than the commonly employed Gaussian mutation. Figure 3 illustrates the mechanism of the global neighborhood search strategy.
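A heavy-tailed Lévy step can be drawn via Mantegna's algorithm, a common way to approximate Lévy-stable samples. The paper does not state how its Lévy numbers are generated, so the sampling method and the value lam=1.5 here are assumptions for illustration only.

```python
import math
import random

def levy_sample(lam=1.5, rng=random):
    """Draw one heavy-tailed random step via Mantegna's algorithm."""
    sigma = (math.gamma(1.0 + lam) * math.sin(math.pi * lam / 2.0)
             / (math.gamma((1.0 + lam) / 2.0) * lam * 2.0 ** ((lam - 1.0) / 2.0))
             ) ** (1.0 / lam)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / lam)

random.seed(1)
samples = [levy_sample() for _ in range(1000)]
```

Unlike Gaussian mutation, occasional very large steps appear among mostly small ones, which is exactly the long-jump behavior the GNS strategy relies on.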
When conducting the neighborhood search, two new particles are generated by the local and global neighborhood search strategies, respectively. There are then three candidates: the current particle and the two new particles. A greedy selection method is employed to choose the best of the three as the new current particle.
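The greedy selection step amounts to keeping whichever of the three candidates has the best fitness. A minimal sketch for a minimization problem (names are our own):

```python
def greedy_select(candidates, f):
    """Return the candidate with the smallest objective value (minimization)."""
    return min(candidates, key=f)

# the current particle competes with the LNS and GNS trial points
current, lns_trial, gns_trial = [2.0, 2.0], [0.5, 0.5], [3.0, -1.0]
best = greedy_select([current, lns_trial, gns_trial],
                     lambda p: p[0] ** 2 + p[1] ** 2)
```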
Population initialization, an important step in population-based stochastic algorithms, affects both the convergence speed and the quality of solutions. In general, random initialization is used to generate the initial population when no prior information is available. Following the suggestions of , replacing random initialization with opposition-based learning (OBL) can produce better initial solutions and accelerate convergence. Therefore, this paper also employs OBL for population initialization. The method is described as follows.
(1) Randomly generate N particles to initialize the population P.
(2) Calculate the boundaries [a_j, b_j] of the current population according to (7): a_j = min_i X_{i,j} and b_j = max_i X_{i,j} for each dimension j.
(3) For each particle X_i in P, generate an opposite particle OX_i by OX_{i,j} = a_j + b_j − X_{i,j}, where OX_{i,j} is the opposite position of X_{i,j}. After conducting the opposition operation, there are N opposite particles, which form an opposite population OP.
(4) Select the N fittest particles from P and OP as the initial population.
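The four steps above can be sketched as one function. This is an illustrative implementation of opposition-based initialization in the style of Rahnamayan et al.; the function and argument names are our own.

```python
import numpy as np

def obl_initialize(n, dim, lo, hi, f, rng=None):
    """Opposition-based population initialization for a minimization problem.

    Generates a random population, forms opposite points a_j + b_j - X_ij
    from the population's per-dimension bounds, and keeps the n fittest of
    the 2n candidates.
    """
    rng = rng or np.random.default_rng()
    pop = rng.uniform(lo, hi, size=(n, dim))
    a = pop.min(axis=0)            # current lower boundary per dimension
    b = pop.max(axis=0)            # current upper boundary per dimension
    opp = a + b - pop              # opposite population OP
    both = np.vstack([pop, opp])
    fitness = np.array([f(p) for p in both])
    return both[np.argsort(fitness)[:n]]

# usage: 10 particles for the sphere function on [-5, 5]^3
init = obl_initialize(10, 3, -5.0, 5.0, lambda p: float(np.sum(p * p)))
```

Note that each opposite point stays inside [a_j, b_j], so the selected population never leaves the original search bounds.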
Our approach NQPSO incorporates one local and one global neighborhood search strategy into the original QPSO. The neighborhood search strategies focus on searching the local and global neighbors of particles, striking a balance between local and global search. The opposition-based population initialization generates high-quality initial solutions and accelerates convergence.
The main steps of NQPSO are described in Algorithm 2, where rand(0,1) is a random number in the range (0, 1), the parameter is the probability of conducting neighborhood search, FEs is the number of fitness evaluations, and MAX_FEs is the maximum number of FEs. Compared to the original QPSO, NQPSO does not add extra loop operations; therefore, NQPSO and QPSO have the same computational time complexity.
5. Experimental Study
5.1. Test Problems
Twelve benchmark functions are used in the following experiments. These problems were utilized in previous studies [14, 17, 19]. According to their properties, they are divided into three groups: unimodal and simple multimodal problems, unrotated multimodal problems, and rotated multimodal problems. For the rotated problems, the original variable is left-multiplied by the orthogonal matrix M to obtain the new rotated variable. All problems used in this paper are minimization problems. Brief descriptions of these problems are presented in Table 1.
5.2. Comparison of NQPSO with Other Similar QPSO Algorithms
Since the introduction of QPSO, several improved QPSO algorithms have been proposed. To verify the effectiveness of our approach, this section compares NQPSO with similar QPSO algorithms, including QPSO, diversity-guided QPSO (DGQPSO), and QPSO with a weighted mean best position (WQPSO).
To ensure a fair comparison, the same settings are used for common parameters. For all algorithms, the population size is set to 40, and the contraction-expansion coefficient β is linearly decreased from 1.0 to 0.5. For DGQPSO, the parameter is set to 0.0001, and the coefficient used in the mutation is set to 0.00001. For NQPSO, the probability of the neighborhood search is set to 0.2 based on empirical studies. When the number of fitness evaluations (FEs) reaches the maximum value MAX_FEs, the algorithm stops. In the experiment, MAX_FEs is set to . All algorithms are run 30 times on each test problem. Throughout the experiments, the mean fitness error values and standard deviations are reported, where the mean error is defined as f(x) − f(x*), with f(x) the fitness value found in the last generation and f(x*) the global optimum of the problem.
Table 2 presents the computational results of QPSO, DGQPSO, WQPSO, and NQPSO on the test suite, where "Mean" indicates the mean fitness error value and "Std" represents the standard deviation. For each problem, the best result (the minimal value) is shown in bold. It can be seen that DGQPSO outperforms QPSO on . On this problem, all four QPSO algorithms can find the global optimum. Although the diversity-guided method can improve the performance of QPSO, it achieves only small advantages on most test problems. Like DGQPSO, WQPSO performs better than QPSO on all problems except , and the weighted best position method significantly improves the performance of QPSO on and . NQPSO significantly improves the performance of QPSO on , , , and . In particular, for , , and , only NQPSO converges to the global optimum, while the other algorithms fall into local minima.
To observe the evolutionary processes of the algorithms, Figure 4 shows the convergence characteristics of QPSO, DGQPSO, WQPSO, and NQPSO on some representative problems. As seen, NQPSO converges faster than the other three QPSO algorithms. In particular, for , , and , NQPSO finds the global optimum in the early stage of the evolution, while the other three QPSO algorithms show slow convergence rates. Although DGQPSO and WQPSO achieve better results than the original QPSO, their convergence characteristics are similar.
5.3. Comparison of NQPSO with Other State-of-the-Art PSO Algorithms
To further verify the performance of our approach, this section presents a comparative study of NQPSO with other state-of-the-art PSO variants. These PSO algorithms are listed as follows.
(i) Cooperative PSO (CPSO-H).
(ii) Comprehensive learning PSO (CLPSO).
(iii) Adaptive learning PSO (APSO).
(iv) Comprehensive learning PSO with generalized opposition-based learning (GOCLPSO).
(v) Diversity enhanced PSO with neighborhood search (DNSPSO).
(vi) Our approach NQPSO.
The parameter settings of CPSO-H and CLPSO are described in . Following the suggestions of , the same parameter settings are used for APSO: the inertia weight w is set to 0.7298, and c1 = c2 = 1.49618. For GOCLPSO, the probability of opposition is set to 0.3, and the other parameters are the same as in CLPSO. The parameters used in DNSPSO are set to 2, 0.9, and 0.6, respectively. All six PSO algorithms use the same population size and maximum number of fitness evaluations (MAX_FEs = 2.0E+05). For each test problem, each algorithm is run 30 times, and the mean fitness error values are recorded.
Table 3 presents the computational results achieved by CPSO-H, CLPSO, APSO, GOCLPSO, DNSPSO, and NQPSO, where "Mean" indicates the mean fitness error value. For each problem, the best result is shown in bold. From the results, it can be seen that NQPSO outperforms CPSO-H on all test problems except , , and . CLPSO performs better than NQPSO on and , while NQPSO achieves better results on the remaining 10 problems. APSO outperforms NQPSO on and , while NQPSO performs better than APSO on the remaining 10 problems. GOCLPSO, DNSPSO, and NQPSO can all find the global optimum on . NQPSO performs better than GOCLPSO on 10 problems. DNSPSO and NQPSO achieve the same results on , , , , , and . DNSPSO outperforms NQPSO on and , while NQPSO obtains better results on , , , and . Although both DNSPSO and NQPSO employ neighborhood search strategies, NQPSO shows better performance on the majority of the test problems.
To compare the performance of multiple algorithms on the test suite, we conduct the Friedman test following the suggestions of . Table 4 presents the average rankings of CPSO-H, CLPSO, APSO, GOCLPSO, DNSPSO, and NQPSO, calculated with SPSS software. The best ranking (the lowest value) is shown in bold. As seen, the six algorithms can be sorted in the following order: NQPSO, DNSPSO, CLPSO, GOCLPSO, CPSO-H, and APSO. The best average ranking is obtained by NQPSO, which outperforms the other five PSO algorithms. According to the literature , GOCLPSO is better than CLPSO, but our results show that CLPSO is better than GOCLPSO. The main reason is that the benchmark functions tested in this paper differ from those in ; an algorithm may show different performance on different test problems.
Besides the Friedman test, we also conduct the Wilcoxon signed-rank test to compare the performance differences between NQPSO and the other five PSO algorithms [19, 30]. Table 5 shows the resulting p values. They indicate that NQPSO is not significantly better than the other algorithms, but NQPSO outperforms them according to the average rankings shown in Table 4.
Quantum-behaved PSO (QPSO) is a PSO variant that employs a quantum model to update the positions of particles. Compared to the original PSO, QPSO eliminates the velocity term and does not contain the related parameters w, c1, and c2. Although QPSO introduces a new parameter to control the step size, it still has fewer control parameters than PSO. Recent studies show that QPSO performs better than the original PSO on many benchmark functions and real-world problems. However, both PSO and QPSO still easily fall into local minima when solving complex problems. The main reason is that particles tend to move into the neighborhood of gbest under the attraction of the weighted positions of pbest and gbest. If particles are attracted too fast, premature convergence easily occurs. To tackle this problem, this paper proposes a new QPSO algorithm called NQPSO, which employs one local and one global neighborhood search strategy to balance exploitation and exploration. Moreover, the concept of opposition-based learning (OBL) is employed for population initialization. To verify the performance of our approach, twelve well-known benchmark functions including multimodal and rotated problems are used in the experiments. Computational results show that NQPSO outperforms some similar QPSO algorithms and five other state-of-the-art PSO variants.
Although NQPSO significantly improves the performance of QPSO on many problems, it still falls into local minima on some problems, such as , , and . Enhancing the performance of NQPSO on these problems is a possible research direction. In addition, a new parameter is introduced to control the frequency of conducting neighborhood search, and we have not yet investigated its effect on the performance of NQPSO. Selecting the best value of this parameter will be another direction of our future work.
The Orthogonal Matrix M
The orthogonal matrix M is 30 × 30
This work is supported by the Twelfth Five-Year Plan of Jilin Province Education Science Project (no. GH11466).
- D. E. Goldberg, Genetic Algorithms in Search Optimization and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
- L. J. Fogel, “Evolutionary programming in perspective: the top-down view,” in Computational Intelligence: Imitating Life, IEEE Press, Piscataway, NJ, USA, 1994.
- J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
- K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Natural Computing Series, Springer, Berlin, Germany, 1st edition, 2005.
- M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 26, no. 1, pp. 29–41, 1996.
- D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
- L. Ali, S. L. Sabat, and S. K. Udgata, “Particle swarm optimisation with stochastic ranking for constrained numerical and engineering benchmark problems,” International Journal of Bio-Inspired Computation, vol. 4, no. 3, pp. 155–166, 2012.
- C. C. Tseng, J. G. Hsieh, and J. H. Jeng, “Active contour model via multi-population particle swarm optimization,” Expert Systems with Applications, vol. 36, no. 3, pp. 5348–5352, 2009.
- K. W. Yu and Z. L. Huang, “LQ regulator design based on particle swarm optimization,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 4142–4145, Taipei, Taiwan, October 2006.
- C. Priya and P. Lakshmi, “Particle swarm optimisation applied to real time control of spherical tank system,” International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 206–216, 2012.
- H. Wang, “Opposition-based barebones particle swarm for constrained nonlinear optimization problems,” Mathematical Problems in Engineering, vol. 2012, Article ID 761708, 12 pages, 2012.
- Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, May 1998.
- F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
- J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
- C. Li, S. Yang, and T. T. Nguyen, “A self-learning particle swarm optimizer for global optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 42, no. 3, pp. 627–646, 2012.
- Z. H. Zhan, J. Zhang, Y. Li, and H. S. H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 39, no. 6, pp. 1362–1381, 2009.
- H. Wang, Z. J. Wu, S. Rahnamayan, Y. Liu, and M. Ventresca, “Enhancing particle swarm optimization using generalized opposition-based learning,” Information Sciences, vol. 181, no. 20, pp. 4699–4714, 2011.
- H. Tizhoosh, “Opposition-based reinforcement learning,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 10, no. 4, pp. 578–585, 2006.
- H. Wang, H. Sun, C. Li, S. Rahnamayan, and J. Pan, “Diversity enhanced particle swarm optimization with neighborhood search,” Information Sciences, vol. 223, pp. 119–135, 2013.
- J. Sun, B. Feng, and W. B. Xu, “Particle swarm optimization with particles having quantum behavior,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '04), pp. 325–331, June 2004.
- J. Sun, W. B. Xu, and W. Fang, “A diversity-guided quantum-behaved particle swarm optimization algorithm,” in Simulated Evolution and Learning, vol. 4247 of Lecture Notes in Computer Science, pp. 497–504, Springer, New York, NY, USA, 2006.
- K. Yang and H. Nomura, “Quantum-behaved particle swarm optimization with chaotic search,” IEICE Transactions on Information and Systems, vol. 91, no. 7, pp. 1963–1970, 2008.
- W. Zhao, Y. San, and H. Shi, “Fuzzy quantum-behaved particle swarm optimization algorithm,” in Proceedings of the International Symposium on Computational Intelligence and Design (ISCID '10), pp. 49–52, Hangzhou, China, October 2010.
- J. Wang and Y. Zhou, “Quantum-behaved particle swarm optimization with generalized local search operator for global optimization,” in Advanced Intelligent Computing Theories and Applications: With Aspects of Artificial Intelligence, vol. 4682 of Lecture Notes in Computer Science, pp. 851–860, Springer, New York, NY, USA, 2007.
- F. van den Bergh and A. P. Engelbrecht, “A study of particle swarm optimization particle trajectories,” Information Sciences, vol. 176, no. 8, pp. 937–971, 2006.
- M. Jamalipour, R. Sayareh, M. Gharib, F. Khoshahval, and M. R. Karimi, “Quantum behaved particle swarm optimization with differential mutation operator applied to WWER-1000 in-core fuel management optimization,” Annals of Nuclear Energy, vol. 54, pp. 134–140, 2013.
- S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “A novel population initialization method for accelerating evolutionary algorithms,” Computers and Mathematics with Applications, vol. 53, no. 10, pp. 1605–1614, 2007.
- M. Xi, J. Sun, and W. Xu, “An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position,” Applied Mathematics and Computation, vol. 205, no. 2, pp. 751–759, 2008.
- W. Wang, H. Wang, and S. Rahnamayan, “Improving comprehensive learning particle swarm optimiser using generalised opposition-based learning,” International Journal of Modelling, Identification and Control, vol. 14, no. 4, pp. 310–316, 2011.
- H. Wang, S. Rahnamayan, H. Sun, and M. G. H. Omran, “Gaussian bare-bones differential evolution,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 634–647, 2013.
Copyright © 2013 Xiao Fu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.