Discrete Dynamics in Nature and Society


Research Article | Open Access

Volume 2012 | Article ID 969104 | 13 pages | https://doi.org/10.1155/2012/969104

SOMO-m Optimization Algorithm with Multiple Winners

Academic Editor: M. De la Sen
Received: 01 Apr 2012
Revised: 06 Jun 2012
Accepted: 06 Jun 2012
Published: 05 Aug 2012

Abstract

Self-organizing map (SOM) neural networks have been widely applied in information sciences. In particular, Su and Zhao (2009) propose an SOM-based optimization (SOMO) algorithm that finds, through a competitive learning process, a winning neuron standing for the minimum of an objective function. In this paper, we generalize the SOMO algorithm to a so-called SOMO-m algorithm with m winning neurons. Numerical experiments show that, for m ≥ 2, the SOMO-m algorithm converges faster than the SOMO algorithm when used for finding the minimum of a function. More importantly, the SOMO-m algorithm with m ≥ 2 can be used to find two or more minima simultaneously in a single learning iteration process, whereas the original SOMO algorithm has to fulfil the same task much less efficiently by restarting the learning iteration process two or more times.

1. Introduction

Self-organizing map (SOM) neural networks have been widely applied in information sciences [1–7]. In particular, an SOM-based optimization (SOMO) algorithm is proposed in [8, 9] to find, through a competitive learning process, a winning neuron that stands for the minimum of an objective function. The authors compared the SOMO algorithm with genetic algorithms [10, 11] and the particle swarm optimization algorithm [12–17], and they showed that the SOMO algorithm can locate the minimum much faster than the genetic algorithm and the particle swarm optimization algorithm.

The aim of this paper is to generalize the SOMO algorithm to a so-called SOMO-m algorithm that finds m winning neurons in a single learning process. A separation technique is introduced in the learning process to prevent the m winning neurons from accumulating at one point. Numerical experiments show that, for m ≥ 2, the SOMO-m algorithm converges faster than the SOMO algorithm when they are used for finding the minimum of a function. A more important merit of the SOMO-m algorithm with m ≥ 2 is that it can find two or more minima simultaneously in a single learning iteration process, whereas the original SOMO algorithm has to fulfil the same task much less efficiently by restarting the learning iteration process two or more times.

This paper is organized as follows. In Section 2, a brief introduction to the SOMO algorithm is given. Our SOMO-m algorithm is proposed in Section 3. Section 4 is devoted to some supporting numerical simulations.

2. Original SOM-Based Optimization (SOMO) Algorithm

Self-organizing map (SOM) is an unsupervised learning algorithm proposed by Kohonen [18–20]. The principal goal of the SOM algorithm is to map an incoming pattern in a higher-dimensional space into a lower (usually one- or two-) dimensional space, and to perform this transformation adaptively in a topologically ordered fashion. The applications of SOM range widely, from simulations used for understanding and modeling computational maps in the brain to subsystems for engineering applications such as speech recognition, vector quantization, and cluster analysis [18–26].

Different from the usual SOM algorithm, an SOM-based optimization (SOMO) algorithm is introduced by Su and Zhao [8] for continuous optimization. In the following, let us describe the SOMO algorithm [8] used for finding the minimum point of a function $f(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^n$. The SOMO network contains $M \times N$ neurons arranged as a two-dimensional array. For each neuron $(i, j)$, its weight $\mathbf{w}_{i,j}$ is a vector in $\mathbb{R}^n$, where $1 \le i \le M$ and $1 \le j \le N$ for some positive integers $M$ and $N$. The winner among all the neurons is defined as the neuron whose weight gives the smallest objective function value:
$$(i^*, j^*) = \arg\min_{1 \le i \le M,\ 1 \le j \le N} f(\mathbf{w}_{i,j}).$$
The idea of SOM training is applied to the network so that the weight of the winner gets closer and closer to the minimum point during the iterative training process. The details of the training process are as follows:

Step 1. Initialization.

Substep 1 (Initialization of the neurons on the four corners). The weight vectors of the four corner neurons are initialized from two randomly chosen points that are far enough from each other.

Substep 2 (Initialization of the neurons on the four edges). The weights of the neurons on the four edges are then initialized in terms of the corner neurons.

Substep 3 (Initialization of the remaining neurons). The weights of the remaining (interior) neurons are initialized in a similar fashion from the neurons already initialized.

Substep 4 (Random noise). A small amount of random noise is added to each weight so as to keep the weight vectors from being linearly dependent.
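The exact initialization formulas are given in [8]. Purely as an illustration, the following Python sketch realizes Substeps 1–4 under the assumption that the grid is filled by interpolating between two random seed points; the function and parameter names (init_weights, noise_scale, and so on) are ours, not from the original paper.

```python
import numpy as np

def init_weights(M, N, n, lo, hi, noise_scale=1e-3, seed=None):
    """Illustrative initialization of an M x N grid of n-dimensional weights.

    Two random points that are far apart seed the corners (Substep 1); the
    edge and interior neurons are filled by interpolation (Substeps 2-3);
    small noise keeps the weight vectors from being linearly dependent
    (Substep 4).
    """
    rng = np.random.default_rng(seed)
    # Substep 1: two random points pushed toward opposite ends of the range.
    x1 = rng.uniform(lo, (lo + hi) / 2, size=n)
    x2 = rng.uniform((lo + hi) / 2, hi, size=n)
    w = np.empty((M, N, n))
    for i in range(M):
        for j in range(N):
            a, b = i / (M - 1), j / (N - 1)
            # Substeps 2-3: convex combination of the seed points, so the
            # corners are x1, x2 and the midpoints of x1 and x2.
            w[i, j] = ((1 - a) * (1 - b) * x1 + a * b * x2
                       + (a * (1 - b) + (1 - a) * b) * (x1 + x2) / 2)
    # Substep 4: small random noise.
    w += noise_scale * rng.standard_normal(w.shape)
    return w
```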

Step 2. Winner finding. The winner is the neuron whose weight gives the smallest value of the objective function among all the neurons.

Step 3. Weights updating. The weights of the winner and its neighbors are adjusted by an update formula in which the two learning parameters are real-valued constants that can be either predefined by the user or time-varying parameters decreasing gradually as time increases; the strength of the adjustment depends on the lateral distance between the winning neuron and neuron $(i, j)$, and a randomly chosen vector, called the perturbation vector, is added.

Step 4. Go to Step 2 until a prespecified number of generations is reached, or some other termination criterion is satisfied.
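As a rough illustration of Steps 2–4, the sketch below implements a generic SOMO-style training loop. The exponential neighborhood function, the two learning parameters eta1 and eta2, and the perturbation scaling are plausible stand-ins for the quantities described above, not the exact formulas of [8].

```python
import numpy as np

def somo_minimize(f, w, n_generations=100, eta1=0.2, eta2=0.02, sigma=2.0, seed=None):
    """Illustrative SOMO loop: find the winner (Step 2), then pull every
    neuron toward it with a distance-decaying strength plus a small random
    perturbation (Step 3), for a fixed number of generations (Step 4)."""
    rng = np.random.default_rng(seed)
    M, N, n = w.shape
    for _ in range(n_generations):
        # Step 2: the winner has the smallest objective function value.
        values = np.apply_along_axis(f, 2, w)
        i_star, j_star = np.unravel_index(values.argmin(), (M, N))
        w_win = w[i_star, j_star].copy()
        for i in range(M):
            for j in range(N):
                d = np.hypot(i - i_star, j - j_star)   # lateral grid distance
                beta = np.exp(-d / sigma)              # assumed neighborhood function
                p = rng.standard_normal(n)             # perturbation vector
                # Step 3: move toward the winner, plus a perturbation term.
                w[i, j] += eta1 * beta * (w_win - w[i, j]) + eta2 * beta * p
    values = np.apply_along_axis(f, 2, w)
    i_star, j_star = np.unravel_index(values.argmin(), (M, N))
    return w[i_star, j_star]
```

Starting from a grid produced by init_weights above, somo_minimize(f, w) returns the weight of the final winner as an approximation of the minimizer of f.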

3. SOMO-m Algorithm

In this section, let us present our SOMO-m algorithm. We divide the section into two subsections, dealing with the training algorithms for finding one minimum and two minima of a function, respectively.

3.1. SOMO-m Algorithm for Finding One Minimum

The SOMO-m algorithm for finding one minimum of a function is as follows:

Step 1. The initialization of the SOMO-m algorithm is the same as that of the SOMO algorithm.

Step 2. This step aims to find m winning neurons, with the best objective function values among all the neurons, as follows: the first winner is the neuron whose weight gives the smallest objective function value; each subsequent winner is the neuron with the smallest objective function value among those neurons lying outside suitably chosen neighborhoods of the previously found winners.

Step 3. The weights of the winners and their neighbors are adjusted by an update formula of the same form as in the original SOMO algorithm, applied around each winner. The learning parameters are real-valued parameters which can be constants predefined by the user or time-varying parameters that decrease gradually with increasing time $t$, and a randomly chosen perturbation vector is added.

Step 4. Go to Step 2 until a prespecified number of generations is reached or some termination criterion is satisfied.

We first remark that, in Step 2 above, we require the winners to be kept away from each other by means of the chosen neighborhoods, so that the winners spread out rather than accumulate at one point. Secondly, we find in our numerical experiments that the neighborhood sizes can be chosen either as small constants independent of time or as small variables decreasing with increasing time $t$. Thirdly, we find that it is better to choose the learning parameter associated with the first winner smaller than those associated with the other winners. The reason for this may be the following: a smaller constant helps the first winner converge to the minimum, while the larger values for the other winners help them search over a larger area.
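The following sketch illustrates the separation technique of Step 2, assuming a square neighborhood of a fixed radius on the neuron grid is excluded around each winner already found; the radius argument is a stand-in for the neighborhood sizes discussed above.

```python
import numpy as np

def find_m_winners(f, w, m, radius=2):
    """Pick m winning neurons: each new winner is the neuron with the best
    objective value among those lying outside the grid neighborhoods of the
    winners already found, so the winners spread out over the map."""
    M, N, _ = w.shape
    values = np.apply_along_axis(f, 2, w)
    blocked = np.zeros((M, N), dtype=bool)
    winners = []
    for _ in range(m):
        masked = np.where(blocked, np.inf, values)
        i, j = np.unravel_index(masked.argmin(), (M, N))
        winners.append((i, j))
        # Block a neighborhood of this winner so later winners stay apart.
        blocked[max(0, i - radius):i + radius + 1,
                max(0, j - radius):j + radius + 1] = True
    return winners
```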

3.2. SOMO-2 Algorithm for Two Minima

Here we discuss the SOMO-2 algorithm for finding two minima of a function simultaneously. It is an easy matter to generalize the algorithm to find three or more minima. The SOMO-2 algorithm has the same training steps as the SOMO-m algorithm of the last subsection (with m = 2) for finding one minimum, except for the weights updating step, which is as follows.

For the neurons lying in the neighborhood of the first winner, the weights are updated by the same rule as in the last subsection, using the first winner only.

For the remaining neurons, the weight updating rule involves the second winner.

We see that, in the training, the second winner does not affect the neighboring neurons of the first winner, but it does affect all the other neurons. Therefore, hopefully, the first winner will converge to one minimum point of the function, while the second winner converges to another minimum point.
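A minimal sketch of this weight-updating split, under the assumption, consistent with the preceding remark, that neurons inside a neighborhood of the first winner are pulled toward the first winner only, while all other neurons are pulled toward the second winner; the neighborhood radius, learning parameters, and neighborhood function are illustrative choices, not the exact formulas of the paper.

```python
import numpy as np

def somo2_update(w, win1, win2, radius=2, eta1=0.2, eta2=0.02, sigma=2.0, seed=None):
    """One SOMO-2 weight update: the second winner does not influence the
    neighborhood of the first winner, but it does influence all other neurons."""
    rng = np.random.default_rng(seed)
    M, N, n = w.shape
    (i1, j1), (i2, j2) = win1, win2
    w1, w2 = w[i1, j1].copy(), w[i2, j2].copy()
    for i in range(M):
        for j in range(N):
            near_first = max(abs(i - i1), abs(j - j1)) <= radius
            target, ti, tj = (w1, i1, j1) if near_first else (w2, i2, j2)
            beta = np.exp(-np.hypot(i - ti, j - tj) / sigma)
            w[i, j] += (eta1 * beta * (target - w[i, j])
                        + eta2 * beta * rng.standard_normal(n))
    return w
```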

4. Simulation Results

4.1. Objective Functions

In this subsection, we use our SOMO-m methods to minimize five test functions: the Step function, the Griewank function, the Giunta function, and two further benchmark functions (see Table 1).

4.2. Parameters of Simulation

In Table 1 we present the dimensions, initial ranges, global minima, and the upper bound on the number of generations for the optimization algorithms. Figure 1 illustrates the graphs of these five functions in two-dimensional space. The neurons are arranged as a two-dimensional array. The parameters of the SOMO, SOMO-2, and SOMO-3 algorithms are set to fixed values for all runs.


Table 1

Test function   Dimensions   Initial range   Minimum   Number of generations
Step            10                           0         100
Griewank        10                           0         100
Giunta          30                                     100
function        2                                      100
function        2                                      100

4.3. Simulations of SOMO-m Algorithm for One Minimum

In this subsection, we investigate the performance of the SOMO-m algorithm for finding one minimum. For each function, each algorithm was run 30 times. The stopping criterion for each run is that either 100 generations have been generated iteratively, or, before that, the difference between two successive minima in the iteration process is less than a prescribed tolerance. The best solution found for each function in each run was recorded; for instance, the mean column in Table 2 presents the average of the 30 best solutions over the 30 runs. To measure and compare the performance of SOMO-m, the mean time, that is, the average of the processing time over the 30 runs, is also recorded. Table 2 tabulates the comparison of the simulation results of the SOMO, SOMO-2, and SOMO-3 algorithms. The mean column and the standard deviation (SD) column represent the mean and the standard deviation of the best solutions of the 30 runs. The highlighted (bold) entries correspond to the best solutions, found by SOMO-3. Figure 2 shows the performance of the SOMO, SOMO-2, and SOMO-3 algorithms in typical runs.


Table 2

Test function   Algorithm   Mean                        SD                  Time mean (s)   Time SD (s)
Step            SOMO                                                        1.3813          1.4008
                SOMO-2                                                      1.0973          1.1419
                SOMO-3      7.8070e−020                                     0.8743          0.8686
Griewank        SOMO                                                        1.7502          1.1681
                SOMO-2                                                      1.1681          1.1324
                SOMO-3      2.2204e−017                                     0.9849
Giunta          SOMO        0.9670                                          1.0116          1.0049
                SOMO-2                                                      0.7087          0.7053
                SOMO-3                                                      0.6217          0.6451
function        SOMO                                                        0.8009          0.7584
                SOMO-2                                                      0.6168          0.6438
                SOMO-3      1.896401167771335e+004                          0.4657          0.4778
function        SOMO        −7.36465824205229                               2.1060          2.1278
                SOMO-2      −7.36465824208548                               1.5919          1.5198
                SOMO-3                                                      0.9831          0.9525

Now we discuss the accuracy of the algorithms. We measure the error of each algorithm over the 30 runs by comparing the approximate solution of each run with the real minimum of the function, as made precise below. Table 3 tabulates the errors of the SOMO, SOMO-2, and SOMO-3 algorithms, respectively.
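Under the natural reading that the error is the average absolute deviation of the best objective values from the true minimum over the 30 runs, the error measure can be written as
$$\mathrm{Error} = \frac{1}{30}\sum_{k=1}^{30}\left|f(\mathbf{x}_k) - f_{\min}\right|,$$
where $\mathbf{x}_k$ is the approximate solution of the $k$th run and $f_{\min}$ is the real minimum of the function.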


Table 3

Test function   Algorithm   Error
Step            SOMO
                SOMO-2
                SOMO-3      5.3813e−020
Griewank        SOMO
                SOMO-2
                SOMO-3      6.8401e−020
Giunta          SOMO        0.067056308053595
                SOMO-2      0.067056308053587
                SOMO-3

We draw the following conclusions from the simulations in this subsection for finding one minimum. The SOMO-3 algorithm shows the best results for all functions, based on comparisons of the means of the best objective values and of the processing times. As shown in Figure 2, the SOMO-3 algorithm locates the minima faster than the SOMO and SOMO-2 algorithms. In Table 3, the bold entries show that the errors of the SOMO-3 algorithm are smaller than those of the SOMO and SOMO-2 algorithms.

4.4. Simulations of SOMO-2 Algorithm for Two Minima

In this subsection, we present the simulation results for the case of finding two minima of a function simultaneously. The best solutions found in each run after a prespecified number of generations were recorded. Table 4 tabulates the comparison of the simulation results. The mean column and the standard deviation column represent the mean and the standard deviation of the best solutions of the 30 runs.


Table 4

Test function   Algorithm            Mean                  SD                  Time mean (s)   Time SD (s)
function        SOMO
                  First minimum      −7.36465824205229                         2.1060          2.1278
                  Second minimum     −5.99322845918991     0.00379247207700    2.9716          2.865
                SOMO-2
                  First minimum      −7.36465824124180                         1.4967          1.4237
                  Second minimum     −6.07080647154294
function        SOMO
                  First minimum                                                1.2414          1.1168
                  Second minimum                                               1.1343          1.0938
                SOMO-2
                  First minimum                                                0.5536          0.5631
                  Second minimum

Based on the observations from Table 4, our conclusion for finding two minima of a function simultaneously is as follows.

The proposed SOMO-2 algorithm can find two minima of a function simultaneously in a single learning iteration process, whereas the original SOM-based optimization (SOMO) algorithm has to fulfil the same task much less efficiently by restarting the learning iteration process two or more times.

Acknowledgments

The authors wish to thank the Associate Editor and the anonymous reviewers for their helpful and interesting comments. This work was supported by National Science Foundation of China (11171367) and the Fundamental Research Funds for the Central Universities of China.

References

1. T. Kohonen, “Self-organized formation of topologically correct feature maps,” Biological Cybernetics, vol. 43, no. 1, pp. 59–69, 1982.
2. Y. Xiao, C. S. Leung, T. Y. Ho, and P. M. Lam, “A GPU implementation for LBG and SOM training,” Neural Computing and Applications, vol. 20, no. 7, pp. 1035–1042, 2010.
3. I. Valova, D. Beaton, A. Buer, and D. MacLean, “Fractal initialization for high-quality mapping with self-organizing maps,” Neural Computing and Applications, vol. 19, no. 7, pp. 953–966, 2010.
4. M. Rubio and V. Giménez, “New methods for self-organising map visual analysis,” Neural Computing and Applications, vol. 12, no. 3-4, pp. 142–152, 2003.
5. A. Delgado, “Control of nonlinear systems using a self-organising neural network,” Neural Computing and Applications, vol. 9, no. 2, pp. 113–123, 2000.
6. N. Ahmad, D. Alahakoon, and R. Chau, “Cluster identification and separation in the growing self-organizing map: application in protein sequence classification,” Neural Computing and Applications, vol. 19, no. 4, pp. 531–542, 2010.
7. T. Kohonen, Self-Organizing Maps, vol. 30 of Springer Series in Information Sciences, Springer, Berlin, Germany, 2nd edition, 1997.
8. M. C. Su and Y. X. Zhao, “A variant of the SOM algorithm and its interpretation in the viewpoint of social influence and learning,” Neural Computing and Applications, vol. 18, no. 8, pp. 1043–1055, 2009.
9. M. C. Su, Y. X. Zhao, and J. Lee, “SOM-based optimization,” in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 781–786, Budapest, Hungary, July 2004.
10. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
11. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
12. M. S. Arumugam and M. V. C. Rao, “On the performance of the particle swarm optimization algorithm with various inertia weight variants for computing optimal control of a class of hybrid systems,” Discrete Dynamics in Nature and Society, vol. 2006, Article ID 79295, 17 pages, 2006.
13. J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Academic Press, New York, NY, USA, 2001.
14. M. S. Arumugam and M. V. C. Rao, “On the optimal control of single-stage hybrid manufacturing systems via novel and different variants of particle swarm optimization algorithm,” Discrete Dynamics in Nature and Society, no. 3, pp. 257–279, 2005.
15. R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, October 1995.
16. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995.
17. P. Umapathy, C. Venkataseshaiah, and M. S. Arumugam, “Particle swarm optimization with various inertia weight variants for optimal power flow solution,” Discrete Dynamics in Nature and Society, vol. 2010, Article ID 462145, 15 pages, 2010.
18. T. Kohonen, Self-Organizing Maps, vol. 30 of Springer Series in Information Sciences, Springer, Berlin, Germany, 3rd edition, 2001.
19. T. Kohonen, E. Oja, O. Simula, A. Visa, and J. Kangas, “Engineering applications of the self-organizing map,” Proceedings of the IEEE, vol. 84, no. 10, pp. 1358–1383, 1996.
20. T. Kohonen, Self-Organization and Associative Memory, vol. 8 of Springer Series in Information Sciences, Springer, New York, NY, USA, 3rd edition, 1989.
21. J. Malone, K. McGarry, S. Wermter, and C. Bowerman, “Data mining using rule extraction from Kohonen self-organising maps,” Neural Computing and Applications, vol. 15, no. 1, pp. 9–17, 2006.
22. M. Oja, S. Kaski, and T. Kohonen, “Bibliography of self-organizing map (SOM),” Proceedings of the IEEE, vol. 84, no. 10, pp. 1358–1383, 2003.
23. S. Kaski, J. Kangas, and T. Kohonen, “Bibliography of self organizing map,” Neural Computing Surveys, vol. 1, 1998.
24. P. K. Sharpe and P. Caleb, “Self organising maps for the investigation of clinical data: a case study,” Neural Computing and Applications, vol. 7, no. 1, pp. 65–70, 1998.
25. H. Merdun, “Self-organizing map artificial neural network application in multidimensional soil data analysis,” Neural Computing and Applications, vol. 20, no. 8, pp. 1295–1303, 2010.
26. B. Lamrini, E. K. Lakhal, M. V. Le Lann, and L. Wehenkel, “Data validation and missing data reconstruction using self-organizing map for water treatment,” Neural Computing and Applications, vol. 20, no. 4, pp. 575–588, 2011.

Copyright © 2012 Wei Wu and Atlas Khan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

