Abstract

Self-organizing map (SOM) neural networks have been widely applied in information sciences. In particular, Su and Zhao (2009) propose an SOM-based optimization (SOMO) algorithm to find, through a competitive learning process, a winning neuron that stands for the minimum of an objective function. In this paper, we generalize the SOM-based optimization (SOMO) algorithm to a so-called SOMO-m algorithm with m winning neurons. Numerical experiments show that, for m = 2 and 3, the SOMO-m algorithm converges faster than the SOM-based optimization (SOMO) algorithm when used for finding the minimum of a function. More importantly, the SOMO-m algorithm with m ≥ 2 can be used to find two or more minima simultaneously in a single learning iteration process, while the original SOM-based optimization (SOMO) algorithm has to fulfil the same task much less efficiently by restarting the learning iteration process twice or more times.

1. Introduction

Self-organizing map (SOM) neural networks have been widely applied in information sciences [1–7]. In particular, an SOM-based optimization (SOMO) algorithm is proposed in [8, 9] to find a winning neuron, through a competitive learning process, that stands for the minimum of an objective function. The authors of [8, 9] compared the SOM-based optimization (SOMO) algorithm with genetic algorithms [10, 11] and the particle swarm optimization algorithm [12–17], and showed that the SOMO algorithm can locate the minimum much faster than the genetic algorithm and the particle swarm optimization algorithm.

The aim of this paper is to generalize the SOMO algorithm to a so-called SOMO-m algorithm that finds m winning neurons in a single learning process. A separation technique is introduced in the learning process to prevent the m winning neurons from accumulating at one point. Numerical experiments show that, for m = 2 and 3, the SOMO-m algorithm converges faster than the SOMO algorithm when they are used for finding the minimum of a function. A more important merit of the SOMO-m algorithm with m ≥ 2 is that it can find two or more minima simultaneously in a single learning iteration process, while the original SOMO algorithm has to fulfil the same task much less efficiently by restarting the learning iteration process twice or more times.

This paper is organized as follows. In Section 2, a brief introduction of SOMO algorithm is given. Our SOMO-m algorithm is proposed in Section 3. Section 4 is devoted to some supporting numerical simulations.

2. Original SOM-Based Optimization (SOMO) Algorithm

Self-organizing map (SOM) is an unsupervised learning algorithm proposed by Kohonen [18–20]. The principal goal of the SOM algorithm is to map an incoming pattern in a higher-dimensional space into a lower- (usually one- or two-) dimensional space, and to perform this transformation adaptively in a topologically ordered fashion. The applications of SOM range widely, from simulations used for understanding and modeling computational maps in the brain to subsystems for engineering applications such as speech recognition, vector quantization, and cluster analysis [18–26].

Different from the usual SOM algorithm, an SOM-based optimization (SOMO) algorithm is introduced by Su and Zhao [8] for continuous optimization. In the following, let us describe the SOMO algorithm [8] used for finding the minimum point of a function $f(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^n$. The SOMO network contains $M \times N$ neurons arranged as a two-dimensional array. For each neuron $(i, j)$, its weight $\mathbf{w}_{ij}$ is a vector in $\mathbb{R}^n$, where $1 \le i \le M$ and $1 \le j \le N$ for some positive integers $M$ and $N$. The winner among all the neurons is defined as the neuron whose weight vector gives the smallest value of the objective function, that is, the winner $(I, J)$ satisfies $f(\mathbf{w}_{IJ}) = \min_{1 \le i \le M,\, 1 \le j \le N} f(\mathbf{w}_{ij})$. The idea of SOM training is applied to the network such that the weight of the winner gets closer and closer to the minimum point during the iterative training process. The details of the training process are as follows:

Step 1. Initialization.

Substep 1 (Initialization of the neurons on the four corners). The weight vectors of the four corner neurons are initialized from two randomly chosen points that are far enough from each other.

Substep 2 (Initialization of the neurons on the four edges). The weights of the neurons on the four edges are then initialized in terms of the corner weights.

Substep 3 (Initialization of the remaining neurons). The weights of the remaining (interior) neurons are then initialized in terms of the weights obtained in the previous substeps.

Substep 4 (Random noise). A small amount of random noise is added to each weight $\mathbf{w}_{ij}$, for $1 \le i \le M$ and $1 \le j \le N$, so as to keep the weight vectors from being linearly dependent.
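To make the initialization concrete, the following Python sketch implements one plausible reading of Substeps 1–4: the two random points are placed at opposite corners, the other two corners mix them, the edge and interior neurons are filled in by bilinear interpolation, and a small noise is added at the end. The corner mixing and the interpolation formulas are illustrative assumptions and need not coincide with the exact formulas of [8].

```python
import numpy as np

def init_weights(M, N, n, low, high, noise=1e-3, rng=None):
    """Initialize an M x N grid of n-dimensional weight vectors (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    # Substep 1: two random points, placed at opposite corners of the grid.
    p1 = rng.uniform(low, high, size=n)
    p2 = rng.uniform(low, high, size=n)
    w = np.zeros((M, N, n))
    w[0, 0], w[M - 1, N - 1] = p1, p2
    w[M - 1, 0] = (2 * p1 + p2) / 3          # assumed mixing for the other corners
    w[0, N - 1] = (p1 + 2 * p2) / 3
    # Substeps 2-3: edges and interior filled by bilinear interpolation (assumption).
    for i in range(M):
        a = i / (M - 1)
        left = (1 - a) * w[0, 0] + a * w[M - 1, 0]
        right = (1 - a) * w[0, N - 1] + a * w[M - 1, N - 1]
        for j in range(N):
            b = j / (N - 1)
            w[i, j] = (1 - b) * left + b * right
    # Substep 4: small random noise keeps the weight vectors from being linearly dependent.
    w += noise * rng.standard_normal(w.shape)
    return w
```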

Step 2. Winner finding. The winner is the neuron whose weight vector attains the smallest value of the objective function among all the neurons, as defined above.

Step 3. Weights updating. The weights of the winner and its neighbors are adjusted by an update formula involving two real-valued parameters, which can be either constants predefined by the user or time-varying parameters that decrease gradually as time increases, the lateral distance between the winning neuron and the neuron being updated, and a randomly chosen perturbation vector.

Step 4. Go to step 2 until a prespecified number of generations is achieved, or some kind of termination criterion is satisfied.
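Putting Steps 2–4 together, here is a minimal Python sketch of a SOMO-style training loop. The Gaussian neighborhood function, the two parameters beta1 and beta2, and the exact form of the update (each neuron is pulled toward the winner and randomly perturbed, both scaled by the lateral distance) are assumptions made for illustration; they are not claimed to be the precise formulas of [8].

```python
import numpy as np

def somo_train(f, w, generations=100, beta1=0.5, beta2=0.1, sigma=2.0, rng=None):
    """One plausible SOMO-style training loop (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    M, N, n = w.shape
    for _ in range(generations):
        # Step 2: winner finding -- the neuron whose weight minimizes the objective f.
        values = np.apply_along_axis(f, 2, w)
        I, J = np.unravel_index(np.argmin(values), (M, N))
        winner = w[I, J].copy()
        # Step 3: pull every neuron toward the winner, modulated by lateral distance,
        # and add a small random perturbation (assumed form of the update rule).
        for i in range(M):
            for j in range(N):
                d = np.hypot(i - I, j - J)               # lateral distance on the grid
                eta = np.exp(-d**2 / (2 * sigma**2))     # assumed neighborhood function
                lam = rng.uniform(-1.0, 1.0, size=n)     # perturbation vector
                w[i, j] += beta1 * eta * (winner - w[i, j]) + beta2 * eta * lam
        # Step 4: here a fixed number of generations serves as the stopping criterion.
    return w[I, J], f(w[I, J])
```

Together with the initialization sketch above, a call such as somo_train(lambda x: float(np.sum(x**2)), init_weights(10, 10, 2, -5.0, 5.0)) illustrates one run on a simple quadratic.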

3. SOMO-m Algorithm

In this section, let us present our SOMO-m algorithm. We divide the section into two subsections, dealing with the training algorithms for finding one minimum and two minima of a function, respectively.

3.1. SOMO-m Algorithm for Finding One Minimum

The SOMO-m algorithm for finding one minimum of a function is as follows:

Step 1. The initialization of SOMO-m algorithm is the same as that of SOMO algorithm.

Step 2. This step aims to find m winning neurons with the best (smallest) objective function values among all the neurons, chosen one by one so that each new winner lies outside suitably chosen neighborhoods of the winners already selected; the sizes of these neighborhoods are parameters of the algorithm (see the sketch after these steps).

Step 3. The weights of the winners and their neighbors are adjusted by an update formula of the same form as in the SOMO algorithm, applied around each winner. The parameters involved are real-valued and can be either constants predefined by the user or time-varying parameters that decrease gradually with increasing time t, and a randomly chosen perturbation vector is again used.

Step 4. Go to Step 2 until a prespecified number of generations is reached or some other termination criterion is satisfied.
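A minimal sketch of the m-winner selection in Step 2 is given below. It assumes that the separation requirement excludes a square neighborhood of grid radius r around each already-selected winner; the exact shapes and sizes of the neighborhoods used in the paper may differ.

```python
import numpy as np

def find_m_winners(f, w, m, r=2):
    """Pick m winners with the smallest objective values, keeping them more than r
    grid steps apart (illustrative separation rule)."""
    M, N, _ = w.shape
    values = np.apply_along_axis(f, 2, w)
    order = np.argsort(values, axis=None)      # grid positions sorted by objective value
    winners = []
    for flat in order:
        i, j = np.unravel_index(flat, (M, N))
        # Separation: skip neurons lying in the neighborhood of an earlier winner.
        if all(max(abs(i - I), abs(j - J)) > r for (I, J) in winners):
            winners.append((int(i), int(j)))
            if len(winners) == m:
                break
    return winners
```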

We first remark that, in Step 2 above, we require the winners to be separated from each other by means of the neighborhoods, so that the winners spread around rather than accumulate at one point. Secondly, we find in our numerical experiments that the neighborhood sizes can be chosen either as small constants independent of time or as small variables decreasing with increasing time t. Thirdly, we find that it is better to choose the learning parameter of the first winner smaller than those of the other winners. The reason may be the following: a smaller constant might help the first winner to converge to the minimum, while the larger constants for the other winners might help them search over a larger area.

3.2. SOMO-2 Algorithm for Two Minima

Here we discuss the SOMO-2 algorithm for finding two minima of a function simultaneously. It is an easy matter to generalize the algorithm for finding three or more minima. The SOMO-2 algorithm has the same training steps as the algorithm of the last subsection with m = 2, except for the weight updating step, which is as follows:

For the neurons lying in a prescribed neighborhood of the first winner, the weights are updated using the first winner only; the second winner does not enter the update of these neurons.

For the rest of the neurons, the update also involves the second winner.

We see that, in the training, the second winner does not affect the neighboring neurons of the first winner, but does affect all the other neurons. Therefore, hopefully, the first winner will converge to a minimum point of the function, while the second winner converges to another minimum point.
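Under the same assumed update form as before, this two-winner rule can be sketched as follows: neurons within grid radius r of the first winner are pulled toward the first winner only, and all remaining neurons are pulled toward the second winner. The radius r, the neighborhood function, and the pull-plus-perturbation form of the update are illustrative assumptions.

```python
import numpy as np

def somo2_update(w, first, second, beta=0.5, beta_pert=0.1, sigma=2.0, r=2, rng=None):
    """Two-winner weight update (illustrative sketch of the SOMO-2 idea)."""
    rng = np.random.default_rng() if rng is None else rng
    M, N, n = w.shape
    (I1, J1), (I2, J2) = first, second
    w1, w2 = w[I1, J1].copy(), w[I2, J2].copy()
    for i in range(M):
        for j in range(N):
            if max(abs(i - I1), abs(j - J1)) <= r:
                target, d = w1, np.hypot(i - I1, j - J1)   # neighborhood of the first winner
            else:
                target, d = w2, np.hypot(i - I2, j - J2)   # all remaining neurons
            eta = np.exp(-d**2 / (2 * sigma**2))           # assumed neighborhood function
            lam = rng.uniform(-1.0, 1.0, size=n)           # perturbation vector
            w[i, j] += beta * eta * (target - w[i, j]) + beta_pert * eta * lam
    return w
```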

4. Simulation Results

4.1. Objective Functions

In this subsection, we use our SOMO-m methods to minimize five test functions: the Step function, the Griewank function, the Giunta function, and two further benchmark functions.
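For reference, the Step and Griewank functions are standard benchmarks; the definitions below are the common ones from the optimization literature, and the variants used in this paper may differ slightly.

```python
import numpy as np

def step(x):
    """Step function (common variant): sum of squared floors, global minimum 0."""
    return float(np.sum(np.floor(x + 0.5) ** 2))

def griewank(x):
    """Griewank function: global minimum 0 at the origin."""
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```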

4.2. Parameters of Simulation

In Table 1 we present the global minima, the dimensions, and the upper bound on the number of generations for the optimization algorithms. Figure 1 illustrates the graphs of these five functions in two-dimensional space. The neurons are arranged as a two-dimensional array. Fixed learning parameters are used for the SOMO, SOMO-2, and SOMO-3 algorithms.

4.3. Simulations of SOMO-m Algorithm for One Minimum

In this subsection, we investigate the performance of the SOMO-m algorithm for finding one minimum. For each function, each algorithm was run 30 times. The stopping criterion for each run is that either 100 generations are generated iteratively, or, before then, the difference of two successive minima in the iteration process is less than a prescribed tolerance. The best solution found in each run was recorded and, for instance, the mean column in Table 2 presents the average of the 30 best solutions over the 30 runs. To measure and compare the performance of SOMO-m, the mean time, that is, the average processing time over the 30 runs, is also recorded. Table 2 tabulates the comparison of the simulation results of the SOMO, SOMO-2, and SOMO-3 algorithms. The mean column and the standard deviation (SD) column represent the mean and the standard deviation of the best solutions of the 30 runs. The highlighted (bold) entries correspond to the best solutions, which were found by SOMO-3. Figure 2 shows the performances of the SOMO, SOMO-2, and SOMO-3 algorithms in typical runs.
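As a concrete illustration of this protocol, the sketch below runs a generic iterative minimizer 30 times with the stated stopping rule; the interface minimize_one_generation and the tolerance value tol are assumptions, since the prescribed tolerance is not reproduced here.

```python
import numpy as np

def run_trials(minimize_one_generation, init_state, runs=30, max_gen=100, tol=1e-6):
    """Repeat an iterative minimizer; each run stops after max_gen generations or when
    two successive best values differ by less than tol (tol is assumed)."""
    best_values = []
    for _ in range(runs):
        state, prev = init_state(), np.inf
        for _ in range(max_gen):
            state, current = minimize_one_generation(state)  # returns (state, best value)
            if abs(prev - current) < tol:
                break
            prev = current
        best_values.append(current)
    return np.mean(best_values), np.std(best_values)
```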

Now, we discuss the accuracy of the algorithms. We measure the error of each algorithm over the 30 runs in terms of the approximate solutions obtained in the individual runs and the real minimum of the function. Table 3 tabulates the errors of the SOMO, SOMO-2, and SOMO-3 algorithms, respectively.
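The exact error formula is not reproduced here; one natural choice consistent with the description, writing $\hat{f}_k$ for the best objective value found in the $k$th run and $f^{*}$ for the real minimum of the function, is the mean absolute deviation over the 30 runs (this particular form is an assumption):

```latex
\mathrm{Error} = \frac{1}{30}\sum_{k=1}^{30} \bigl|\hat{f}_k - f^{*}\bigr|
```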

We draw the following conclusions from the simulations in this subsection for finding one minimum. SOMO-3 algorithm shows the best results for all functions based on the comparisons of the means of the best objective value and the processing time. As shown in Figure 2, the SOMO-3 algorithm locates the minima faster than SOMO and SOMO-2 algorithms. In Table 3, the bold entries show that the error corresponding to SOMO-3 algorithm is smaller than those of SOMO and SOMO-2 algorithms.

4.4. Simulations of SOMO-2 Algorithm for Two Minima

In this subsection, we present the simulation results for the case of finding two minima of a function simultaneously. The best solutions found in each run after a prespecified number of generations were recorded. Table 4 tabulates the comparison of the simulation results. The mean column and the standard deviation column represent the mean and the standard deviation of the best solutions of the 30 runs.

Based on the observations from Table 4, our conclusion for finding two minima of a function simultaneously is as follows:

The proposed SOMO-2 algorithm can find two minima of a function simultaneously in a single learning iteration process, while the original SOM-based optimization (SOMO) algorithm has to fulfil the same task much less efficiently by restarting the learning iteration process twice or more times.

Acknowledgments

The authors wish to thank the Associate Editor and the anonymous reviewers for their helpful and interesting comments. This work was supported by National Science Foundation of China (11171367) and the Fundamental Research Funds for the Central Universities of China.