Computational Intelligence and Neuroscience / 2009

Research Article

Unsupervised Learning of Overlapping Image Components Using Divisive Input Modulation

Table 2

Details of the training procedure used for each of the algorithms tested. In all cases the parameter values listed were those found to produce the best results, and they were kept constant across variations in the task. All algorithms except harpur use an online learning procedure, in which each weight update occurs after an individual training image has been processed; this is described as a training cycle. In contrast, harpur uses a batch learning method, in which each weight update is influenced by all training images; this is described as a training epoch. With a set of 1000 training images (as used in these experiments), one epoch is therefore equivalent to 1000 training cycles of the online learning algorithms. The third column specifies the number of iterations used to determine the steady-state activation values. Weights were initialised using random values selected from a Gaussian distribution with the mean and standard deviation indicated; in each case, initial weights with values less than zero were made equal to zero.
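The cycle/epoch distinction in the caption can be sketched as follows. This is an illustrative Python sketch only: the image size, number of nodes, and the `update_weights` placeholder are assumptions, not the learning rules from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
images = [rng.random(36) for _ in range(1000)]  # 1000 training images, as in the experiments

def update_weights(W, batch):
    # Placeholder for an algorithm-specific learning rule (not from the article).
    return W

W = np.zeros((8, 36))

# Online learning: one weight update per training image ("training cycle").
cycles = 0
for x in images:
    W = update_weights(W, [x])
    cycles += 1

# Batch learning: one weight update influenced by all images ("training epoch").
epochs = 1
W = update_weights(W, images)

# With 1000 training images, one epoch corresponds to 1000 training cycles.
assert cycles == len(images) * epochs
```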

Algorithm  | Training time  | Iterations | Weight initialisation      | Parameter values
-----------|----------------|------------|----------------------------|------------------
1/8        | 200 000 cycles | n/a        | mean = 1/32, std = β = 0.1 | μ = 0.025
nmfdiv     | 20 000 cycles  | 100        | mean = 1/2, std = 1/8      | nmfseq, 1/4
1/16       | 2 000 epochs   | n/a        | mean = β = 0.05, std = dim | n/a
1/16       | 20 000 cycles  | 50         | mean = 1/64, std = β = 0.05 | p = [0.1, 0.1]
c = [1, 1] | 20 000 cycles  | 50         | mean = s = 2, std = s = 3  | s = 4
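The weight initialisation described in the caption (Gaussian random values with the listed mean and standard deviation, negatives set to zero) can be sketched in Python. The weight-matrix shape is an example value, not from the article; the mean = 1/2, std = 1/8 pair is one of the initialisations listed in the table.

```python
import numpy as np

def init_weights(n_nodes, n_inputs, mean, std, rng=None):
    """Draw weights from a Gaussian and set negative values to zero,
    as described in the table caption."""
    if rng is None:
        rng = np.random.default_rng()
    W = rng.normal(loc=mean, scale=std, size=(n_nodes, n_inputs))
    W[W < 0.0] = 0.0  # initial weights with values less than zero are made equal to zero
    return W

# Example: mean = 1/2, std = 1/8 (one of the initialisations listed in the table);
# the 8-by-36 shape is an illustrative assumption.
W = init_weights(8, 36, mean=0.5, std=0.125)
assert (W >= 0).all()
```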
