Research Article

Unsupervised Learning of Overlapping Image Components Using Divisive Input Modulation

Table 2

Details of the training procedure used for each of the algorithms tested. In all cases the parameter values listed were those found to produce the best results, and were kept constant across variations in the task. All algorithms except harpur use an online learning procedure, so each weight update occurs after an individual training image has been processed; this is described as a training cycle. In contrast, harpur uses a batch learning method, so each weight update is influenced by all training images; this is described as a training epoch. With a set of 1000 training images (as used in these experiments), an epoch is therefore equivalent to 1000 training cycles for the online learning algorithms. The third column specifies the number of iterations used to determine the steady-state activation values. Weights were initialised using random values selected from a Gaussian distribution with the mean and standard deviation indicated. In each case, initial weights with values less than zero were made equal to zero.

Algorithm | Training time | Iterations | Weight initialisation | Parameter values
1/8 | 200 000 cycles | n/a | mean = 1/32, std = β = 0.1 | μ = 0.025
nmfdiv | 20 000 cycles | 100 | mean = 1/2, std = 1/8 | nmfseq, 1/4
1/16 | 2 000 epochs | n/a | mean = β = 0.05, std = dim | n/a
1/16 | 20 000 cycles | 50 | mean = 1/64, std = β = 0.05 | p = [0.1, 0.1]
c = [1, 1] | 20 000 cycles | 50 | mean = s = 2, std = s = 3 | s = 4
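
As a concrete illustration of the procedure described in the caption, the following sketch (written in Python with NumPy; it is not code from the paper) shows the weight initialisation step, Gaussian samples with a specified mean and standard deviation and negative values set to zero, together with the bookkeeping that makes 1000 online training cycles equivalent to one batch training epoch for a 1000-image training set. The online_update and batch_update functions, the mean and standard deviation shown, and the image dimensions are illustrative placeholders rather than values or learning rules taken from the table.

import numpy as np

def init_weights(n_nodes, n_inputs, mean, std, rng):
    # Draw weights from a Gaussian with the given mean and standard deviation,
    # then set any values less than zero to zero (as described in the caption).
    W = rng.normal(loc=mean, scale=std, size=(n_nodes, n_inputs))
    return np.maximum(W, 0.0)

def online_update(W, x):
    # Placeholder for an online learning rule: one weight update per training image.
    return W

def batch_update(W, X):
    # Placeholder for a batch learning rule: one weight update from all training images.
    return W

rng = np.random.default_rng(0)
X = rng.random((1000, 144))                 # 1000 training images (illustrative size)
W = init_weights(16, X.shape[1], mean=0.5, std=0.125, rng=rng)   # illustrative values

# Online learning: each image presentation triggers a weight update (a "training cycle");
# 20 000 cycles therefore correspond to 20 passes through the 1000 training images.
for cycle in range(20_000):
    x = X[cycle % len(X)]
    W = online_update(W, x)

# Batch learning (as used by harpur): each weight update uses all training images
# (a "training epoch"); with 1000 training images, one epoch equals 1000 online cycles.
W_batch = init_weights(16, X.shape[1], mean=0.5, std=0.125, rng=rng)
for epoch in range(2_000):
    W_batch = batch_update(W_batch, X)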