Input: A Madaline with a given architecture and random initial weights, a set of training samples, the learning parameters, the required training precision, and the maximal number of training epochs.
(1) Randomly arrange the training samples;
(2) Loop over all training samples, starting with i = 1:
(2.1) Feed the i-th training sample into the Madaline;
(2.2) If the output of the Madaline is correct for the i-th sample, set i ← i + 1 and go to Step 2;
(2.3) For each hidden layer l, from 1 to L − 1, do:
(2.3.1) Determine the weight adaptations of all Adalines in the l-th layer by (12), and then calculate their sensitivity measure values by (9) or (15);
(2.3.2) Sort the l-th-layer Adalines into a queue by their sensitivity measure values, in ascending order;
(2.3.3) For k from 1 to the maximal combination size do:
(2.3.3.1) For every combination of k adjacent Adalines in the queue, do:
① Implement a trial reversion for the current Adaline combination;
② If the output errors of the Madaline do not decrease, reject the adaptation and continue with the next Adaline combination;
③ Adapt the weight(s) of the Adaline(s) in the current combination by (12), and count the Madaline’s output errors;
④ If the Madaline’s output errors are zero, set i ← i + 1 and go to Step 2; otherwise set l ← l + 1 and go to Step 2.3.
(2.4) For the j-th Adaline in the output layer, j from 1 to the number of output Adalines, do:
If the output of the j-th Adaline is not correct for the i-th sample, adapt its weights by (12).
(3) Go to Step 1 unless the training precision meets the given requirement for all training samples or the number of training epochs reaches the given maximum.
Output: all trained weights and the training errors over all training samples.
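The training procedure above can be sketched in code. The sketch below is a minimal, hypothetical single-hidden-layer instance only: the paper's weight-adaptation rule (12) and sensitivity measures (9)/(15) are not reproduced here, so an LMS-style update stands in for (12) and the magnitude of each Adaline's net input (a common confidence proxy) stands in for the sensitivity measure. All function names, the learning rate eta, and the maximal combination size are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def madaline_output(W_hid, W_out, x):
    """Forward pass: sign-activation Adalines in one hidden layer plus an output layer."""
    h = np.sign(W_hid @ x)
    h = np.where(h == 0, 1.0, h)          # break ties away from zero
    y = np.sign(W_out @ h)
    return h, np.where(y == 0, 1.0, y)

def count_errors(W_hid, W_out, X, T):
    """Number of training samples the Madaline currently misclassifies."""
    return sum(np.any(madaline_output(W_hid, W_out, x)[1] != t)
               for x, t in zip(X, T))

def train_madaline(X, T, n_hidden, eta=0.1, max_epochs=50, max_combo=2, seed=0):
    rng = np.random.default_rng(seed)
    W_hid = rng.normal(scale=0.5, size=(n_hidden, X.shape[1]))
    W_out = rng.normal(scale=0.5, size=(T.shape[1], n_hidden))
    for _ in range(max_epochs):
        for i in rng.permutation(len(X)):         # Step (1): random sample order
            x, t = X[i], T[i]
            h, y = madaline_output(W_hid, W_out, x)
            if np.all(y == t):                    # Step (2.2): correct -> next sample
                continue
            # Step (2.3): sort hidden Adalines by ascending "sensitivity"
            # (assumed proxy: |net input|; least-confident units come first).
            queue = np.argsort(np.abs(W_hid @ x))
            base_err = count_errors(W_hid, W_out, X, T)
            improved = False
            for k in range(1, max_combo + 1):     # Step (2.3.3): combination size k
                for s in range(len(queue) - k + 1):
                    trial = W_hid.copy()          # Step ①: trial reversion of k
                    for j in queue[s:s + k]:      # adjacent Adalines in the queue
                        target = -np.sign(trial[j] @ x) or 1.0
                        # LMS-style push toward the reversed output (stand-in for (12))
                        trial[j] += eta * (target - trial[j] @ x) * x
                    trial_err = count_errors(trial, W_out, X, T)
                    if trial_err < base_err:      # Steps ②/③: keep only improvements
                        W_hid, base_err, improved = trial, trial_err, True
                if improved:
                    break
            # Step (2.4): adapt each wrong output-layer Adaline (LMS stand-in for (12))
            h, y = madaline_output(W_hid, W_out, x)
            for j in range(T.shape[1]):
                if y[j] != t[j]:
                    W_out[j] += eta * (t[j] - W_out[j] @ h) * h
        if count_errors(W_hid, W_out, X, T) == 0:  # Step (3): stop on zero error
            break
    return W_hid, W_out
```

Accepting a trial reversion only when the whole-set error count drops is what distinguishes this MRII-style loop from plain backpropagation: hidden-unit changes are gated by a discrete, global acceptance test rather than a gradient.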