Research Article

A Novel Method for Training an Echo State Network with Feedback-Error Learning

Figure 1

The figure illustrates the feedback-error-learning (FEL) architecture used to train the ESN. The inputs to the ESN are the actual position at the current time step and the next position in the target trajectory. The ESN learns to compute the motor command that moves the simulated arms from the current position to the next position in the target trajectory. Before it is used to move the simulated arms, the motor command from the ESN is adjusted by the motor command from the feedback controller. The feedback controller estimates the error of this total motor command by comparing the resulting position with the corresponding target position. This error is used both to train the ESN and to compute the feedback motor command at the next time step. The feedback gain determines how much the feedback controller can influence the total motor command.
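To make the flow in the figure concrete, the following is a minimal sketch of a feedback-error-learning loop around an echo state network. It assumes a one-dimensional toy plant, a purely proportional feedback controller, and an online delta-rule update of the ESN output weights; all variable names, dimensions, and gains (e.g., feedback_gain, learning_rate, the reservoir size) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative dimensions and gains (assumptions, not from the paper) ---
n_reservoir = 200        # number of reservoir neurons
n_in = 2                 # inputs: current position, next target position
n_out = 1                # output: motor command for a single degree of freedom
feedback_gain = 1.0      # how much the feedback controller influences the total command
learning_rate = 1e-3     # online learning rate for the output weights

# --- Echo state network: fixed random input and reservoir weights ---
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep spectral radius below 1
W_out = np.zeros((n_out, n_reservoir))                    # only these weights are trained
state = np.zeros(n_reservoir)

def esn_step(state, current_pos, next_target):
    """Update the reservoir state and return the feed-forward motor command."""
    u = np.array([current_pos, next_target])
    state = np.tanh(W_res @ state + W_in @ u)
    return state, W_out @ state

def plant_step(position, command, dt=0.01):
    """Toy one-dimensional 'arm': the command directly drives the position."""
    return position + dt * command

# --- Feedback-error-learning loop ---
target_trajectory = np.sin(np.linspace(0, 2 * np.pi, 500))
position = 0.0
for t in range(len(target_trajectory) - 1):
    # ESN sees the actual position and the next position in the target trajectory.
    state, u_esn = esn_step(state, position, target_trajectory[t + 1])

    # Feedback controller: proportional correction based on the error of the
    # previous total command (resulting position vs. corresponding target).
    u_fb = feedback_gain * (target_trajectory[t] - position)

    # The total motor command moves the simulated plant.
    position = plant_step(position, u_esn[0] + u_fb)

    # FEL update: the feedback command serves as the error signal that trains
    # the ESN output weights, so the feed-forward command gradually takes over.
    W_out += learning_rate * u_fb * state[np.newaxis, :]
```

As the output weights converge, the feedback command shrinks toward zero and the ESN alone produces the motor command, which is the intended outcome of the FEL scheme described in the caption.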