Research Article

A Cold Start Context-Aware Recommender System for Tour Planning Using Artificial Neural Network and Case Based Reasoning

Table 2

Algorithms for training backpropagation multilayer perceptron neural network.

Training algorithm: Description

GDX: It updates the weight and bias values according to gradient descent with momentum, which follows the recent trend in the error surface (the momentum term $\mu\,\Delta w_{k-1}$), combined with an adaptive learning rate:
\[ \Delta w_k = \mu \, \Delta w_{k-1} - \alpha_k \, g_k , \]
where $\mu$ is the momentum constant, $\alpha_k$ is the adaptive learning rate, and $g_k = \partial E / \partial w_k$ is the gradient of the error.

RP: It uses only the sign of the partial derivative to determine the direction of the weight update and multiplies it by a separate step size $\Delta_{ij}$:
\[ \Delta w_{ij}^{(k)} = -\operatorname{sign}\!\left( \frac{\partial E^{(k)}}{\partial w_{ij}} \right) \Delta_{ij}^{(k)} . \]

CGF: The vector of weight changes is computed as
\[ \Delta w_k = \alpha_k \, p_k , \]
where $p_k$ is the search direction, computed as
\[ p_k = -g_k + \beta_k \, p_{k-1} , \]
and the parameter $\beta_k$ is computed from the Fletcher-Reeves update as
\[ \beta_k = \frac{g_k^{T} g_k}{g_{k-1}^{T} g_{k-1}} , \]
where $g_k$ is the gradient of the error at iteration $k$.

CGP: The vector of weight changes is computed as
\[ \Delta w_k = \alpha_k \, p_k , \]
where $p_k$ is the search direction, computed as
\[ p_k = -g_k + \beta_k \, p_{k-1} , \]
and the parameter $\beta_k$ is computed from the Polak-Ribière update as
\[ \beta_k = \frac{\Delta g_{k-1}^{T} g_k}{g_{k-1}^{T} g_{k-1}} , \]
where $\Delta g_{k-1} = g_k - g_{k-1}$.

CGB: The search direction is reset to the negative of the gradient only if there is very little orthogonality left between the current gradient and the previous gradient, as expressed by the condition
\[ \left| g_{k-1}^{T} g_k \right| \geq 0.2 \, \lVert g_k \rVert^{2} . \]

SCG: It denotes the quadratic approximation to the error $E$ in a neighborhood of a point $w$ by
\[ E_{qw}(y) = E(w) + E'(w)^{T} y + \tfrac{1}{2} \, y^{T} E''(w) \, y . \]
To minimize $E_{qw}(y)$, the critical points of $E_{qw}(y)$ are found as the solution of the following linear system:
\[ E'_{qw}(y) = E''(w) \, y + E'(w) = 0 . \]

BFG: The vector of weight changes is computed as
\[ \Delta w_k = -H_k^{-1} g_k , \]
where $H_k$ is the Hessian (second derivatives) matrix, approximated by the BFGS update as
\[ H_{k+1} = H_k + \frac{\Delta g_k \, \Delta g_k^{T}}{\Delta g_k^{T} \Delta w_k} - \frac{H_k \, \Delta w_k \, \Delta w_k^{T} H_k}{\Delta w_k^{T} H_k \, \Delta w_k} . \]

OSS: It generates a sequence of matrices that represent increasingly accurate approximations to the inverse Hessian ($H^{-1}$). The updated expression uses only the first-derivative information of $E$, as follows:
\[ H_{k+1}^{-1} = H_k^{-1} + \left( 1 + \frac{\Delta g_k^{T} H_k^{-1} \Delta g_k}{\Delta w_k^{T} \Delta g_k} \right) \frac{\Delta w_k \, \Delta w_k^{T}}{\Delta w_k^{T} \Delta g_k} - \frac{\Delta w_k \, \Delta g_k^{T} H_k^{-1} + H_k^{-1} \Delta g_k \, \Delta w_k^{T}}{\Delta w_k^{T} \Delta g_k} , \]
where
\[ \Delta w_k = w_{k+1} - w_k , \]
\[ \Delta g_k = g_{k+1} - g_k , \]
\[ g_k = \frac{\partial E}{\partial w} \bigg|_{w_k} , \]
and $E$ is the transfer (error) function. It does not store the complete Hessian matrix and assumes that the previous Hessian was the identity matrix at each iteration.

LM: The vector of weight changes is computed as
\[ \Delta w_k = -\left( J^{T} J + \mu I \right)^{-1} J^{T} e , \]
where the combination coefficient $\mu$ is always positive, $I$ is the identity matrix, $J$ is the Jacobian matrix (first derivatives), and $e$ is the vector of network errors.
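
As a concrete illustration of the update rules listed in Table 2, the sketch below implements two of them, the sign-based RP step and the damped LM step, in NumPy. It is a minimal, hypothetical example: the function names, the array shapes, and the use of a fixed damping coefficient mu are assumptions made for the illustration only and are not part of the system described in this article.

import numpy as np

def rprop_step(w, grad, step_size):
    # RP: use only the sign of each partial derivative, scaled by the
    # per-weight step size Delta_ij (step_size may be a scalar or an array).
    return w - np.sign(grad) * step_size

def lm_step(w, J, e, mu):
    # LM: delta_w = -(J^T J + mu*I)^{-1} J^T e, with the combination
    # coefficient mu > 0; J is the Jacobian and e is the network error vector.
    A = J.T @ J + mu * np.eye(w.size)        # damped Gauss-Newton matrix
    delta_w = -np.linalg.solve(A, J.T @ e)   # solve instead of explicit inverse
    return w + delta_w

Calling lm_step with a Jacobian of shape (number of errors, number of weights) and the corresponding error vector performs one damped Gauss-Newton update of the weight vector.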