Abstract

Conjugate gradient methods constitute excellent neural network training methods, characterized by their simplicity, numerical efficiency, and very low memory requirements. In this paper, we propose a conjugate gradient neural network training algorithm which guarantees sufficient descent with any line search, thereby avoiding the usually inefficient restarts. Moreover, it achieves a high-order accuracy in approximating the second-order curvature information of the error surface by utilizing the modified secant condition proposed by Li et al. (2007). Under mild conditions, we establish that the proposed method is globally convergent for general functions under the strong Wolfe conditions. Experimental results provide evidence that our proposed method is preferable to and in general superior to the classical conjugate gradient methods, and that it has the potential to significantly enhance the computational efficiency and robustness of the training process.

1. Introduction

Learning systems, such as multilayer feedforward neural networks (FNNs), are parallel computational models composed of densely interconnected, adaptive processing units, characterized by an inherent propensity for learning from experience and discovering new knowledge. Due to their excellent capability of self-learning and self-adapting, they have been successfully applied in many areas of artificial intelligence [1–5] and are often found to be more efficient and accurate than other classification techniques [6]. The operation of an FNN is usually based on the following equations:
$$net_j^l = \sum_{i=1}^{N_{l-1}} w_{ij}^{l-1,l}\, y_i^{l-1} + b_j^l, \qquad y_j^l = f\bigl(net_j^l\bigr), \tag{1}$$
where $net_j^l$ is the sum of the weighted inputs of the $j$th node in the $l$th layer ($j = 1, \ldots, N_l$), $w_{ij}^{l-1,l}$ is the weight from the $i$th neuron in the $(l-1)$th layer to the $j$th neuron in the $l$th layer, $b_j^l$ is the bias of the $j$th neuron in the $l$th layer, $y_j^l$ is the output of the $j$th neuron in the $l$th layer, and $f(\cdot)$ is the activation function of the $j$th neuron.
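To make the notation of (1) concrete, the following minimal Python sketch propagates an input pattern through a fully connected feedforward network. The helper names, the layer sizes in the toy example, and the logistic activation are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def logistic(x):
    # one possible choice for the activation function f in (1)
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases, f=logistic):
    """Propagate input x through the layers defined by (1).

    weights[l] has shape (N_l, N_{l-1}) and biases[l] has shape (N_l,).
    Returns the output vector of the last layer.
    """
    y = x
    for W, b in zip(weights, biases):
        net = W @ y + b      # net_j^l = sum_i w_{ij}^{l-1,l} y_i^{l-1} + b_j^l
        y = f(net)           # y_j^l = f(net_j^l)
    return y

# toy example: a 4-3-2 network with random weights
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
biases = [rng.standard_normal(3), rng.standard_normal(2)]
print(forward(rng.standard_normal(4), weights, biases))
```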

The problem of training a neural network is to iteratively adjust its weights in order to globally minimize a measure of the difference between the actual output of the network and the desired output for all examples of the training set [7]. More formally, the training process can be formulated as the minimization of the error function $E(w)$, defined as the sum of squared differences between the actual output of the FNN, denoted by $y_{j,p}^L$, and the desired output, denoted by $d_{j,p}$, namely,
$$E(w) = \sum_{p=1}^{P} \sum_{j=1}^{N_L} \bigl( y_{j,p}^{L} - d_{j,p} \bigr)^2, \tag{2}$$
where $w \in \mathbb{R}^n$ is the vector of network weights and $P$ is the number of patterns in the training set.
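As an illustration of (2), the sketch below accumulates the squared output error over a set of training patterns; it reuses the hypothetical forward helper from the previous sketch.

```python
def error(weights, biases, patterns, targets):
    """Sum-of-squares error E(w) of (2) over all P training patterns."""
    E = 0.0
    for x_p, d_p in zip(patterns, targets):
        y_p = forward(x_p, weights, biases)   # actual output y_{j,p}^L
        E += np.sum((y_p - d_p) ** 2)         # (y_{j,p}^L - d_{j,p})^2
    return E
```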

Conjugate gradient methods are probably the most popular iterative methods for efficiently training neural networks due to their simplicity, numerical efficiency, and very low memory requirements. These methods generate a sequence of weights $\{w_k\}$ using the iterative formula
$$w_{k+1} = w_k + \eta_k d_k, \quad k = 0, 1, \ldots, \tag{3}$$
where $k$ is the current iteration (usually called epoch), $w_0 \in \mathbb{R}^n$ is a given initial point, $\eta_k > 0$ is the learning rate, and $d_k$ is a descent search direction defined by
$$d_k = \begin{cases} -g_0, & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1}, & \text{otherwise,} \end{cases} \tag{4}$$
where $g_k$ is the gradient of $E$ at $w_k$ and $\beta_k$ is a scalar. Several choices for $\beta_k$ have been proposed in the literature, giving rise to distinct conjugate gradient methods. The most well-known conjugate gradient methods include the Fletcher-Reeves (FR) method [8], the Hestenes-Stiefel (HS) method [9], and the Polak-Ribière (PR) method [10]. The update parameters of these methods are, respectively, specified as follows:
$$\beta_k^{HS} = \frac{g_k^T y_{k-1}}{y_{k-1}^T d_{k-1}}, \qquad \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{PR} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \tag{5}$$
where $s_{k-1} = w_k - w_{k-1}$, $y_{k-1} = g_k - g_{k-1}$, and $\|\cdot\|$ denotes the Euclidean norm.
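The direction recurrence (4) and the classical update parameters (5) amount to a few vector operations, as the following sketch shows (the function and variable names are ours, introduced for illustration only).

```python
def classical_betas(g_k, g_prev, d_prev):
    """Classical update parameters of (5): Hestenes-Stiefel, Fletcher-Reeves, Polak-Ribiere."""
    y_prev = g_k - g_prev
    beta_hs = (g_k @ y_prev) / (y_prev @ d_prev)
    beta_fr = (g_k @ g_k) / (g_prev @ g_prev)
    beta_pr = (g_k @ y_prev) / (g_prev @ g_prev)
    return beta_hs, beta_fr, beta_pr

def cg_direction(g_k, d_prev, beta_k):
    """Search direction recurrence (4) for k >= 1."""
    return -g_k + beta_k * d_prev
```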

The PR method behaves like the HS method in practical computation and is generally believed to be one of the most efficient conjugate gradient methods. However, despite its practical advantages, it has the major drawback of not being globally convergent for general functions; as a result, it may become trapped and cycle infinitely without making any substantial progress [11]. To rectify the convergence failure of the PR method, Gilbert and Nocedal [12], motivated by Powell's work [13], proposed restricting the update parameter $\beta_k$ to be nonnegative, namely, $\beta_k^{PR+} = \max\{\beta_k^{PR}, 0\}$. The authors conducted an elegant analysis of this conjugate gradient method (PR+) and established that it is globally convergent under strong assumptions. Moreover, although the PR and PR+ methods usually perform better than the other conjugate gradient methods, they are not guaranteed to generate descent directions; hence restarts are employed in order to guarantee convergence. Nevertheless, a concern with restart algorithms is that restarts may be triggered too often, thus degrading the overall efficiency and robustness of the minimization process [14].

During the last decade, much effort has been devoted to developing new conjugate gradient methods that are not only globally convergent for general functions but also computationally superior to classical methods; these methods fall into two classes. The first class accelerates conjugate gradient methods by exploiting second-order information through new secant equations (see [15–18]). Sample works include the nonlinear conjugate gradient methods proposed by Zhang et al. [19–21], which are based on the MBFGS secant equation [15]. Ford et al. [22] proposed a multistep conjugate gradient method based on the multistep quasi-Newton methods proposed in [16, 17]. Recently, Yabe and Takano [23] and Li et al. [18] proposed conjugate gradient methods based on modified secant equations that use both gradient and function values to achieve a higher order of accuracy in the approximation of the curvature. Under proper conditions, these methods are globally convergent and their numerical performance is sometimes superior to that of classical conjugate gradient methods. However, these methods are not guaranteed to generate descent directions; therefore the descent condition is usually assumed in their analysis and implementations.

The second class aims at developing conjugate gradient methods that generate descent directions, in order to avoid the usually inefficient restarts. On the basis of this idea, Zhang et al. [20, 24–26] modified the search direction in order to ensure sufficient descent, that is, $d_k^T g_k = -\|g_k\|^2$, independent of the line search performed. Independently, Hager and Zhang [27] modified the parameter $\beta_k$ and proposed a new descent conjugate gradient method, called the CG-DESCENT method. More analytically, they proposed the following modification of the Hestenes-Stiefel formula $\beta_k^{HS}$:
$$\beta_k^{HZ} = \beta_k^{HS} - 2\frac{\|y_{k-1}\|^2}{\bigl(d_{k-1}^T y_{k-1}\bigr)^2}\, g_k^T d_{k-1}. \tag{6}$$
Along this line, Yuan [28], based on [12, 27, 29], proposed a modified PR method, that is,
$$\beta_k^{DPR+} = \beta_k^{PR} - \min\left\{ \beta_k^{PR},\; C\frac{\|y_{k-1}\|^2}{\|g_{k-1}\|^4}\, g_k^T d_{k-1} \right\}, \tag{7}$$
where $C$ is a parameter that essentially controls the relative weight between conjugacy and descent; in case $C > 1/4$, the above formula satisfies $g_k^T d_k \le -(1 - 1/(4C))\|g_k\|^2$. An important feature of this method is that it is globally convergent for general functions. Recently, Livieris et al. [30–32], motivated by the previous works, presented descent conjugate gradient training algorithms with promising results. Based on their numerical experiments, the authors concluded that the sufficient descent property led to a significant improvement of the training process.
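For concreteness, the descent-preserving clipping of Yuan's formula (7) can be written as below; this is only a sketch under our own naming, with C the parameter discussed above (C > 1/4 guarantees sufficient descent).

```python
def beta_dpr_plus(g_k, g_prev, d_prev, C=1.0):
    """Yuan's DPR+ update parameter (7)."""
    y_prev = g_k - g_prev
    gnorm2 = g_prev @ g_prev
    beta_pr = (g_k @ y_prev) / gnorm2
    clip = C * (y_prev @ y_prev) / gnorm2**2 * (g_k @ d_prev)
    return beta_pr - min(beta_pr, clip)
```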

In this paper, we propose a new conjugate gradient training algorithm which combines the characteristics of both previously presented classes. Our method ensures sufficient descent independent of the accuracy of the line search, thereby avoiding the usually inefficient restarts. Moreover, it achieves a high-order accuracy in approximating the second-order curvature information of the error surface by utilizing the modified secant condition proposed in [18]. Under mild conditions, we establish the global convergence of our proposed method.

The remainder of this paper is organized as follows. In Section 2, we present our proposed conjugate gradient training algorithm, and in Section 3, we present its global convergence analysis. The experimental results are reported in Section 4 using the performance profiles of Dolan and Moré [33]. Finally, Section 5 presents our concluding remarks.

2. Modified Polak-Ribière+ Conjugate Gradient Algorithm

Firstly, we recall that in quasi-Newton methods, an approximation matrix $B_{k-1}$ of the Hessian $\nabla^2 E(w_{k-1})$ of a nonlinear function $E$ is updated so that the new matrix $B_k$ satisfies the following secant condition:
$$B_k s_{k-1} = y_{k-1}. \tag{8}$$
Obviously, only two gradients are exploited in the secant equation (8), while the available function values are neglected. Recently, Li et al. [18] proposed a conjugate gradient method based on the modified secant condition
$$B_k s_{k-1} = \tilde{y}_{k-1}, \qquad \tilde{y}_{k-1} = y_{k-1} + \frac{\max\{\theta_{k-1}, 0\}}{\|s_{k-1}\|^2}\, s_{k-1}, \tag{9}$$
where $\theta_{k-1}$ is defined by
$$\theta_{k-1} = 2\bigl(E_{k-1} - E_k\bigr) + \bigl(g_k + g_{k-1}\bigr)^T s_{k-1}, \tag{10}$$
and $E_k$ denotes $E(w_k)$. The authors proved that the new secant equation (9) is superior to the classical one (8) in the sense that $\tilde{y}_{k-1}$ approximates $\nabla^2 E(w_k) s_{k-1}$ better than $y_{k-1}$ does (see [18]).
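A direct transcription of (9) and (10) into code looks as follows; the sketch assumes that the error values and gradients at the two most recent iterates are available, and the helper name is ours.

```python
def modified_y(E_prev, E_k, g_prev, g_k, s_prev):
    """Modified difference vector of the secant condition (9)-(10) of Li et al. [18]."""
    theta = 2.0 * (E_prev - E_k) + (g_k + g_prev) @ s_prev             # (10)
    y_prev = g_k - g_prev
    return y_prev + max(theta, 0.0) / (s_prev @ s_prev) * s_prev       # (9)
```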

Motivated by the theoretical advantages of the modified secant condition (9), we propose a modification of formula (7) in the following way:
$$\beta_k^{MPR+} = \frac{g_k^T \tilde{y}_{k-1}}{\|g_{k-1}\|^2} - \min\left\{ \frac{g_k^T \tilde{y}_{k-1}}{\|g_{k-1}\|^2},\; C\frac{\|\tilde{y}_{k-1}\|^2}{\|g_{k-1}\|^4}\, g_k^T d_{k-1} \right\}, \tag{11}$$
with $C > 1/4$. It is easy to see from (4) and (11) that our proposed formula $\beta_k^{MPR+}$ satisfies the sufficient descent condition
$$g_k^T d_k \le -\left(1 - \frac{1}{4C}\right)\|g_k\|^2, \tag{12}$$
independent of the line search used.
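Under the same naming assumptions as in the earlier sketches, the proposed update parameter (11) could be computed as follows; combining it with the recurrence (4) yields a direction satisfying the sufficient descent condition (12).

```python
def beta_mpr_plus(g_k, g_prev, d_prev, y_tilde, C=1.0):
    """Proposed MPR+ update parameter (11); C > 1/4 gives sufficient descent (12)."""
    gnorm2 = g_prev @ g_prev
    beta = (g_k @ y_tilde) / gnorm2
    clip = C * (y_tilde @ y_tilde) / gnorm2**2 * (g_k @ d_prev)
    return beta - min(beta, clip)
```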

At this point, we present a high-level description of our proposed algorithm, called the modified Polak-Ribière+ conjugate gradient algorithm (MPR+-CG).

Algorithm 1 (modified Polak-Ribière+ conjugate gradient algorithm).

Step 1. Initiate $w_0$, $0 < \sigma_1 < \sigma_2 < 1$, $E_G$, and $k_{MAX}$; set $k = 0$.

Step 2. Calculate the error function value $E_k$ and its gradient $g_k$.

Step 3. If $E_k < E_G$, return $w^* = w_k$ and $E^* = E_k$.

Step 4. If $g_k = 0$, return "Error goal not met".

Step 5. Compute the descent direction $d_k$ using (4) and (11).

Step 6. Compute the learning rate $\eta_k$ using the strong Wolfe line search conditions
$$E\bigl(w_k + \eta_k d_k\bigr) - E\bigl(w_k\bigr) \le \sigma_1 \eta_k\, g_k^T d_k, \tag{13}$$
$$\bigl| g\bigl(w_k + \eta_k d_k\bigr)^T d_k \bigr| \le \sigma_2 \bigl| g_k^T d_k \bigr|. \tag{14}$$

Step 7. Update the weights
$$w_{k+1} = w_k + \eta_k d_k \tag{15}$$
and set $k = k + 1$.

Step 8. If $k > k_{MAX}$, return "Error goal not met"; else go to Step 2.
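A compact Python sketch of Algorithm 1 is given below. It reuses the hypothetical modified_y and beta_mpr_plus helpers from the previous sketches and substitutes SciPy's strong Wolfe line search (scipy.optimize.line_search) for the line search used in our experiments, so it should be read as an illustration rather than a reference implementation.

```python
import numpy as np
from scipy.optimize import line_search  # satisfies the strong Wolfe conditions (13)-(14)

def mpr_plus_cg(E, grad, w0, C=1.0, E_goal=1e-2, k_max=1000,
                sigma1=1e-4, sigma2=0.5):
    """Sketch of Algorithm 1 (MPR+-CG); E and grad evaluate (2) and its gradient."""
    w, E_k, g = w0, E(w0), grad(w0)
    d = -g                                          # d_0 = -g_0
    for k in range(k_max):
        if E_k < E_goal:
            return w, E_k                           # Step 3: error goal reached
        if np.all(g == 0):
            break                                   # Step 4: stationary point, goal not met
        # Step 6: strong Wolfe line search (13)-(14)
        eta = line_search(E, grad, w, d, gfk=g, old_fval=E_k,
                          c1=sigma1, c2=sigma2)[0]
        if eta is None:
            break                                   # line search failed
        w_new = w + eta * d                         # Step 7: update (15)
        E_new, g_new = E(w_new), grad(w_new)
        # Step 5 (for the next iteration): beta via (11), direction via (4)
        y_tilde = modified_y(E_k, E_new, g, g_new, w_new - w)
        beta = beta_mpr_plus(g_new, g, d, y_tilde, C)
        d = -g_new + beta * d
        w, E_k, g = w_new, E_new, g_new
    raise RuntimeError("Error goal not met")        # Step 8
```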

3. Global Convergence Analysis

In order to establish the global convergence result for our proposed method, we will impose the following assumptions on the error function 𝐸.

Assumption 1. The level set $\mathcal{L} = \{w \in \mathbb{R}^n \mid E(w) \le E(w_0)\}$ is bounded.

Assumption 2. In some neighborhood $\mathcal{N}$ of $\mathcal{L}$, $E$ is differentiable and its gradient $g$ is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(w) - g(\tilde{w})\| \le L\|w - \tilde{w}\|, \quad \forall\, w, \tilde{w} \in \mathcal{N}. \tag{16}$$

Since $\{E(w_k)\}$ is a decreasing sequence, it is clear that the sequence $\{w_k\}$ is contained in $\mathcal{L}$. In addition, it follows directly from Assumptions 1 and 2 that there exist positive constants $B$ and $M$ such that
$$\|w - \tilde{w}\| \le B, \quad \forall\, w, \tilde{w} \in \mathcal{L}, \tag{17}$$
$$\|g(w)\| \le M, \quad \forall\, w \in \mathcal{L}. \tag{18}$$
Furthermore, notice that since the error function $E$ is bounded below in $\mathbb{R}^n$ by zero, is differentiable, and has a Lipschitz continuous gradient [34], Assumptions 1 and 2 always hold.

The following lemma is very useful for the global convergence analysis.

Lemma 2 (see [18]). Suppose that Assumptions 1 and 2 hold and the line search satisfies the strong Wolfe line search conditions (13) and (14). For $\theta_k$ and $\tilde{y}_k$ defined in (10) and (9), respectively, one has
$$\bigl|\theta_k\bigr| \le L\|s_k\|^2, \qquad \|\tilde{y}_k\| \le 2L\|s_k\|. \tag{19}$$
Subsequently, we will establish the global convergence of Algorithm MPR+-CG for general functions. Firstly, we present a lemma showing that Algorithm MPR+-CG prevents the inefficient jamming phenomenon [35] from occurring. This property is similar to, but slightly different from, Property (*), which was derived by Gilbert and Nocedal [12].

Lemma 3. Suppose that Assumptions 1 and 2 hold. Let $\{w_k\}$ and $\{d_k\}$ be generated by Algorithm MPR+-CG. If there exists a constant $\mu > 0$ such that
$$\|g_k\| \ge \mu, \quad \forall\, k \ge 0, \tag{20}$$
then there exist constants $b > 1$ and $\lambda > 0$ such that
$$\bigl|\beta_k^{MPR+}\bigr| \le b, \tag{21}$$
$$\|s_{k-1}\| \le \lambda \;\Longrightarrow\; \bigl|\beta_k^{MPR+}\bigr| \le \frac{1}{b}. \tag{22}$$

Proof. Utilizing Lemma 2 together with Assumption 2 and relations (12), (14), (17), (18), and (20), we have
$$\begin{aligned}
\bigl|\beta_k^{MPR+}\bigr| &\le \frac{\bigl|g_k^T \tilde{y}_{k-1}\bigr|}{\|g_{k-1}\|^2} + C\frac{\|\tilde{y}_{k-1}\|^2}{\|g_{k-1}\|^4}\bigl|g_k^T d_{k-1}\bigr| \\
&\le \frac{\|g_k\|\,\|\tilde{y}_{k-1}\|}{\|g_{k-1}\|^2} + C\frac{\|\tilde{y}_{k-1}\|^2\,\sigma_2\bigl|g_{k-1}^T d_{k-1}\bigr|}{\|g_{k-1}\|^4} \\
&\le \frac{2ML\|s_{k-1}\|}{\|g_{k-1}\|^2} + C\frac{4L^2\|s_{k-1}\|^2}{\|g_{k-1}\|^4}\,\sigma_2\left(1 - \frac{1}{4C}\right)\|g_{k-1}\|^2 \\
&\le \left(\frac{2ML + 4L^2 B C \sigma_2 (1 - 1/4C)}{\mu^2}\right)\|s_{k-1}\| \triangleq D\|s_{k-1}\|.
\end{aligned} \tag{23}$$
Therefore, by setting $b := \max\{2, 2DB\}$ and $\lambda := 1/(Db)$, relations (21) and (22) hold. The proof is completed.

Subsequently, we present a lemma which shows that, asymptotically, the search directions $d_k$ change slowly. This lemma corresponds to Lemma 4.1 of Gilbert and Nocedal [12], and its proof is exactly the same as that of Lemma 4.1 in [12]; thus we omit it.

Lemma 4. Suppose that Assumptions 1 and 2 hold. Let $\{w_k\}$ and $\{d_k\}$ be generated by Algorithm MPR+-CG. If there exists a constant $\mu > 0$ such that (20) holds, then $d_k \ne 0$ and
$$\sum_{k \ge 1} \|u_k - u_{k-1}\|^2 < \infty, \tag{24}$$
where $u_k = d_k / \|d_k\|$.

Next, by making use of Lemmas 3 and 4, we establish the global convergence theorem for Algorithm MPR+-CG under the strong Wolfe line search.

Theorem 5. Suppose that Assumptions 1 and 2 hold. If $\{w_k\}$ is obtained by Algorithm MPR+-CG, where the line search satisfies the strong Wolfe line search conditions (13) and (14), then one has
$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{25}$$

Proof. We proceed by contradiction. Suppose that there exists a constant $\mu > 0$ such that for all $k \ge 0$
$$\|g_k\| \ge \mu. \tag{26}$$
The proof is divided into the following two steps.

Step I (a bound on the steps $s_k$). Let $\Delta$ be a positive integer, chosen large enough that
$$\Delta \ge 4BD, \tag{27}$$
where $B$ and $D$ are defined in (17) and (23), respectively. For any $l > k \ge k_0$ with $l - k \le \Delta$, following the same proof as Case II of Theorem 3.2 in [27], we get
$$\sum_{j=k}^{l-1} \|s_j\| < 2B. \tag{28}$$

Step II (a bound on the search directions $d_l$). From the definition of $d_k$ in (4) together with (18) and (23), we obtain
$$\|d_l\|^2 \le \|g_l\|^2 + \bigl|\beta_l^{MPR+}\bigr|^2 \|d_{l-1}\|^2 \le M^2 + D^2\|s_{l-1}\|^2 \|d_{l-1}\|^2. \tag{29}$$

Now, the remaining argument proceeds in the same way as Case III of Theorem 3.2 in [27], so we omit it. This completes the proof.

4. Experimental Results

In this section, we present experimental results in order to evaluate the performance of our proposed conjugate gradient algorithm MPR+-CG on five well-known classification problems acquired from the UCI Repository of Machine Learning Databases [36]: the iris problem, the diabetes problem, the sonar problem, the yeast problem, and the Escherichia coli problem.

The implementation code was written in Matlab 6.5 on a Pentium IV computer (2.4 GHz, 512 Mbyte RAM) running the Windows XP operating system, based on the SCG code of Birgin and Martínez [37]. All methods were implemented with the line search proposed in CONMIN [38], which employs various polynomial interpolation schemes and safeguards in satisfying the strong Wolfe line search conditions. The heuristic parameters were set to $\sigma_1 = 10^{-4}$ and $\sigma_2 = 0.5$ as in [30, 39]. All networks received the same sequence of input patterns, and the initial weights were generated using the Nguyen-Widrow method [40]. For evaluating classification accuracy, we have used the standard procedure called $k$-fold cross-validation [41]. The results have been averaged over 500 simulations.
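For readers unfamiliar with the evaluation protocol, the following hedged sketch shows one way such a k-fold cross-validation loop could be organized, using scikit-learn's KFold splitter; the train_network and accuracy arguments are placeholders for the training and scoring routines described above, not part of our actual Matlab implementation.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, y, train_network, accuracy, k=10, seed=0):
    """Average test accuracy over k folds (standard k-fold cross-validation)."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True,
                                     random_state=seed).split(X):
        model = train_network(X[train_idx], y[train_idx])   # e.g. trained with MPR+-CG
        scores.append(accuracy(model, X[test_idx], y[test_idx]))
    return np.mean(scores)
```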

4.1. Training Performance

The cumulative total of a performance metric over all simulations is not very informative, since a small number of simulations can dominate the results. For this reason, we use the performance profiles proposed by Dolan and Moré [33], which present perhaps the most complete information in terms of robustness, efficiency, and solution quality. The performance profile plots the fraction $P$ of simulations for which any given method is within a factor $\tau$ of the best training method. The left side of each plot (small values of $\tau$) shows the percentage of the simulations for which a method is the fastest (efficiency), while the right side gives the percentage of the simulations in which the neural networks were successfully trained by each method (robustness). The reported performance profiles have been created using the Libopt environment [42] for measuring the efficiency and the robustness of our method in terms of computational time (CPU time) and function/gradient evaluations (FE/GE). The curves in the following figures have the following meaning:
(i) "PR" stands for the Polak-Ribière conjugate gradient method.
(ii) "PR+" stands for the Polak-Ribière+ conjugate gradient method.
(iii) "MPR+" stands for Algorithm MPR+-CG.
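As a rough illustration of how such a profile is obtained, the sketch below computes, for each method, the fraction of simulations whose cost (e.g., CPU time or FE/GE) is within a factor tau of the best method; the cost matrix and method ordering are illustrative assumptions.

```python
import numpy as np

def performance_profile(costs, taus):
    """costs: shape (n_simulations, n_methods); np.inf marks failed runs.

    Returns an array of shape (len(taus), n_methods) giving the fraction of
    simulations in which each method is within a factor tau of the best one.
    """
    best = costs.min(axis=1, keepdims=True)             # best cost per simulation
    ratios = costs / best                               # performance ratios
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# toy example with three methods (e.g. PR, PR+, MPR+)
costs = np.array([[3.0, 2.5, 2.0],
                  [np.inf, 4.0, 3.5],                   # the first method failed here
                  [5.0, 5.5, 4.0]])
print(performance_profile(costs, taus=[1.0, 2.0, 4.0]))
```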

4.1.1. Iris Classification Problem

This benchmark is perhaps the best known in the pattern-recognition literature [36]. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. The network architecture consists of 1 hidden layer with 7 neurons and an output layer of 3 neurons. The training goal was set to $E_G \le 0.01$ within the limit of 1000 epochs and all networks were tested using 10-fold cross-validation [30].

Figure 1 presents the performance profiles for the iris classification problem, regarding both performance metrics. MPR+ illustrates the best performance in terms of efficiency and robustness, significantly outperforming the classical training methods PR and PR+. Furthermore, the performance profiles show that MPR+ is the only method reporting an excellent (100%) probability of being the optimal training method.

4.1.2. Diabetes Classification Problem

The aim of this real-world classification task is to decide whether a Pima Indian female is diabetes positive or not. The data of this benchmark consists of 768 different patterns, each of them having 8 features of real continuous values and a class label (diabetes positive or not). We have used neural networks with 2 hidden layers of 4 neurons each and an output layer of 2 neurons [43]. The training goal was set to 𝐸𝐺<0.14 within the limit of 2000 epochs and all networks were tested using 10-fold cross-validation [44].

Figure 2 illustrates the performance profiles for the diabetes classification problem, investigating the efficiency and robustness of each training method. Clearly, our proposed method MPR+ significantly outperforms the conjugate gradient methods PR and PR+ since the curves of the former lie above the curves of the latter, regarding both performance metrics. More analytically, the performance profiles show that the probability of MPR+ to successfully train a neural network within a factor 3.41 of the best solver is 100%, in contrast with PR and PR+ which have probability 84.3% and 85%, respectively.

4.1.3. Sonar Classification Problem

This is the dataset used by Gorman and Sejnowski [45] in their study of the classification of sonar signals using a neural network. The dataset contains signals obtained from a variety of different aspect angles, spanning 90 degrees for the cylinder and 180 degrees for the rock. The network architecture for this problem consists of 1 hidden layer of 24 neurons and an output layer of 2 neurons [45]. The training goal was set to $E_G = 0.1$ within the limit of 1000 epochs and all networks were tested using 3-fold cross-validation [45].

Figure 3 presents the performance profiles for the sonar classification problem, relative to both performance metrics. Our proposed conjugate gradient method MPR+ presents the highest probability of being the optimal training method. Furthermore, MPR+ significantly outperforms PR and is slightly more robust than PR+, regarding both performance metrics.

4.1.4. Yeast Classification Problem

This problem is based on a drastically imbalanced dataset and concerns the determination of the cellular localization of the yeast proteins into ten localization sites. Saccharomyces cerevisiae (yeast) is the simplest eukaryotic organism. The network architecture for this classification problem consists of 1 hidden layer of 16 neurons and an output layer of 10 neurons [46]. The training goal was set to $E_G < 0.05$ within the limit of 2000 epochs and all networks were tested using 10-fold cross-validation [47].

Figure 4 presents the performance profiles for the yeast classification problem, regarding both performance metrics. The interpretation of Figure 4 highlights that our proposed conjugate gradient method MPR+ is the only method exhibiting an excellent (100%) probability of successful training. Moreover, it is worth noticing that PR and PR+ report very poor performance, exhibiting 0% and 5% probability of successful training, respectively, in contrast with our proposed method MPR+, which successfully trained all neural networks.

4.1.5. Escherichia coli Classification Problem

This problem is based on a drastically imbalanced data set of 336 patterns and concerns the classification of the E. coli protein localization patterns into eight localization sites. E. coli, being a prokaryotic gram-negative bacterium, is an important component of the biosphere. Three major and distinctive types of proteins are characterized in E. coli: enzymes, transporters, and regulators. The largest group of genes encodes enzymes (34%) (this should include all the cytoplasm proteins), followed by the genes for transport functions and the genes for regulatory processes (11.5%) [48]. The network architecture consists of 1 hidden layer with 16 neurons and an output layer of 8 neurons [46]. The training goal was set to $E_G \le 0.02$ within the limit of 2000 epochs and all neural networks were tested using 4-fold cross-validation [47].

Figure 5 presents the performance profiles for the Escherichia coli classification problem. Similar observations can be made as with the previous benchmarks. More specifically, MPR+ significantly outperforms the classical training methods PR and PR+, since the curves of the former lie above the curves of the latter, regarding both performance metrics. Moreover, the performance profiles show that MPR+ is the only method reporting an excellent (100%) probability of being the optimal training method.

4.2. Generalization Performance

Table 1 summarizes the generalization results of the PR, PR+, and MPR+ conjugate gradient methods, measured by the percentage of testing patterns that were classified correctly in the presented classification problems. Each row reports the average performance in percentage for each problem, and the best conjugate gradient method for each problem is shown in boldface. Moreover, "−" means that the method reported 0% training success.

Table 1 illustrates that MPR+ is an excellent generalizer, since it achieves the highest generalization performance, outperforming the classical training methods PR and PR+ in all classification problems.

5. Conclusions

In this paper, we proposed a conjugate gradient method for efficiently training neural networks. An attractive property of our proposed method is that it ensures sufficient descent, thereby avoiding the usually inefficient restarts. Furthermore, it achieves a high-order accuracy in approximating the second-order curvature information of the error surface by utilizing the modified secant equation proposed in [18]. Under mild conditions, we established that our proposed method is globally convergent. Based on our numerical experiments, we concluded that our proposed method outperforms classical conjugate gradient training methods and has the potential to significantly enhance the computational efficiency and robustness of the training process.