Research Article

Gradient-Sensitive Optimization for Convolutional Neural Networks

Table 1

Parameter values [x, y] reached by each algorithm at successive iterations of the Booth's function test.

| Iterations / Methods | AdaGrad | Adam | diffGrad | RMSprop | GS-Adam | GS-RMSprop |
|---|---|---|---|---|---|---|
| 0 | [0, 0] | [0, 0] | [0, 0] | [0, 0] | [0, 0] | [0, 0] |
| 100 | [1.4058, 1.4764] | [0.2942, 0.263] | [0.1499, 0.134] | [1.2047, 3.2149] | [1.6262, 1.7724] | [1.0039, 2.9961] |
| 200 | [1.676, 1.8651] | [0.5888, 0.5271] | [0.2998, 0.2681] | [1.0501, 3.0501] | [1.7108, 2.1871] | [1, 3] |
| 300 | [1.7353, 2.0636] | [0.8743, 0.7857] | [0.4492, 0.4019] | [1.05, 3.05] | [1.6243, 2.3547] | [1, 3] |
| 400 | [1.717, 2.1872] | [1.1423, 1.0342] | [0.5977, 0.5352] | [1.05, 3.05] | [1.4945, 2.4908] | [1, 3] |
| 500 | [1.6712, 2.2771] | [1.3817, 1.2665] | [0.7445, 0.6677] | [1.05, 3.05] | [1.3777, 2.6111] | [1, 3] |
| 600 | [1.6179, 2.3495] | [1.5798, 1.4768] | [0.8889, 0.7989] | [1.05, 3.05] | [1.283, 2.7085] | [1, 3] |
| 700 | [1.5647, 2.4116] | [1.7253, 1.6603] | [1.0297, 0.9283] | [1.05, 3.05] | [1.2104, 2.7833] | [1, 3] |
| 800 | [1.5144, 2.4665] | [1.8123, 1.8155] | [1.1656, 1.0552] | [1.05, 3.05] | [1.1564, 2.8389] | [1, 3] |
| 900 | [1.468, 2.5156] | [1.8428, 1.945] | [1.2948, 1.1788] | [1.05, 3.05] | [1.1168, 2.8797] | [1, 3] |
| 1000 | [1.4255, 2.56] | [1.8267, 2.0551] | [1.4155, 1.2982] | [1.05, 3.05] | [1.0879, 2.9095] | [1, 3] |
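For reference, Booth's function is f(x, y) = (x + 2y − 7)² + (2x + y − 5)², with its global minimum f(1, 3) = 0; the GS-RMSprop column reaches this point exactly by iteration 200, while plain RMSprop stalls at the nearby [1.05, 3.05]. Below is a minimal sketch of this benchmark setup using standard Adam in NumPy; the learning rate and other hyperparameters are assumptions (the paper's settings are not shown here), and the GS-Adam / GS-RMSprop variants are the paper's own methods and are not reproduced.

```python
import numpy as np

def booth_grad(p):
    """Gradient of Booth's function f(x, y) = (x + 2y - 7)^2 + (2x + y - 5)^2."""
    x, y = p
    return np.array([
        2 * (x + 2 * y - 7) + 4 * (2 * x + y - 5),
        4 * (x + 2 * y - 7) + 2 * (2 * x + y - 5),
    ])

def adam(grad_fn, p0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Standard Adam; lr=0.01 is an assumed value, not taken from the paper."""
    p = np.asarray(p0, dtype=float)
    m = np.zeros_like(p)  # first-moment (mean) estimate
    v = np.zeros_like(p)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(p)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias correction
        v_hat = v / (1 - beta2 ** t)
        p -= lr * m_hat / (np.sqrt(v_hat) + eps)
        if t % 100 == 0:
            print(t, np.round(p, 4))  # log the trajectory, as in Table 1
    return p

adam(booth_grad, [0.0, 0.0])  # starts at [0, 0] and should approach (1, 3)
```

Swapping in the RMSprop or AdaGrad update rules in place of the Adam step reproduces the corresponding columns of the table under the same protocol: start at [0, 0] and record the parameter vector every 100 iterations.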