Journal of Chemistry / 2013 / Research Article | Open Access
Special Issue: Composite Nanoparticles
Volume 2013 | Article ID 305713

Parvaneh Shabanzadeh, Norazak Senu, Kamyar Shameli, Maryam Mohaghegh Tabar, "Artificial Intelligence in Numerical Modeling of Silver Nanoparticles Prepared in Montmorillonite Interlayer Space", Journal of Chemistry, vol. 2013, Article ID 305713, 8 pages, 2013.

Artificial Intelligence in Numerical Modeling of Silver Nanoparticles Prepared in Montmorillonite Interlayer Space

Academic Editor: Elefterios Lidorikis
Received: 14 Dec 2012
Revised: 29 Jan 2013
Accepted: 10 Feb 2013
Published: 23 Apr 2013


Artificial neural network (ANN) models can eliminate the need for expensive experimental investigation in various manufacturing processes, including casting methods. An understanding of the interrelationships between input variables is essential for interpreting sensitivity data and optimizing design parameters. Silver nanoparticles (Ag-NPs) have attracted considerable attention for chemical, physical, and medical applications due to their exceptional properties. Nanocrystalline silver was synthesized in the interlamellar space of montmorillonite (MMT) by the chemical reduction technique. This method has the advantage of size control, which is essential in nanometal synthesis, and nanosized silver particles devoid of aggregation are favorable for several applications. In this investigation, an artificial neural network training algorithm was applied to study the effects of different parameters, namely the AgNO3 concentration, reaction temperature, UV-visible wavelength, and MMT d-spacing, on the predicted size of silver nanoparticles. Analysis of variance showed that the AgNO3 concentration and temperature were the most significant factors affecting the size of silver nanoparticles. Using the best-performing artificial neural network, the predicted optimum conditions were an AgNO3 concentration of 1.0 M, an MMT d-spacing of 1.27 nm, a reaction temperature of 27°C, and a wavelength of 397.50 nm.

1. Introduction

Nanoparticles are important materials for fundamental studies and diversified technical applications because of their size-dependent properties and the high activity conferred by their large surface areas. The synthesis of well-dispersed pure metal nanoparticles is difficult, as they have a high tendency to form agglomerated particles [1, 2]. Therefore, one of the most effective ways to overcome agglomeration is to prepare nanoparticles on clay supports, in which the particles are held within the interlamellar spaces of the clay and/or on its external surfaces [3, 4]. Likewise, synthesis of metal nanoparticles on solid supports such as smectite clays is a suitable way to prepare practically applicable supported particles and to control the particle size [5].

Smectite clays have excellent swelling and adsorption abilities, which is especially interesting for the impregnation of antibacterially active nanosized metals in the interlamellar space of the clay [6, 7]. Montmorillonite (MMT), a lamellar clay, has intercalation, swelling, and ion-exchange properties, and its interlayer space has been used for the synthesis of material and biomaterial nanoparticles [8].

Owing to its properties and areas of use, Ag is one of the most studied metals. However, factors such as stability, morphology, particle size distribution, and surface charge/modification, which are significant in the controlled synthesis of Ag-NPs, have attracted considerable attention, since this task is far from fully understood [5–8]. For the synthesis of Ag-NPs on solid substrates, AgNO3 is often used as the primary source of Ag+. There are numerous ways to reduce Ag+: γ-ray [4] or UV irradiation [5], glucose [2], reducing chemicals such as hydrazine [9] and sodium borohydride [1, 3, 6–8], plant extracts [10, 11], and so forth.

Artificial neural network (ANN) is a nonlinear statistical analysis technique that is especially suitable for simulating systems that are hard to describe with physical and chemical models. It provides a way of linking input data to output data using a set of nonlinear functions [12]. The ANN is a highly simplified model of the structure of a biological network [13]. The fundamental processing element of an ANN is an artificial neuron, or simply a neuron. A biological neuron receives inputs from other sources, combines them, generally performs a nonlinear operation on the result, and then outputs the final result [14]. The basic advantage of an ANN is that it does not need any mathematical model: an ANN learns from examples and recognizes patterns in a series of input and output data without any prior assumptions about their nature and interrelations [13]. The ANN eliminates the limitations of the classical approaches by extracting the desired information from the input data. Applying an ANN to a system requires sufficient input and output data rather than a mathematical equation [15].

Selecting the optimum architecture of the network is one of the challenging steps in ANN modelling. The term "architecture" refers to the number of layers in the network and the number of neurons in each layer. A training algorithm is also required for solving a problem via neural network modeling, and many different algorithms can be applied for the training procedure. During the training process, the weights and biases in the network are modified to minimize the error and obtain a high-performance solution. At the end of training, the mean absolute percentage error between the desired outputs and the network outputs is computed [16, 17]. The ANN is among the most useful numerical techniques in modern engineering analysis and has been employed to study problems in heat transfer, electrical systems, mechanical properties, solid mechanics, rigid body dynamics, fluid mechanics, and many other fields. In recent years, ANNs have been introduced in nanotechnology applications as techniques to model data showing nonlinear relationships [12, 17–20] and/or to estimate particle size in a variety of nanoparticle samples [21].
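The mean absolute percentage error used to monitor training can be written in a few lines; this is a generic NumPy sketch, not code from the paper:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error between desired and network outputs."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Predictions that are each 10% high give a MAPE of 10%.
print(mape([10.0, 20.0, 40.0], [11.0, 22.0, 44.0]))
```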

Employing neural network models would lead to saving time and cost by predicting the results of the reactions so that the most promising conditions can then be verified [22].

The aim of the present research is the investigation of the effects of different parameters including AgNO3 concentration, temperature, wavelength, and MMT interlayer d-spacing on the prediction of size of Ag-NPs. The ANN model was used to predict the optimum size of Ag-NPs.

2. Experimental

To prepare stable Ag-NPs via the chemical reduction method, it was important to choose a suitable stabilizer and reducing agent. In this research, an MMT suspension was used as the support, and the AgNO3/MMT suspension was reduced using NaBH4 as a strong reducing agent. The surfaces of the MMT platelets assist Ag-NP nucleation during the reduction process. A schematic illustration of the synthesis of Ag/MMT nanocomposites from the AgNO3/MMT suspension using sodium borohydride is shown in Figure 1. The AgNO3/MMT suspension was colourless (Figure 1(a)), but after the addition of the reducing agent it turned brown, indicating the formation of Ag-NPs in the MMT suspension (Figure 1(b)).

2.1. Data Sets

Table 1 shows the experimental data used for the ANN design. The experimental data were randomly divided into three sets of 26, 10, and 6 points, used for training, validation, and testing, respectively. The training data were used to compute the network parameters, while the validation data were used to ensure the robustness of those parameters. If a network "learns too well" from the training data, the learned rules may not fit the remaining cases well. To avoid this overfitting phenomenon, the validation set was used to monitor the error during training; when the validation error increased, training was stopped [23]. The testing data set was used to assess the predictive ability of the generated model.
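The 26/10/6 random split described above can be sketched as follows; this is a generic NumPy illustration, and the seed and index handling are assumptions rather than details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the split is reproducible
n_samples = 42                   # 26 + 10 + 6 experimental points
idx = rng.permutation(n_samples)

train_idx = idx[:26]             # used to fit the weights and biases
val_idx = idx[26:36]             # used to monitor error and stop training early
test_idx = idx[36:]              # held out to assess the final model

print(len(train_idx), len(val_idx), len(test_idx))  # 26 10 6
```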

No. | AgNO3 con. (M) | Temperature (°C) | Wavelength (nm) | MMT d-spacing (nm) | Size of Ag-NPs (actual) | Size of Ag-NPs (predicted)

Training set


Validation set


Testing set


Concentration (con.), molar (M), degree centigrade (°C), nanometre (nm), montmorillonite (MMT), silver nanoparticles (Ag-NPs).

2.2. ANN Description

An artificial neural network learns general rules from calculations on numerical data [24]. Neurons are the smallest parts of an ANN and are arranged in three kinds of layers: input, output, and hidden. A network can contain several hidden layers; in that case, the output of one hidden layer serves as the input to the next [25]. Multilayer perceptron (MLP) networks are the most widely used neural networks; they consist of one input layer, one output layer, and one or more hidden layers. In the ANN methodology, the sample data are often subdivided into training, validation, and test sets, and the distinctions among these subsets are crucial. Ripley [26] gives the following definitions. "Training set: A set of examples used for learning, that is, to fit the parameters (weights) of the classifier. Validation set: A set of examples used to tune the parameters of a classifier, for example, to choose the number of hidden units in a neural network. Test set: A set of examples used only to assess the performance (generalization) of a fully specified classifier." In this research, an MLP-based feed-forward ANN using the back-propagation learning algorithm was applied to model the prediction of Ag-NP size. The inputs to the network are the AgNO3 concentration, reaction temperature, UV-visible wavelength, and MMT d-spacing; the output is the size of the Ag-NPs. Figure 2 shows a diagram of a typical MLP neural network with the one-hidden-layer structure of the proposed ANN. The input to node $j$ in the hidden layer is given by

$$\mathrm{net}_j = \sum_{i=1}^{n} w_{ij} x_i + b_j. \quad (1)$$

Each neuron has a transfer function expressing its internal activation level. The output from a neuron is determined by transforming its input using a suitable transfer function [27]. The transfer functions commonly used for function approximation (regression) are the sigmoidal function, the hyperbolic tangent, and the linear function [28]; the most popular transfer function for a nonlinear relationship is the sigmoidal function [19, 29]. The output from neuron $j$ of the hidden layer is given by

$$h_j = f(\mathrm{net}_j). \quad (2)$$

In (1) and (2), $n$ is the number of neurons in the input layer, $m$ is the number of neurons in the hidden layer, $b_j$ is the bias term, $w_{ij}$ is the weighting factor, and $f$ is the activation function of the hidden layer, such as the tan-sigmoid transfer function [18, 30].

The output of the $k$th neuron in the output layer is given by

$$y_k = \sum_{j=1}^{m} w_{jk} h_j + b_k, \qquad k = 1, \ldots, p, \quad (3)$$

where $w_{jk}$ is the weighting factor, $b_k$ is the bias term, and $p$ is the number of neurons in the output layer. The values of the interconnection weights are determined by the training or learning process using a set of data; the aim is to find the weights that minimize the error [28]. A popular measure for evaluating the prediction ability of ANN models is the root mean square error (RMSE):

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i^{\mathrm{pred}} - y_i^{\mathrm{act}}\bigr)^2}, \quad (4)$$

where $n$ is the number of points, $y_i^{\mathrm{pred}}$ is the predicted value obtained from the neural network model, and $y_i^{\mathrm{act}}$ is the actual value. The coefficient of determination $R^2$ reflects the degree of fit of the mathematical model [31]; the closer the $R^2$ value is to 1, the better the model fits the actual data [32]:

$$R^2 = 1 - \frac{\sum_{i=1}^{n}\bigl(y_i^{\mathrm{pred}} - y_i^{\mathrm{act}}\bigr)^2}{\sum_{i=1}^{n}\bigl(y_i^{\mathrm{act}} - \bar{y}\bigr)^2}, \quad (5)$$

where $\bar{y}$ is the average of the actual values. The absolute average deviation (AAD) is another important index for evaluating the error between the actual and predicted outputs [14]:

$$\mathrm{AAD} = \left[\frac{1}{n}\sum_{i=1}^{n}\frac{\bigl|y_i^{\mathrm{pred}} - y_i^{\mathrm{act}}\bigr|}{y_i^{\mathrm{act}}}\right] \times 100. \quad (6)$$

The network having the minimum RMSE, the minimum AAD, and the maximum $R^2$ is considered the best neural network model [33].
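The three evaluation criteria above translate directly into NumPy. This is a generic implementation consistent with the definitions in the text, not the authors' code, and the sample sizes are hypothetical:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

def r_squared(actual, predicted):
    """Coefficient of determination; closer to 1 means a better fit."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def aad(actual, predicted):
    """Absolute average deviation, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(100.0 * np.mean(np.abs(predicted - actual) / actual))

# Hypothetical particle sizes (nm), for illustration only.
y_act = [4.3, 6.1, 8.0]
y_pred = [4.5, 6.0, 8.1]
print(rmse(y_act, y_pred), r_squared(y_act, y_pred), aad(y_act, y_pred))
```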

3. Results and Discussion

3.1. ANN Modeling Results

In this research, many different structures with two and three hidden layers and different numbers of neurons in each layer were tested to obtain the best ANN configuration. The choice of the number of neurons in the hidden layers is very important, as it affects the training time and the generalization ability of the network. Too many hidden neurons may force the network to memorize (rather than generalize) the patterns it has seen during training, whereas too few neurons waste a great deal of training time in finding the optimal representation [34]. There is no general rule for selecting the number of neurons in a hidden layer; it depends on the complexity of the system being modeled [33]. The most popular approach to finding the optimal number of hidden neurons is trial and error [35]. The tan-sigmoid function was used as the transfer function for the first (hidden) layer, and a linear function was used for the second (output) layer.
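A forward pass through such a network (tan-sigmoid hidden layer, linear output layer) can be sketched as below. The weights here are random placeholders, not the fitted values from Table 2:

```python
import numpy as np

def tansig(x):
    # MATLAB-style tan-sigmoid; mathematically identical to tanh(x).
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def forward(x, W1, b1, W2, b2):
    """One tan-sigmoid hidden layer followed by a linear output layer."""
    h = tansig(W1 @ x + b1)   # hidden activations
    return W2 @ h + b2        # linear combination -> predicted size

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 4))  # 4 inputs -> 4 hidden neurons
b1 = rng.normal(size=4)
W2 = rng.normal(size=(1, 4))  # 4 hidden neurons -> 1 output
b2 = rng.normal(size=1)

# Inputs: AgNO3 conc. (M), temperature (deg C), wavelength (nm), d-spacing (nm).
x = np.array([1.0, 27.0, 397.5, 1.27])
print(forward(x, W1, b1, W2, b2))
```

In practice the inputs would be normalized before training; with raw magnitudes like 397.5 nm the tan-sigmoid saturates at ±1.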

In this paper, the trial and error approach was utilized in determining the optimum neurons in the hidden layers. The feed-forward neural network generally has one or more hidden layers, which enable the network to model nonlinear and complex functions [28]. Nevertheless, the number of hidden layers is difficult to decide [19]. It has been reported in the literature that one hidden layer is normally sufficient to provide an accurate prediction and can be the first choice for any practical feed-forward network design [36].

Therefore, a single-hidden-layer network was applied in this paper [12, 20]. The RMSE was used as the error function, and $R^2$ and AAD were used as measures of the predictive ability of the network. Each topology was trained five times to avoid chance correlations due to the random initialization of the weights [37]. Many types of learning algorithms are described in the literature for training the network; however, it is difficult to know in advance which learning algorithm will be most efficient for a given problem [38].

The Levenberg–Marquardt (LM) algorithm is often the fastest back-propagation variant and is highly recommended as a first-choice supervised algorithm, even though it requires more memory than other algorithms. LM back-propagation is a network training function that updates weight and bias values according to LM optimization. The LM algorithm is an approximation to Newton's method [39] and is very well suited to the training of neural networks [40]. The algorithm uses the second-order derivatives of the mean squared error between the desired output and the actual output, so better convergence behavior can be obtained [41]. Therefore, various topologies (from 1 to 20 hidden neurons) using the LM algorithm were examined. The results showed that a network with 4 hidden neurons gave the best performance.
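The trial-and-error topology search can be sketched with a toy implementation. As a deliberate simplification, this stand-in trains each candidate network with plain gradient descent rather than the Levenberg–Marquardt updates used in the paper, and the data set is synthetic:

```python
import numpy as np

def train_net(X, y, n_hidden, epochs=2000, lr=0.05, seed=0):
    """Tiny one-hidden-layer regressor (tanh hidden, linear output) trained
    by gradient descent -- a simple stand-in for LM optimization."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1));    b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        err = (H @ W2 + b2) - y           # residuals, shape (n, 1)
        gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1.0 - H ** 2)   # back-propagated error
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    H = np.tanh(X @ W1 + b1)
    return float(np.sqrt(np.mean(((H @ W2 + b2) - y) ** 2)))

# Trial and error over hidden-layer sizes on a toy nonlinear target.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(40, 4))
y = np.sin(X.sum(axis=1, keepdims=True))
scores = {h: train_net(X, y, h) for h in range(1, 6)}
best = min(scores, key=scores.get)
print(best, scores[best])
```

In practice each topology would be trained several times and the candidates compared on validation RMSE rather than training RMSE, as done in the paper.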

Figure 3 presents scatter plots of the ANN-predicted versus actual values using the LM algorithm for the training, validation, testing, and complete data sets; the predictions for all data sets fit the actual values well. The results of this study show that a network of three layers (input, hidden, and output) with 4 nodes in the hidden layer gave the best performance. The RMSE and $R^2$ between the actual and predicted values were 0.0055 and 0.9999 for the training set, 0.01529 and 0.9994 for the validation set, and 0.7917 and 0.955 for the testing set. The RMSE and $R^2$ for all data were 0.0071 and 0.9753, respectively. These results show that the predictive accuracy of the model is high. Table 2 shows the values of the connection weights (parameters of the model) for the completed ANN model.

 | Node 1 | Node 2 | Node 3 | Node 4 | Bias 2

Input 1 | 0.3431
Input 2 | 0.7898
Input 3 | 2.0698
Input 4 | 0.3286
Bias 1 | 3.2889
Output | 0.16557 | 1.4701

3.2. Sensitivity Analysis

In this research, a sensitivity analysis was performed to determine the effectiveness of each variable using the ANN model suggested in this work [20]. In the analysis, the performance of every possible interaction of the variables was evaluated. The performances of the four groups (one, two, three, and four variables) were therefore studied with the optimal ANN model (LM algorithm with 4 neurons in the hidden layer).

The groups of input variables were defined as follows: $x_1$, AgNO3 concentration; $x_2$, reaction temperature; $x_3$, UV-visible wavelength; and $x_4$, MMT d-spacing. The results, summarized in Table 3, showed $x_1$ to be the most effective parameter in the one-variable group, owing to its lowest RMSE, 0.301.
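The 15 combinations evaluated in Table 3 (four single variables, six pairs, four triples, and one quadruple) can be enumerated with the standard library:

```python
from itertools import combinations

variables = ["x1 (AgNO3 conc.)", "x2 (temperature)",
             "x3 (wavelength)", "x4 (MMT d-spacing)"]

# All non-empty subsets, grouped by size: 4 + 6 + 4 + 1 = 15.
subsets = [c for r in range(1, 5) for c in combinations(variables, r)]
print(len(subsets))  # 15
```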

No. | Combination | RMSE | R^2 | AAD

Group of one variable

1 |  | 0.301 | 0.985 | 0.972
2 |  | 0.505 | 0.085 | 0.007
3 |  | 0.801 | 0.893 | 0.805
4 |  | 1.810 | 0.960 | 0.922

Group of two variables

5 |  | 0.011 | 0.993 | 0.987
6 |  | 0.015 | 0.999 | 0.999
7 |  | 0.202 | 0.999 | 0.999
8 |  | 0.257 | 0.989 | 0.979
9 |  | 0.251 | 0.993 | 0.999
10 |  | 0.354 | 0.856 | 0.888

Group of three variables

11 |  | 0.009 | 0.999 | 0.999
12 |  | 0.240 | 0.891 | 0.882
13 |  | 0.046 | 0.999 | 0.999
14 |  | 0.013 | 0.999 | 0.999

Group of four variables

15 | x1 x2 x3 x4 | 0.007 | 0.974 | 0.999

As shown in Table 3, the RMSE decreased significantly when $x_1$ was used in interaction with the other variables. The minimum RMSE in the two-variable group was 0.011, and the minimum in the three-variable group was 0.009. The RMSE decreased further, from 0.009 to 0.007, when all four variables were used together in the four-variable group.

3.3. Comparison of Experimental Data and ANN Output

The optimal conditions for the prediction of the Ag-NP size are presented in Table 4, along with the predicted and actual sizes. For this purpose, the LM-based ANN was adopted to predict the size of the Ag-NPs under optimal conditions, and an experiment was then carried out under the recommended conditions [42]. The resulting response was compared with the predicted value. The optimum parameters were 1 M, 27°C, 397.5 nm, and 1.27 nm for the AgNO3 concentration, reaction temperature, UV-visible wavelength, and MMT d-spacing, respectively. As shown in Table 4, the concentration was the parameter with the greatest effect on the size of the Ag-NPs. The experiment gave a reasonable Ag-NP size of 4.3 nm. This result confirms the validity of the model; the experimental value is quite close to the ANN-predicted value of 4.5 nm, with less than 1% relative deviation, implying that the empirical model derived from the ANN sufficiently describes the relationship between the independent variables and the response.

Optimal conditions | Size of Ag-NPs
Concentration of AgNO3 (M) | Reaction temperature (°C) | Wavelength (nm) | MMT d-spacing (nm) | Actual | Predicted | Standard deviation


4. Conclusion

In this research, artificial neural network models for the size prediction of Ag-NPs were presented. The analysis confirms that the ANN is a powerful tool for analysis and modeling. Based on the results obtained, it can be concluded that the LM neural network model with 4 neurons in 1 hidden layer is the fastest training algorithm and gives very good performance for ANN modeling of nanocomposite behaviour. The data analysis showed that the AgNO3 concentration (1.0 M) and the reaction temperature (27°C) are the two most sensitive parameters. Employing neural network models therefore saves time and cost by predicting the results of the reactions.


  1. K. Shameli, M. B. Ahmad, M. Zargar, W. M. Z. W. Yunus, and N. A. Ibrahim, “Fabrication of silver nanoparticles doped in the zeolite framework and antibacterial activity,” International Journal of Nanomedicine, vol. 6, pp. 331–341, 2011. View at: Google Scholar
  2. K. Shameli, M. B. Ahmad, S. D. Jazayeri et al., “Synthesis and characterization of polyethylene glycol mediated silver nanoparticles by the green method,” International Journal of Molecular Sciences, vol. 13, no. 6, pp. 6639–6650, 2012. View at: Google Scholar
  3. K. Shameli, M. B. Ahmad, W. M. Z. W. Yunus, N. A. Ibrahim, A. Rustaiyan, and M. Zargar, “Synthesis of silver nanoparticles in montmorillonite and their antibacterial behavior,” International Journal of Nanomedicine, vol. 6, pp. 581–590, 2010. View at: Google Scholar
  4. K. Shameli, M. B. Ahmad, W. M. Z. W. Yunus, N. A. Ibrahim, Y. Gharayebi, and S. Sedaghat, “Synthesis of silver/montmorillonite nanocomposites using γ-irradiation,” International Journal of Nanomedicine, vol. 5, no. 1, pp. 1067–1077, 2010. View at: Publisher Site | Google Scholar
  5. K. Shameli, M. B. Ahmad, W. M. Z. W. Yunus et al., “Green synthesis of silver/montmorillonite/chitosan bionanocomposites using the UV irradiation method and evaluation of antibacterial activity,” International Journal of Nanomedicine, vol. 5, no. 1, pp. 875–887, 2010. View at: Publisher Site | Google Scholar
  6. M. B. Ahmad, K. Shameli, W. M. Z. W. Yunus, and N. A. Ibrahim, “Synthesis and characterization of silver/clay/starch bionanocomposites by green method,” The Australian Journal of Basic and Applied Sciences, vol. 4, no. 7, pp. 2158–2165, 2010. View at: Google Scholar
  7. K. Shameli, M. B. Ahmad, W. M. Z. W. Yunus, N. A. Ibrahim, and M. Darroudi, “Synthesis and characterization of silver/talc nanocomposites using the wet chemical reduction method,” International Journal of Nanomedicine, vol. 5, pp. 743–751, 2010. View at: Publisher Site | Google Scholar
  8. K. Shameli, M. B. Ahmad, M. Zargar et al., “Synthesis and characterization of silver/montmorillonite/chitosan bionanocomposites by chemical reduction method and their antibacterial activity,” International Journal of Nanomedicine, vol. 6, pp. 271–284, 2011. View at: Google Scholar
  9. K. Szczepanowicz, J. Stefańska, R. P. Socha, and P. Warszyński, “Preparation of silver nanoparticles via chemical reduction and their antimicrobial activity,” Physicochemical Problems of Mineral Processing, vol. 45, pp. 85–98, 2010. View at: Google Scholar
  10. M. Zargar, A. A. Hamid, F. A. Bakar et al., “Green synthesis and antibacterial effect of silver nanoparticles using Vitex Negundo L,” Molecules, vol. 16, no. 8, pp. 6667–6676, 2011. View at: Google Scholar
  11. K. Shameli, M. B. Ahmad, A. Zamanian et al., “Green biosynthesis of silver nanoparticles using Curcuma longa tuber powder,” International Journal of Nanomedicine, vol. 7, pp. 5603–5610, 2012. View at: Google Scholar
  12. M. O. Shabani and A. Mazahery, “Artificial Intelligence in numerical modeling of nano sized ceramic particulates reinforced metal matrix composites,” Applied Mathematical Modelling, vol. 36, no. 11, pp. 5455–5465, 2011. View at: Google Scholar
  13. S. Mandal, P. V. Sivaprasad, S. Venugopal, and K. P. N. Murthy, “Artificial neural network modeling to evaluate and predict the deformation behavior of stainless steel type AISI 304L during hot torsion,” Applied Soft Computing Journal, vol. 9, no. 1, pp. 237–244, 2009. View at: Publisher Site | Google Scholar
  14. D. Bas and I. H. Bonyaci, “Modeling and optimization II: comparison of estimation capabilities of response surface methodology with artificial neural networks in a biochemical reaction,” Journal of Food Engineering, vol. 78, no. 3, pp. 846–854, 2007. View at: Google Scholar
  15. M. A. Akcayol and C. Cinar, “Artificial neural network based modeling of heated catalytic converter performance,” Applied Thermal Engineering, vol. 25, no. 14-15, pp. 2341–2350, 2005. View at: Publisher Site | Google Scholar
  16. M. A. Hussain, M. S. Rahman, and C. W. Ng, “Prediction of pores formation (porosity) in foods during drying: generic models by the use of hybrid neural network,” Journal of Food Engineering, vol. 51, no. 3, pp. 239–248, 2002. View at: Google Scholar
  17. M. G. Moghaddam, F. B. H. Ahmad, M. Basri, and M. B. A. Rahman, “Artificial neural network modeling studies to predict the yield of enzymatic synthesis of betulinic acid ester,” Electronic Journal of Biotechnology, vol. 15, no. 3, 2010. View at: Google Scholar
  18. M. Rashidi, M. Hayati, and A. Rezaei, “Application of artificial neural network for prediction of the oxidation behavior of aluminized nano-crystalline nickel,” Materials and Design, vol. 42, pp. 308–316, 2012. View at: Google Scholar
  19. A. Ghaffari, H. Abdollahi, M. R. Khoshayand, I. S. Bozchalooi, A. Dadgar, and M. Rafiee-Tehrani, “Performance comparison of neural network training algorithms in modeling of bimodal drug delivery,” International Journal of Pharmaceutics, vol. 327, no. 1-2, pp. 126–138, 2006. View at: Publisher Site | Google Scholar
  20. M. Khajeh, M. Ghaffari, and M. Shakeri, “Application of artificial neural network in predicting the extraction yield of essential oils of Diplotaenia cachrydifolia by supercritical fluid extraction,” The Journal of Supercritical Fluids, vol. 69, pp. 91–96, 2012. View at: Google Scholar
  21. A. Amani, P. York, H. Chrystyn, B. J. Clark, and D. Q. Do, “Determination of factors controlling the particle size in nanoemulsions using Artificial Neural Networks,” The European Journal of Pharmaceutical Sciences, vol. 35, no. 1-2, pp. 42–51, 2008. View at: Google Scholar
  22. M. B. Abdul Rahman, N. Chaibakhsh, M. Basri, A. B. Salleh, and R. N. Abdul Rahman, “Application of artificial neural network for yield prediction of lipase catalyzed synthesis of dioctyl adipate,” Applied Biochemistry and Biotechnology, vol. 158, no. 3, pp. 722–735, 2009. View at: Google Scholar
  23. X. Song, A. Mitnitski, C. MacKnight, and K. Rockwood, “Assessment of individual risk of death using self-report data: an artificial neural network compared with a frailty index,” Journal of the American Geriatrics Society, vol. 52, no. 7, pp. 1180–1184, 2004. View at: Publisher Site | Google Scholar
  24. F. Hakimiyana and V. Derhami, “Design of quantum dot semiconductor optical amplifier by intelligence methods,” Procedia Computer Science, vol. 3, pp. 449–452, 2011. View at: Google Scholar
  25. A. H. Mesbahi, D. Semnani, and S. N. Khorasani, “Performance prediction of a specific wear rate in epoxy nanocomposites with various composition content of polytetrafluoroethylen (PTFE), graphite, short carbon fibers (CF) and nano-TiO2 using adaptive neuro-fuzzy inference system (ANFIS),” Composites B, vol. 43, no. 2, pp. 549–558, 2012. View at: Google Scholar
  26. B. D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, Cambridge, UK, 1996.
  27. M. A. Razavi, A. Mortazavi, and M. Mousavi, “Dynamic modelling of milk ultrafiltration by artificial neural network,” Journal of Membrane Science, vol. 220, no. 1-2, pp. 47–58, 2003. View at: Publisher Site | Google Scholar
  28. E. Jorjani, C. S. Chelgani, and S. H. Mesroghli, “Application of artificial neural networks to predict chemical desulfurization of Tabas coal,” Fuel, vol. 87, no. 12, pp. 2727–2734, 2008. View at: Publisher Site | Google Scholar
  29. J. S. Torrecilla, L. Otero, and P. D. Sanz, “Optimization of an artificial neural network for thermal/pressure food processing: evaluation of training algorithms,” Computers and Electronics in Agriculture, vol. 56, no. 2, pp. 101–110, 2007. View at: Publisher Site | Google Scholar
  30. A. R. Gallant and H. White, “On learning the derivatives of an unknown mapping with multilayer feedforward networks,” Neural Networks, vol. 5, no. 1, pp. 129–138, 1992. View at: Google Scholar
  31. A. Nath and P. K. Chattopadhyay, “Optimization of oven toasting for improving crispness and other quality attributes of ready to eat potato-soy snack using response surface methodology,” Journal of Food Engineering, vol. 80, no. 4, pp. 1282–1292, 2007. View at: Publisher Site | Google Scholar
  32. H. N. Sin, S. Yusof, N. S. A. Hamid, and R. A. Rahman, “Optimization of enzymatic clarification of sapodilla juice using response surface methodology,” Journal of Food Engineering, vol. 73, no. 4, pp. 313–319, 2006. View at: Publisher Site | Google Scholar
  33. L. Wang, B. Yang, R. Wang, and X. Du, “Extraction of pepsin-soluble collagen from grass carp (Ctenopharyngodon idella) skin using an artificial neural network,” Food Chemistry, vol. 111, no. 3, pp. 683–686, 2008. View at: Publisher Site | Google Scholar
  34. M. Hussain, J. S. Bedi, H. Singh, M. Hussain, J. S. Bedi, and H. Singh, “Determining number of neurons in hidden layers for binary error correcting codes,” in Proceedings of Applications of Artificial Neural Networks III, pp. 1015–1022, Orlando, FL, USA, April 1992. View at: Google Scholar
  35. E. F. Ahmed, “Artificial neural networks for diagnosis and survival prediction in colon cancer,” Molecular Cancer, vol. 4, article 29, 2005. View at: Google Scholar
  36. D. Hush and B. G. Horne, “Progress in supervised neural networks,” IEEE Signal Processing Magazine, vol. 10, no. 1, pp. 8–39, 1993. View at: Google Scholar
  37. M. B. Kasiri, H. Aleboyeh, and A. Aleboyeh, “Modeling and optimization of heterogeneous photo-fenton process with response surface methodology and artificial neural networks,” Environmental Science and Technology, vol. 42, no. 21, pp. 7970–7975, 2008. View at: Publisher Site | Google Scholar
  38. Ö. G. Saracoglu, “An artificial neural network approach for the prediction of absorption measurements of an evanescent field fiber sensor,” Sensors, vol. 8, no. 3, pp. 1585–1594, 2008. View at: Google Scholar
  39. M. T. Hagan and M. B. Menhaj, “Training feedforward networks with the Marquardt algorithm,” IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989–993, 1994. View at: Publisher Site | Google Scholar
  40. L. M. Saini and M. K. Soni, “Artificial neural network based peak load forecasting using Levenberg-Marquardt and quasi-Newton methods,” IEE Proceedings Generation, Transmission & Distribution, vol. 149, no. 5, pp. 578–584, 2002. View at: Google Scholar
  41. A. Gulbag and F. Temurtas, “A study on quantitative classification of binary gas mixture using neural networks and adaptive neuro-fuzzy inference systems,” Sensors and Actuators B, vol. 115, no. 1, pp. 252–262, 2006. View at: Publisher Site | Google Scholar
  42. H. R. F. Masoumi, A. Kassim, M. Basri, D. K. Abdullah, and M. J. Haron, “Multivariate optimization in the biosynthesis of a triethanolamine (TEA)-based esterquat cationic surfactant using an artificial neural network,” Molecules, vol. 16, pp. 5538–5549, 2011. View at: Google Scholar

Copyright © 2013 Parvaneh Shabanzadeh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
