Journal of Chemistry
Volume 2017, Article ID 6560983, 12 pages
Research Article

Robust Nonlinear Regression in Enzyme Kinetic Parameters Estimation

1Faculty of Chemistry and Technology, University of Split, Ruđera Boškovića 35, 21000 Split, Croatia
2Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Ruđera Boškovića 32, 21000 Split, Croatia

Correspondence should be addressed to Tea Marasović; tmarasov@fesb.hr

Received 18 October 2016; Accepted 6 February 2017; Published 5 March 2017

Academic Editor: Murat Senturk

Copyright © 2017 Maja Marasović et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Accurate estimation of essential enzyme kinetic parameters, such as $K_m$ and $V_{max}$, is very important in modern biology. To this date, linearization of kinetic equations is still a widely established practice for determining these parameters in chemical and enzyme catalysis. Although the simplicity of linear optimization is alluring, these methods have certain pitfalls due to which they more often than not result in misleading estimates of enzyme parameters. In order to obtain more accurate predictions of parameter values, the use of nonlinear least-squares fitting techniques is recommended. However, when there are outliers present in the data, these techniques become unreliable. This paper proposes the use of a robust nonlinear regression estimator based on a modified Tukey’s biweight function that can provide more resilient results in the presence of outliers and/or influential observations. Real and synthetic kinetic data have been used to test our approach. Monte Carlo simulations are performed to illustrate the efficacy and the robustness of the biweight estimator in comparison with the standard linearization methods and ordinary least-squares nonlinear regression. We then apply this method to experimental data for the tyrosinase enzyme (EC extracted from Solanum tuberosum, Agaricus bisporus, and Pleurotus ostreatus. The results on both artificial and experimental data clearly show that the proposed robust estimator can be successfully employed to determine accurate values of $K_m$ and $V_{max}$.

1. Introduction

Enzymes are molecules that act as biological catalysts and are responsible for maintaining virtually all life processes. Most enzymes are proteins, although a few are catalytic RNA molecules. Like all catalysts, enzymes increase the rate of chemical reactions without themselves undergoing any permanent chemical change in the process. They achieve their effect by temporarily binding to the substrate and, in doing so, lowering the activation energy needed to convert it to a product. The study of the rate at which an enzyme works is called enzyme kinetics and it is often regarded as one of the most fascinating research areas in biochemistry [1].

Mathematically, the relationship between substrate concentration and reaction rate under isothermal conditions for many enzyme-catalyzed reactions can be modeled by the Michaelis-Menten equation [2]:

\[ v = \frac{V_{max}[S]}{K_m + [S]}, \]

where $v$ denotes the reaction rate, $[S]$ is the substrate concentration, $V_{max}$ is the maximum initial velocity, which is theoretically attained when the enzyme has been “saturated” by an infinite concentration of substrate, and $K_m$ is the Michaelis constant, representing a measure of affinity of the enzyme-substrate interaction. By definition, $K_m$ is equal to the concentration of the substrate at half maximum initial velocity. The Michaelis constant, $K_m$, is an intrinsic parameter of enzyme-catalyzed reactions and it is significant for its biological function [3].
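In code, the rate law above is a one-line function. The following minimal Python sketch (variable names are ours, not the paper's) simply evaluates the model:

```python
def michaelis_menten(S, Vmax, Km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

# At S == Km the rate is half of Vmax, matching the definition of Km.
v_half = michaelis_menten(2.0, Vmax=1.0, Km=2.0)  # -> 0.5
```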

The three most common methods available in the literature for determining the parameters of the Michaelis-Menten equation from a series of measurements of velocity as a function of substrate concentration are the Lineweaver-Burk plot (also known as the double reciprocal plot), the Eadie-Hofstee plot, and the Hanes-Woolf plot. All three are linearized models that transform the original Michaelis-Menten equation into a form which can be graphed as a straight line.

The Lineweaver-Burk [4] (LB) plot, still the most popular and favored plot amongst researchers, is defined by the equation:

\[ \frac{1}{v} = \frac{1}{V_{max}} + \frac{K_m}{V_{max}} \cdot \frac{1}{[S]}. \]

The $y$-intercept in this plot is $1/V_{max}$, the $x$-intercept in the second quadrant represents $-1/K_m$, and the slope of the line is $K_m/V_{max}$.

The Eadie-Hofstee [5] (EH) plot is a semireciprocal plot of $v$ versus $v/[S]$. The linear equation has the following form:

\[ v = V_{max} - K_m \cdot \frac{v}{[S]}, \]

where the $y$-intercept is $V_{max}$ and the slope is $-K_m$.

In the Hanes-Woolf [6] (HW) plot, $[S]/v$ is plotted against $[S]$. The linear equation is given by

\[ \frac{[S]}{v} = \frac{K_m}{V_{max}} + \frac{1}{V_{max}}[S], \]

where the $y$-intercept is $K_m/V_{max}$ and the slope is $1/V_{max}$.

In all of the above-described linear transformations, linear regression is used to estimate the slope and intercept of the straight line, and afterwards $K_m$ and $V_{max}$ are computed from the straight-line parameters. Although these methods are very useful for data visualization and are still widely employed in enzyme kinetic studies, each of them possesses certain deficiencies which make them prone to errors. For instance, the Lineweaver-Burk plot has the disadvantage of compressing the data points at high substrate concentrations into a small region and emphasizing the points at lower substrate concentrations, which are often the least accurate [7]. The $y$-intercept in the Lineweaver-Burk plot is equivalent to the inverse of $V_{max}$, due to which any small error in measurement gets magnified. Similarly, the Eadie-Hofstee plot has the disadvantage that $v$ appears on both axes; thus, any experimental error will also be present in both axes. In addition, experimental errors or uncertainties are propagated unevenly and become larger over the abscissa, thereby giving more weight to smaller values of $v/[S]$. The Hanes-Woolf plot is the most accurate of the three; however, its major drawback is that, again, neither ordinate nor abscissa represents independent values: both are dependent on substrate concentration.
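As an illustration of how the linearizations work, the following sketch (our own Python, not code from the paper) recovers $V_{max}$ and $K_m$ from straight-line fits. On noise-free data every transform recovers the true parameters; with noisy data the transforms weight errors differently, which is exactly the deficiency discussed above:

```python
import numpy as np

def lineweaver_burk(S, v):
    """Estimate (Vmax, Km) from a straight-line fit of 1/v versus 1/[S]."""
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    Vmax = 1.0 / intercept        # y-intercept is 1/Vmax
    Km = slope * Vmax             # slope is Km/Vmax
    return Vmax, Km

def hanes_woolf(S, v):
    """Estimate (Vmax, Km) from a straight-line fit of [S]/v versus [S]."""
    slope, intercept = np.polyfit(S, S / v, 1)
    Vmax = 1.0 / slope            # slope is 1/Vmax
    Km = intercept * Vmax         # y-intercept is Km/Vmax
    return Vmax, Km

# Noise-free data with true Vmax = 1.0, Km = 2.0.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
v = 1.0 * S / (2.0 + S)
print(lineweaver_burk(S, v))      # ~ (1.0, 2.0)
print(hanes_woolf(S, v))          # ~ (1.0, 2.0)
```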

In order to reduce the errors due to the linearization of parameters, Wilkinson [8] proposed the use of least-squares nonlinear regression for more accurate estimation of enzyme kinetic parameters. Nonlinear regression allows direct determination of parameter values from untransformed data points. The process starts with initial estimates and then iteratively converges on parameter estimates that provide the best fit of the underlying model to the actual data points [9, 10]. The algorithms used include the Levenberg-Marquardt method, the Gauss-Newton method, the steepest-descent method, and simplex minimization. Numerous software packages, such as Excel, MATLAB, and GraphPad Prism, nowadays include readily available routines and scripts to perform nonlinear least-squares fitting [11, 12].
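A minimal Gauss-Newton iteration for the Michaelis-Menten model can be sketched as follows (an illustrative reference implementation; the packaged routines mentioned above wrap more careful versions of the same idea, with step control and convergence tests):

```python
import numpy as np

def fit_mm_gauss_newton(S, v, Vmax0, Km0, n_iter=50):
    """Gauss-Newton least-squares fit of v = Vmax*S/(Km + S),
    starting from the initial estimates (Vmax0, Km0)."""
    theta = np.array([Vmax0, Km0], dtype=float)
    for _ in range(n_iter):
        Vmax, Km = theta
        resid = v - Vmax * S / (Km + S)
        # Jacobian of the model with respect to (Vmax, Km).
        J = np.column_stack([S / (Km + S), -Vmax * S / (Km + S) ** 2])
        # Linearized step: solve J @ step ~= resid in the least-squares sense.
        step, *_ = np.linalg.lstsq(J, resid, rcond=None)
        theta = theta + step
    return theta

# Noise-free data with true Vmax = 1.0, Km = 2.0; a rough initial guess.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
v = 1.0 * S / (2.0 + S)
Vmax_hat, Km_hat = fit_mm_gauss_newton(S, v, Vmax0=0.8, Km0=1.0)
```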

Least-squares nonlinear regression has been criticized for its performance in dealing with experimental data. This is mainly due to the fact that the implicit assumptions associated with nonlinear regression are in general not met in the context of deviations that appear as a result of biological errors (e.g., variations in the enzyme preparations due to oxidation or contamination) and/or experimental errors (e.g., variations in the measured volume of substrates and enzymes, imprecision of the instrumentation). In the presence of outliers or influential observations in the data, the ordinary least-squares method can result in misleading values for the parameters of the nonlinear regression, and estimates may no longer be reliable [13].

In this paper, we propose the use of a robust nonlinear regression estimator based on a modified Tukey’s biweight function for determining the parameters of the Michaelis-Menten equation from experimental measurements in enzyme kinetics. The main idea is to fit a model to the data that gives resilient results in the presence of influential observations and/or outliers. To the best of our knowledge, this is the first study that examines the use of this technique for application in Michaelis-Menten enzyme analysis. We employ Monte Carlo simulations to validate the efficacy of the proposed procedure in comparison with the ordinary least-squares method and the Eadie-Hofstee, Hanes-Woolf, and Lineweaver-Burk plots. In addition, we illustrate the viability of our method by estimating the kinetic parameters of tyrosinase, an important enzyme widely distributed in microorganisms, animals, and plants, responsible for melanin production in mammals and enzymatic browning in plants, extracted from potato and two edible mushrooms.

The remainder of the paper is organized as follows. Section 2 provides a brief overview of the robust estimation model. Section 3 describes the experimental setup used in this research and the diagnostics that will be used to evaluate the effectiveness of the proposed procedure in determination of enzyme kinetic parameters. Results are discussed in Section 4. Finally, Section 5 summarizes the paper with a few concluding remarks.

2. Robust Nonlinear Regression

Nonlinear regression, like linear regression, relies heavily on the assumption that the scatter of data around the ideal curve follows, at least approximately, a Gaussian or normal distribution. This assumption leads to the well-known regression goal: to minimize the sum of the squares of the vertical distances (a.k.a. residuals) between the points and the curve. In practice, however, this assumption does not always hold true. Analytical data often contain outliers that can play havoc with standard regression methods based on the normality assumption, causing them to produce more or less strongly biased results, depending on the magnitude of the deviation and/or the sensitivity of the procedure. It is not unusual to find a nontrivial proportion of outlying observations in data sets of some processes [14].

Outliers are most commonly thought to be extreme values which are a result of measurement or experimental errors. Barnett and Lewis [15] provide a more cautious definition of the term outlier, describing it as the observation (or subset of observations) that appears to be inconsistent with the remainder of the dataset. This definition also includes observations that do not follow the majority of the data, such as values that have been measured correctly but are, for one reason or another, far away from the other data values, while the formulation “appears to be inconsistent” reflects the subjective judgement of the observer as to whether or not an observation is declared to be outlying.

The ordinary least-squares (OLS) estimate of the parameter vector $\beta$ is obtained as the solution of the problem:

\[ \hat{\beta}_{OLS} = \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - f(x_i, \beta) \right)^2, \]

where $n$ denotes the number of observations, $x_i$ are the $p$-dimensional vectors of predictor variables (or regressors), $y = (y_1, \ldots, y_n)$ is the vector of responses, and $f$ is the model function. Since all data points are attributed the same weights, OLS implicitly lets the observations with very large residuals dominate and, consequently, the estimated parameters end up distorted if outliers are present.

In order to achieve robustness in coping with the problem of outliers, Huber [16] introduced a class of so-called $M$-estimators, for which the sum of a function $\rho$ of the residuals is minimized. The resulting vector of parameters estimated by an $M$-estimator is then

\[ \hat{\beta}_{M} = \arg\min_{\beta} \sum_{i=1}^{n} \rho\!\left( \frac{y_i - f(x_i, \beta)}{\hat{\sigma}} \right). \]

The residuals are standardized by a measure of dispersion $\hat{\sigma}$ to guarantee scale equivariance (i.e., independence with respect to the measurement units of the dependent variable). The function $\rho$ must be even, nondecreasing for positive values, and less increasing than the square.

The minimization in (6) can always be done directly. However, it is often simpler to differentiate $\rho$ with respect to $\beta$ and solve for the root of the derivative. When this differentiation is possible, the $M$-estimator is said to be of $\psi$-type. Otherwise, the $M$-estimator is said to be of $\rho$-type.

Let $\psi$ be the derivative of $\rho$. Assuming $\hat{\sigma}$ is known, and defining the standardized residuals $u_i = (y_i - f(x_i, \beta))/\hat{\sigma}$ and the weights $w_i = \psi(u_i)/u_i$, the estimates can be obtained by solving the system of equations:

\[ \sum_{i=1}^{n} w_i\, u_i\, \frac{\partial f(x_i, \beta)}{\partial \beta} = 0. \]

The weights are dependent upon the residuals, the residuals are dependent upon the estimated coefficients, and the estimated coefficients are dependent upon the weights. Hence, to solve for $M$-estimators, an iteratively reweighted least-squares (IRLS) algorithm is employed. Starting from some initial estimates $\beta^{(0)}$, at each iteration until convergence, this algorithm computes the residuals and the associated weights from the previous iteration and yields new weighted least-squares estimates.
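The IRLS loop described above can be sketched as follows for the Michaelis-Menten model (our own illustrative Python; for brevity the scale here is estimated with the MAD, whereas the paper uses the Rousseeuw-Croux estimator described in Section 2.1):

```python
import numpy as np

def irls_mm(S, v, weight_fn, Vmax0, Km0, n_iter=30):
    """Iteratively reweighted least squares for v = Vmax*S/(Km + S).

    weight_fn maps standardized residuals to weights in [0, 1];
    passing unit weights reduces the loop to plain Gauss-Newton.
    """
    theta = np.array([Vmax0, Km0], dtype=float)
    for _ in range(n_iter):
        Vmax, Km = theta
        resid = v - Vmax * S / (Km + S)
        # Robust scale of the residuals (MAD-based, for illustration).
        scale = np.median(np.abs(resid - np.median(resid))) / 0.6745 + 1e-12
        w = weight_fn(resid / scale)
        J = np.column_stack([S / (Km + S), -Vmax * S / (Km + S) ** 2])
        # Weighted least-squares step: minimize || sqrt(w)*(resid - J@step) ||.
        sw = np.sqrt(w)
        step, *_ = np.linalg.lstsq(J * sw[:, None], resid * sw, rcond=None)
        theta = theta + step
    return theta

# With unit weights on clean data the loop recovers Vmax = 1.0, Km = 2.0.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
v = 1.0 * S / (2.0 + S)
theta_hat = irls_mm(S, v, lambda u: np.ones_like(u), 0.8, 1.0)
```

Plugging a redescending weight function (such as the biweight defined next) into `weight_fn` turns this into the robust estimator.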

2.1. Objective Function

Several $\rho$ functions can be used. Here we opted for Tukey’s biweight [17] or bisquare function, defined as

\[ \rho(u) = \begin{cases} \dfrac{c^2}{6}\left[ 1 - \left( 1 - \left( \dfrac{u}{c} \right)^2 \right)^3 \right], & |u| \le c, \\[2mm] \dfrac{c^2}{6}, & |u| > c, \end{cases} \]

where $c$ is a tuning constant and $u$ is the standardized residual.

The corresponding $\psi$ function is

\[ \psi(u) = \begin{cases} u \left( 1 - \left( \dfrac{u}{c} \right)^2 \right)^2, & |u| \le c, \\[2mm] 0, & |u| > c. \end{cases} \]

Tukey’s biweight estimator has a smoothly redescending $\psi$ function that prevents extreme outliers from affecting the calculation of the biweight estimates by assigning them a zero weighting. As can be seen in Figure 1, the weights for the biweight estimator decline as soon as $u$ departs from 0 and are 0 for $|u| \ge c$. Smaller values of $c$ produce more resistance to outliers, but at the expense of lower efficiency when the errors are normally distributed. The tuning constant is generally picked to give reasonably high efficiency in the normal case; in particular, $c = 4.685$ produces a 95% efficiency when the errors are normal, while still guaranteeing resistance to a substantial fraction of outliers.

Figure 1: Tukey’s biweight estimator objective, $\psi$, and weight functions for $c = 4.685$.
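Under the definitions above, the biweight objective and IRLS weight functions take only a few lines (an illustrative sketch using the conventional tuning constant $c = 4.685$):

```python
import numpy as np

def tukey_rho(u, c=4.685):
    """Tukey biweight objective: bounded at c**2/6 beyond the cutoff."""
    u = np.asarray(u, dtype=float)
    out = np.full_like(u, c * c / 6.0)
    inside = np.abs(u) <= c
    out[inside] = (c * c / 6.0) * (1.0 - (1.0 - (u[inside] / c) ** 2) ** 3)
    return out

def tukey_weight(u, c=4.685):
    """IRLS weight w(u) = psi(u)/u; identically zero for |u| > c."""
    u = np.asarray(u, dtype=float)
    w = np.zeros_like(u)
    inside = np.abs(u) <= c
    w[inside] = (1.0 - (u[inside] / c) ** 2) ** 2
    return w

u = np.array([0.0, 2.0, 4.685, 10.0])
print(tukey_weight(u))   # weights decline from 1 at u = 0 to 0 at |u| >= c
```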

In an application, an estimate of the standard deviation of the errors is needed in order to use these results. Usually a robust measure of spread is used in preference to the standard deviation of the residuals. A common approach is to take $\hat{\sigma} = \mathrm{MAD}/0.6745$, where MAD is the median absolute deviation. Despite having the best possible breakdown point of 50%, the MAD is not without its weaknesses. It exhibits superior statistical efficiency for contaminated data (i.e., data that contain extreme scores); however, when the data approach a normal distribution, the MAD is only about 37% efficient. Furthermore, it is ill-suited for asymmetrical distributions, since it attaches equal importance to positive and negative deviations from the location estimate.

Hence, the scale parameter is computed using the Rousseeuw-Croux $Q_n$ estimator [18]:

\[ Q_n = d \cdot \left\{ |x_i - x_j|;\ i < j \right\}_{(k)}, \]

where $d = 2.2219$ is a calibration factor and $k = \binom{h}{2}$, with $h = \lfloor n/2 \rfloor + 1$ roughly half the number of observations. The $Q_n$ estimator has the optimal breakdown point; it is equally suitable for both symmetrical and asymmetrical distributions and is considerably more efficient (about 82%) than the MAD under a Gaussian distribution.
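Both scale estimates are straightforward to compute. The sketch below gives the MAD alongside an O(n^2) reference implementation of $Q_n$ that uses only the asymptotic calibration factor (finite-sample correction factors are omitted for brevity):

```python
import numpy as np
from itertools import combinations

def mad_scale(x):
    """MAD-based scale estimate, calibrated for consistency at the normal."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x))) / 0.6745

def qn_scale(x, d=2.2219):
    """Rousseeuw-Croux Qn: a calibrated order statistic of the pairwise
    absolute differences (simple O(n^2) reference implementation)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    diffs = sorted(abs(a - b) for a, b in combinations(x, 2))
    h = n // 2 + 1
    k = h * (h - 1) // 2          # k-th smallest pairwise distance
    return d * diffs[k - 1]

# Both estimators should return roughly 1 for a standard normal sample.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
print(mad_scale(x), qn_scale(x))
```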

3. Experimental Setup

To illustrate the efficacy of the proposed approach, we use both artificial data, generated from Monte Carlo simulations, and experimental data for the tyrosinase enzyme (EC extracted from three different sources.

3.1. Monte Carlo Simulations

Simulation studies are useful for gaining insight into the examined algorithms’ strengths and weaknesses, such as robustness, against a number of variable factors. There are three outlier scenarios and a total of 18 different situations considered in this research. The data sets are generated from the model:

\[ v_i = \frac{V_{max} S_i}{K_m + S_i} + \varepsilon_i, \]

where the regression coefficients $V_{max}$ and $K_m$ are fixed for each situation. The explanatory variables $S_i$ are set to fixed substrate concentrations, and a zero-mean, unit-variance random number with Gaussian density is added as the measurement error $\varepsilon_i$.

The factors considered in this simulation are (1) the level of outlier contamination, (2) the sample size (small, medium, or large), and (3) the distance of the outliers from the clean observations: 10, 50, or 100 standard deviations. There are 1200 replications for each scenario and all simulations are carried out in MATLAB. The 3 scenarios and the 18 situations considered in this research are summarized in Table 1.
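A generator for such contaminated data sets might look as follows (the parameter values, noise level, and concentration grid here are illustrative placeholders, not the paper's simulation settings):

```python
import numpy as np

def simulate_mm_data(n, frac_outliers, outlier_sd, Vmax=1.0, Km=2.0, seed=0):
    """Generate Michaelis-Menten rates with Gaussian noise, then shift a
    random fraction of points by a multiple of the noise standard deviation.
    All numeric settings are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    noise_sd = 0.05
    S = np.linspace(0.2, 10.0, n)
    v = Vmax * S / (Km + S) + rng.normal(0.0, noise_sd, n)
    # Contaminate a random subset with symmetric shifts of +/- outlier_sd sigmas.
    n_out = int(round(frac_outliers * n))
    idx = rng.choice(n, size=n_out, replace=False)
    v[idx] += noise_sd * outlier_sd * rng.choice([-1.0, 1.0], size=n_out)
    return S, v

# E.g., 50 points, 20% contamination, outliers 50 standard deviations away.
S, v = simulate_mm_data(50, 0.2, 50.0)
```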

Table 1: The 18 situations considered in the simulations.

These simulated data are then used to estimate the values of $K_m$ and $V_{max}$ using different fitting techniques. The mean estimated values of $K_m$ and $V_{max}$ for a particular scenario and fitting technique are subsequently calculated by averaging the $K_m$ and $V_{max}$ values obtained in each of the 1200 trials. The estimator efficacy is assessed in terms of its bias, precision, and accuracy. Bias is defined as the absolute difference between the mean estimated parameter value and the known parameter value $\theta$:

\[ \mathrm{Bias} = \left| \frac{1}{N} \sum_{i=1}^{N} \hat{\theta}_i - \theta \right|, \]

where $N$ is the total number of replications in the simulated scenario. The term precision refers to the absence of random errors or variability. It is measured by the coefficient of variation ($CV$), that is, the standard deviation expressed as a percentage of the mean:

\[ CV = 100 \cdot \frac{\mathrm{SD}\left(\hat{\theta}\right)}{\overline{\hat{\theta}}}. \]

The prediction accuracy is defined as the overall distance between the estimated values and the true values. The accuracy is measured by a normalized mean squared error (NMSE), that is, the mean of the squared differences between the estimated and the known parameter values normalized by the mean of the estimated data:

\[ \mathrm{NMSE} = \frac{(1/N) \sum_{i=1}^{N} \left( \hat{\theta}_i - \theta \right)^2}{\overline{\hat{\theta}}}, \]

where again $N$ is the total number of replications in the simulated scenario.
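The three diagnostics translate directly into code (a minimal sketch following the definitions above):

```python
import numpy as np

def bias(estimates, true_value):
    """Absolute difference between the mean estimate and the true value."""
    return abs(np.mean(estimates) - true_value)

def coeff_variation(estimates):
    """Standard deviation expressed as a percentage of the mean."""
    return 100.0 * np.std(estimates) / np.mean(estimates)

def nmse(estimates, true_value):
    """Mean squared error normalized by the mean of the estimates."""
    e = np.asarray(estimates, dtype=float)
    return np.mean((e - true_value) ** 2) / np.mean(e)
```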

3.2. Enzyme Data Sets

Tyrosinase (EC is a ubiquitous enzyme responsible for melanization in animals and plants [19, 20]. In the presence of molecular oxygen, this enzyme catalyzes the hydroxylation of monophenols to o-diphenols (cresolase activity) and their subsequent oxidation to o-quinones (catecholase activity). The latter products are unstable in aqueous solution, further polymerizing to undesirable brown, red, and black pigments. Tyrosinase has attracted a lot of attention with respect to its biotechnological applications [21], owing to its attractive catalytic ability, as the catechol products are useful as drugs or drug synthons.

For the purposes of the present study, tyrosinase was extracted from potato (Solanum tuberosum) and two species of common edible mushrooms: Agaricus bisporus (Ab) and Pleurotus ostreatus (Po). All the source materials were purchased from the local green market in Split, Croatia. The enzyme extraction was prepared with 100 mL of cold 50 mM phosphate buffer (pH 6.0) per 50 g of source material. The homogenates were centrifuged at 5000 rpm for 30 min and the supernatant was collected. The sediments were mixed with cold phosphate buffer and allowed to sit in cold conditions with occasional shaking. The buffer containing the sediments was then centrifuged once again to collect the supernatant. These supernatants were subsequently used as sources of enzyme.

The tyrosinase activity was determined spectrophotometrically at room temperature, by measuring the conversion of L-DOPA to the red coloured oxidation product dopachrome at 475 nm [22]. The reaction mixture—obtained after adding an aliquot of enzyme extract to a cuvette containing 1.2 mL of 50 mM phosphate buffer (pH 6.0) and 0.8 mL of 10 mM L-DOPA—was immediately shaken and the increase in absorbance was measured for 3 minutes. The change in absorbance was proportional to the enzyme concentration. The initial rate was calculated from the linear part of the recorded progress curve. One unit of enzyme was defined as the amount which catalyzed the transformation of L-DOPA to dopachrome per minute under the above conditions, using the dopachrome extinction coefficient at 475 nm.

To determine the values of $K_m$ and $V_{max}$ for tyrosinase, the experimental kinetic data summarized in Table 2 were gathered by measuring enzyme activity in a cuvette where an aliquot of enzyme solution was added to 2 mL of 50 mM phosphate buffer (pH 6.0) containing various concentrations of L-DOPA (0–10 mM). In this case, the estimator performance is evaluated by computing the mean absolute error (MAE), that is, the mean of the absolute differences between the observed reaction rate $v_i$ and the expected reaction rate $\hat{v}_i$, calculated using the estimates of $K_m$ and $V_{max}$ at a concentration $S_i$:

\[ \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| v_i - \hat{v}_i \right|, \]

where $n$ is the number of experimental data points. Mean absolute error is a regularly employed quality measure that provides an objective assessment of how well the various estimated values of $K_m$ and $V_{max}$ fit the untransformed experimental data.
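The MAE computation is a one-liner (sketch):

```python
import numpy as np

def mean_absolute_error(v_obs, v_pred):
    """Mean of absolute differences between observed and predicted rates."""
    return np.mean(np.abs(np.asarray(v_obs) - np.asarray(v_pred)))
```

Here `v_pred` would be the Michaelis-Menten rates evaluated at the measured concentrations using each method's estimates of $K_m$ and $V_{max}$.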

Table 2: Input experimental kinetic data sets.

4. Results and Discussion

4.1. Parameter Estimation Using Simulated Data

Figures 2–5 provide a summary of our simulation outcomes for different sample sizes, contamination levels, and outlier distances. By examining the simulation results, it is evident that the modified robust Tukey’s biweight estimator outperforms the four alternative fitting techniques with respect to bias, coefficient of variation, and normalized mean square error, yielding both accurate and precise estimates of $K_m$ and $V_{max}$ under all test conditions. For example, for a small sample size with a minimal level of contamination present in the data and minimal outlier scatter (Situation 1, Table 1), the average $K_m$ and $V_{max}$ estimates produced by the Eadie-Hofstee, Hanes-Woolf, Lineweaver-Burk, and ordinary least-squares methods lie in close proximity to the values obtained by the robust nonlinear regression (RNR) method, and the Hanes-Woolf, Lineweaver-Burk, and ordinary least-squares methods also have a low bias (Figures 2 and 4); however, once the reported standard deviations are scaled by the corresponding means, the resulting coefficients of variation show that their estimates are much more imprecise than those of the RNR estimator and as such are of limited utility. Figure 6 shows the plots of fitted reaction curves for randomly selected replications of the situations with small sample size (Situations 1–6).

Figure 2: Mean estimated values (black dots) and standard deviations of the Michaelis constant, $K_m$, for different simulated scenarios. Red lines denote the true parameter value.
Figure 3: Normalized mean square error of $K_m$ for different scenarios.
Figure 4: Mean estimated values (black dots) and standard deviations of the maximum initial velocity, $V_{max}$, for different simulated scenarios. Red lines denote the true parameter value.
Figure 5: Normalized mean square error of $V_{max}$ for different scenarios.
Figure 6: Curves fitted using different fitting techniques for random replications of situations 1–6.

For a medium sample size with the same levels of contamination and outlier scatter (Situation 7, Table 1), the analysis for the RNR approach yielded almost identical average estimates of $K_m$ and $V_{max}$. The same pattern holds for the EH, HW, LB, and OLS estimates and for the corresponding coefficients of variation, obtained by scaling the standard deviations by the appropriate means.

Similarly, for a large sample size with the same levels of contamination and outlier scatter (Situation 13, Table 1), the relative behaviour of the five estimators is unchanged. It should be noted that, although in all of the aforementioned cases the Eadie-Hofstee method has coefficients of variation that are highly comparable to those of the ordinary least-squares method, the EH estimates of $K_m$ and $V_{max}$ are much further from the true values than the estimates obtained by the Hanes-Woolf, ordinary least-squares, and robust nonlinear regression methods.

With the increase of the contamination level and the outlier scatter, the average estimates of $K_m$ and $V_{max}$ produced by the linear plots and the OLS method begin to deviate significantly. However, the modified robust Tukey’s biweight estimator is able to keep the errors in check and produce results that remain much closer to the true parameter values.

Numerically, by looking at the estimated values alone, it is hard to tell which of the selected estimators has the overall best performance; nevertheless, the normalized mean square error reveals the situations for which the error is minimal. Thus, from Figures 3 and 5, we may say that the best (most accurate) values of $K_m$ and $V_{max}$ are obtained in Situation 17, for which the RNR error values are at their minimum, indicating that the estimated values are very close to the true values. The worst error values of $K_m$ and $V_{max}$ for the RNR method are obtained in Situation 4 (Figures 3 and 5); in all other situations, the normalized mean square errors for the RNR method remain small (Figures 3 and 5). This demonstrates the credibility and the robustness of the proposed modified Tukey’s biweight estimator relative to the other methods when outliers or influential observations are present in the data. If we compare the robust nonlinear regression method with the ordinary least-squares method, we find that the normalized mean square errors of the RNR method are on average more than 10 times lower than those produced by the OLS method.

4.2. Parameter Estimation Using Experimental Data

The viability of the proposed robust estimator was also tested using the experimental kinetic data for the tyrosinase enzyme. The corresponding $V_{max}$ and $K_m$ values produced by the different estimation models are given in Table 3. Upon closer inspection of these values, it can be observed that, in the case of the Ab mushroom and potato tyrosinase, the kinetic values yielded by the RNR method (599 and 0.38 for the Ab mushroom and 10740 and 0.118 for the potato, resp.) are much closer to the values yielded by the HW plot (555 and 0.381 for the Ab mushroom and 10659 and 0.11 for the potato, resp.) than by the OLS method (545 and 0.376 for the Ab mushroom and 25720 and 0.209 for the potato, resp.). Furthermore, it is interesting to note that the parameter values yielded by the LB plot (263 and 0.248 for the Ab mushroom, 360 and 0.002 for the Po mushroom, and 1216 and 0.021 for the potato) are, for all three tyrosinase sources, very far from the values yielded by the other four estimation methods. Figures 7(a), 7(b), and 8(a) show the curves fitted to the experimental data using the modified Tukey’s biweight estimator in comparison with the standard linearization methods and the ordinary least-squares nonlinear regression. The mean absolute errors between the predicted reaction rates and the actual data are plotted in the right graph in Figure 8. In particular, the mean errors for the RNR method are on the order of 0.001–0.005 for all three sources, which shows a good fit of the achieved model.

Table 3: Kinetic parameter ($V_{max}$ and $K_m$) values and mean absolute errors for tyrosinase extracted from different sources, estimated using different methods.
Figure 7: Curves fitted using different fitting techniques for tyrosinase extracted from Agaricus bisporus (a) and Pleurotus ostreatus (b) mushrooms.
Figure 8: Curves fitted using different fitting techniques for tyrosinase extracted from potato (a). Mean absolute errors for different regression models used (b).

5. Conclusion

When an enzymatic reaction follows Michaelis-Menten kinetics, the equation for the initial velocity of the reaction as a function of the substrate concentration is characterized by two parameters, the Michaelis constant, $K_m$, and the maximum velocity of the reaction, $V_{max}$. To this day, these parameters are routinely estimated using one of three different linearization models: the Lineweaver-Burk plot ($1/v$ versus $1/[S]$), the Eadie-Hofstee plot ($v$ versus $v/[S]$), and the Hanes-Woolf plot ($[S]/v$ versus $[S]$). Although the linear plots obtained by these methods are very illustrative and useful in analyzing the behavior of enzymes, the common problem they all share is the fact that transformed data usually do not satisfy the assumptions of linear regression, namely, that the scatter of data around the straight line follows a Gaussian distribution and that the standard deviation is equal for every value of the independent variable.

A more accurate approximation of the Michaelis-Menten parameters can be achieved through the use of nonlinear least-squares fitting techniques. However, these techniques require a good initial guess and offer no guarantee of convergence to the global minimum. On top of that, they are very sensitive to the presence of outliers and influential observations in the data, in which case they are likely to produce biased, inaccurate, and imprecise parameter estimates.

In this paper, a robust estimator of nonlinear regression parameters based on a modification of Tukey’s biweight function is introduced. Robust regression techniques have received considerable attention in the mathematical statistics literature, but they are yet to receive a similar amount of attention from practitioners performing data analysis. Robust nonlinear regression aims to fit a model to the data so that the results are more resilient to extreme values and remain relatively consistent when the errors come from a heavy-tailed distribution. The experimental comparisons, using both real and synthetic kinetic data, show that the proposed robust nonlinear estimator based on a modified Tukey’s biweight function outperforms the standard linearization models and the ordinary least-squares method and yields superior results with respect to bias, accuracy, and consistency when there are outliers or influential observations present in the data.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

The work reported in this paper was supported by Croatian Science Foundation Research Project “Investigation of Bioactive Compounds from Dalmatian Plants: Their Antioxidant, Enzyme Inhibition and Health Properties” (IP 2014096897).


References

  1. H. J. Fromm and M. Hargrove, Essentials of Biochemistry, Springer, Berlin, Germany, 2012.
  2. K. A. Johnson and R. S. Goody, “The original Michaelis constant: translation of the 1913 Michaelis-Menten paper,” Biochemistry, vol. 50, no. 39, pp. 8264–8269, 2011.
  3. A. Cornish-Bowden, “The origins of enzyme kinetics,” FEBS Letters, vol. 587, no. 17, pp. 2725–2730, 2013.
  4. H. Lineweaver and D. Burk, “The determination of enzyme dissociation constants,” Journal of the American Chemical Society, vol. 56, no. 3, pp. 658–666, 1934.
  5. G. S. Eadie, “The inhibition of cholinesterase by physostigmine and prostigmine,” The Journal of Biological Chemistry, vol. 146, pp. 85–93, 1942.
  6. C. S. Hanes, “Studies on plant amylases,” Biochemical Journal, vol. 26, no. 5, pp. 1406–1421, 1932.
  7. A. Fersht, Structure and Mechanism in Protein Science: A Guide to Enzyme Catalysis and Protein Folding, Freeman, 2000.
  8. G. N. Wilkinson, “Statistical estimations in enzyme kinetics,” Biochemical Journal, vol. 80, pp. 324–332, 1961.
  9. H. Motulsky and A. Christopoulos, Fitting Models to Biological Data Using Linear and Nonlinear Regression, GraphPad Software Inc., 2003.
  10. C. Cobelli and E. Carson, Introduction to Modeling in Physiology and Medicine, Academic Press, 2008.
  11. S. R. Nelatury, C. F. Nelatury, and M. C. Vagula, “Parameter estimation in different enzyme reactions,” Advances in Enzyme Research, vol. 2, no. 1, pp. 14–26, 2014.
  12. G. Kemmer and S. Keller, “Nonlinear least-squares data fitting in Excel spreadsheets,” Nature Protocols, vol. 5, no. 2, pp. 267–281, 2010.
  13. C. Lim, P. K. Sen, and S. D. Peddada, “Robust nonlinear regression in applications,” Journal of the Indian Society of Agricultural Statistics, vol. 67, no. 2, pp. 215–234, 2013.
  14. R. A. Maronna, R. D. Martin, and V. J. Yohai, Robust Statistics: Theory and Methods, Wiley Series in Probability and Statistics, John Wiley & Sons, New York, NY, USA, 2006.
  15. V. Barnett and T. Lewis, Outliers in Statistical Data, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Chichester, England, 3rd edition, 1994.
  16. P. J. Huber, “Robust estimation of a location parameter,” Annals of Mathematical Statistics, vol. 35, no. 1, pp. 73–101, 1964.
  17. F. Mosteller and J. W. Tukey, Data Analysis and Regression, Addison-Wesley, 1977.
  18. P. J. Rousseeuw and C. Croux, “Alternatives to the median absolute deviation,” Journal of the American Statistical Association, vol. 88, no. 424, pp. 1273–1283, 1993.
  19. M. R. Loizzo, R. Tundis, and F. Menichini, “Natural and synthetic tyrosinase inhibitors as antibrowning agents: an update,” Comprehensive Reviews in Food Science and Food Safety, vol. 11, no. 4, pp. 378–398, 2012.
  20. S. Y. Lee, N. Baek, and T.-G. Nam, “Natural, semisynthetic and synthetic tyrosinase inhibitors,” Journal of Enzyme Inhibition and Medicinal Chemistry, vol. 31, no. 1, pp. 1–13, 2016.
  21. K. U. Zaidi, A. S. Ali, S. A. Ali, and I. Naaz, “Microbial tyrosinases: promising enzymes for pharmaceutical, food bioprocessing, and environmental industry,” Biochemistry Research International, vol. 2014, Article ID 854687, 16 pages, 2014.
  22. Z. Yang and F. Wu, “Catalytic properties of tyrosinase from potato and edible fungi,” Biotechnology, vol. 5, no. 3, pp. 344–348, 2006.