
Abstract and Applied Analysis

Volume 2013 (2013), Article ID 231735, 7 pages

http://dx.doi.org/10.1155/2013/231735

## A New Adaptive LSSVR with Online Multikernel RBF Tuning to Evaluate Analog Circuit Performance

^{1}College of Engineering, Bohai University, Jinzhou 121013, China
^{2}Department of Engineering, Faculty of Engineering and Science, The University of Agder, 4898 Grimstad, Norway

Received 6 November 2013; Accepted 29 November 2013

Academic Editor: Ming Liu

Copyright © 2013 Aihua Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

To meet the demand for fast online performance evaluation of analog circuits, this paper proposes a novel evaluation strategy based on an adaptive Least Squares Support Vector Regression (LSSVR) that employs a multikernel RBF. The superiority of the multikernel RBF is that it gives the kernel function more flexibility online, such as tuning of its bandwidths; the kernel parameters in turn determine how the input signal is mapped to the feature space, so a good plant model can be derived by discarding redundant features. The proposed evaluation strategy is verified on the typical Sallen-Key low-pass filter circuit via eight performance indexes. Simulation results reveal that the evaluation performance and, especially, the testing speed of the proposed method are superior to those of the traditional LSSVR and ε-SVR, making it suitable for online deployment.

#### 1. Introduction

Although many analog electronic functions have been replaced with digital equivalents, there is still a need for analog circuits [1], for example in voice signal conversion and in the microprocessing and conversion of sensor signals. In fact, no electronic circuit dispenses entirely with analog circuitry [2].

Performance evaluation or detection is vital in this age of large electronic equipment pervading our lives. Physical damage, manufacturing defects, aging, radiation, temperature changes, and power surges are all possible causes of performance change. Moreover, the future state of electronic equipment can be forecast via performance detection, and some catastrophic errors can thus be avoided, for example in the spacecraft engineering field. The purpose of analog circuit performance evaluation is to guarantee that an electronic system is in a good running state before it is put into use and/or to realize fast online performance detection of the electronic system to assure its running status. Some researchers focus on data-driven methods, and many papers [3–6] have attempted to use them.

To this end, some researchers have focused on analog circuit fault diagnosis and performance evaluation [7]. These efforts are still at an early stage, and the technique has developed slowly because of the growing complexity of electronic equipment. Nowadays, the common techniques include neural networks, fuzzy logic, genetic algorithms, and so forth, which offer ample room for development in analog circuit performance evaluation [8–10]; among them, neural networks and the support vector machine (SVM) have been extensively applied and researched. Aihua and Zhongdang [11], focusing on the portability and low cost of analog circuit performance evaluation methods, first proposed a support vector regression (SVR) evaluation strategy, which also inherits the evaluation precision of SVR. However, its low convergence rate is its largest defect; this problem is also discussed in [12, 13].

To achieve a superior convergence rate, Suykens and Vandewalle [14] proposed the LSSVR method. The primary advantage of this approach is that the training process follows the structural risk minimization principle while replacing the inequality constraints with equality constraints, which greatly improves the operation speed; the LSSVR formulation also involves fewer tuning parameters. A drawback, however, is that sparseness is lost in the LSSVR case. Some researchers have therefore investigated imposing sparseness by pruning support values from the sorted support value spectrum that results from the solution of the linear system. In later work, Suykens et al. [14, 15] presented a sparse approximation strategy to remedy this defect of LSSVR. Although this method realizes decremental pruning based on an ascending sort under a set threshold constraint, it remains difficult to accept or reject training samples given the uniformity of the LSSVR spectrum. Wang et al. [16] employed a novel LSSVR algorithm in a hidden space, and another matrix-pattern LSSVR model for linear classification problems was discussed in [17]. Furthermore, Zhao and Sun [18] adopted a recursive algorithm to reduce the growing data samples of LSSVR and obtain a sparse solution. Theoretically, more training samples yield a more accurate learning machine, but this is rarely practical.

Kernel function design is the most important component of LSSVR: the kernel is a nonlinear mapping from the input space to the feature space [19]. Its main role is to convert a linearly nonseparable classification problem in a low-dimensional space into a separable one in a high-dimensional space, so it plays a crucial role in modeling and control performance. Kernel functions are generally parametric, and the numerical values of these parameters significantly affect both modeling and control performance. Depending on the initial values of the kernel parameters, features significant for the model may be discarded, or redundant and irrelevant features may be mapped into the feature space; better performance may then be achieved by discarding some features [13, 14]. Owing to these factors, the selection of optimal kernel parameters is vital for solving the SVR problem. There are many optimization methods for the kernel parameters, such as particle swarm optimization, pattern search, and grid search [20, 21], but they mostly aim at offline computation of the kernel parameters. In [22], a gradient optimization method was used to adjust the variance of a single RBF kernel (SRBF) online.

Building on [15, 23], this work presents an LSSVR-based analog circuit evaluation strategy that also treats the circuit and signal online but adopts a multikernel RBF to adjust the kernel widths, which not only benefits the LSSVR regression but also greatly improves the evaluation speed.

In summary, this paper proposes an adaptive LSSVR with online multikernel RBF tuning: the kernel bandwidths are adjusted online, and the resulting kernel parameters determine how the input signal is mapped to the feature space, yielding a good plant model by discarding redundant features.

#### 2. Evaluation Algorithm

##### 2.1. Support Vector Regression

The support vector machine (SVM) was originally developed by Vapnik [24] for solving classification problems and has also been studied extensively for the solution of regression problems. Its superiority [25] stems from the structural risk minimization principle, in contrast to the empirical risk minimization employed by conventional neural networks; SVM therefore has a greater ability to generalize, which is the important goal in statistical learning. SVR is the extension of SVM to regression: it minimizes a generalized error bound so as to achieve good generalization performance. When using SVM for regression tasks, SVR must use a cost function to measure the empirical risk in order to minimize the regression error. Brief details of SVR are presented as follows.

Consider the learning sample set $\{(x_i, y_i)\}_{i=1}^{N}$ for SVR, where $x_i$ is a vector representing a set of sample inputs at a certain instant and $y_i$ is the corresponding set of sample outputs. The purpose is to find a function $f(x)$ that estimates the output data well.

###### 2.1.1. Linear SVR

Consider the linear regression function
$$f(x) = \langle w, x \rangle + b,$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product, $w$ and $b$ are the parameters of the function, and $x$ is the test pattern in normalized form. The structural risk minimization principle can be realized by minimizing the empirical risk
$$R_{\mathrm{emp}} = \frac{1}{N}\sum_{i=1}^{N} L_{\varepsilon}\bigl(y_i, f(x_i)\bigr),$$
where $L_{\varepsilon}$ denotes the $\varepsilon$-insensitive loss function of the empirical risk, defined by
$$L_{\varepsilon}\bigl(y_i, f(x_i)\bigr) = \max\bigl(0,\ \lvert y_i - f(x_i)\rvert - \varepsilon\bigr).$$
Here $\varepsilon$ is the insensitive loss parameter, that is, the tolerated error between the target output and the estimated output in the optimization process, and $x_i$ is a training pattern. Finding $w$ and $b$ to reduce the empirical risk with respect to the $\varepsilon$-insensitive loss function is equivalent to the convex optimization problem over the margin and the slack variables $\xi_i, \xi_i^{*}$:
$$\min_{w,\,b,\,\xi,\,\xi^{*}} \; \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{N}\bigl(\xi_i + \xi_i^{*}\bigr)$$
subject to
$$y_i - \langle w, x_i\rangle - b \le \varepsilon + \xi_i,\qquad \langle w, x_i\rangle + b - y_i \le \varepsilon + \xi_i^{*},\qquad \xi_i, \xi_i^{*} \ge 0,$$
where the first term corresponds to the margin and the parameter $C$ is a positive constant. To solve this optimization problem, one has to find a saddle point of the corresponding Lagrange function [26].
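The ε-insensitive loss at the heart of this formulation can be sketched in a few lines (an illustrative sketch, not the paper's code; the standard definition L(y, f) = max(0, |y − f| − ε) is assumed):

```python
def eps_insensitive_loss(y, f, eps):
    """Epsilon-insensitive loss between target y and prediction f."""
    return max(0.0, abs(y - f) - eps)

# Errors inside the eps-tube cost nothing; larger errors grow linearly.
print(eps_insensitive_loss(1.0, 1.05, 0.1))  # 0.0 (inside the tube)
print(eps_insensitive_loss(1.0, 1.50, 0.1))  # 0.4
```

This tolerance tube is what produces the sparsity of classical SVR: samples whose errors stay inside the tube contribute nothing to the solution.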

###### 2.1.2. Nonlinear SVR

In fact, linear SVR does not suit every real system, because of the complexity of real-world problems; nonlinear SVR has appeared as an alternative. The input data sample is transformed into a feature space by a nonlinear function [12], and then the same optimization algorithm is applied in the same way as for linear SVR. The nonlinear SVR function can therefore be expressed as
$$f(x) = \langle w, \varphi(x)\rangle + b, \qquad (6)$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product, $w$ and $b$ are the parameters of the function, and $\varphi(\cdot)$ is the mapping function from the input features to a higher-dimensional feature space.

For the regression problem on the given training set $\{(x_i, y_i)\}_{i=1}^{N}$, the classical LSSVR model [15] can be obtained from the following optimization problem:
$$\min_{w,\,b,\,e} \; \frac{1}{2}\lVert w\rVert^{2} + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^{2} \quad \text{s.t.} \quad y_i = \langle w, \varphi(x_i)\rangle + b + e_i, \qquad (7)$$
where $e_i$ is the evaluated error of the $i$th sample and $\gamma$ is the regularization constant. To obtain an evaluation formulation like (6) from this optimization problem, so as to realize evaluation and diagnosis for future samples, problem (7) is rewritten with Lagrange multipliers in matrix form as
$$\begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & K + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \qquad (8)$$
where $K$ with $K_{ij} = k(x_i, x_j)$ is the kernel correlation matrix, $\alpha = (\alpha_1, \ldots, \alpha_N)^{T}$ are the Lagrange multipliers, $y$ is the output vector, and $k(\cdot,\cdot)$ is the RBF kernel function, which will be discussed separately in the next section. The key point in solving (8) is computing the inverse of the coefficient matrix. Once a new sample joins the training set, the predictor, namely the LSSVR
$$f(x) = \sum_{i=1}^{N} \alpha_i\, k(x, x_i) + b,$$
can be updated incrementally from the kernel correlation matrices of the old and extended training sets; once the enlarged inverse is obtained from the previous one, the training task of the incremental SVR is done [27]. Conversely, to eliminate part of the sample set, the row and column of the eliminated sample are removed from the coefficient matrix, and the reduced-order inverse is obtained via the reduced-order inversion algorithm [28].
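The standard LSSVR training step, solving one linear system of the Suykens-Vandewalle form for the bias and multipliers, can be sketched as follows (a minimal pure-Python sketch; the kernel width, regularization value, and toy sine data are illustrative choices, not the paper's circuit data):

```python
import math

def rbf(x, z, sigma=1.0):
    """Single RBF kernel on scalar inputs."""
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lssvr_fit(xs, ys, gamma=100.0, sigma=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(xs)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] + list(ys)
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = 1.0
        for j in range(n):
            A[i + 1][j + 1] = rbf(xs[i], xs[j], sigma) + (1.0 / gamma if i == j else 0.0)
    sol = solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi, sigma) for a, xi in zip(alpha, xs))

# Toy usage: fit a sine curve from five samples.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
f = lssvr_fit(xs, ys)
# The fitted model should pass close to the training targets.
```

Because the constraints are equalities, training reduces to a single linear solve rather than a quadratic program, which is the source of the speed advantage discussed above.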

##### 2.2. Multikernel RBF Adjust Strategy

To give the kernel flexibility, in this part we modify the RBF kernel by utilizing linear combinations of RBF kernels. The base RBF kernel is
$$k(x, x_i) = \exp\!\left(-\frac{d^{2}(x, x_i)}{2\sigma^{2}}\right), \qquad (11)$$
where $\sigma$ is the bandwidth of the kernel function, $x$ is the current state vector of the plant, $x_i$ is a test data sample, and $d(x, x_i)$ is the Euclidean distance between the current data, expressed by
$$d(x, x_i) = \lVert x - x_i\rVert.$$

To guarantee fast response, in particular computation speed, we adopt a multikernel RBF expressed as a weighted combination of RBF kernels,
$$k_{m}(x, x_i) = \sum_{k=1}^{K} c_k \exp\!\left(-\frac{d^{2}(x, x_i)}{2\sigma_k^{2}}\right),$$
where $\sigma_k$ are the bandwidths of the individual kernels and $c_k$ are their scaling coefficients.
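A weighted combination of RBF kernels of this kind can be sketched directly (the specific weights and bandwidths below are illustrative, not values from the paper):

```python
import math

def multikernel_rbf(x, z, sigmas, weights):
    """Weighted sum of RBF kernels with different bandwidths sigma_k."""
    d2 = (x - z) ** 2  # squared Euclidean distance for scalar inputs
    return sum(w * math.exp(-d2 / (2 * s ** 2)) for w, s in zip(weights, sigmas))

# At zero distance every RBF term equals 1, so the kernel value is the
# sum of the weights; it decays as the distance grows.
k0 = multikernel_rbf(0.3, 0.3, sigmas=[0.5, 1.0, 2.0], weights=[0.2, 0.3, 0.5])
k1 = multikernel_rbf(0.0, 1.0, sigmas=[0.5, 1.0, 2.0], weights=[0.2, 0.3, 0.5])
```

Mixing several bandwidths lets the kernel respond to both sharp local variation (small σ) and smooth global trends (large σ), which is the flexibility argued for above.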

To verify the superior performance of the multikernel RBF, we also employ LSSVR with the standard single RBF kernel in this paper. With fixed bandwidths the multikernel RBF is equivalent to the single RBF kernel; with bandwidths varying according to the scaling coefficients and the Euclidean distance between features, the multikernel RBF has better flexibility for unknown problems. The LSSVR function can then be rewritten as
$$f(x) = \sum_{i=1}^{N} \alpha_i\, k_{m}(x, x_i) + b,$$
where the $\alpha_i$ are the Lagrange multipliers obtained from the linear system (8).

Partial derivatives of the LSSVR model with respect to the weights and bandwidths of the kernels are obtained as follows:
$$\frac{\partial f}{\partial c_k} = \sum_{i} \alpha_i \exp\!\left(-\frac{d^{2}(x, x_i)}{2\sigma_k^{2}}\right), \qquad \frac{\partial f}{\partial \sigma_k} = \sum_{i} \alpha_i\, c_k\, \frac{d^{2}(x, x_i)}{\sigma_k^{3}} \exp\!\left(-\frac{d^{2}(x, x_i)}{2\sigma_k^{2}}\right).$$

Then, the objective function to be minimized for improving the LSSVR model performance is chosen as the squared evaluation error
$$J = \frac{1}{2}\sum_{i}\bigl(y_i - f(x_i)\bigr)^{2}.$$

The kernel widths and scaling coefficients can be adjusted via the gradient method proposed in [29]:
$$\sigma_k \leftarrow \sigma_k - \eta\,\frac{\partial J}{\partial \sigma_k}, \qquad c_k \leftarrow c_k - \eta\,\frac{\partial J}{\partial c_k},$$
where $\eta$ is the learning rate, obtained by any line search algorithm.
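Gradient-based bandwidth tuning of this kind can be sketched with a numerically estimated derivative of the squared-error objective (an illustrative sketch only: the target data, fixed learning rate, and central-difference gradient are stand-ins for the line-search gradient scheme described above):

```python
import math

def kernel_model(x, centers, alphas, b, sigma):
    """RBF expansion f(x) = b + sum_i alpha_i * exp(-(x-c_i)^2 / (2 sigma^2))."""
    return b + sum(a * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
                   for a, c in zip(alphas, centers))

def objective(sigma, data, centers, alphas, b):
    """Squared-error objective J(sigma) = 0.5 * sum (y - f(x))^2."""
    return 0.5 * sum((y - kernel_model(x, centers, alphas, b, sigma)) ** 2
                     for x, y in data)

def tune_sigma(sigma, data, centers, alphas, b, eta=0.05, steps=300, h=1e-5):
    """Gradient descent sigma <- sigma - eta * dJ/dsigma (central differences)."""
    for _ in range(steps):
        grad = (objective(sigma + h, data, centers, alphas, b)
                - objective(sigma - h, data, centers, alphas, b)) / (2 * h)
        sigma -= eta * grad
    return sigma

# Toy usage: data generated with true bandwidth 0.8; tuning starts at 1.2.
centers, alphas, b = [0.0, 1.0], [1.0, -1.0], 0.0
data = [(x, kernel_model(x, centers, alphas, b, 0.8))
        for x in [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]]
sigma_tuned = tune_sigma(1.2, data, centers, alphas, b)
```

The descent drives the objective down toward the bandwidth that generated the data, mirroring how the online tuner shrinks the evaluation error.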

##### 2.3. Algorithm of the Multikernel Adaptive LSSVR

For the training set given in Section 2.2, the regression function is expressed as
$$f(x) = \sum_{x_i \in W} \alpha_i\, k_m(x, x_i) + b,$$
where $\alpha_i$ and $b$ are the regression parameters, $W$ is the working set of training samples, and $\{\alpha_i\}$ is the regression parameter set of $W$.

In this paper, the multikernel RBF LSSVR algorithm consists of an initialization stage and an adaptive update stage, designed as follows [23].

###### 2.3.1. Initialization

*Step 1. *Initialize the working set; its regression parameters can be confirmed via the linear system (8).

*Step 2. *If the prediction error of the current sample is within the accuracy threshold, the regression function is validated on that sample; otherwise, the regression parameters should be recomputed via the incremental algorithm: confirm the least support vector spectrum, construct a temporary training set, compute the reduced inverse with the inverse training algorithm, and validate the resulting regression function on the sample.

*Step 3. *Compute the value of the working set objective function.

*Note 1. *The objective function is the working-set squared error $J = \frac{1}{2}\sum_{x_i \in W}\bigl(y_i - f(x_i)\bigr)^{2}$.

###### 2.3.2. Adaptive Update

*Step 1. *If the stop condition is met, the output regression function is exported; otherwise go to Step 2. If a newly arrived sample's prediction error exceeds the accuracy threshold, the sample is added to the working set and the regression parameters are computed again via the incremental algorithm.

*Step 2. *Compute the value of the working set objective function.

*Note 2. *$J_t$ denotes the objective function value at the current update; $J_{t-1}$ denotes its value at the previous update.

*Note 3. *The forecast training accuracy and the test precision are set beforehand, together with the algorithm stop parameter $\delta$.

###### 2.3.3. Termination Judgment

If $\lvert J_t - J_{t-1}\rvert \le \delta$, then the training is stopped.
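The control flow of the adaptive stage can be sketched as a retrain-and-check loop (a hedged sketch only: the paper's numeric thresholds were lost in extraction, so `train_once`, `delta`, and the toy usage below are placeholders):

```python
def adaptive_train(train_once, delta=1e-4, max_iter=50):
    """Repeat training until successive objective values change by <= delta.

    train_once() performs one training pass and returns the current
    working-set objective value.
    """
    prev = train_once()
    for _ in range(max_iter):
        cur = train_once()
        if abs(cur - prev) <= delta:  # termination: |J_t - J_{t-1}| <= delta
            return cur
        prev = cur
    return prev

# Toy usage: an 'objective' that shrinks geometrically converges quickly.
state = {"J": 1.0}
def fake_step():
    state["J"] *= 0.5
    return state["J"]

final = adaptive_train(fake_step, delta=1e-3)
```

The `max_iter` cap is a safety bound an implementation would want so a non-converging objective cannot loop forever.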

#### 3. Simulation

##### 3.1. Prepare before Simulation

The CUT (circuit under test) in this paper is a typical Sallen-Key low-pass filter, as shown in Figure 1 [30]. The performance evaluation indicators comprise eight indexes: gain, transmission band, upper cutoff frequency, lower cutoff frequency, maximum undistorted output amplitude, maximum undistorted power output, input sensitivity, and noise voltage. The training set is then confirmed on the basis of these eight indexes: we first define the sample points and correspondingly obtain the training set.

##### 3.2. Data Selection and Standardized Processing

The experiment adopts the typical Sallen-Key low-pass filter circuit to validate the proposed evaluation strategy via the eight performance indexes, which were obtained by precise instrument evaluation over two years. The sample number is 259 × 100, recorded as the data set. Before verifying the proposed method, the first thing to be done is to establish the training and testing data sets. However, outlier values in the data set, caused by human recording errors and other noncircuit fault factors, greatly affect the model performance of LSSVR, especially when the data used for modeling contain such outliers. Hence, a normalization of the data is required before presenting the input patterns to any statistical machine learning algorithm. In this experiment, the 0-1 normalization method, denoted by (23), is utilized for preprocessing:
$$\bar{x}_i = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}}, \qquad (23)$$
where $x_i$ and $\bar{x}_i$ are the $i$th components of the input vector before and after normalization, respectively, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum values over all components of the input vector before normalization. After data processing via the 0-1 normalization method, the noise is reduced obviously.
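The 0-1 (min-max) normalization described above is a one-liner per component (an illustrative sketch with made-up sample values):

```python
def normalize_01(values):
    """Map each component into [0, 1] via x' = (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(normalize_01([2.0, 4.0, 6.0, 10.0]))  # [0.0, 0.25, 0.5, 1.0]
```

In practice the min and max would be computed on the training split only and then reused for the test split, so that no test information leaks into preprocessing.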

After the above data selection and normalization, 200 × 100 samples are selected randomly as training samples; the remaining data samples serve as test samples. To validate the superior performance of the proposed MKALSSVR in evaluating analog circuit performance online, the other methods, LSSVR, ε-SVR, and the precision instrument, are also applied for comparison while the analog circuit performance evaluation runs. Several parameters need to be introduced before applying the three SVR algorithms. First of all, three parameters must be denoted: the error-insensitive zone $\varepsilon$, the penalty factor $C$, and the kernel parameter $\sigma$. The choice of $\varepsilon$, $C$, and $\sigma$ has been studied by several researchers [31, 32]. The penalty factor $C$ controls the smoothness or flatness of the approximation function: if its value is set large, the objective is only to minimize the empirical risk, which makes the learning machine more complex; on the contrary, if its value is set small, errors are excessively tolerated, yielding a learning machine with poor approximation [33]. In this study, SVR models have been constructed with $C$ and $\varepsilon$ varied starting from the empirical values given by [33]. Through testing, the parameters $C$ and $\varepsilon$ have been varied over a specific range in order to obtain a better coefficient of correlation, denoted $R$ and determined by (24). The kernel parameters are restricted, since the values shown in Table 1 give the better prediction for these models; the three values for each model are shown in Table 1. This study adopts the RBF kernel (11), where $\sigma$ is the width of the RBF. The adopted $C$, $\varepsilon$, and $\sigma$ values for the four models are shown in Table 1. The correlation coefficient is
$$R = \frac{\sum_{i}\bigl(a_i - \bar{a}\bigr)\bigl(p_i - \bar{p}\bigr)}{\sqrt{\sum_{i}\bigl(a_i - \bar{a}\bigr)^{2}\,\sum_{i}\bigl(p_i - \bar{p}\bigr)^{2}}}, \qquad (24)$$
where $a_i$ and $p_i$ are the actual and predicted values, respectively, and $\bar{a}$ and $\bar{p}$ are the means of the actual and predicted values over the patterns.
The number of support vectors (SVN), the number of testing support vectors (TESN), the number of training support vectors (TRSN), the number of data features (FN), the testing-data mean square error (TEMSE), and the training-data mean square error (TDMSE) are all shown in Table 1. Here $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^{2}$, where $y_i$ is the real value, $\hat{y}_i$ is the predicted value, and $n$ is the number of testing samples.
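The two figures of merit used above, the mean square error and the correlation coefficient of (24), can be sketched as follows (the sample actual/predicted values are made up for illustration):

```python
import math

def mse(actual, predicted):
    """Mean square error: (1/n) * sum (y_i - yhat_i)^2."""
    n = len(actual)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n

def corr(actual, predicted):
    """Pearson correlation coefficient between actual and predicted values."""
    ma = sum(actual) / len(actual)
    mp = sum(predicted) / len(predicted)
    num = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    den = math.sqrt(sum((a - ma) ** 2 for a in actual)
                    * sum((p - mp) ** 2 for p in predicted))
    return num / den

a = [1.0, 2.0, 3.0, 4.0]
p = [1.1, 1.9, 3.2, 3.8]
print(mse(a, p))   # 0.025
print(corr(a, p))  # close to 1 for a good fit
```

MSE measures absolute error magnitude, while R measures how well predictions track the trend of the actual values; a model can score well on one and poorly on the other, which is why both appear in Table 1.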

##### 3.3. Simulation Experiment

To validate the superior evaluation performance of the proposed MKALSSVR, the two other methods, LSSVR and ε-SVR, are also employed in this part. The sharp contrast among the time responses of the three methods is presented in Figure 2: taking one period of LSSVR testing time as the reference and recording the testing times of the other two methods, the comparison shows clearly that the testing speed of the proposed method is far superior to that of the other two. In Figure 3, we can see that the support vector density is closely bound up with the curvature: where the curvature is larger, the support vector density is also larger; on the contrary, in relatively smooth regions the support vector density is relatively small.

For the same purpose, Tables 1 and 2 both demonstrate the evaluation precision and speed achieved by the proposed MKALSSVR method; to further establish the good evaluation performance, the precise instrument method is used as a reference.

#### 4. Conclusion

In this paper, a novel online evaluation strategy, MKALSSVR, is proposed for analog circuits. From the numerical simulations we can conclude that the proposed MKALSSVR has the following merits: first, the adaptive training strategy confirms the number of training samples adaptively; second, the multikernel design varies the RBF widths, providing more flexible adjustment ability, which equips the evaluation with online processing capability; third, the method avoids the overflow problem of standard LSSVR and the loss of support vector sparsity. Meanwhile, considering the low cost, high evaluation precision, and high operation rate of the proposed MKALSSVR, this strategy is worth developing and implementing. Based on this discussion, how to deal with faulty values will be taken up as a future research problem.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Project no. 61304149) and the Natural Science Foundation of Liaoning, China (Project no. 2013020044). The authors highly appreciate this financial support.

#### References

1. P. Kabisatpathy, A. Barua, and S. Sinha, *Fault Diagnosis of Analog Integrated Circuits*, vol. 30, Springer, 2005.
2. J. R. Koza, F. H. Bennett III, D. Andre, M. A. Keane, and F. Dunlap, “Automated synthesis of analog electrical circuits by means of genetic programming,” *IEEE Transactions on Evolutionary Computation*, vol. 1, no. 2, pp. 109–128, 1997.
3. S. Yin, S. X. Ding, A. H. A. Sari, and H. Hao, “Data-driven monitoring for stochastic systems and its application on batch process,” *International Journal of Systems Science*, vol. 44, no. 7, pp. 1366–1376, 2013.
4. S. Yin, S. Ding, A. Haghani, H. Hao, and P. Zhang, “A comparison study of basic data-driven fault diagnosis and process monitoring methods on the benchmark Tennessee Eastman process,” *Journal of Process Control*, vol. 22, no. 9, pp. 1567–1581, 2012.
5. S. Yin, X. Yang, and H. R. Karimi, “Data-driven adaptive observer for fault diagnosis,” *Mathematical Problems in Engineering*, vol. 2012, Article ID 832836, 21 pages, 2012.
6. S. Yin, H. Luo, and S. Ding, “Real-time implementation of fault-tolerant control systems with performance optimization,” *IEEE Transactions on Industrial Electronics*, vol. 61, no. 5, pp. 2402–2411, 2013.
7. H. Lin, L. Zhang, D. Ren, H. Kang, and G. Gu, “Fault diagnosis in nonlinear analog circuit based on Wiener kernel and BP neural network,” *Chinese Journal of Scientific Instrument*, vol. 30, no. 9, pp. 1946–1949, 2009.
8. D. Sánchez, P. Melin, O. Castillo, and F. Valdez, “Modular neural networks optimization with hierarchical genetic algorithms with fuzzy response integration for pattern recognition,” in *Advances in Computational Intelligence*, pp. 247–258, Springer, 2013.
9. S. Abdulla and M. Tokhi, “Fuzzy logic based FES driven cycling by stimulating single muscle group,” in *Converging Clinical and Engineering Research on Neurorehabilitation*, pp. 173–182, Springer, 2013.
10. C. W. Chen, P. C. Chen, and W. L. Chiang, “Modified intelligent genetic algorithm-based adaptive neural network control for uncertain structural systems,” *Journal of Vibration and Control*, vol. 19, no. 9, pp. 1333–1347, 2013.
11. Z. Aihua and Y. Zhongdang, “Research on amplifier performance evaluation based on support vector regression machine,” *Chinese Journal of Scientific Instrument*, vol. 29, no. 3, pp. 618–622, 2008.
12. A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” *Statistics and Computing*, vol. 14, no. 3, pp. 199–222, 2004.
13. S. K. Shevade, S. S. Keerthi, C. Bhattacharyya, and K. R. K. Murthy, “Improvements to the SMO algorithm for SVM regression,” *IEEE Transactions on Neural Networks*, vol. 11, no. 5, pp. 1188–1193, 2000.
14. J. A. K. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” *Neural Processing Letters*, vol. 9, no. 3, pp. 293–300, 1999.
15. J. A. K. Suykens, J. De Brabanter, L. Lukas, and J. Vandewalle, “Weighted least squares support vector machines: robustness and sparse approximation,” *Neurocomputing*, vol. 48, no. 1, pp. 85–105, 2002.
16. L. Wang, L. F. Bo, F. Liu, and L. C. Jiao, “Least squares hidden space support vector machines,” *Chinese Journal of Computers*, vol. 28, no. 8, pp. 1302–1307, 2005.
17. Z. Wang and S. Chen, “New least squares support vector machines based on matrix patterns,” *Neural Processing Letters*, vol. 26, no. 1, pp. 41–56, 2007.
18. Y. Zhao and J. Sun, “Recursive reduced least squares support vector regression,” *Pattern Recognition*, vol. 42, no. 5, pp. 837–842, 2009.
19. W. M. Campbell, D. E. Sturim, and D. A. Reynolds, “Support vector machines using GMM supervectors for speaker verification,” *IEEE Signal Processing Letters*, vol. 13, no. 5, pp. 308–311, 2006.
20. Y. C. Guo, “An integrated PSO for parameter determination and feature selection of SVR and its application in STLF,” in *Proceedings of the International Conference on Machine Learning and Cybernetics*, pp. 359–364, IEEE, July 2009.
21. M. Momma and K. P. Bennett, “A pattern search method for model selection of support vector regression,” in *Proceedings of the SIAM Conference on Data Mining (SDM '02)*, 2002.
22. K. Uçak and G. Öke, “Adaptive PID controller based on online LSSVR with kernel tuning,” in *Proceedings of the International Symposium on INnovations in Intelligent SysTems and Applications (INISTA '11)*, pp. 241–247, IEEE, June 2011.
23. K. Ucak and G. Oke, “An improved adaptive PID controller based on online LSSVR with multi RBF kernel tuning,” in *Adaptive and Intelligent Systems*, pp. 40–51, Springer, 2011.
24. V. N. Vapnik, *The Nature of Statistical Learning Theory*, Springer, New York, NY, USA, 2nd edition, 2000.
25. S. R. Gunn, “Support vector machines for classification and regression,” *ISIS Technical Report*, 1998.
26. S. Rajasekaran, S. Gayathri, and T. L. Lee, “Support vector regression methodology for storm surge predictions,” *Ocean Engineering*, vol. 35, no. 16, pp. 1578–1587, 2008.
27. Z. Liang and Y. Li, “Incremental support vector machine learning in the primal and applications,” *Neurocomputing*, vol. 72, no. 10–12, pp. 2249–2258, 2009.
28. G. Cauwenberghs and T. Poggio, “Incremental and decremental support vector machine learning,” in *Advances in Neural Information Processing Systems*, pp. 409–415, 2001.
29. D. G. Luenberger, *Linear and Nonlinear Programming*, Kluwer Academic Publishers, Boston, Mass, USA, 2nd edition, 2003.
30. M. Aminian and F. Aminian, “Neural-network based analog-circuit fault diagnosis using wavelet transform as preprocessor,” *IEEE Transactions on Circuits and Systems II*, vol. 47, no. 2, pp. 151–156, 2000.
31. V. Cherkassky and F. Mulier, *Learning from Data: Concepts, Theory, and Methods*, Wiley, 2007.
32. V. Cherkassky and Y. Ma, “Practical selection of SVM parameters and noise estimation for SVM regression,” *Neural Networks*, vol. 17, no. 1, pp. 113–126, 2004.
33. P. S. Yu, S. T. Chen, and I. F. Chang, “Support vector regression for real-time flood stage forecasting,” *Journal of Hydrology*, vol. 328, no. 3-4, pp. 704–716, 2006.