Abstract

Focusing on the demand for fast online analog circuit performance evaluation, a novel evaluation strategy based on adaptive Least Squares Support Vector Regression (LSSVR) employing a multikernel RBF is proposed in this paper. The superiority of the multikernel RBF lies in its greater online flexibility of the kernel function, such as bandwidth tuning. The kernel parameters then determine how the input signal is mapped to the feature space, yielding a well-behaved plant model by discarding redundant features. The experiment adopted the typical Sallen-Key low-pass filter circuit to validate the proposed evaluation strategy via eight performance indexes. Simulation results reveal that both the evaluation performance and, especially, the testing speed of the proposed method are superior to those of the traditional LSSVR and ε-SVR, which makes it suitable for online application.

1. Introduction

Although many analog electronic functions have been replaced with digital equivalents, there still exists a need for analog circuits [1], for example, in voice signal conversion and in the conditioning and conversion of sensor signals for microprocessors. In fact, no electronic circuit can entirely do without analog circuits [2].

Performance evaluation or detection is vital in this age when large amounts of electronic equipment pervade our lives. Physical damage, manufacturing defects, aging, radiation, temperature changes, and power surges are all possible causes of performance change. Moreover, the future state of electronic equipment can be forecast via performance detection, and some catastrophic failures can thereby be avoided, for example, in the spacecraft engineering field. The purpose of analog circuit performance evaluation is to guarantee that an electronic system is in a well-running state before it is put into use and/or to realize fast online performance detection of the electronic system to monitor its running status. Some researchers focus on data-driven methods, and many pieces of literature [3–6] have attempted to use them.

To this end, some researchers have focused on analog circuit fault diagnosis and performance evaluation [7]. These techniques are in an early stage of development and have progressed slowly owing to the growing complexity of electronic equipment. Nowadays, the usual techniques include neural networks, fuzzy logic, genetic algorithms, and so forth, which offer ample room for the development of analog circuit performance evaluation [8–10]. Among them, neural networks and the support vector machine (SVM) have been extensively applied and researched. Aihua and Zhongdang [11] focused on the portability and low cost promised by an analog circuit performance evaluation method and first proposed a support vector regression (SVR) evaluation strategy, which retained the evaluation precision. However, its low convergence rate is its largest defect, a problem also discussed in [12, 13].

To achieve a superior convergence rate, Suykens and Vandewalle [14] proposed the 2-norm LSSVR method. The primary advantage of this approach is that the training process follows the structural risk minimization principle and adopts equality constraints instead of inequality constraints, which greatly improves the operation speed. The LSSVR formulation also involves fewer tuning parameters. A drawback, however, is that sparseness is lost in the LSSVR case. Therefore, some researchers have investigated imposing sparseness by pruning support values from the sorted support value spectrum that results from the solution of the linear system. Suykens et al. [14, 15] in later literature presented a sparse approximation strategy for remedying this defect of the 2-norm LSSVR. Although this method realizes decremental pruning based on an ascending sort with a threshold constraint, it remains difficult to accept or reject training samples in the face of the uniformity of the LSSVR spectrum. Wang et al. [16] employed a novel LSSVR algorithm in the stability space, and another matrix-model LSSVR for linear classification problems was discussed in [17]. Furthermore, Zhao and Sun [18] adopted a recursive algorithm to reduce the growing data samples of LSSVR and obtained a sparse solution. Theoretically, more training samples yield a more accurate learning machine, but this is rarely practical.

Kernel function design is the most important component of LSSVR; the kernel is a nonlinear mapping function from the input space to the feature space [19]. Its main role is to convert a linearly nonseparable classification problem in a low dimension into a separable one in a high dimension, and hence it plays a crucial role in modeling and control performance. Kernel functions are generally parametric, and the numerical values of these parameters have a significant effect on both modeling and control performance. Depending on the initial values of the kernel parameters, features significant for the model may be discarded, or redundant and irrelevant features may be mapped to the feature space; better performance may be achieved by discarding some features [13, 14]. Owing to such factors, the selection of optimal kernel parameters is vital to the solution of the SVR problem. There are many optimization methods for the kernel parameters, such as particle swarm optimization, pattern search, and grid search [20, 21], but they are mostly aimed at offline calculation of the kernel parameters. Literature [22] used a gradient optimization method to realize online adjustment of the variance of a single RBF kernel function (SRBF).

This work, built on the literature [15, 23], presents an LSSVR-based analog circuit evaluation strategy that also treats the circuit and signal online but adopts a multikernel RBF to realize adjustment of the kernel widths, which not only contributes to the regression of LSSVR but also greatly improves the evaluation speed.


2. Evaluation Algorithm

2.1. Support Vector Regression

The support vector machine (SVM) was originally developed by Vapnik [24] for solving classification problems, and it has also been studied extensively for the solution of regression problems. Its superiority [25] has been revealed via the structural risk minimization principle employed by SVM, in contrast to the empirical risk minimization employed by conventional neural networks. SVM also has a greater ability to generalize, which is the important goal in statistical learning. SVR is the extension of SVM to regression; it minimizes a generalized error bound so as to achieve good generalization performance. When using SVM in regression tasks, SVR must use a cost function to measure the empirical risk in order to minimize the regression error. Brief details of SVR are presented as follows.

Consider the learning sample set for SVR, $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, where $\mathbf{x}_i \in \mathbb{R}^{n}$ is a vector representing a set of sample inputs at a certain instant and $y_i \in \mathbb{R}$ is the corresponding sample output. The purpose is to find a function $f(\mathbf{x})$ which can estimate the output data in a better way.

2.1.1. Linear SVR

Consider
$$ f(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle + b, \qquad (1) $$
where $\langle \cdot , \cdot \rangle$ denotes the inner product, $\mathbf{w}$ and $b$ are the parameters of the function, and $\mathbf{x}$ is the test pattern in a normalized form. The structural risk minimization principle can be realized by minimizing the empirical risk defined by
$$ R_{\mathrm{emp}} = \frac{1}{N} \sum_{i=1}^{N} L_{\varepsilon}\bigl(y_i, f(\mathbf{x}_i)\bigr), \qquad (2) $$
where $L_{\varepsilon}$ denotes the $\varepsilon$-insensitive loss function of the empirical risk, which can be defined by
$$ L_{\varepsilon}\bigl(y_i, f(\mathbf{x}_i)\bigr) = \max\bigl(0, \left|y_i - f(\mathbf{x}_i)\right| - \varepsilon\bigr). \qquad (3) $$
Here $\varepsilon$ is the insensitivity parameter; that is to say, it is the tolerated error between the target output and the estimated output in the optimization process, and $\mathbf{x}_i$ is a training pattern. The problem of finding $\mathbf{w}$ and $b$ that reduce the empirical risk with respect to the $\varepsilon$-insensitive loss function is equivalent to the convex optimization problem that minimizes the margin and slack variables:
$$ \min_{\mathbf{w}, b, \boldsymbol{\xi}, \boldsymbol{\xi}^{*}} \; \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{N} \bigl(\xi_i + \xi_i^{*}\bigr) \quad \text{s.t.} \quad \begin{cases} y_i - \langle \mathbf{w}, \mathbf{x}_i \rangle - b \le \varepsilon + \xi_i, \\ \langle \mathbf{w}, \mathbf{x}_i \rangle + b - y_i \le \varepsilon + \xi_i^{*}, \\ \xi_i, \xi_i^{*} \ge 0, \end{cases} \qquad (4) $$
where the first term represents the margin and the parameter $C$ is a positive constant. To solve the above optimization problem, one has to find a saddle point of the Lagrange function described in [26]:
$$ L = \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{N} \bigl(\xi_i + \xi_i^{*}\bigr) - \sum_{i=1}^{N} \alpha_i \bigl(\varepsilon + \xi_i - y_i + f(\mathbf{x}_i)\bigr) - \sum_{i=1}^{N} \alpha_i^{*} \bigl(\varepsilon + \xi_i^{*} + y_i - f(\mathbf{x}_i)\bigr) - \sum_{i=1}^{N} \bigl(\eta_i \xi_i + \eta_i^{*} \xi_i^{*}\bigr). \qquad (5) $$
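For concreteness, a minimal Python sketch of the ε-insensitive loss in (3) follows; the function name and the sample values are ours, chosen only for illustration.

```python
import numpy as np

def eps_insensitive_loss(y, y_hat, eps=0.1):
    """epsilon-insensitive loss of (3): deviations within +/- eps cost nothing."""
    return np.maximum(0.0, np.abs(y - y_hat) - eps)

# With eps = 0.1, an error of 0.05 is tolerated while an error of 0.30 costs 0.20.
print(eps_insensitive_loss(np.array([1.0, 1.0]), np.array([1.05, 1.30])))  # [0.  0.2]
```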

2.1.2. Nonlinear SVR

In fact, linear SVR does not suit all real systems because of the complexity of real-world problems, and nonlinear SVR has appeared as an alternative. The input data sample is transformed into the feature space by a nonlinear function [12]; then the same optimization algorithm is applied in the same way as for linear SVR. Therefore, the nonlinear function of SVR can be expressed by
$$ f(\mathbf{x}) = \langle \mathbf{w}, \varphi(\mathbf{x}) \rangle + b, \qquad (6) $$
where $\langle \cdot , \cdot \rangle$ denotes the inner product, $\mathbf{w}$, $b$ are the parameters of the function, and $\varphi(\cdot)$ is the mapping function from the input space to a higher dimensional feature space.

For the regression problem of the given training set $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, the classical LSSVR model [15] can be obtained from the following optimization problem:
$$ \min_{\mathbf{w}, b, \mathbf{e}} \; \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + \frac{\gamma}{2} \sum_{i=1}^{N} e_i^{2} \quad \text{s.t.} \quad y_i = \langle \mathbf{w}, \varphi(\mathbf{x}_i) \rangle + b + e_i, \; i = 1, \ldots, N, \qquad (7) $$
where $e_i$ is the evaluated error of the $i$th sample. In order to obtain an evaluated formulation just like (6) from the optimization problem (7) and so realize evaluation and diagnosis for future samples, the optimization problem (7), using Lagrange multipliers and the matrix-in-variable method, can be rewritten as
$$ \begin{bmatrix} 0 & \mathbf{1}^{\mathrm{T}} \\ \mathbf{1} & \mathbf{K} + \gamma^{-1}\mathbf{I} \end{bmatrix} \begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix}, \qquad (8) $$
where $\mathbf{K}$ with $K_{ij} = K(\mathbf{x}_i, \mathbf{x}_j)$ is the kernel correlation matrix, $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_N)^{\mathrm{T}}$ are the Lagrange multipliers, $\mathbf{y}$ is the output vector, and $K(\cdot,\cdot)$ is the RBF kernel function, which will be stated alone in the next section. The key point in solving (8) is to maintain the inverse of its coefficient matrix. Once a new sample joins the training set, we can get the predictor, namely, the LSSVR, as follows:
$$ f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i K(\mathbf{x}, \mathbf{x}_i) + b, \qquad (9) $$
where the kernel correlation matrices between the existing training set and the new sample are used to update the inverse by the block matrix inversion lemma. Once the updated inverse is obtained, the training mission of incremental SVR is done [27]. As for LSSVR, the task is to solve for $\boldsymbol{\alpha}$ and $b$ given the kernel matrix and the outputs, namely, for a given sample set, adopting an inverse training algorithm. Here, we adopt the strategy of removing the $i$th row and the $i$th column of the coefficient matrix to eliminate part of the samples. Via the reduced-order inversion algorithm [28], letting $\mathbf{B} = \mathbf{A}^{-1}$ with $\mathbf{A}$ the coefficient matrix of (8), the reduced-order formulation can be achieved:
$$ \mathbf{A}_{\setminus i}^{-1} = \mathbf{B}_{\setminus i, \setminus i} - \frac{\mathbf{B}_{\setminus i, i}\,\mathbf{B}_{i, \setminus i}}{B_{ii}}, \qquad (10) $$
where $\mathbf{A}_{\setminus i}$ is $\mathbf{A}$ with its $i$th row and column removed, and the subscripts on $\mathbf{B}$ select the corresponding rows and columns.
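To make (8) and (9) concrete, the following Python sketch solves the LSSVR system directly with a single RBF kernel; the function names and the toy data are ours, and a direct linear solve stands in for the incremental and reduced-order inverse updates of [27, 28].

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Single-bandwidth RBF kernel matrix, K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LSSVR linear system (8) for the bias b and multipliers alpha."""
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # b, alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    """Predictor (9): f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: regress y = sin(x) from 20 noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (20, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(20)
b, alpha = lssvr_fit(X, y)
print(lssvr_predict(X, b, alpha, np.array([[0.5]])))  # ~ sin(0.5)
```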

2.2. Multikernel RBF Adjust Strategy

To realize flexibility of the kernel, in this part we modify the RBF kernel by utilizing linear combinations of RBF kernels. The multikernel RBF is built from the basic RBF kernel:
$$ K(\mathbf{x}, \mathbf{x}_i) = \exp\left(-\frac{d^{2}(\mathbf{x}, \mathbf{x}_i)}{2\sigma^{2}}\right), \qquad (11) $$
where $\sigma$ is the bandwidth of the kernel function, $\mathbf{x}$ is the current state vector of the plant, $\mathbf{x}_i$ is a test data sample, and $d(\mathbf{x}, \mathbf{x}_i)$ is the Euclidean distance between the current data, which is expressed by
$$ d(\mathbf{x}, \mathbf{x}_i) = \lVert \mathbf{x} - \mathbf{x}_i \rVert = \sqrt{\sum_{j=1}^{n} \bigl(x_j - x_{i,j}\bigr)^{2}}. \qquad (12) $$

To guarantee fast responding, such as computation speed, we adopt the multikernel RBF, which is expressed by
$$ K_{m}(\mathbf{x}, \mathbf{x}_i) = \sum_{k=1}^{m} \beta_k K_k(\mathbf{x}, \mathbf{x}_i), \qquad (13) $$
where
$$ K_k(\mathbf{x}, \mathbf{x}_i) = \exp\left(-\frac{d^{2}(\mathbf{x}, \mathbf{x}_i)}{2\sigma_k^{2}}\right), \qquad (14) $$
and $\beta_k$ and $\sigma_k$, $k = 1, \ldots, m$, are the scaling coefficient and the bandwidth of the $k$th component kernel, respectively.
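A minimal sketch of the multikernel RBF of (13) and (14) follows, assuming a fixed number of component kernels; the function name and the example bandwidths are ours.

```python
import numpy as np

def multikernel_rbf(X1, X2, betas, sigmas):
    """Multikernel RBF of (13)-(14): a weighted sum of RBF kernels with
    different bandwidths. betas and sigmas are same-length 1-D arrays."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    K = np.zeros_like(d2)
    for beta, sigma in zip(betas, sigmas):
        K += beta * np.exp(-d2 / (2.0 * sigma ** 2))
    return K

# Three component kernels covering short, medium, and long length scales.
betas = np.array([0.5, 0.3, 0.2])
sigmas = np.array([0.1, 1.0, 10.0])
X = np.random.default_rng(1).standard_normal((5, 2))
print(multikernel_rbf(X, X, betas, sigmas).shape)  # (5, 5)
```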

To verify the superior performance of the multikernel RBF, we also employ LSSVR with the standard single RBF kernel in this paper. First, note that the multikernel RBF with fixed bandwidths is equivalent to a single RBF kernel whose bandwidth varies with the scaling coefficients and the Euclidean distance between features; namely, the multikernel RBF has better flexibility for the unknown problem. The LSSVR function can then be rewritten as follows:
$$ f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \sum_{k=1}^{m} \beta_k K_k(\mathbf{x}, \mathbf{x}_i) + b, \qquad (15) $$
where $\alpha_i$ is the Lagrange multiplier expressed by
$$ \boldsymbol{\alpha} = \left(\mathbf{K}_{m} + \gamma^{-1}\mathbf{I}\right)^{-1}\left(\mathbf{y} - b\,\mathbf{1}\right). \qquad (16) $$

Partial derivatives of the LSSVR model with respect to the weights and the bandwidths of the kernels are obtained as follows:
$$ \frac{\partial f(\mathbf{x})}{\partial \beta_k} = \sum_{i=1}^{N} \alpha_i K_k(\mathbf{x}, \mathbf{x}_i), \qquad (17) $$
$$ \frac{\partial f(\mathbf{x})}{\partial \sigma_k} = \sum_{i=1}^{N} \alpha_i \beta_k \frac{d^{2}(\mathbf{x}, \mathbf{x}_i)}{\sigma_k^{3}} K_k(\mathbf{x}, \mathbf{x}_i). \qquad (18) $$

Then, the objective function to be minimized for improving the LSSVR model performance is chosen as follows:
$$ E = \frac{1}{2} \sum_{i=1}^{N} \bigl(y_i - f(\mathbf{x}_i)\bigr)^{2}. \qquad (19) $$

The kernel widths and scaling coefficients can be adjusted via the method proposed in [29]:
$$ \theta(t+1) = \theta(t) - \eta \left.\frac{\partial E}{\partial \theta}\right|_{\theta = \theta(t)}, \qquad (20) $$
where $\eta$ is the learning rate obtained by any line search algorithm and $\partial E / \partial \theta = -\sum_{i} \bigl(y_i - f(\mathbf{x}_i)\bigr)\,\partial f(\mathbf{x}_i)/\partial \theta$. So, the kernel parameters can be adjusted as
$$ \sigma_k(t+1) = \sigma_k(t) - \eta \frac{\partial E}{\partial \sigma_k}, \qquad (21) $$
$$ \beta_k(t+1) = \beta_k(t) - \eta \frac{\partial E}{\partial \beta_k}. \qquad (22) $$
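The following sketch performs one gradient step of (20)–(22) on the scaling coefficients and bandwidths, assuming the Lagrange multipliers α and bias b are held fixed during the step and re-solved afterwards via (8); the function name is ours.

```python
import numpy as np

def kernel_grad_step(X, y, alpha, b, betas, sigmas, eta=1e-3):
    """One gradient step of (20)-(22) on the kernel parameters.

    Simplified sketch: alpha and b are frozen during the step; betas and
    sigmas must be float arrays (updated in place)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    Kk = [np.exp(-d2 / (2.0 * s ** 2)) for s in sigmas]   # component kernels (14)
    f = sum(bk * K for bk, K in zip(betas, Kk)) @ alpha + b
    r = y - f                                             # residuals of objective (19)
    for k in range(len(betas)):
        df_dbeta = Kk[k] @ alpha                          # (17) at each training point
        df_dsigma = (betas[k] * d2 / sigmas[k] ** 3 * Kk[k]) @ alpha  # (18)
        betas[k] += eta * r @ df_dbeta                    # since dE/dtheta = -sum r df/dtheta
        sigmas[k] += eta * r @ df_dsigma
    return betas, sigmas
```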

2.3. Algorithm of the Multikernel Adaptive LSSVR

Given the training set of Section 2.2, the regression function is expressed by
$$ f(\mathbf{x}) = \sum_{\mathbf{x}_i \in W} \alpha_i K_{m}(\mathbf{x}, \mathbf{x}_i) + b, $$
where $\alpha_i$, $b$ are the regression parameters, $W$ is the training sample working set, and $\{\alpha_i : \mathbf{x}_i \in W\}$ is the regression parameter set of $W$.

In this paper, the multikernel RBF LSSVR algorithm includes an initialization phase and an adaptive update phase, designed and proceeding as follows [23].

2.3.1. Initialization

Step 1. Initialize the working set $W$ with the initial training samples; then $\boldsymbol{\alpha}$ and $b$ can be confirmed by solving (8).

Step 2. For a training sample $(\mathbf{x}_j, y_j)$, if $|y_j - f(\mathbf{x}_j)| \le \varepsilon_1$, the regression function already describes the sample $\mathbf{x}_j$. If $|y_j - f(\mathbf{x}_j)| > \varepsilon_1$, then the sample is added to $W$, and $\boldsymbol{\alpha}$, $b$ should be recomputed via the incremental algorithm; next, confirm the least support value of the support vector spectrum, construct a temporary training set with the corresponding sample removed, utilize the inverse (reduced-order) training algorithm (10) to compute the new solution, and use the resulting regression function to test the sample again. If the test passes, the temporary training set replaces $W$.

Step 3. Compute the value of the working set objective function.

Note 1. The objective function is that of (19) evaluated over the working set:
$$ J = \frac{1}{2} \sum_{\mathbf{x}_i \in W} \bigl(y_i - f(\mathbf{x}_i)\bigr)^{2}. $$

2.3.2. Adaptive Update

Step 1. If the termination condition of Section 2.3.3 is satisfied, then the regression function is output; otherwise turn to Step 2. If a new online sample $(\mathbf{x}_j, y_j)$ arrives and simultaneously $|y_j - f(\mathbf{x}_j)| > \varepsilon_2$, then the sample is added to $W$, and $\boldsymbol{\alpha}$, $b$ are computed again via the incremental algorithm.

Step 2. Compute the value of the working set objective function.

Note 2. $J_t$ is the objective function value obtained in the current update; $J_{t-1}$ is the objective function value obtained in the previous update.

Note 3. The forecast training accuracy and test precision are set in advance to $\varepsilon_1$ and $\varepsilon_2$, and the algorithm stop parameter is set to $\rho$.

2.3.3. Termination Judgment

If $|J_t - J_{t-1}| \le \rho$, then the training is stopped.
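As a hedged sketch of how initialization, adaptive update, and the termination judgment fit together, consider the loop below; every name here (adaptive_train, fit, predict, objective) is ours, and a plain refit stands in for the paper's incremental updates of α and b.

```python
def adaptive_train(stream, fit, predict, objective, eps=0.1, rho=1e-4):
    """Sketch of the Section 2.3 loop: grow a working set online and stop
    when the working-set objective J of (19) changes by no more than rho."""
    W, model, J_last = [], None, float("inf")
    for x, y in stream:
        if model is not None and abs(y - predict(model, x)) <= eps:
            continue                   # sample already well described (Step 2, 2.3.1)
        W.append((x, y))
        model = fit(W)                 # the paper updates alpha, b incrementally
        J = objective(model, W)
        if abs(J - J_last) <= rho:     # termination judgment (Section 2.3.3)
            break
        J_last = J
    return model
```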

3. Simulation

3.1. Prepare before Simulation

The CUT (circuit under test) in this paper is a typical Sallen-Key low-pass filter circuit, as shown in Figure 1 [30]. The performance evaluating indicator includes eight indexes: gain, transmission band, cutoff frequency, lower cutoff frequency, maximum undistorted output amplitude, maximum undistorted output power, input sensitivity, and noise voltage. To build the training set from the eight indexes, we first define the sample point $\mathbf{x}_i \in \mathbb{R}^{8}$ and correspondingly obtain the training set $\{(\mathbf{x}_i, y_i)\}$.

3.2. Data Selection and Standardized Processing

The experiment adopted the typical Sallen-Key low-pass filter circuit to validate the proposed evaluation strategy via the eight performance indexes, which were obtained by precise instrument evaluation over two years. The sample number is 259 × 100, recorded as the data set. Before verifying the proposed method, the first thing to be done is to establish the training and testing data sets. However, outlier values in the data set, caused by human recording and other noncircuit fault factors, will greatly affect the model performance of LSSVR, especially when the data used for modeling include such outliers. Hence, a normalization of the data is required before presenting the input patterns to any statistical machine learning algorithm. In this experiment, the 0-1 normalization method, denoted by (23), is utilized for preprocessing:
$$ \bar{x}_j = \frac{x_j - x_{\min}}{x_{\max} - x_{\min}}, \qquad (23) $$
where $x_j$ and $\bar{x}_j$ are the $j$th components of the input vector before and after normalization, respectively, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum values of all the components of the input vector before normalization. After completing the data processing via the 0-1 normalization method, the noise is reduced obviously.
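A minimal sketch of the 0-1 normalization (23) follows, applied per feature column, which is the usual reading; the function name is ours.

```python
import numpy as np

def minmax_normalize(X):
    """0-1 normalization of (23), applied column-wise to a sample matrix."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

X = np.array([[1.0, 200.0], [2.0, 400.0], [4.0, 300.0]])
print(minmax_normalize(X))  # each column rescaled to [0, 1]
```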

After the above data selection and normalization, 200 × 100 samples are selected randomly as the training samples; the remaining data samples serve as the test samples. To validate the superior online evaluation performance of the proposed MKALSSVR for analog circuit performance, other methods, namely, LSSVR, ε-SVR, and the precision instrument, are also carried out for comparison during the analog circuit performance evaluation.

Several parameters need to be introduced before applying the three SVR algorithms. First of all, three parameters are required: the error insensitive zone ($\varepsilon$), the penalty factor $C$, and the kernel-specific parameter $\sigma$. The problem of choosing $\varepsilon$, $C$, and $\sigma$ has been studied by several researchers [31, 32]. The penalty factor $C$ controls the smoothness or flatness of the approximation function: if its value is set large, the objective is only to minimize the empirical risk, which makes the learning machine more complex; on the contrary, if its value is set small, errors are excessively tolerated, yielding a learning machine with poor approximation [33]. In this study, SVR models have been constructed with $C$ and $\varepsilon$ varied, starting from the empirical values given by [33]. Through testing, the parameters $C$ and $\varepsilon$ have been varied over a specific range in order to obtain a better coefficient of correlation, denoted $R$ and determined by (24). The kernel-specific parameter is restricted to the values shown in Table 1, which give the better prediction for these models. This study adopts the RBF kernel (11), where $\sigma$ is the width of the RBF. The adopted $\varepsilon$, $C$, and $\sigma$ values for the models are shown in Table 1. The coefficient of correlation is
$$ R = \frac{\sum_{i=1}^{l}\bigl(a_i - \bar{a}\bigr)\bigl(p_i - \bar{p}\bigr)}{\sqrt{\sum_{i=1}^{l}\bigl(a_i - \bar{a}\bigr)^{2}\sum_{i=1}^{l}\bigl(p_i - \bar{p}\bigr)^{2}}}, \qquad (24) $$
where $a_i$ and $p_i$ are the actual and predicted values, respectively, and $\bar{a}$ and $\bar{p}$ are the means of the actual and predicted values over the $l$ patterns. The number of support vectors (SVN), the number of testing support vectors (TESN), the number of training support vectors (TRSN), the number of data features (FN), the testing data mean square error (TEMSE), and the training data mean square error (TDMSE) are all shown in Table 1, where $\mathrm{MSE} = \frac{1}{l}\sum_{i=1}^{l}\bigl(y_i - \hat{y}_i\bigr)^{2}$, $y_i$ is the real value, $\hat{y}_i$ is the predicted value, and $l$ is the number of testing samples.
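For reference, a short sketch computing the correlation coefficient R of (24) and the MSE used in Table 1; the function name is ours.

```python
import numpy as np

def correlation_and_mse(actual, predicted):
    """Correlation coefficient R of (24) and the mean square error."""
    a, p = np.asarray(actual), np.asarray(predicted)
    da, dp = a - a.mean(), p - p.mean()
    r = (da * dp).sum() / np.sqrt((da ** 2).sum() * (dp ** 2).sum())
    mse = ((a - p) ** 2).mean()
    return r, mse

r, mse = correlation_and_mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
print(r, mse)  # R close to 1 and small MSE indicate a good model
```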

3.3. Simulation Experiment

To validate the superior evaluation performance of the proposed MKALSSVR, the other two methods, LSSVR and ε-SVR, are also employed in this part. The sharp contrast in the time responses of the three methods is presented in Figure 2. We take one period of LSSVR testing time as the reference and give the testing times of the other two methods, respectively. Via this comparison, we can see clearly that the testing speed of MKALSSVR is far superior to that of the other two methods. In Figure 3, we can see that the support vector density is closely bound up with the curvature: where the curvature is larger, the support vector density is also larger; on the contrary, at relatively smooth positions, the support vector density is relatively small.

For the same purpose, Tables 1 and 2 both present results demonstrating the evaluation precision and speed of the proposed MKALSSVR method. And to prove the good performance of the evaluation, the precise instrument method is used as the reference.

4. Conclusion

In this paper, a novel online evaluation strategy, MKALSSVR, aimed at analog circuits is proposed. Via numerical simulation, we can draw the conclusion that the proposed MKALSSVR has the following merits: first, the adaptive training strategy can confirm the training sample number adaptively; second, the multikernel design changes the RBF widths and has a more flexible adjustment ability, which gives the evaluation online processing capability; third, this method avoids the overflow problem of the 2-norm LSSVR and the loss of support vector sparsity. Meanwhile, considering the low cost, high evaluation precision, and high operation rate of the proposed MKALSSVR method, this strategy is worth developing and implementing. Based on this discussion, we will take the issue of how to deal with faulty values as a future research problem.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Project no. 61304149) and the Natural Science Foundation of Liaoning, China (Project no. 2013020044). The authors highly appreciate this financial support.