Research Article | Open Access
Xing Huo, Aihua Zhang, Hamid Reza Karimi, "Research on Amplifier Performance Evaluation Based on δ-Support Vector Regression", Abstract and Applied Analysis, vol. 2014, Article ID 574547, 6 pages, 2014. https://doi.org/10.1155/2014/574547
Research on Amplifier Performance Evaluation Based on δ-Support Vector Regression
Focusing on the demand for amplifier performance evaluation, a novel evaluation strategy based on δ-support vector regression (δ-SVR) is proposed in this paper. Low computational demand is considered first, and it is addressed by a key advantage of δ-SVR: a significant reduction in the number of support vectors. Moreover, δ-SVR employs a modified RBF kernel function, constructed from an original kernel by removing the last coordinate and adding a linear term in that coordinate. An experiment on a typical Sallen-Key low-pass filter circuit validates the proposed evaluation strategy over eight performance indexes. Simulation results reveal that δ-SVR requires the fewest support vectors among the three compared methods (LSSVR, ε-SVR, and δ-SVR) while obtaining nearly the same evaluation accuracy, which also favors higher computational speed.
1. Introduction
With the popularization and growing complexity of electronic equipment, many analog electronic functions have been replaced with digital equivalents; nevertheless, amplifiers remain indispensable. In fact, electronic circuits such as voice-signal conversion and sensor-signal processing and conversion cannot do without amplifiers. At the same time, circuit nonlinearities, component tolerances, noise, and the lack of training data make the performance detection or diagnosis of amplifiers very complex [3–5]. Performance evaluation and detection of amplifiers have therefore become increasingly important in a world full of electronic products. Many factors, such as physical damage, manufacturing defects, aging, radiation, temperature changes, and power surges, can alter amplifier performance. With such an evaluation or detection system, the future status of electronic products can be forecast, some disastrous faults can be avoided, and electronic systems can be kept in good condition at the right time. To this end, researchers have paid attention to fault diagnosis and performance evaluation of amplifiers. The field is no longer in its infancy, yet progress has been slow because electronic equipment keeps growing more complex. Some researchers focus on data-driven methods, and many studies [7–10] have attempted to use them; the same holds for robust control [11–17].
With the development of control strategies, techniques such as neural networks, fuzzy logic, and genetic algorithms offer ample room for amplifier performance evaluation [18–20]. Among them, the support vector machine (SVM) has been extensively applied and researched. Zhang and Yu focused on the requirements of portability and low cost for amplifier performance evaluation and first proposed a support vector regression (SVR) evaluation strategy, which also inherited good evaluation precision. However, the need for a large number of support vectors is its largest defect and the major obstacle to its promotion and application. Several works have discussed this issue in depth [22, 23], especially the number of support vectors required by the evaluation system. δ-SVR has attracted many researchers for its ability to generalize, to realize structural risk minimization (SRM), and to generate sparse solutions.
Building on the literature [24–26], this work proposes an amplifier evaluation strategy based on δ-SVR, exploiting the superiority of δ-SVR in reducing the number of support vectors. Moreover, a modified RBF kernel function is adopted, constructed from an original kernel by removing the last coordinate and adding a linear term in that coordinate. To demonstrate the effect, a typical Sallen-Key low-pass filter circuit is employed, and tests are carried out on eight amplifier performance indexes.
2. Least Squares Support Vector Regression
2.1. Normal LSSVR
The support vector machine (SVM) was originally developed by Vapnik for solving nonlinear classification problems, and it has also been widely used in regression problems. Suppose the training data are {(x_i, y_i)}, i = 1, …, N, where x_i ∈ R^n is the input with dimension n and y_i ∈ R is its corresponding target. The normal LSSVR soft-margin optimization problem is

min_{w,b,e} (1/2) w^T w + (γ/2) Σ_{i=1}^{N} e_i², subject to y_i = w^T φ(x_i) + b + e_i, i = 1, …, N, (1)

where w is the normal vector of the hyperplane, b is the offset, e = (e_1, …, e_N)^T represents the prediction residual vector, γ > 0 is the regularization parameter, and φ(·) is the mapping from the input space to the feature space.
In practice, (1) is solved by optimizing the Lagrangian

L(w, b, e; α) = (1/2) w^T w + (γ/2) Σ_{i=1}^{N} e_i² − Σ_{i=1}^{N} α_i (w^T φ(x_i) + b + e_i − y_i), (2)

where α = (α_1, …, α_N)^T is the Lagrange multiplier vector. The conditions for optimality are described by

∂L/∂w = 0 ⇒ w = Σ_i α_i φ(x_i),  ∂L/∂b = 0 ⇒ Σ_i α_i = 0,  ∂L/∂e_i = 0 ⇒ α_i = γ e_i,  ∂L/∂α_i = 0 ⇒ y_i = w^T φ(x_i) + b + e_i. (3)
Eliminating the vectors w and e, the following linear equation set is obtained:

[ 0, 1^T ; 1, Ω + I/γ ] [ b ; α ] = [ 0 ; y ], (4)

where 1 = (1, …, 1)^T and Ω_{ij} = K(x_i, x_j) = φ(x_i)^T φ(x_j) is the kernel function on the paired input vectors (x_i, x_j). The commonly used kernel function is the RBF, defined by K(x, y) = exp(−‖x − y‖² / (2σ²)). After obtaining the solution (b, α) via (4), for any new testing sample x we have the predicted value

f(x) = Σ_{i=1}^{N} α_i K(x, x_i) + b. (5)
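The training step amounts to solving one dense linear system. Below is a minimal NumPy sketch of LSSVR training and prediction under the formulation above; the function names, toy data, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # K(x, y) = exp(-||x - y||^2 / (2 sigma^2)), computed pairwise
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=100.0, sigma=1.0):
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]            # offset b, multipliers alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    # f(x) = sum_i alpha_i K(x, x_i) + b
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Note that every training point receives a nonzero multiplier α_i = γe_i, so LSSVR is generally not sparse; this is exactly the drawback that motivates δ-SVR below.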
2.2. δ-SVR
Given the same training set, the i-th training example is similarly mapped to the feature space. In this paper we employ the δ-SVR scheme proposed in the literature, as follows.
(i) Every training example is duplicated; the output value is translated by a parameter δ for the original training example and by −δ for the duplicated one.
(ii) Every training example is converted to a classification example by incorporating the output as an additional feature and assigning class 1 to original training examples and class −1 to duplicated ones.
(iii) Support vector classification (SVC) is run on the resulting classification data.
(iv) The solution of SVC is converted back to a regression form.
Note 1. The definition of SVC, which can be found in the literature, is omitted here.
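The duplicate-shift-classify steps above can be sketched in a few lines. The following sketch uses scikit-learn's SVC with a linear kernel as the classification step, and recovers the regression estimate by solving the decision boundary w·x + w_y·y + b = 0 for y; the helper names, toy data, and parameter values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def delta_svr_fit(X, y, delta=0.1, C=100.0):
    # Step (i)-(ii): duplicate each example, shift the output by +delta
    # (class +1) and -delta (class -1), and append it as a feature.
    Z_pos = np.hstack([X, (y + delta)[:, None]])
    Z_neg = np.hstack([X, (y - delta)[:, None]])
    Z = np.vstack([Z_pos, Z_neg])
    labels = np.hstack([np.ones(len(X)), -np.ones(len(X))])
    # Step (iii): run support vector classification.
    return SVC(kernel="linear", C=C).fit(Z, labels)

def delta_svr_predict(clf, X):
    # Step (iv): the regression estimate is the decision boundary,
    # i.e. solve w_x . x + w_y * y + b = 0 for y.
    w = clf.coef_.ravel()
    w_x, w_y = w[:-1], w[-1]
    return -(X @ w_x + clf.intercept_[0]) / w_y
```

Only the examples near the shifted boundary become support vectors, which is the source of the sparsity advantage discussed in the paper.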
Generally speaking, a simple linear kernel cannot resolve the problem of long testing times for new examples. To overcome this issue, several approaches have been discussed; one of them employs a new kernel type in which the last coordinate is placed only inside a linear term. Based on this idea, a new kernel is constructed from an original kernel by removing the last coordinate and adding a linear term in that coordinate. Here the most popular kernel, the RBF, is employed, giving the modified RBF

K_new(x, y) = exp(−‖x̂ − ŷ‖² / (2σ²)) + x_{n+1} y_{n+1}, (7)

where x and y are (n+1)-dimensional vectors and x̂ = (x_1, …, x_n), ŷ = (y_1, …, y_n) denote the vectors with the last coordinate removed. This method of constructing new kernels always generates a function fulfilling Mercer's condition. In general,

K_new(x, y) = K(x̂, ŷ) + x_{n+1} y_{n+1}, (8)

where K is the original kernel from which the new one was constructed. Because the last coordinate, which carries the output, enters only linearly, the δ-SVR regression estimate can be written in explicit closed form rather than requiring a nonlinear equation to be solved for each new example.
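A minimal sketch of the modified RBF kernel (8): the RBF is applied to all but the last coordinate, and the last coordinates contribute only through a product term. The function name and default σ are illustrative.

```python
import numpy as np

def modified_rbf(u, v, sigma=1.0):
    # RBF on all but the last coordinate, plus a linear term
    # in the last coordinate: K(u^, v^) + u_{n+1} * v_{n+1}
    u, v = np.asarray(u, float), np.asarray(v, float)
    rbf = np.exp(-np.sum((u[:-1] - v[:-1]) ** 2) / (2.0 * sigma ** 2))
    return rbf + u[-1] * v[-1]
```

Since it is a sum of two Mercer kernels (an RBF on a sub-vector and a linear kernel on the remaining coordinate), the result is itself a valid Mercer kernel.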
3.1. Data Processing
Before building the evaluation system, the data must first be processed. In this experiment, the data were obtained in a college analog-electronics laboratory over two years by precise-instrument measurement of eight indexes: gain, transmission band, cut-off frequency, lower cut-off frequency, maximum undistorted output amplitude, maximum undistorted output power, input sensitivity, and noise voltage. For the following experiments, a good deal of preprocessing is required.
The data sample is of size 259 × 100 and is recorded as the data set. A normalization scheme, denoted by (9), is employed to handle outlying values in the data set:

x̄_i = (x_i − x_min) / (x_max − x_min), (9)

where x_i and x̄_i are the i-th components of the input vector before and after normalization, respectively, and x_max and x_min are the maximum and minimum values of all the components of the input vector before normalization. After completing data preprocessing via this 0-1 normalization method, the noise is reduced considerably.
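The 0-1 normalization (9) is a one-liner per feature; a sketch in NumPy, with the function name and the tiny example matrix chosen for illustration:

```python
import numpy as np

def min_max_normalize(X):
    # Scale every component of the input vectors into [0, 1],
    # as in the 0-1 normalization of equation (9).
    X = np.asarray(X, float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)
```

In practice the same x_min and x_max computed on the training data should be reused on the test data so that both sets share one scale.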
After the above data selection and normalization, 200 × 100 samples are selected randomly as training samples, and the remaining ones serve as test samples. To enable performance comparison and analysis, two further evaluation schemes, LSSVR and ε-SVR, are carried out alongside the modified δ-SVR method. Several parameters must first be introduced, namely, the error-insensitive zone ε, the penalty factor C, and the kernel parameter σ. Parameter selection is another key issue, and several researchers have discussed the choice of ε, C, and σ [30, 31]. The penalty factor controls the smoothness or flatness of the approximation function; setting it either too large or too small yields unsatisfactory results. If the value is too large, the objective is only to minimize the empirical risk, which makes the learning machine more complex; conversely, if it is too small, errors are excessively tolerated, yielding a learning machine with poor approximation. In this experiment, LSSVR models are constructed with the regularization and kernel parameters varied starting from the empirical values given in the literature. Through testing, the parameters are varied over specific ranges in order to obtain a better coefficient of correlation, denoted by R and determined by (10). The kernel parameter is restricted to the values shown in Table 1, which give the better predictions for these models. The other necessary parameters of the three evaluation schemes are also listed in Table 1. Only the proposed evaluation scheme adopts the modified RBF (7); the other two methods employ the standard RBF kernel. The adopted ε, C, and σ values for the models are shown in Table 1.
The coefficient of correlation is

R = Σ_{i=1}^{n} (a_i − ā)(p_i − p̄) / √( Σ_{i=1}^{n} (a_i − ā)² · Σ_{i=1}^{n} (p_i − p̄)² ), (10)

where a_i and p_i are the actual and predicted values, respectively, and ā and p̄ are the means of the actual and predicted values over the n patterns. The mean square error is denoted as

MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)², (11)

where y_i is the real value, ŷ_i is the predicted value, and n is the number of testing samples.
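Both quality measures are direct to compute; a sketch of (10) and (11) in NumPy, with illustrative function names:

```python
import numpy as np

def correlation_coefficient(actual, predicted):
    # R of equation (10): normalized covariance of actual vs. predicted
    a = np.asarray(actual, float)
    p = np.asarray(predicted, float)
    da, dp = a - a.mean(), p - p.mean()
    return np.sum(da * dp) / np.sqrt(np.sum(da ** 2) * np.sum(dp ** 2))

def mean_square_error(y, y_hat):
    # MSE of equation (11): average squared prediction residual
    y = np.asarray(y, float)
    y_hat = np.asarray(y_hat, float)
    return float(np.mean((y - y_hat) ** 2))
```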
Note 2. SVN denotes the number of support vectors, TESN the number of testing support vectors, TRSN the number of training support vectors, FN the number of data features, TEMSE the testing-data mean square error, and TDMSE the training-data mean square error.
3.2. Preparation before Simulation
To validate the proposed amplifier performance evaluation, a typical Sallen-Key low-pass filter circuit, shown in Figure 1, is employed as the testing object. Aiming at the eight amplifier indexes, the training data set is confirmed; thus the sample points and the corresponding training set can be defined.
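For reference, the corner frequency of the standard unity-gain Sallen-Key low-pass topology is f₀ = 1 / (2π√(R₁R₂C₁C₂)); this is the textbook formula for the circuit family named above, not a value taken from Figure 1, and the component values in the example are hypothetical.

```python
import math

def sallen_key_cutoff(r1, r2, c1, c2):
    # Corner frequency (Hz) of a unity-gain Sallen-Key low-pass filter:
    # f0 = 1 / (2 * pi * sqrt(R1 * R2 * C1 * C2))
    return 1.0 / (2.0 * math.pi * math.sqrt(r1 * r2 * c1 * c2))

# Hypothetical components: R1 = R2 = 10 kΩ, C1 = C2 = 10 nF
f0 = sallen_key_cutoff(10e3, 10e3, 10e-9, 10e-9)
```

Index values such as this cut-off frequency are among the eight performance indexes measured for the training data.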
3.3. Simulation Experiment
After the above data preprocessing, the simulation experiments are carried out. To validate that δ-SVR significantly reduces the number of support vectors while achieving the best evaluation performance, the other two evaluation schemes, LSSVR and ε-SVR, are also employed here.
The contrast between evaluation performance and the reduction in the number of support vectors for the three methods is presented in Figures 2, 3, 4, and 5. A 6.2-second testing interval is taken as the comparison period. From this comparison it is clear that all three methods have good evaluation ability, but the proposed δ-SVR scheme reduces the number of support vectors the most. For further illustration, Tables 1 and 2 confirm both the evaluation precision and the reduction in the number of support vectors. Moreover, the precise-instrument method is used in this experiment to verify the quality of the evaluation.
4. Conclusions
Considering the demand for low computational cost and complexity, a novel amplifier performance evaluation strategy based on δ-SVR is presented. A modified RBF kernel function is employed, constructed from an original kernel by removing the last coordinate and adding a linear term in that coordinate. Experiments reveal the superiority of δ-SVR, which needs only a small number of support vectors compared with the other two methods, LSSVR and ε-SVR. The evaluation precision of the three schemes is also verified in this experiment.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Project no. 61304149), the Natural Science Foundation of Liaoning, China (Project no. 2013020044), and the Polish-Norwegian Research Programme (Project no. Pol-Nor/200957/47/2013). The authors highly appreciate this financial support.
References
- P. Kabisatpathy, A. Barua, and S. Sinha, Fault Diagnosis of Analog Integrated Circuits, vol. 30, Springer, Berlin, Germany, 2005.
- J. R. Koza, F. H. Bennett III, D. Andre, M. A. Keane, and F. Dunlap, “Automated synthesis of analog electrical circuits by means of genetic programming,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 2, pp. 109–128, 1997.
- C. Alippi, M. Catelani, A. Fort, and M. Mugnaini, “SBT soft fault diagnosis in analog electronic circuits: a sensitivity-based approach by randomized algorithms,” IEEE Transactions on Instrumentation and Measurement, vol. 51, no. 5, pp. 1116–1125, 2002.
- Z. Czaja and R. Zielonko, “On fault diagnosis of analogue electronic circuits based on transformations in multi-dimensional spaces,” Measurement, vol. 35, no. 3, pp. 293–301, 2004.
- J. Cui and Y. Wang, “A novel approach of analog circuit fault diagnosis using support vector machines classifier,” Measurement, vol. 44, no. 1, pp. 281–289, 2011.
- H. Lin, L. Zhang, D. Ren, H. Kang, and G. Gu, “Fault diagnosis in nonlinear analog circuit based on Wiener kernel and BP neural network,” Chinese Journal of Scientific Instrument, vol. 30, no. 9, pp. 1946–1949, 2009.
- S. Yin, S. X. Ding, A. H. A. Sari, and H. Hao, “Data-driven monitoring for stochastic systems and its application on batch process,” International Journal of Systems Science. Principles and Applications of Systems and Integration, vol. 44, no. 7, pp. 1366–1376, 2013.
- S. Yin, S. X. Ding, A. Haghani, H. Hao, and P. Zhang, “A comparison study of basic data-driven fault diagnosis and process monitoring methods on the benchmark Tennessee Eastman process,” Journal of Process Control, pp. 1567–1581, 2012.
- S. Yin, X. Yang, and H. R. Karimi, “Data-driven adaptive observer for fault diagnosis,” Mathematical Problems in Engineering, vol. 2012, Article ID 832836, 21 pages, 2012.
- S. Yin, H. Luo, and S. Ding, “Real-time implementation of fault-tolerant control systems with performance optimization,” IEEE Transactions on Industrial Electronics, pp. 2402–2411, 2013.
- X. Zhao, X. Liu, S. Yin, and H. Li, “Improved results on stability of continuous-time switched positive linear systems,” Automatica, 2013.
- X. Zhao, P. Shi, and L. Zhang, “Asynchronously switched control of a class of slowly switched linear systems,” Systems and Control Letters, vol. 61, no. 12, pp. 1151–1156, 2012.
- X. Zhao, L. Zhang, and P. Shi, “Stability of a class of switched positive linear time-delay systems,” International Journal of Robust and Nonlinear Control, vol. 23, no. 5, pp. 578–589, 2013.
- X. Zhao, L. Zhang, P. Shi, and H. Karimi, “Novel stability criteria for TS fuzzy systems,” IEEE Transactions on Fuzzy Systems, 2013.
- X. Zhao, L. Zhang, P. Shi, and H. Karimi, “Robust control of continuous-time systems with state-dependent uncertainties and its application to electronic circuits,” IEEE Transactions on Industrial Electronics, 2013.
- X. Zhao, L. Zhang, P. Shi, and M. Liu, “Stability and stabilization of switched linear systems with mode-dependent average dwell time,” IEEE Transactions on Automatic Control, vol. 57, no. 7, pp. 1809–1815, 2012.
- X. Zhao, L. Zhang, P. Shi, and M. Liu, “Stability of switched positive linear systems with average dwell time switching,” Automatica, vol. 48, no. 6, pp. 1132–1137, 2012.
- D. Sánchez, P. Melin, O. Castillo, and F. Valdez, “Modular neural networks optimization with hierarchical genetic algorithms with fuzzy response integration for pattern recognition,” in Advances in Computational Intelligence, pp. 247–258, Springer, Berlin, Germany, 2013.
- S. Abdulla and M. Tokhi, “Fuzzy logic based FES driven cycling by stimulating single muscle group,” in Converging Clinical and Engineering Research on Neurorehabilitation, pp. 173–182, Springer, Berlin, Germany, 2013.
- C. W. Chen, P. C. Chen, and W. L. Chiang, “Modified intelligent genetic algorithm-based adaptive neural network control for uncertain structural systems,” Journal of Vibration and Control, vol. 19, no. 9, pp. 1333–1347, 2013.
- A. Zhang and Z. Yu, “Research on amplifier performance evaluation based on support vector regression machine,” Chinese Journal of Scientific Instrument, vol. 29, no. 3, pp. 618–622, 2008.
- A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
- S. K. Shevade, S. S. Keerthi, C. Bhattacharyya, and K. R. K. Murthy, “Improvements to the SMO algorithm for SVM regression,” IEEE Transactions on Neural Networks, vol. 11, no. 5, pp. 1188–1193, 2000.
- M. Orchel, “Support vector regression based on data shifting,” Neurocomputing, vol. 96, pp. 2–11, 2012.
- K. Ucak and G. Oke, “An improved adaptive PID controller based on online LSSVR with multi RBF kernel tuning,” in Adaptive and Intelligent Systems, pp. 40–51, Springer, Berlin, Germany, 2011.
- J. A. K. Suykens, J. de Brabanter, L. Lukas, and J. Vandewalle, “Weighted least squares support vector machines: robustness and sparse approximation,” Neurocomputing, vol. 48, pp. 85–105, 2002.
- V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, Berlin, Germany, 2000.
- S. Rüping, “Incremental learning with support vector machines,” in Proceedings of the 1st IEEE International Conference on Data Mining (ICDM '01), pp. 641–642, December 2001.
- M. Orchel, “Regression based on support vector classification,” in Adaptive and Natural Computing Algorithms, pp. 353–362, Springer, Berlin, Germany, 2011.
- V. Cherkassky and F. M. Mulier, Learning from Data: Concepts, Theory, and Methods, John Wiley & Sons, New York, NY, USA, 2007.
- V. Cherkassky and Y. Ma, “Practical selection of SVM parameters and noise estimation for SVM regression,” Neural Networks, vol. 17, no. 1, pp. 113–126, 2004.
- P. S. Yu, S. T. Chen, and I. F. Chang, “Support vector regression for real-time flood stage forecasting,” Journal of Hydrology, vol. 328, no. 3-4, pp. 704–716, 2006.
- M. Aminian and F. Aminian, “Neural-network based analog-circuit fault diagnosis using wavelet transform as preprocessor,” IEEE Transactions on Circuits and Systems II, vol. 47, no. 2, pp. 151–156, 2000.
Copyright © 2014 Xing Huo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.