Mathematical Problems in Engineering

Volume 2015, Article ID 125868, 7 pages

http://dx.doi.org/10.1155/2015/125868

## Twin Support Vector Machine Method for Identification of Wiener Models

King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

Received 7 January 2015; Revised 11 April 2015; Accepted 16 April 2015

Academic Editor: Shiliang Sun

Copyright © 2015 Mujahed Al-Dhaifallah. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Twin support vector regression is applied to identify nonlinear Wiener systems, which consist of a linear dynamic block in series with a static nonlinearity. The linear block is expanded in terms of basis functions, such as Laguerre or Kautz filters, and the static nonlinear block is determined using twin support vector machine regression. Simulations of a control valve model and a pH neutralization process are presented to show the advantages of the proposed algorithm over a support vector machine based algorithm.

#### 1. Introduction

The goal of system identification is to find a model, within a selected class of models, that produces the best predictions of a system's output. In general, one forms a cost function that depends on some norm of the prediction errors and finds the model that minimizes this cost function. Since the model is an approximation to the true system, there is a trade-off between the complexity of the model structure and the accuracy of its predictions. In many cases, linear models can be used to produce accurate predictions of a system's behavior, particularly if the system is restricted to operating within a narrow region. However, if the model is required to cover a broader operating region, then a nonlinear model may be required [1]. Block-structured models, cascades of static nonlinearities and dynamic linear systems, are often a good trade-off, as they can represent some dynamic nonlinear systems very accurately while remaining quite simple. Common nonlinear models of this type are the Wiener and Hammerstein models [2]. Many algorithms have been proposed to identify Wiener models [3–11]. As these studies show, the extensive knowledge about linear time invariant (LTI) system representations has been applied to the dynamic linear blocks. On the other hand, finding an effective representation for the nonlinearity remains an active area of research. Traditionally, the nonlinearity is represented by a polynomial because it is simple and easy to estimate. However, the problem with polynomial approximation is that it cannot deal with many common nonlinearities (saturation, threshold, dead zone, etc.). Better approximation of these nonlinearities can be achieved by spline functions. However, spline functions are defined by a series of knot points, which must either be chosen a priori or treated as model parameters and included in the (nonconvex) optimization. Neural networks are another tool for approximating nonlinear functions.
Their powerful approximation abilities make them attractive. However, the need to specify the network topology in terms of the number of nodes and layers, and the need to solve a nonconvex optimization problem, complicate their implementation. Recently, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have demonstrated powerful abilities in approximating linear and nonlinear functions [12, 13]. In contrast with other approximation methods, SVMs do not require a priori structural information. Furthermore, there are well-established methods with guaranteed convergence (ordinary least squares, quadratic programming) for fitting LS-SVMs and SVMs [14]. A common drawback of SVM based algorithms, however, is that they are computationally heavy. Recently, a twin support vector machine (TSVM) regression algorithm has been proposed [15]. The formulation of TSVM is very close to that of the classical SVM, except that it aims to enclose the data points between two nonparallel planes such that each plane is closer to one class and as far as possible from the other. This formulation reduces the computational complexity, which has made TSVM one of the popular methods in machine learning. Lately, many extensions of the TSVM classifier have been proposed to improve its performance in certain respects. The TSVM classifier has been extended to multitask learning [16]. A generalized framework of TSVM for learning from labeled and unlabeled data was investigated in [17, 18].
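As a minimal illustration of the block structure discussed above, the following sketch simulates a toy Wiener system: a first-order linear filter in series with a static saturation nonlinearity. The pole value and the tanh nonlinearity are illustrative assumptions, not the models used later in the paper.

```python
import numpy as np

# Toy Wiener system: linear dynamic block followed by a static nonlinearity.
rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, 200)      # input sequence

# Linear block: first-order filter x[k] = a*x[k-1] + (1-a)*u[k]
# (the pole a = 0.7 is an arbitrary illustrative choice).
a = 0.7
x = np.zeros_like(u)
for k in range(1, len(u)):
    x[k] = a * x[k - 1] + (1 - a) * u[k]

# Static nonlinear block: smooth saturation via tanh.
y = np.tanh(2.0 * x)
print(y.shape)
```

Only the input u and the output y would be available to an identification algorithm; the intermediate signal x is unmeasured, which is what makes Wiener identification nontrivial.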

Recently, Tötterman and Toivonen [19] developed a new algorithm to identify Wiener models based on SVM regression, where the linear part is described by a basis filter expansion while the nonlinear part is represented by an SVM. In this work, TSVM regression is used to formulate an identification algorithm for Wiener models. Simulation examples are presented to show the advantages of the proposed algorithm over Tötterman's algorithm. The outline of this paper is as follows: TSVM theory is reviewed in Section 2. In Section 3, an algorithm for the identification of Wiener models based on the twin support vector machine is proposed. Section 4 presents two illustrative examples to test the proposed algorithm. In Section 5, concluding remarks are given.
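As a concrete illustration of the basis filter expansion used for the linear part, the sketch below generates discrete-time Laguerre filter outputs for an impulse input. The pole value and number of filters are arbitrary assumptions for illustration; the columns of the resulting matrix would serve as regressors for the nonlinear block.

```python
import numpy as np
from scipy import signal

# Discrete Laguerre basis filters with an assumed pole a = 0.6.
# First filter: sqrt(1 - a^2) / (1 - a q^{-1}); each subsequent filter
# multiplies by the all-pass factor (q^{-1} - a) / (1 - a q^{-1}).
a = 0.6
n_filters = 4
N = 100
u = np.zeros(N)
u[0] = 1.0                      # impulse input -> columns are impulse responses

den = [1.0, -a]
x = signal.lfilter([np.sqrt(1 - a**2)], den, u)
outputs = [x]
for _ in range(1, n_filters):
    x = signal.lfilter([-a, 1.0], den, x)   # apply the all-pass factor
    outputs.append(x)

Phi = np.stack(outputs, axis=1)  # N x n_filters matrix of filtered signals
print(Phi.shape)
```

The impulse responses of Laguerre filters are orthonormal in the ℓ² sense, which is one reason such expansions are convenient for representing stable linear dynamics with a single pole parameter.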

#### 2. Twin Support Vector Machines Regression

Given $n$ training data points $(\mathbf{x}_i, y_i)$, $i = 1, \ldots, n$, where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$ represent the input and output, respectively, let $A \in \mathbb{R}^{n \times d}$ denote the matrix of inputs and $Y = (y_1, \ldots, y_n)^T$ the vector of outputs. Twin support vector regression (TSVR) is obtained by solving the following pair of quadratic programming problems (QPPs):

$$\min_{w_1, b_1, \xi} \;\; \frac{1}{2}\left\| Y - e\varepsilon_1 - \left( K(A, A^T) w_1 + e b_1 \right) \right\|^2 + C_1 e^T \xi \quad \text{s.t.} \quad Y - \left( K(A, A^T) w_1 + e b_1 \right) \geq e\varepsilon_1 - \xi, \quad \xi \geq 0, \tag{1}$$

$$\min_{w_2, b_2, \eta} \;\; \frac{1}{2}\left\| Y + e\varepsilon_2 - \left( K(A, A^T) w_2 + e b_2 \right) \right\|^2 + C_2 e^T \eta \quad \text{s.t.} \quad \left( K(A, A^T) w_2 + e b_2 \right) - Y \geq e\varepsilon_2 - \eta, \quad \eta \geq 0, \tag{2}$$

where $C_1, C_2 > 0$ and $\varepsilon_1, \varepsilon_2 > 0$ are parameters, $\xi$ and $\eta$ are slack variables, $e$ is a vector of ones, and $K(\cdot, \cdot)$ is a nonlinear kernel function. The TSVR algorithm finds two functions: $f_1(\mathbf{x}) = K(\mathbf{x}^T, A^T) w_1 + b_1$, which determines the $\varepsilon_1$-insensitive down-bound regressor, and $f_2(\mathbf{x}) = K(\mathbf{x}^T, A^T) w_2 + b_2$, which determines the $\varepsilon_2$-insensitive up-bound regressor. The end regressor is computed as the mean of these two functions, $f(\mathbf{x}) = \frac{1}{2}\left(f_1(\mathbf{x}) + f_2(\mathbf{x})\right)$. The geometric interpretation is given in Figure 1. The first term of the objective function of (1) or (2) is the sum of squared distances from the shifted function $f_1(\mathbf{x}) + \varepsilon_1$ or $f_2(\mathbf{x}) - \varepsilon_2$ to the training points; minimizing it keeps $f_1$ or $f_2$ close to the data. The constraints require the estimated function $f_1(\mathbf{x})$ or $f_2(\mathbf{x})$ to be at a distance of at least $\varepsilon_1$ or $\varepsilon_2$ from the training points. That is, the training points should be larger than $f_1(\mathbf{x})$ by at least $\varepsilon_1$, while they should be smaller than $f_2(\mathbf{x})$ by at least $\varepsilon_2$. The slack variables $\xi$ and $\eta$ measure the error whenever the distance is smaller than $\varepsilon_1$ or $\varepsilon_2$. The second term of the objective function minimizes the sum of these slack variables, thus penalizing constraint violations and attempting to fit the training points closely.
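The pair of QPPs above can be prototyped directly. The sketch below solves both primal problems with SciPy's general-purpose SLSQP solver on a toy sine dataset; the RBF kernel, its width, and the values chosen for $C$ and $\varepsilon$ are illustrative assumptions rather than settings from the paper, and a dedicated QP solver would be preferred in practice.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 20).reshape(-1, 1)
Y = np.sin(X).ravel() + 0.05 * rng.standard_normal(20)

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
n = len(Y)

def solve_bound(sign, eps, C):
    # sign=+1 solves QPP (1) for the down-bound f1; sign=-1 solves
    # QPP (2) for the up-bound f2. Variables z = [w (n), b (1), slack (n)].
    def unpack(z):
        return z[:n], z[n], z[n + 1:]

    def obj(z):
        w, b, xi = unpack(z)
        r = Y - sign * eps - (K @ w + b)
        return 0.5 * r @ r + C * xi.sum()

    cons = [
        # eps-insensitive bound: sign*(Y - f) - eps + slack >= 0
        {'type': 'ineq',
         'fun': lambda z: sign * (Y - (K @ unpack(z)[0] + unpack(z)[1]))
                          - eps + unpack(z)[2]},
        # slack >= 0
        {'type': 'ineq', 'fun': lambda z: unpack(z)[2]},
    ]
    res = minimize(obj, np.zeros(2 * n + 1), method='SLSQP',
                   constraints=cons, options={'maxiter': 500})
    w, b, _ = unpack(res.x)
    return w, b

w1, b1 = solve_bound(+1, 0.05, 1.0)   # f1: down-bound regressor
w2, b2 = solve_bound(-1, 0.05, 1.0)   # f2: up-bound regressor

# End regressor: mean of the two bound functions, evaluated on the data.
f = 0.5 * ((K @ w1 + b1) + (K @ w2 + b2))
mse = float(np.mean((f - Y) ** 2))
print(mse)
```

Note that each QPP involves only one bound function, which is the source of the computational advantage over classical SVM regression, where both bounds appear in a single, larger problem.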