Abstract

This paper presents a novel prior knowledge-based Green's kernel for support vector regression (SVR). We first review the correspondence between support vector kernels used in support vector machines (SVMs) and regularization operators used in regularization networks, together with the use of Green's functions of the corresponding regularization operators to construct support vector kernels. We then present a mathematical framework that obtains domain knowledge about the magnitude of the Fourier transform of the function to be predicted and uses it, via the concept of matched filters, to design a prior knowledge-based Green's kernel that exhibits optimal regularization properties. The matched filter behavior of the proposed kernel function makes it well suited to signals corrupted with noise, a situation that arises in many real-world systems. We conduct several experiments, mostly on benchmark datasets, to compare the performance of our proposed technique with results already published in the literature for other existing support vector kernels over a variety of settings, including different noise levels, noise models, loss functions, and SVM variations. Experimental results indicate that the knowledge-based Green's kernel can be seen as a good choice among the candidate kernel functions.

1. Introduction

Over the last decade, support vector machines (SVMs) have been reported by several studies [1–4] to perform as well as or better than other learning machines, such as neural networks, on the problem of learning from a finite dataset and approximating a given function from sparse data. Vapnik [1, 2, 5, 6] laid down the theoretical foundations of the structural risk minimization (SRM) principle to frame the problem of learning from a finite set of data in the context of the regularization theory given by Tikhonov [7, 8]. The SRM principle provides a connection between the capacity of the hypothesis space containing the learning models used to approximate the given function and the size of the training set. Generally, the smaller the training set, the lower the capacity of the hypothesis space should be to avoid overfitting [1, 2, 6, 9]. This motivates one to understand SVMs in the context of regularization theory and to find a linear solution in the kernel space that minimizes a certain loss function while keeping the capacity of the hypothesis space as small as possible. The kernel function in an SVM provides a nonlinear mapping from the input space to a higher-dimensional feature space. Research studies [10, 11] have affirmed that the regularization properties of an SVM are associated with the choice of the kernel function used for this mapping.

In the literature, Babu et al. [12] proposed local kernel-based color modeling for visual tracking. Maclin et al. [13] presented a method for incorporating and refining domain knowledge for support vector machines via successive linear programming. Yen et al. [14] used kernel-based clustering methods for detecting clusters in weighted, undirected graphs. Toma [15] proposed nonlinear differential equations capable of generating continuous functions similar to pulse sequences for modeling time series. M. Li and J.-Y. Li [16] introduced a generalized mean-square error (MSE) to address the predictability of long-range dependent (LRD) series. Bakhoum and Toma [17] presented an extension of the Fourier/Laplace transform for the analysis of signals represented by traveling wave equations and offered a mathematical technique for simulating the behavior of large systems of optical oscillators. Liu [18] gave an analysis of chaotic, dynamic time series events. Poggio and Girosi [19, 20] described a general learning approach using regularization theory. Girosi et al. [21, 22] provided a unified framework for regularization networks and learning machines. Evgeniou et al. [9], Smola and Schölkopf [23], and Williamson et al. [24] demonstrated a correspondence between regularization networks (RNs) and support vector machines. Smola et al. [11] and Schölkopf and Smola [10] showed a connection between the regularization operators used in regularization networks and support vector kernels and presented a method of using Green's functions of the corresponding regularization operators to construct support vector kernels with equivalent regularization properties. However, the problem of choosing the optimal regularization operator to construct the corresponding SV kernel for a given training set still remains unanswered. The work presented herein focuses on using prior knowledge about the magnitude spectrum of the function to be predicted to design support vector kernels from Green's functions with suitable regularization properties by utilizing the concept of matched filters, an idea inspired by Schölkopf and Smola [10].
The intuition behind the matching Green's kernel comes from the fact that most real-world systems are inevitably contaminated with noise in addition to their intrinsic dynamics [25, 26], and matched filters are known to be the optimal choice for recovering signals in the presence of additive white noise [27, 28]. However, no mathematical justification has been given in the literature for the use of the matched filter theorem to obtain the matching Green's kernel, and no experimental results are so far available that compare the performance of the knowledge-based Green's kernel with existing support vector kernels. In this paper, we provide a mathematical framework for utilizing the matched filter theorem to design the knowledge-based Green's kernel and conduct experiments on different datasets (mostly benchmarks) with different levels and models (Gaussian and uniform) of additive white noise to evaluate the performance of our proposed kernel function. Although the assumption of additive white noise will not hold exactly in all real-world cases, we keep up with the time-honored tradition [2–4, 25, 26, 29, 30] of using benchmark datasets with the additive white noise assumption to evaluate the performance of our proposed method. The focus is on support vector regression (SVR). The rest of the paper is organized as follows. Section 2 reviews the theory of support vector regression, SV kernels, regularization networks, and the connection between the support vector method and the theory of regularization networks. Section 3 covers the theory of Green's functions and how they can be used to construct SV kernels. Section 4 describes the theory of matched filters and lays down the mathematical foundation for building the knowledge-based Green's kernel. Section 5 presents the experimental results, and Section 6 concludes the paper.

2. Support Vector Machines and Regularization Networks

Support vector machines, introduced by Vapnik and coworkers for pattern recognition and regression estimation tasks, have been reported to be an effective method during the last decade [31–33]. Although initially developed for classification problems, the support vector (SV) algorithm was generalized to $\epsilon$-insensitive SV regression [1, 33] to solve problems where the function to be estimated takes values in the set of real numbers.

2.1. $\epsilon$-Insensitive Support Vector Regression

Suppose that we have $\{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_N, y_N)\}$ as the training set with $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$, where the $y_i$ are the training targets. The problem of calculating an estimate $f(\mathbf{x}_i)$ of $y_i$ for the training data can be formulated as

\[ f(\mathbf{x}) = \langle \mathbf{w} \cdot \mathbf{x} \rangle + b. \tag{2.1} \]

The goal of the $\epsilon$-insensitive SV algorithm is to calculate an estimate $f(\mathbf{x}_i)$ of $y_i$ by selecting the optimal hyperplane $\mathbf{w}$ and bias $b$ such that $f(\mathbf{x}_i)$ is at most a distance $\epsilon$ from $y_i$ while keeping the norm $\|\mathbf{w}\|^2$ of the hyperplane minimal. The corresponding quadratic optimization problem can be written in terms of a regularized risk functional as described by [10, 11], that is, to minimize

\[ \mathcal{R}[f] = \frac{\gamma}{2}\|\mathbf{w}\|^2 + \frac{1}{N}\sum_{i=1}^{N}\left| y_i - f(\mathbf{x}_i)\right|_{\epsilon}, \tag{2.2} \]

where $\mathcal{R}$ is the regularized risk functional, $\gamma \ge 0$ is the regularization constant, and the second term on the right-hand side of (2.2) is the empirical risk functional with Vapnik's $\epsilon$-insensitive loss function [1, 2, 34]. By introducing slack variables in the sense of [1, 2, 34] and rewriting the problem in (2.2), we obtain the problem of minimizing

\[ \mathcal{R}[f] = \frac{\gamma}{2}\|\mathbf{w}\|^2 + \frac{1}{N}\sum_{i=1}^{N}\left(\zeta_i + \zeta_i^{*}\right) \tag{2.3} \]

subject to

\[ y_i - \langle \mathbf{w} \cdot \mathbf{x}_i \rangle - b \le \epsilon + \zeta_i, \qquad \langle \mathbf{w} \cdot \mathbf{x}_i \rangle + b - y_i \le \epsilon + \zeta_i^{*}, \qquad \zeta_i, \zeta_i^{*} \ge 0. \tag{2.4} \]

In order to obtain the SV expansion of the function $f(\mathbf{x})$, we use the standard [1, 2, 34, 35] Lagrangian technique to form the objective function and the corresponding constraints. The well-known formulation of the quadratic optimization problem is reached by taking the partial derivatives of the objective function, setting them equal to zero for the optimal solution, and substituting the values obtained back into the objective function. We follow the lines of [11, 23] and write the quadratic optimization problem as

\[ \text{minimize} \quad \frac{1}{2}\sum_{i,j=1}^{N}\left(\alpha_i^{*} - \alpha_i\right)\left(\alpha_j^{*} - \alpha_j\right)\langle \mathbf{x}_i \cdot \mathbf{x}_j \rangle + \epsilon\sum_{i=1}^{N}\left(\alpha_i + \alpha_i^{*}\right) - \sum_{i=1}^{N}\left(\alpha_i^{*} - \alpha_i\right) y_i \tag{2.5} \]

subject to

\[ \sum_{i=1}^{N}\left(\alpha_i - \alpha_i^{*}\right) = 0, \qquad 0 \le \alpha_i, \alpha_i^{*} \le \frac{1}{\gamma N}. \tag{2.6} \]

This leads to the well-known formulation of SV regression, that is,

\[ f(\mathbf{x}) = \sum_{i=1}^{N}\left(\alpha_i^{*} - \alpha_i\right)\langle \mathbf{x}_i \cdot \mathbf{x} \rangle + b. \tag{2.7} \]

Comparing (2.7) with (2.1) shows that the expansion of $\mathbf{w}$ is sparse, because the Lagrange multipliers $\alpha_i, \alpha_i^{*}$ of the training examples that lie inside the $\epsilon$-tube are zero [1, 2, 10, 34, 35].
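For readers who prefer a computational view of (2.1)–(2.7), the following minimal sketch fits an $\epsilon$-insensitive SV regressor using scikit-learn's SVR, which solves the dual problem internally. This is not the implementation used in the paper (the experiments were run in Matlab); the sinc target, the noise level, and the parameter values C and epsilon are illustrative assumptions.

```python
# Sketch: epsilon-insensitive SV regression on a noisy sinc target.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 50).reshape(-1, 1)                # training inputs x_i
y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(50)   # noisy targets y_i

# C plays the role of the box constraint 1/(gamma*N) in (2.6);
# epsilon is the width of the insensitive tube.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y)

# Examples strictly inside the epsilon-tube have zero multipliers and
# therefore do not appear in the sparse expansion (2.7).
print("number of support vectors:", len(model.support_))
```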

The expression given by (2.7) corresponds to linear SV regression. In order to obtain nonlinearity, SV algorithms can be equipped with a nonlinear operator $\varphi(\cdot)$ mapping the input space into a high-dimensional feature space, $\varphi: \mathcal{X} \to \mathcal{F}$, as described in [36, 37]. The kernel function is defined as

\[ k(\mathbf{x}, \mathbf{x}') = \langle \varphi(\mathbf{x}) \cdot \varphi(\mathbf{x}') \rangle, \tag{2.8} \]

\[ \int_{\mathcal{X}^2} k(\mathbf{x}, \mathbf{x}')\, f(\mathbf{x}) f(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}' \ge 0 \quad \forall f \in L_2(\mathcal{X}). \tag{2.9} \]

According to Mercer's theorem, an admissible kernel is any continuous and symmetric function that satisfies the positivity condition given by (2.9). Such a function $k(\mathbf{x}, \mathbf{x}')$ defines a dot product in the feature space given by (2.8) [36].
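As a quick numerical sanity check of the Mercer condition (2.9), one can verify that the Gram matrix of a candidate kernel on a finite sample is symmetric and positive semidefinite. The sketch below does this for a Gaussian kernel; the grid and width are arbitrary illustrative choices, and the check is of course not a proof of admissibility.

```python
# Sketch: empirical positivity check of a candidate kernel's Gram matrix.
import numpy as np

def gaussian_kernel(x, xp, sigma=1.0):
    return np.exp(-np.abs(x - xp) ** 2 / (2.0 * sigma ** 2))

x = np.linspace(-2, 2, 40)
K = gaussian_kernel(x[:, None], x[None, :])       # Gram matrix K_ij = k(x_i, x_j)

eigvals = np.linalg.eigvalsh(K)                   # eigenvalues of the symmetric matrix
print("symmetric:", np.allclose(K, K.T))
print("smallest eigenvalue:", eigvals.min())      # >= 0 (up to round-off) for an admissible kernel
```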

Hence, by making use of (2.8), (2.7) can be written as

\[ f(\mathbf{x}) = \sum_{i=1}^{N}\left(\alpha_i^{*} - \alpha_i\right)\langle \varphi(\mathbf{x}_i) \cdot \varphi(\mathbf{x}) \rangle + b = \sum_{i=1}^{N}\left(\alpha_i^{*} - \alpha_i\right) k(\mathbf{x}_i, \mathbf{x}) + b. \tag{2.10} \]

2.2. Regularization Networks

The idea of the regularization method was first given by Tikhonov [8] and Tikhonov and Arsenin [7] for the solution of ill-posed problems. Assume that we have a finite dataset $\{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_N, y_N)\}$, independently and identically drawn from a probability distribution $p(\mathbf{x}, y)$ in the presence of noise, and that $p(\mathbf{x}, y)$ is unknown. One way of approaching the problem is to estimate the function $f$ by minimizing the empirical risk functional

\[ \mathcal{R}_{\mathrm{emp}}[f] = \frac{1}{N}\sum_{i=1}^{N}\left| y_i - f(\mathbf{x}_i)\right|_{\epsilon}. \tag{2.11} \]

Approaching the solution by minimizing (2.11) is ill-posed because the solution is unstable [2]. Hence, the remedy is to utilize the idea proposed by [7, 8], add a capacity control or stabilizer term [22] to (2.11), and minimize the regularized risk functional

\[ \mathcal{R}_{\mathrm{RN}}[f] = \frac{1}{N}\sum_{i=1}^{N}\left| y_i - f(\mathbf{x}_i)\right|_{\epsilon} + \frac{\gamma}{2}\|\lambda f\|^2, \tag{2.12} \]

where $\lambda$ is a linear, positive semidefinite regularization operator. The first term in (2.12) corresponds to finding a function that is as close to the data examples as possible in terms of Vapnik's $\epsilon$-insensitive loss function, whereas the second term is the smoothness functional, whose purpose is to restrict the size of the function space and reduce the complexity of the solution [23]. The filter properties of $\lambda$ are given by $\lambda^{*}\lambda$, where $\lambda^{*}$ denotes the adjoint (complex conjugate) of $\lambda$. Following the lines of [11, 23], the problem of minimizing the regularized risk functional given by (2.12) can be transformed into a constrained optimization problem by means of the standard Lagrange multiplier technique, and a formulation similar to (2.5) can be obtained in which a direct relationship between the SV technique and the RN method can be observed. In other words, training an SVM with a kernel function obtained from the regularization operator $\lambda$ is equivalent to implementing an RN that minimizes the regularized risk functional given by (2.12) with $\lambda$ as the regularization operator. We refer the reader to [2, 11, 23] for a detailed discussion of the relationship between the two methods, that is, SV and RN.

3. Green's Functions and Support Vector Kernels

The idea of Green's functions was introduced in the context of solving inhomogeneous differential equations with boundary conditions. However, Green's functions of the corresponding regularization operators can be used to design kernel functions that exhibit the regularization properties given by those operators, satisfy the Mercer condition, and qualify as SV kernels [11, 23, 35]. The Green's kernel of a discrete regularization operator with spectrum $\lambda[n]$ can be written as [38]

\[ G(x, x') = \sum_{n=1}^{N} \lambda[n]\, \phi_n(x)\, \phi_n(x'), \tag{3.1} \]

where $\phi_n$, $n = 1, \ldots, N$, are the orthonormal eigenvectors of $G$ corresponding to the nonzero eigenvalues $\lambda[n]$, so that $\lambda[n]$ constitutes the spectrum of $G$. The expression given by (3.1) assumes the one-dimensional case; a generalization to multiple dimensions is straightforward and will be discussed later. From [38], it can easily be shown that $G$ satisfies the condition of positive definiteness and that the series converges absolutely and uniformly, since all the eigenvalues of $G$ are positive. As $G(x_i, x_j) = G(x_j, x_i)$, it also satisfies the symmetry property, and Mercer's theorem can be applied to prove that $G$ is an admissible support vector kernel [39] that can be written as a dot product in feature space, that is,

\[ G(x_i, x_j) = \langle \phi(x_i) \cdot \phi(x_j) \rangle. \tag{3.2} \]

At this point, we refer the reader to the literature [10, 11, 23, 35] for a useful discussion of the regularization properties of commonly used SV kernels. We can also utilize (3.1) to obtain periodic kernels for given regularization operators. For example [10], by taking $\lambda[n]$ as the eigenvalues of a given discrete regularization operator and the Fourier basis $\{1/2\pi, \sin(nx), \cos(nx),\; n \in \mathbb{N}\}$ as the corresponding eigenvectors, we get the Green's kernel

\[ k(x, x') = \sum_{n=1}^{M} \lambda[n]\left(\sin(nx)\sin(nx') + \cos(nx)\cos(nx')\right) = \sum_{n=1}^{M} \lambda[n]\cos\!\left(n(x - x')\right). \tag{3.3} \]
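The following short sketch evaluates the periodic Green's kernel (3.3) on a grid of inputs, given a spectrum $\lambda[n]$. The decaying spectrum $\lambda[n] = 1/n^2$, the truncation level M, and the input grid are example assumptions chosen only to illustrate the construction.

```python
# Sketch: build the cosine-series Green's kernel (3.3) from a given spectrum.
import numpy as np

def greens_kernel(x, xp, lam):
    """k(x, x') = sum_n lam[n] * cos(n * (x - x')), n = 1..M."""
    n = np.arange(1, len(lam) + 1)
    diff = np.subtract.outer(x, xp)                       # all pairwise differences x - x'
    return np.tensordot(lam, np.cos(n[:, None, None] * diff), axes=1)

M = 20
lam = 1.0 / np.arange(1, M + 1) ** 2                      # example lowpass spectrum lambda[n]
x = np.linspace(0, 2 * np.pi, 30)
K = greens_kernel(x, x, lam)
print(K.shape, np.allclose(K, K.T))                       # symmetric Gram matrix
```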

Capacity control can be achieved by restricting the summation to different eigensubspaces through different values of $M$. Excluding the eigenfunctions that correspond to high frequencies results in increased smoothness, thereby decreasing the system capacity, and vice versa. A general lowpass smoothing functional is a good choice if no prior information is available about the frequency distribution of the signal to be predicted. However, if some prior information about the magnitude spectrum of the signal to be approximated is available, (3.1) becomes a reasonable basis for building kernels that exploit it by utilizing the concept of matched filters [10].

4. Matched Filter and Knowledge-Based Matched Green's Kernel

The matched filter [40, 41] is the optimum time-invariant filter, among all linear or nonlinear filters, for recovering a known signal from additive white noise [42]. Assume that the input signal $f(x)$, in the presence of additive white noise $n(x)$, passes through the matched filter with impulse response $h(x)$. The output of the filter is given by

\[ y(x) = h(x) \otimes \left(f(x) + n(x)\right), \tag{4.1} \]

where $\otimes$ denotes the convolution operation. Our aim is to obtain the conditions for which the signal-to-noise ratio (SNR) at the filter output takes its maximum value, since the probability of recovering a signal from noise is highest when the SNR is maximum [27]. From [27, 42, 43], the impulse response of the matched filter is given by

\[ h(x) = A\, f(x_1 - x), \tag{4.2} \]

where $A$ is the filter gain constant and $x_1$ is the point at which the output power of the filter takes its maximum value. The maximum SNR is given by

\[ (\mathrm{SNR})_{\max} = \frac{2}{N_0}\int_{-\infty}^{\infty} f^2(x_1 - \alpha)\, d\alpha, \tag{4.3} \]

where $N_0$ is the noise power density. We refer the reader to the original literature [27, 42, 43] for details and the proof of the matched filter theorem. For simplicity, we will assume unity gain, that is, $A = 1$. It is noteworthy that in (4.2) the filter impulse response is independent of the noise power density $N_0$ under the prior assumption of white noise. Secondly, the maximum SNR in (4.3) is a function of the signal energy and is independent of the signal shape [42].
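A small numerical illustration of (4.1)–(4.2) is given below: the impulse response is the time-reversed signal, and the filter output peaks where the signal is present in the noisy observation. The pulse shape, noise level, and signal length are arbitrary choices for illustration and are not taken from the paper's experiments.

```python
# Sketch: matched filtering of a known pulse in additive white noise.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
f = np.sin(2 * np.pi * 5 * x) * np.exp(-((x - 0.5) ** 2) / 0.01)  # known pulse f(x)
observation = f + 0.5 * rng.standard_normal(f.size)               # f(x) + white noise n(x)

h = f[::-1]                                   # impulse response h(x) = f(x1 - x), with A = 1
y = np.convolve(observation, h, mode="same")  # filter output (4.1)

print("output peak index:", np.argmax(y), "pulse centre index:", np.argmax(np.abs(f)))
```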

In order to design a matching kernel based on prior knowledge, it is sufficient to have an estimate of the magnitude spectrum of the signal to be predicted, as opposed to the theory of matched filters, where complete knowledge of the signal is required to recover it from noise. From (4.2) it can be seen that the impulse response of the optimum filter is the time-reversed signal $f(x)$ delayed by $x_1$. Nevertheless, in order to obtain the matching kernel we are only interested in the magnitude spectrum of the matched filter, which can be obtained by taking the Fourier transform of $h(x)$ in (4.2) and multiplying it with its complex conjugate:

\[ H(e^{j\omega}) = \int_{-\infty}^{\infty} h(x)\, e^{-j\omega x}\, dx = \int_{-\infty}^{\infty} f(x_1 - x)\, e^{-j\omega x}\, dx = e^{-j\omega x_1}\int_{-\infty}^{\infty} f(x_1 - x)\, e^{j\omega (x_1 - x)}\, dx = F^{*}(e^{j\omega})\, e^{-j\omega x_1}, \tag{4.4} \]

\[ |H(\omega)|^2 = H(e^{j\omega})\, H^{*}(e^{j\omega}) = |F(\omega)|^2, \tag{4.5} \]

\[ |H(\omega)| = |F(\omega)|, \tag{4.6} \]

where $H(e^{j\omega})$ is the frequency response of the matched filter, and $|H(\omega)|$ and $|F(\omega)|$ are the magnitude response of the matched filter and the magnitude spectrum of the signal $f(x)$, respectively. An important note at this point is that (4.6) does not depend on the delay $x_1$, whereas in the case of matched filters a delay is necessary to make the impulse response realizable. Hence, the matching kernel can be obtained by simply calculating the magnitude spectrum of $f(x)$ and utilizing (3.1). As $f(x)$ is the signal to be predicted, we assume that its magnitude spectrum does not differ significantly from that of the training targets $y(x)$ in (2.2); this is the prior knowledge that we acquire from $y(x)$ about $f(x)$ to obtain the Green's kernel. This is a weak condition, since many signals with completely different time-domain characterizations share a similar magnitude spectrum.
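In practice, the prior knowledge used by the matching kernel amounts to the magnitude of the discrete Fourier transform of the training targets, which serves as the estimate of $|F(\omega)|$ in (4.6). The sketch below extracts it with an FFT; the target signal and noise level are example assumptions.

```python
# Sketch: estimate |F(omega_n)| from noisy training targets via the DFT.
import numpy as np

rng = np.random.default_rng(2)
N = 128
x = np.linspace(0, 4 * np.pi, N)
y = np.sin(x) + 0.3 * rng.standard_normal(N)      # noisy training targets y(x)

F_mag = np.abs(np.fft.fft(y))                     # |F(omega_n)|, with omega_n = 2*pi*n/N
print("dominant frequency bin:", np.argmax(F_mag[1:N // 2]) + 1)
```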

Figure 1 shows the time-domain and frequency-domain representations of two different signals. The signal in Figure 1(a) is a sinusoid, whereas the signal in Figure 1(b) is the modified Morlet wavelet. Despite their completely different time-domain characterizations, they share a similar frequency localization, as shown in Figures 1(c) and 1(d), respectively.

In order to be able to use (3.1) to obtain our desired kernel function, we need its eigenvalues, and we will use the Fourier basis $\{1/2\pi, \sin(nx), \cos(nx),\; n \in \mathbb{N}\}$ as the corresponding eigenfunctions, since complex exponentials are the eigenfunctions of any linear time-invariant (LTI) system, which includes matched filters, and sinusoids can be expressed as linear combinations of complex exponentials using Euler's formula [44]. The eigenvalues of an LTI system are given by the frequency response $H(e^{j\omega})$, which is a complex-valued quantity [45]. The frequency response can, however, be written as

\[ H(\omega) = |H(\omega)|\, e^{j\theta(\omega)}, \tag{4.7} \]

namely, as a product of the magnitude response $|H(\omega)|$ and the phase response $\theta(\omega)$ [3]. Since we are only interested in the smoothness properties of the kernel and not in the phase response, it is adequate to take the magnitude response $|H(\omega)|$ as the eigenvalues of the system. Another reason for this is that, in order to have a positive definite Green's kernel function, the eigenvalues need to be strictly positive [38]. Hence, the matching Green's kernel function can be obtained by using (3.1):

\[ G(x, x') = \sum_{n=1}^{N-1} |H(\omega_n)|\left(\sin(\omega_n x)\sin(\omega_n x') + \cos(\omega_n x)\cos(\omega_n x')\right) = \sum_{n=1}^{N-1} |H(\omega_n)|\cos\!\left(\omega_n (x - x')\right), \tag{4.8} \]

where $\omega_n$ is the discrete counterpart of the continuous frequency variable $\omega$, such that $\omega_n = 2\pi n/N$, $0 \le n \le N-1$, that is, normalized to the range $0 \le \omega_n < 2\pi$. By making use of (4.6) and ignoring the constant eigenfunction with $n = 0$, we get

\[ G(x, x') = \sum_{n=1}^{N-1} |F(\omega_n)|\cos\!\left(\omega_n (x - x')\right), \tag{4.9} \]

which is a positive definite SV kernel that exhibits the matched filter regularization properties given by $|F(\omega)|$. From the algorithmic point of view, we only need to compute the magnitude of the discrete Fourier transform of the training targets, under the assumption that the function $f(x)$ to be predicted has a similar magnitude spectrum up to the additive noise. To control the model complexity of the system, we introduce two variables that restrict the summation to the desired eigensubspace and write (4.9) as

\[ G(x, x') = \sum_{n=i}^{j} |F(\omega_n)|\cos\!\left(\omega_n (x - x')\right), \tag{4.10} \]

where $i$ and $j$ are the kernel parameters of the Green's kernel, analogous to the kernel parameters of other SV kernels such as the kernel width $\sigma$ of the Gaussian RBF kernel or the degree $d$ of the polynomial kernel. As with other SV kernels, optimal values of $i$ and $j$ are required to achieve the best results.
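A minimal sketch of (4.10) is given below: the eigenvalues are the DFT magnitudes $|F(\omega_n)|$ of the training targets, and the kernel is the cosine series restricted to the eigensubspace $n = i, \ldots, j$. The synthetic data, the choice of $i$ and $j$, and the use of the raw input scale for $x$ are illustrative assumptions, not the settings of the paper's experiments.

```python
# Sketch: knowledge-based matching Green's kernel (4.10).
import numpy as np

def matched_greens_kernel(x, xp, targets, i=1, j=None):
    """G(x, x') = sum_{n=i}^{j} |F(omega_n)| * cos(omega_n * (x - x'))."""
    N = len(targets)
    F_mag = np.abs(np.fft.fft(targets))          # |F(omega_n)| from the training targets
    j = N - 1 if j is None else j
    n = np.arange(i, j + 1)
    omega = 2.0 * np.pi * n / N                  # omega_n = 2*pi*n/N
    diff = np.subtract.outer(np.asarray(x), np.asarray(xp))
    return np.tensordot(F_mag[i:j + 1], np.cos(omega[:, None, None] * diff), axes=1)

rng = np.random.default_rng(3)
N = 64
x = np.linspace(-3, 3, N)
y = np.sinc(x) + 0.2 * rng.standard_normal(N)    # noisy training targets

K = matched_greens_kernel(x, x, y, i=1, j=10)    # Gram matrix on the training inputs
print(K.shape, np.allclose(K, K.T))
```

Such a precomputed Gram matrix could, for instance, be passed to an SVR implementation that accepts precomputed kernels (e.g., scikit-learn's SVR with kernel="precomputed"), which is one convenient way to plug the matching kernel into a standard SV solver.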

Analogous to the conventional Gaussian kernel, which exhibits Gaussian lowpass filter behavior, that is, $\lambda(\omega) = \exp(\sigma^2\|\omega\|^2/2)$ [10, 11] (recall that the Fourier transform of a Gaussian function is also a Gaussian function), the knowledge-based Green's kernel obtained from the eigenvalues of the matched filter exhibits matched filter properties. This makes the knowledge-based Green's kernel an optimal choice in noisy regimes, since matched filters are the optimal filters for noise-corrupted data regardless of the signal shape and the noise level. Since most real-world systems are unavoidably contaminated with noise in addition to their intrinsic dynamics [25, 26, 30], we keep up with the long-established tradition [2–4, 25, 26, 29, 30] of using benchmark datasets with additive white noise to evaluate the performance of the proposed technique and conduct several experiments, mostly on benchmark datasets, ranging from simple regression models to chaotic and nonlinear time series with additive white noise, in order to compare the performance of our technique with that of existing support vector (SV) kernels. Nevertheless, the advantage of the knowledge-based Green's kernel comes at the cost of slightly increased computational complexity. However, for most practical signals only a small portion of the whole eigenspectrum turns out to be nonzero, which lessens the computational load; an efficient algorithmic implementation reduces it further.

A generalization of the kernel function given by (4.10) to $d$ dimensions can easily be made by

\[ K(\mathbf{x}, \mathbf{y}) = \prod_{i=1}^{d} k_i(x_i, y_i) \tag{4.11} \]

(see [2] for the proof of the theorem). Alternatively,

\[ K(\mathbf{x}, \mathbf{y}) = k(\|\mathbf{x} - \mathbf{y}\|) \tag{4.12} \]

can also be used [10].
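The tensor-product construction (4.11) can be sketched in a few lines: a $d$-dimensional kernel is obtained as the coordinate-wise product of one-dimensional kernels. The one-dimensional kernel used here is just an example cosine series, not the matched kernel of a particular dataset.

```python
# Sketch: tensor-product generalization (4.11) of a 1-D kernel to d dimensions.
import numpy as np

def k1d(u, v):
    # example admissible 1-D kernel (truncated cosine series with decaying spectrum)
    return sum((1.0 / n ** 2) * np.cos(n * (u - v)) for n in range(1, 10))

def product_kernel(x, y):
    """K(x, y) = prod_{i=1}^{d} k_i(x_i, y_i)."""
    return np.prod([k1d(xi, yi) for xi, yi in zip(x, y)])

x = np.array([0.1, 0.5, 0.9])
y = np.array([0.2, 0.4, 1.1])
print(product_kernel(x, y))
```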

5. Experimental Results

5.1. Model Complexity Control

The purpose of this experiment is to examine the ability of the Green's kernel to control the complexity of an SV model trained with it. The sinc function is used as both training and testing data. The training data is approximated with different models built only by reducing the size of the eigensubspace in the kernel matrix computation, that is, by reducing the value of the kernel parameter $j$ while keeping the SV regularization parameter $C$ and the kernel parameter $i$ constant throughout the experiment. In other words, the complexity of the model is reduced by reducing the number of nonzero eigenvalues, that is, reducing the value of $j$ and thereby removing the high-capacity eigenfunctions to obtain a smoother approximation. The value $i = 1$ was used for all models.

Figure 2 shows the regression results obtained for different values of $j$. It is evident from the figure that reducing the size of the eigensubspace produces smoother approximations, which highlights the ability of the Green's kernel to act as a regularizer. No goodness-of-fit (GOF) criteria are used in this experiment, since the point of interest is to produce a smoother approximation, not necessarily a good approximation.

5.2. Regression on Sinc Function

The sinc function given by (5.1) has become a benchmark for validating the results of SV regression [2, 3, 10, 34, 35, 46]:

\[ \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}. \tag{5.1} \]

The training data consists of 27 points with zero-mean additive Gaussian white noise of variance 0.2. The mean square error was used as the figure of merit. Figure 3 shows the regression results obtained by the Green's kernel and other commonly used SV kernels. Although the results obtained by the Gaussian RBF and B-spline kernels are very similar, we prefer to use the Gaussian RBF because only B-splines of odd order $n$ are admissible support vector kernels [10], which restricts model complexity control.
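The data setup just described can be reproduced as in the short sketch below: 27 training points of the sinc function (5.1) with zero-mean Gaussian white noise of variance 0.2, and mean square error as the figure of merit. The input range and the random seed are assumptions, since they are not specified above.

```python
# Sketch: noisy sinc training data and the MSE figure of merit.
import numpy as np

rng = np.random.default_rng(4)
x_train = np.linspace(-3, 3, 27)                                      # assumed input range
y_train = np.sinc(x_train) + np.sqrt(0.2) * rng.standard_normal(27)   # noise variance 0.2

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

# mse(np.sinc(x_test), model.predict(x_test)) would then be reported as in Table 1.
```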

Figure 4 shows the magnitude spectrum of the training signal and of the actual sinc function. The magnitude spectrum of the training signal is used as the prior knowledge about the actual signal, that is, the signal to be predicted, and is used to construct the matching Green's kernel. Table 1 shows the regression results obtained with different kernel functions. The results indicate that the Green's kernel achieved better performance than any other support vector kernel for the given function. The CPU time is the kernel matrix computation time in seconds on an Intel(R) 2.8 GHz, 2 GB memory system using Matlab 7. The CPU time for the other kernel functions was computed using [47]. The lower computation time of the knowledge-based Green's kernel is owed to an efficient algorithmic implementation that includes only the nonzero eigenvalues in the kernel matrix computation. The (near) optimal values of the SVM hyperparameters for each kernel function were selected after several hundred trials.

5.3. Regression on Modified Morlet Wavelet Function

The modified Morlet wavelet function is described by

\[ \psi(x) = \frac{\cos(\omega_0 x)}{\cosh(x)}. \tag{5.2} \]

This function was selected because of its complex model. A signal of 101 data points with zero-mean white noise of variance 0.3 was used as the training set. The (near) optimal values of the SVM hyperparameters for each kernel function were selected after several hundred trials. Figure 5 shows the performance of the different SV kernels for the modified Morlet wavelet function and the magnitude spectra of the training and actual signals. Although the training function is heavily corrupted with noise, there is still some similarity between the magnitude spectra of the two functions, and this similarity is used as the prior knowledge about the problem. As shown in Table 2, the Green's kernel again performed better than any other kernel on heavily noise-corrupted data.
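For completeness, a sketch of generating the training data of this experiment from (5.2) is shown below. The center frequency $\omega_0$, the sampling interval, and the random seed are assumptions for illustration only, since they are not given above.

```python
# Sketch: noisy modified Morlet wavelet training data, psi(x) = cos(omega0*x)/cosh(x).
import numpy as np

rng = np.random.default_rng(5)
omega0 = 5.0                                   # assumed center frequency
x = np.linspace(-4, 4, 101)                    # assumed sampling grid, 101 points
psi = np.cos(omega0 * x) / np.cosh(x)
y_train = psi + np.sqrt(0.3) * rng.standard_normal(101)   # noise variance 0.3
```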

The purpose of the next two experiments is to evaluate the performance of the proposed kernel function against the conventional Gaussian kernel in a broader perspective, that is, across different noise models, noise levels, prediction steps (short-term and long-term prediction for time series), and different variations of SVM that use different loss functions and optimization schemes. To perform a faithful comparison, we use results already published in the literature as our reference point and use the same datasets, noise models, noise levels, and loss functions as suggested by the corresponding authors. For the next two experiments, long-term and short-term prediction of a chaotic time series is considered as a special case of regression. We use Mackey-Glass, a high-dimensional chaotic benchmark time series originally introduced as a model of blood cell regulation [48]. The Mackey-Glass series is generated by the following delay differential equation [3]:

\[ \frac{dx(t)}{dt} = \frac{a\, x(t-\tau)}{1 + x^{10}(t-\tau)} - b\, x(t) \tag{5.3} \]

with $a = 0.2$, $b = 0.1$, and $\tau = 17$.
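A crude way to generate the Mackey-Glass series from (5.3) is a fixed-step Euler integration of the delay differential equation with $a = 0.2$, $b = 0.1$, $\tau = 17$, as sketched below. The step size, series length, and constant initial history are assumptions; finer integrators (e.g., Runge-Kutta with interpolation of the delayed term) are normally used for benchmark generation.

```python
# Sketch: Euler integration of the Mackey-Glass delay differential equation (5.3).
import numpy as np

def mackey_glass(n_samples=1200, a=0.2, b=0.1, tau=17, dt=1.0, x0=1.2):
    delay = int(tau / dt)
    x = np.zeros(n_samples + delay)
    x[:delay] = x0                                    # constant initial history (assumption)
    for t in range(delay, n_samples + delay - 1):
        x_tau = x[t - delay]
        dx = a * x_tau / (1.0 + x_tau ** 10) - b * x[t]
        x[t + 1] = x[t] + dt * dx                     # Euler step
    return x[delay:]

series = mackey_glass()
print(series[:5])
```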

5.4. Chaotic Time Series Prediction Using SVM and LS-SVM

For comparison purposes, we use Muller et al. [3], which employs SVM with a Gaussian RBF kernel, and Zhu et al. [4], which utilizes LS-SVM with a Gaussian RBF kernel, for short-term (1-step) and long-term (100-step) prediction of the Mackey-Glass system under different noise models and noise levels. Table 3 shows the mean square error obtained by the Green's kernel using SVM and LS-SVM over different noise settings in comparison with the results reported by [3, 4]. 1S and 100S denote the 1-step and 100-step prediction of the time series. We use the same definition of SNR as the corresponding authors, that is, the ratio between the standard deviation of the respective noise and that of the underlying time series. Experimental results indicate that the knowledge-based Green's kernel should be considered a good kernel choice for noise-corrupted data.

6. Conclusion

This paper provides a mathematical framework for using Green's functions to construct problem-specific admissible support vector kernel functions based on prior knowledge about the smoothness properties of the function to be predicted. The matched filter theorem is used to incorporate domain knowledge of the magnitude spectrum of the signal to be predicted into support vector kernels in order to achieve the desired regularization properties. It has been shown that the knowledge-based matching Green's kernel is a positive definite SV kernel that exhibits matched filter behavior. Since matched filters are known to be the optimal choice for noise-corrupted data, the key contribution of the proposed technique is its noise robustness (see Figure 5), which makes it suitable for many real-world systems. Experimental results show that the knowledge-based Green's kernel has the ability to control the model complexity of the system (see Figure 2) and shows good generalization performance compared to other existing support vector kernels (see Tables 1, 2, and 3). Future research will include application of the Green's kernel to real-world problems such as speech synthesis and ultrasound image analysis.