Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2021 |Article ID 9911871 | https://doi.org/10.1155/2021/9911871

Ling Wang, Hongqiao Wang, Guangyuan Fu, "Multi-Nyström Method Based on Multiple Kernel Learning for Large Scale Imbalanced Classification", Computational Intelligence and Neuroscience, vol. 2021, Article ID 9911871, 11 pages, 2021. https://doi.org/10.1155/2021/9911871

Multi-Nyström Method Based on Multiple Kernel Learning for Large Scale Imbalanced Classification

Academic Editor: Cornelio Yáñez-Márquez
Received: 09 Mar 2021
Accepted: 27 May 2021
Published: 14 Jun 2021

Abstract

Extensions of kernel methods to class imbalance problems have been studied extensively. Although they cope well with nonlinear problems, their high computation and memory costs severely limit their application to real-world imbalanced tasks. The Nyström method is an effective technique for scaling up kernel methods. However, the standard Nyström method needs to sample a sufficiently large number of landmark points to ensure an accurate approximation, which seriously affects its efficiency. In this study, we propose a multi-Nyström method based on mixtures of Nyström approximations to avoid the explosion of the subkernel matrix, while the optimization of the mixture weights is embedded into the model training process by multiple kernel learning (MKL) algorithms to yield a more accurate low-rank approximation. Moreover, we select the subsets of landmark points according to the imbalanced distribution to reduce the model's sensitivity to skewness. We also provide a kernel stability analysis of our method and show that the model solution error is bounded by the weighted approximation errors, which can help us improve the learning process. Extensive experiments on several large scale datasets show that our method can achieve higher classification accuracy and a dramatic speedup of MKL algorithms.

1. Introduction

Real-world problems in computer vision [1], natural language processing [2, 3], and data mining [4, 5] present imbalanced traits in their data, which may arise from the inherent properties of the data or from external factors such as sampling bias or measurement error. Unfortunately, most traditional learning algorithms are designed for balanced data and target the overall classification accuracy, so the minority class tends to be overwhelmed by the majority class. However, in these real-world problems the minority class is usually more important and more costly to misclassify than the majority class.

In the past few decades, many algorithms have been proposed to solve class imbalance problems [6–8]. Data-level methods artificially balance the skewed class distributions by data sampling [9, 10]. Algorithm-level methods lift the importance of minority instances via modifications of existing learners [11, 12]. However, complex nonlinear structures usually exist in real-world imbalanced data. In this case, extensions of kernel methods to class imbalance problems have proven very effective [13–15]. In [16], Mathew et al. overcome the limitations of the synthetic minority oversampling technique (SMOTE) for nonlinear problems by oversampling in the feature space of the support vector machine. In [17], a kernel boundary alignment algorithm is proposed to adjust the class boundary by modifying the kernel matrix according to the imbalanced data distribution. The kernel-based adaptive synthetic data generation (KernelADASYN) for imbalanced learning is proposed in [18], which uses kernel density estimation (KDE) to estimate the adaptive oversampling density. However, with the development of data storage and data acquisition equipment, the scale of data continues to grow. The existing kernel-based class imbalanced learning (kernel CIL) methods face a serious challenge: the cost of calculating and storing a vast kernel matrix is very expensive.

A general technique for making kernel methods scalable is kernel approximation, of which the Nyström method is the most popular [19]. The Nyström method constructs a low-rank approximation of the original n × n kernel matrix from a subset of m ≪ n landmark points, where n is the data size. Computationally, it only needs to decompose a smaller m × m matrix (denoted W). However, according to the approximation error bound for the Nyström method in [20], there is a trade-off between accuracy and efficiency. Sampling more landmark points improves the approximation accuracy but requires more computing resources, which results in rapid expansion of the subkernel matrix as the data size increases and seriously affects the efficiency of the Nyström method.

Some works study the efficacy of a variety of fixed and adaptive sampling schemes for the Nyström method. For example, Musco et al. presented a Nyström algorithm based on recursive leverage score sampling that runs in time linear in the number of training points [21]. An ensemble Nyström method has been proposed to yield more accurate low-rank approximations by running mixtures of the Nyström method on several randomly sampled subsets of landmark points [22]. However, the mixture weights of the ensemble Nyström method are defined according to the approximation error of each Nyström approximation, which may lead to worse-than-expected performance in practical classification or regression applications. Recently, a fast and accurate refined Nyström-based kernel classifier has been proposed to improve the performance of Nyström-based kernel classifiers [23]. Although the Nyström method has been studied extensively, there still exists a potentially large gap between the performance of a learner trained with the Nyström approximation and one trained with the original kernel.

In this study, we propose a novel method, multi-Nyström, for large scale imbalanced classification. We incorporate the multi-Nyström method into multiple kernel learning to learn an improved low-rank approximate kernel superior to any single Nyström approximation, where each approximation is defined by a different kernel function and subset of landmark points. Moreover, unlike existing sampling schemes for the Nyström method, our method selects the subsets of landmark points according to the imbalanced distribution to deal with skewed data. Without computing and storing the full kernel matrix, our method can scale to large scale scenarios. The main contributions of this study are summarized as follows:

(1) We propose a multi-Nyström method to overcome the computational constraints of the Nyström method. Because our method parallelizes easily, it can generate more accurate approximations in large scale scenarios.

(2) We optimize the mixture weights according to the data and the problem at hand, so that the combined approximate kernel matrix produces better performance. Moreover, the low-rank approximation can significantly speed up existing MKL algorithms.

(3) We provide a stability analysis of our method, which shows the impact of the kernel approximation error on the model solution and helps determine the acceptable approximation error when approximating the kernel matrix.

The rest of this study is organized as follows. Section 2 introduces some related concepts. Section 3 then describes the proposed multi-Nyström approximation algorithm in detail. Experimental results and analysis compared with other algorithms are presented in Section 4. Finally, Section 5 summarizes the full work.

2. Preliminaries

2.1. Kernel Methods

Kernel methods such as support vector machines (SVMs) have become one of the most popular technologies of machine learning [24]. They extend linear learners to nonlinear cases by introducing the kernel trick. Consider a binary-class dataset D = {(x_i, y_i)}, i = 1, …, n, where x_i denotes an s-dimensional vector and y_i ∈ {−1, +1} denotes its label. Define a nonlinear descriptor as φ: x ↦ φ(x).

The input data are mapped to a high-dimensional or even infinite-dimensional feature space, and the inner product in the feature space is computed implicitly through the kernel function defined in the input space:

k(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩_H,

where k(·, ·) is a kernel function that satisfies Mercer's theorem [25], and H is the corresponding reproducing kernel Hilbert space (RKHS). k can simply be a classical kernel like the radial basis function (RBF) kernel. Unfortunately, the n × n kernel matrix expands quadratically as the data scale increases. This poor scalability limits the applicability of kernel methods in large scale scenarios.
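To make the implicit mapping concrete, the following sketch computes an RBF kernel matrix with NumPy; the function name and the bandwidth value are illustrative, not from the article.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    # Squared distances via ||x||^2 + ||y||^2 - 2<x, y>; clip tiny negatives.
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_kernel(X, X)  # 5 x 5 symmetric PSD matrix with unit diagonal
```

The matrix K built this way is exactly the object whose quadratic growth in n motivates the approximations discussed next.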

2.2. Multiple Kernel Learning

Because different kernels correspond to different similarity concepts, or use features from different views, MKL can obtain a more complete representation of the input data by combining multiple kernels. In MKL, each instance is mapped into different feature spaces by a series of descriptors φ_m, m = 1, …, M [26], where φ_m(x_i) represents the feature from the m-th view of instance x_i, μ_m ≥ 0 is the corresponding weight, and M is the total number of predefined kernels. Then, any dot product term is substituted with the combined kernel:

k_μ(x_i, x_j) = Σ_{m=1}^{M} μ_m k_m(x_i, x_j),

where each base kernel function k_m is a positive definite kernel associated with an RKHS H_m. The purpose of MKL is to learn a resulting discriminant function of the form f(x) = Σ_{m=1}^{M} f_m(x) + b with f_m ∈ H_m.
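At the matrix level, the combined kernel above is just a convex combination of precomputed base kernel matrices. A simple illustration (not the article's implementation; names are our own):

```python
import numpy as np

def combine_kernels(Ks, mu):
    """Convex combination K_mu = sum_m mu_m * K_m, with mu on the simplex."""
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0), "mu must lie on the simplex"
    # Contract the weight vector against the stacked (M, n, n) kernel tensor.
    return np.tensordot(mu, np.stack(Ks), axes=1)
```

Because each K_m is SPSD and the weights are nonnegative, the combination is again a valid (SPSD) kernel matrix.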

Based on the aforementioned definition, the seminal work in MKL proposes the following structural risk minimization framework as the MKL primal problem, with the kernel weights μ on a simplex [27]:

min_{f, b, ξ, μ}  (1/2) Σ_{m=1}^{M} (1/μ_m) ‖f_m‖²_{H_m} + C Σ_{i=1}^{n} ξ_i
s.t.  y_i (Σ_{m=1}^{M} f_m(x_i) + b) ≥ 1 − ξ_i,  ξ_i ≥ 0,  Σ_{m=1}^{M} μ_m = 1,  μ_m ≥ 0,   (5)

where C is the regularization parameter of the error term and ξ_i is a slack variable. The L1-norm constraint on the weight vector enforces a sparse kernel combination. We assume f_m = 0 whenever μ_m = 0 in order to reach a finite objective. This implies that if the weight of a certain kernel reaches 0, the optimization of the corresponding f_m can stop, since its solution is known [28].

Although MKL is an ideal candidate for combining multiview data, scalability is a key issue for MKL: (1) the computation and memory costs for maintaining several kernel matrices are heavy and (2) the computational efficiency of MKL solvers is not high.

2.3. Standard Nyström Method

Let L = {x̂_1, …, x̂_m} denote a set of m landmark points randomly selected from D uniformly without replacement, C ∈ R^{n×m} the subkernel matrix between all instances and the landmark points, and W ∈ R^{m×m} the symmetric positive semidefinite (SPSD) subkernel matrix among the points in L. Then, the Nyström method uses C and W to generate a rank-k approximation K̃ of the kernel matrix K for k ≤ m [20]:

K̃ = C W_k⁺ C^T ≈ K,

where W_k is the best rank-k approximation to W with respect to the Frobenius norm, that is, W_k = argmin_{rank(V)=k} ‖W − V‖_F, and W_k⁺ denotes the pseudoinverse of W_k. Given the matrix C, the feature vector of each instance x_i can be evaluated as φ̃(x_i) = (W_k⁺)^{1/2} c_i, where c_i denotes the i-th row of C.

Calculate the singular value decomposition (SVD) of W as W = U Σ U^T, where U is orthonormal and Σ = diag(σ_1, …, σ_m) is diagonal with σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_m ≥ 0. Then, the final approximate decomposition of K̃ takes the following form:

K̃ = (C U_k Σ_k^{−1/2})(C U_k Σ_k^{−1/2})^T,   (8)

where Σ_k is the diagonal matrix formed by the top k singular values of W, and U_k is formed by the associated singular vectors.

The total time complexity of the Nyström method is O(m³ + nmk), including O(m³) for the SVD on W and O(nmk) for the matrix multiplication with C [29]. For m ≪ n, this is much lower than the O(n³) complexity taken by the SVD on the full kernel matrix K.
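The procedure above (uniform landmark sampling, SVD of the small matrix W, rank-k feature map) can be sketched in a few lines of NumPy; the function name and the small eigenvalue floor are our own assumptions, not the article's code.

```python
import numpy as np

def nystrom_features(X, kernel, m, k, seed=0):
    """Rank-k Nystrom feature map Phi with Phi @ Phi.T ~ K = kernel(X, X)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)    # uniform landmarks, no replacement
    C = kernel(X, X[idx])                         # n x m subkernel matrix
    W = C[idx]                                    # m x m SPSD landmark kernel
    s, U = np.linalg.eigh(W)                      # decompose only the small matrix
    s, U = s[::-1][:k], U[:, ::-1][:, :k]         # top-k spectrum (eigh is ascending)
    return C @ U / np.sqrt(np.maximum(s, 1e-12))  # n x k approximate features
```

With m = n and k = n the reconstruction is exact for a strictly positive definite kernel; in practice m ≪ n, which is where the savings come from.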

3. Proposed Algorithms

3.1. Multi-Nyström Method

We divide the imbalanced dataset D into the minority class set D⁺ and the majority class set D⁻. When there are irregularities in the imbalanced data (such as small disjuncts, overlapping, and noise [30]) and the data scale is large, applying a single kernel may make the model biased, skewed, or misleading. Inspired by the MKL algorithm [31], we construct a low-rank approximate multiple kernel framework as follows:

K̃_μ = Σ_{p=1}^{P} μ_p K̃_p,

where K̃_p corresponds to the rank-k approximation of each base kernel matrix K_p, and μ_p is the corresponding mixture weight. As for the Nyström method, a key aspect is the sampling scheme [32]. To reduce the sensitivity to skewness in the data, we adopt stratified undersampling of the majority class to select P subsets of landmark points, written as L = {L_1, …, L_P} with each |L_p| = m. The subkernel matrix between all instances and the landmark points in L_p can be expressed as

C_p = [k_p(x_i, x̂_j)] ∈ R^{n×m},

where x̂_j ∈ L_p and k_p is the p-th predefined kernel function. Then, we perform the standard Nyström method on each (C_p, W_p) independently to get a rank-k approximation K̃_p of each base kernel matrix K_p. Finally, by linearly combining these approximations, we get the general form of the approximate multiple kernel K̃_μ:

K̃_μ = Σ_{p=1}^{P} μ_p C_p W_{p,k}⁺ C_p^T.   (11)

Given the mixture weights μ, the feature vector of each instance x_i can be evaluated as the concatenation φ̃_μ(x_i) = [√μ_1 φ̃_1(x_i); …; √μ_P φ̃_P(x_i)].

Similarly, for the convenience of subsequent calculations, formula (11) can be rewritten as

K̃_μ = Σ_{p=1}^{P} μ_p Φ_p Φ_p^T,   (13)

where Φ_p = C_p U_{p,k} Σ_{p,k}^{−1/2} denotes the approximate decomposition of K̃_p obtained by (8). Figure 1 shows the proposed multi-Nyström method and includes an optimization process for the mixture weights, detailed further in the next subsection.

When the mixture weights are fixed or known, the total time complexity of the multi-Nyström method is O(P(m³ + nmk)). Although our method requires P times more CPU resources than the standard Nyström method, P is typically O(1) for large scale data, and our method can compute in parallel in a distributed computing environment. Moreover, decomposing the SVD of one large subkernel matrix into SVDs of P much smaller matrices also accelerates the calculation process.
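A minimal sketch of the combination step: each landmark subset yields its own Nyström feature block, and scaling block p by √μ_p makes the stacked features realize the mixture Σ_p μ_p K̃_p as a plain inner product. The landmark subsets are passed in by the caller (the article's stratified undersampling is not reproduced here), and all names are illustrative.

```python
import numpy as np

def one_nystrom(X, kernel, landmarks, k):
    """Rank-k Nystrom features for a single kernel and landmark subset."""
    C = kernel(X, X[landmarks])                   # n x m
    W = kernel(X[landmarks], X[landmarks])        # m x m SPSD
    s, U = np.linalg.eigh(W)
    s, U = s[::-1][:k], U[:, ::-1][:, :k]         # top-k spectrum
    return C @ U / np.sqrt(np.maximum(s, 1e-12))

def multi_nystrom_features(X, kernels, subsets, mu, k):
    """Stack sqrt(mu_p) * Phi_p so Phi @ Phi.T ~ sum_p mu_p * Ktilde_p."""
    blocks = [np.sqrt(w) * one_nystrom(X, kern, idx, k)
              for kern, idx, w in zip(kernels, subsets, mu)]
    return np.hstack(blocks)                      # n x (P * k) features
```

Each block can be computed independently, which is the parallelism the complexity discussion above relies on.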

3.2. Optimization to Mixture Weights

The purpose of MKL is to learn an optimal convex combination of a series of kernels during training. Based on the aforementioned definition, we propose an approximate multiple kernel learning framework for large scale imbalanced classification by modifying the original MKL framework in [26]:

min_μ J(μ)  s.t.  Σ_{p=1}^{P} μ_p = 1,  μ_p ≥ 0,   (14)

where

J(μ) = max_α  1^T α − (1/2) α^T Y K̃_μ Y α  s.t.  0 ≤ α ≤ C,  y^T α = 0,   (15)

where α is the vector of Lagrange multipliers and Y = diag(y). To avoid numerical instability caused by ill-conditioning [19], we substitute K̃_μ ← K̃_μ + λI, where λ is a small positive constant called the jitter factor. Moreover, to calculate the inverse of the approximate matrix without storing the complete matrix K̃_μ, we iteratively perform the following series of operations:

B_p = (B_{p−1}^{−1} + μ_p Φ_p Φ_p^T)^{−1},  p = 1, …, P,  with B_0 = λ^{−1} I,

where B_p is calculated using the SMW formula from the last result B_{p−1}. After performing the series of operations, we obtain B_P = (K̃_μ + λI)^{−1}.

Lemma 1 (see [33]). Let A ∈ R^{n×n} and C ∈ R^{r×r} both be invertible; then, the Sherman–Morrison–Woodbury (SMW) formula gives an explicit formula for the inverse of A + UCV:

(A + UCV)^{−1} = A^{−1} − A^{−1} U (C^{−1} + V A^{−1} U)^{−1} V A^{−1},

if C^{−1} + V A^{−1} U is invertible.
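A quick numerical check of the SMW identity, with A a cheap-to-invert diagonal matrix and a rank-2 update; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2
A = np.diag(rng.uniform(1.0, 2.0, n))   # cheap-to-invert base matrix
U = rng.normal(size=(n, r))
C = np.eye(r)
V = U.T

Ainv = np.diag(1.0 / np.diag(A))        # O(n) inverse of the diagonal part
inner = np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U)   # only an r x r inverse
smw = Ainv - Ainv @ U @ inner @ V @ Ainv                  # SMW right-hand side

direct = np.linalg.inv(A + U @ C @ V)   # direct n x n inverse for comparison
```

The point of the lemma in this context: each update in the iteration above only requires inverting a small matrix, never the full n × n one.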

We can find that when the mixture weights μ are known, formula (15) is the same as the dual problem of SVM. Hence, we have

∂J/∂μ_p = −(1/2) α*^T Y K̃_p Y α*,

where α* is the optimal solution of (15). With α* considered a constant, J can be regarded as a function of μ, and we calculate the gradient of the objective with respect to μ.

We use the reduced gradient method in [27] to deal with problem (14). First, to satisfy the L1-norm constraint on the weight vector in (14), we calculate the reduced gradient of J:

(∇_red J)_p = ∂J/∂μ_p − ∂J/∂μ_u,  p ≠ u,   (19)

where ∇_red J denotes the reduced gradient of J, μ_u is the largest element of the vector μ, and u is the corresponding index. Obviously, D = −∇_red J would be a descent direction:

D_p = −(∇_red J)_p,  μ_p > 0, p ≠ u.   (20)

However, if there exists an index p with μ_p = 0 and (∇_red J)_p > 0, then D_p < 0, which does not meet the nonnegativity restriction on μ_p. Therefore, such D_p needs to be set to 0. The updated descent direction is as follows:

D_p = 0 if μ_p = 0 and (∇_red J)_p > 0;  D_p = −(∇_red J)_p if μ_p > 0, p ≠ u;  D_u = Σ_{q≠u, μ_q>0} (∇_red J)_q.   (21)
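The descent-direction rules above can be sketched as follows, in the style of SimpleMKL; exact tie-breaking may differ from the article, so treat this as an illustrative approximation.

```python
import numpy as np

def mkl_descent_direction(mu, grad):
    """Reduced-gradient descent direction on the simplex {mu >= 0, sum(mu) = 1}."""
    mu, grad = np.asarray(mu, float), np.asarray(grad, float)
    u = int(np.argmax(mu))              # index of the largest weight
    red = grad - grad[u]                # reduced gradient w.r.t. the equality constraint
    D = -red
    D[(mu <= 0) & (red > 0)] = 0.0      # zero weights must not be pushed negative
    mask = np.arange(len(mu)) != u
    D[u] = -D[mask].sum()               # keep sum(mu) constant along D
    return D
```

By construction the direction sums to zero (so any step keeps the weights on the hyperplane) and has a nonpositive inner product with the gradient.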

In general, MKL uses a two-step training method, which requires frequent calls to SVM solvers and is prohibitive for large scale problems. Therefore, after each update of the direction D, we do not immediately substitute it into the SVM solver to update α*; instead, we continue to look for the maximum allowable step length in this descent direction until the objective function value stops declining. Finally, we obtain the optimal step length γ by a line search. The complete multi-Nyström method with MKL is summarized in Algorithm 1.

Input. Dataset D; number of landmark points m; rank k; number of kernels P; predefined kernel functions k_p.
Output. Classification result for instance x.
(1) Draw P subsets of balanced landmark points L = {L_1, …, L_P} with each |L_p| = m
(2) Calculate the subkernel matrices C_p between the instances in D and L_p and W_p among the instances in L_p with kernel k_p
(3) Calculate the singular value decomposition of each W_p
(4) Approximate Φ_p according to (8)
(5) Initialize mixture weights μ_p = 1/P
(6) While stopping criterion not met do
(7)  Calculate α* by an SVM solver with K̃_μ according to (15)
(8)  Calculate descent direction D according to (19)–(21)
(9)  Set μ† = μ, D† = D
(10)  While J(μ†) < J(μ) do
(11)   Set μ = μ†, D = D†
(12)   Set ν = argmin_{p: D_p < 0} (−μ_p/D_p), γ_max = −μ_ν/D_ν
(13)   Set μ† = μ + γ_max D, D†_u = D_u − D_ν, D†_ν = 0
(14)   Calculate J(μ†) with α*
(15)  end while
(16)  Line search along D for the optimal step γ
(17)  Assign μ ← μ + γD
(18) end while
3.3. Kernel Stability Analysis

In some previous related works, Nyström is usually considered a preprocessing method, and most studies only analyze the approximation error bounds without considering the impact of the approximation on the performance of the kernel machine. In the following, we analyze the kernel stability of our method, bounding the relative performance by the weighted kernel approximation error. This provides performance guarantees for our multi-Nyström approximation method in the context of large scale imbalanced classification.

Proposition 1. Let α* be the optimal solution of the kernel SVM with the original kernel matrix K, and let α̃* be the solution of the kernel SVM with the kernel matrix K̃_μ obtained by the multi-Nyström approximation. Then,

‖α* − α̃*‖₂ ≤ θ ‖α̃*‖₂ Σ_{p=1}^{P} μ_p ‖K_p − K̃_p‖₂,

where the smallest eigenvalue λ_min of K is assumed positive (ensuring the strong convexity required below), and θ is the constant from Hoffman's bound, independent of K and K̃_μ.

Proof. Define the projected gradient ∇⁺f(α) = α − Proj_Ω(α − ∇f(α)), where Ω = {α : 0 ≤ α ≤ C, y^T α = 0} is the bounded constraint set and Proj_Ω is the convex projection operator onto Ω. It can be used to define an error bound according to the following theorem:

Theorem 1 (see [34]). Let x̄ be the nearest optimal solution of the convex optimization problem

min_{x ∈ Ω} f(x),

with f being strongly convex, ∇f being Lipschitz continuous, and Ω a polyhedral set. The optimization problem admits a global error bound:

‖x − x̄‖ ≤ θ ‖∇⁺f(x)‖,

where θ is the constant from Hoffman's bound.

Considering now the problem with f(α) = (1/2) α^T Y K̃_μ Y α − 1^T α and the bounded constraint set Ω, and noting that this problem is equivalent to problem (15) (K̃_μ is SPSD, so f is convex), we have

‖α − α̃*‖ ≤ θ ‖∇⁺f(α)‖.

Let B_K(α) be the dual objective function of the multiple kernel learning problem (5) with the original kernel K, and let B_K̃(α) be the objective function of the approximate multiple kernel learning problem with the kernel K̃_μ obtained by our multi-Nyström method (13). Consider now α* and α̃* as the optimal solutions of B_K and B_K̃, respectively. We have

|B_K̃(α) − B_K(α)| = (1/2) |α^T Y (K − K̃_μ) Y α| ≤ (1/2) ‖α‖₂² ‖K − K̃_μ‖₂,

where we use the fact that K − K̃_μ = Σ_{p=1}^{P} μ_p (K_p − K̃_p); therefore,

‖K − K̃_μ‖₂ ≤ Σ_{p=1}^{P} μ_p ε_p,

where ε_p = ‖K_p − K̃_p‖₂ is the spectral norm error of the p-th Nyström approximation based on the p-th subset of landmark points.

Furthermore, we use the inequality for the kernel SVM given by [35] (proof of Theorem 2) along with Theorem 1 to upper bound the norm difference between the optimal solutions of B_K and B_K̃:

‖α* − α̃*‖₂ ≤ θ ‖∇⁺f(α̃*)‖₂ ≤ θ ‖Y (K − K̃_μ) Y α̃*‖₂ ≤ θ ‖α̃*‖₂ Σ_{p=1}^{P} μ_p ε_p.

The proposition shows that the norm difference is controlled by a weighted Nyström approximation error, which guides us to focus on approximating the kernel matrices with greater weights in order to obtain better learning performance.

4. Experiments

In this section, in order to validate the efficiency of the proposed method in solving large scale imbalanced problems, we compare our method against kernel methods including SVM and MKSVM (multiple kernel SVM), as well as the Nyström approximation method. All experiments are implemented on a PC with Intel quad-core i7-8565U CPU@1.80 GHz and 8 GB memory.

4.1. Implementation

We implement our experiments on four real-world imbalanced datasets from the KEEL data repository (https://keel.es/) and the LIBSVM archive (https://www.csie.ntu.edu.tw/cjlin/libsvmtools/datasets/) (Table 1). For a fair comparison, we perform 10 runs of stratified 5-fold cross-validation and report the average result. We use LIBSVM (https://www.csie.ntu.edu.tw/cjlin/libsvm/index.html) and SimpleMKL (https://asi.insa-rouen.fr/enseignants/arakoto/code/mklindex.html) to run kernel SVM and MKSVM, respectively. As the kernel type, all experiments use the Gaussian kernel, with the bandwidth chosen from a predefined range. Because we are interested in relative performance, we empirically set the trade-off parameter C = 100. In this study, we adopt the following three evaluation measures of classification performance on imbalanced datasets: F1 score, G-mean, and area under the ROC curve (AUC):

F1 = 2TP / (2TP + FP + FN),
G-mean = sqrt((TP / (TP + FN)) × (TN / (TN + FP))),

where TP, TN, FP, and FN represent the number of true-positive, true-negative, false-positive, and false-negative instances, respectively. The F1 score measures the classification performance on the minority class. G-mean reflects the overall classification performance. AUC works well for comparing performance between algorithms [36].
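The F1 and G-mean measures above can be computed directly from the confusion counts; a small sketch (minority class encoded as 1, function name illustrative):

```python
import numpy as np

def imbalance_metrics(y_true, y_pred):
    """F1 and G-mean from confusion counts, treating the minority class as positive."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    recall = tp / (tp + fn) if tp + fn else 0.0        # sensitivity on the minority class
    precision = tp / (tp + fp) if tp + fp else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0   # accuracy on the majority class
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    gmean = np.sqrt(recall * specificity)              # balances both classes
    return f1, gmean
```

Unlike plain accuracy, both measures collapse to 0 when the minority class is ignored entirely, which is why they are preferred for skewed data.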


Dataset | Features | Instances | IR

Poker-8-9_vs_5 | 10 | 2075 | 82
Abalone19 | 8 | 4174 | 129.44
Page-blocks0 | 10 | 5472 | 8.79
USPS (class 9 against all) | 256 | 9298 | 12.13

4.2. Experimental Results

Table 2 provides the average experimental results of the proposed method and the other three algorithms on the four imbalanced datasets using the above three measures. We first compare SVM and the standard Nyström method. The Nyström method uses uniform sampling without replacement to approximate the kernel matrix, which relieves the model's sensitivity to class imbalance to a certain extent. For example, on the Poker-8-9_vs_5 dataset, in terms of G-mean, the Nyström method improves nearly 7 times over SVM. However, we can also see that in terms of AUC and F1 score, there still exists a large gap in model accuracy as compared with SVM.


Datasets | Measures | SVM | Nyström | Multi-Nyström | MKSVM

Poker-8-9_vs_5 | F1 | 0.0571 | 0.0327 | 0.0585 | 0.1906
 | G-mean | 0.0399 | 0.3357 | 0.5140 | 0.1589
 | AUC | 0.8107 | 0.6106 | 0.7953 | 0.7942

Abalone19 | F1 | 0.0611 | 0.0200 | 0.0334 | 0.0569
 | G-mean | 0.0661 | 0.2914 | 0.3500 | 0.0661
 | AUC | 0.7487 | 0.5203 | 0.6094 | 0.7263

Page-blocks0 | F1 | 0.8061 | 0.7954 | 0.8171 | 0.8342
 | G-mean | 0.7018 | 0.7292 | 0.7115 | 0.7381
 | AUC | 0.9857 | 0.9753 | 0.9585 | 0.9904

USPS | F1 | 0.8991 | 0.6688 | 0.8853 | 0.9102
 | G-mean | 0.8788 | 0.8593 | 0.8408 | 0.8807
 | AUC | 0.9939 | 0.9608 | 0.9874 | 0.9963

Next, we compare our multi-Nyström method with the standard Nyström method. The experimental results clearly demonstrate that our method outperforms the Nyström method, especially in the context of extreme imbalance. This mainly benefits from undersampling the majority class, which effectively balances the class distribution. Moreover, it can be seen that multi-Nyström improves the accuracy of the model. For example, with the same number of landmark points, the F1 score and AUC of multi-Nyström on the USPS dataset are close to those of SVM, and even higher on the Poker-8-9_vs_5 and Page-blocks0 datasets.

Note that our method is also a type of approximation of MKL; finally, we also examine the performance of MKL-based MKSVM. From the results, we can see the benefit of using MKL to represent the input data, which also implicitly explains why our method achieves better accuracy than the standard Nyström method at the expense of more computation.

4.3. Discussion

In this part, we further discuss the impact of different parameters on performance. In the first experiment, to study the impact of the number of sampled landmark points on classification performance, we fix the approximate rank parameter and successively increase the number of sampled landmark points, then train and test the SVM model on the four datasets; the results are shown in Figure 2. We can see that as the number of sampled landmark points increases, despite some fluctuations, the performance of our method and of Nyström still presents a rising trend. Moreover, except for a few cases, our method uses fewer landmark points and can still yield a higher G-mean.

In the second experiment, we study the performance under variation of the rank parameter. Figure 3 shows the G-mean on the four datasets as the approximate rank varies. With the same approximate kernel rank, our method achieves better classification performance than the others.

Finally, we compare the running time of our method and MKSVM. We report the results on the USPS and Page-blocks0 datasets in Figure 4. The results show that our method can significantly speed up the MKL process while maintaining performance. For example, on the USPS dataset, our method reduces the running time by more than one order of magnitude. The main reason is the low-rank structure of the approximate kernel matrix, which speeds up the MKL algorithm.

For further analysis of the experimental results, we perform the Friedman test with respect to the F1 score. First, we calculate the average ranks of SVM, Nyström, multi-Nyström, and MKSVM, as shown in Figure 5. It can be noticed that MKSVM gives the best performance, while SVM and the proposed multi-Nyström rank similarly. In a comparison of k = 4 algorithms on N = 4 datasets, with R_j the average rank of the j-th algorithm, the Friedman statistic can be calculated as follows:

χ²_F = (12N / (k(k + 1))) (Σ_j R_j² − k(k + 1)²/4),

with

F_F = (N − 1) χ²_F / (N(k − 1) − χ²_F),

where F_F is distributed according to the F-distribution with k − 1 = 3 and (k − 1)(N − 1) = 9 degrees of freedom. The critical value of F(3, 9) is 3.8625 for α = 0.05. Since the computed F_F exceeds this value, we can reject the null hypothesis that all the algorithms have the same performance. Then, we perform the Nemenyi test to compare the algorithms pairwise. The critical difference is calculated as follows:

CD = q_α sqrt(k(k + 1) / (6N)),

considering k = 4 and N = 4. The differences between the average ranks of SVM, Nyström, and multi-Nyström and that of MKSVM are 1.0, 2.75, and 1.25, respectively. Hence, we can state that the best-performing MKSVM is significantly better than Nyström at α = 0.05. However, the difference between MKSVM and the proposed multi-Nyström is not significant, which indicates that the proposed method achieves better performance than the standard Nyström kernel classifier and more efficiency than the best MKSVM.
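The Friedman/Iman–Davenport statistic and the Nemenyi critical difference above can be computed as follows; the default q_0.05 = 2.569 is the standard Studentized-range value for k = 4, and the rank vector used in the check is illustrative, not the values from Figure 5.

```python
import numpy as np

def friedman_F(ranks, N):
    """Iman-Davenport F statistic from average ranks of k algorithms on N datasets."""
    ranks = np.asarray(ranks, dtype=float)
    k = len(ranks)
    chi2 = 12.0 * N / (k * (k + 1)) * (np.sum(ranks**2) - k * (k + 1)**2 / 4.0)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)

def nemenyi_cd(k, N, q_alpha=2.569):
    """Critical difference for the Nemenyi post-hoc test."""
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
```

For k = 4 and N = 4, the critical difference comes out to about 2.35, which matches the pairwise comparison above: only the Nyström-vs-MKSVM rank gap of 2.75 exceeds it.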

5. Conclusions

In this study, we propose a novel method to overcome the time and memory limitations of the standard Nyström method and extend it to large scale imbalanced classification. In general, kernel approximation and model training are carried out separately. To obtain more accurate results, our method mixes multiple Nyström approximations and embeds them in the model training process to learn the model parameters and mixture weights simultaneously. In particular, the approximate kernel matrix yielded by our method is low rank and balanced. We also provide an error bound on the model solution under our approximation to guide us in improving the learning process. Experimental results show that our method can achieve higher classification accuracy and, at the same time, dramatically improve the efficiency of existing MKL algorithms.

Potential improvements: there are still some caveats in our current solution. For example, due to the curse of kernelization, the number of support vectors grows without bound as nonzero loss is suffered. This significantly increases the computational cost and can be infeasible for large scale problems. Future work will chiefly focus on more efficient variants of multi-Nyström involving budget kernel learning to address this issue.

Data Availability

The data used to support the findings of this study have been deposited in the KEEL repository (http://keel.es/) and the LIBSVM archive (https://www.csie.ntu.edu.tw/cjlin/libsvmtools/datasets/).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was jointly supported by the National Natural Science Foundation of China (61403397) and Natural Science Basic Research Plan in Shaanxi Province of China (2020JM-358 and 2015JM6313).

References

1. C. Huang, Y. Li, C. Change Loy, and X. Tang, "Learning deep representation for imbalanced classification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5375–5384, Las Vegas, NV, USA, June 2016.
2. J. Zhu and E. Hovy, "Active learning for word sense disambiguation with methods for addressing the class imbalance problem," in Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 783–790, Prague, Czech Republic, June 2007.
3. W. Sun, J. Sun, Y. Zhu, and Y. Zhang, "Video super-resolution via dense non-local spatial-temporal convolutional network," Neurocomputing, vol. 403, pp. 1–12, 2020.
4. N. V. Chawla, "Data mining for imbalanced datasets: an overview," in Data Mining and Knowledge Discovery Handbook, pp. 875–886, Springer, Berlin, Germany, 2009.
5. X. Luo, J. Sun, L. Wang et al., "Short-term wind speed forecasting via stacked extreme learning machine with generalized correntropy," IEEE Transactions on Industrial Informatics, vol. 14, no. 11, pp. 4963–4971, 2018.
6. Y. Sun, A. K. C. Wong, and M. S. Kamel, "Classification of imbalanced data: a review," International Journal of Pattern Recognition and Artificial Intelligence, vol. 23, no. 4, pp. 687–719, 2009.
7. M. Galar, A. Fernandez, E. Barrenechea, H. Bustince, and F. Herrera, "A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 4, pp. 463–484, 2011.
8. G. Haixiang, L. Yijing, J. Shang, G. Mingyun, H. Yuanyue, and G. Bing, "Learning from class-imbalanced data: review of methods and applications," Expert Systems with Applications, vol. 73, pp. 220–239, 2017.
9. N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
10. H. He, Y. Bai, E. A. Garcia, and S. Li, "ADASYN: adaptive synthetic sampling approach for imbalanced learning," in Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pp. 1322–1328, IEEE, Hong Kong, China, June 2008.
11. R. Batuwita and V. Palade, "FSVM-CIL: fuzzy support vector machines for class imbalance learning," IEEE Transactions on Fuzzy Systems, vol. 18, no. 3, pp. 558–571, 2010.
12. H. Yu, C. Sun, X. Yang, S. Zheng, and H. Zou, "Fuzzy support vector machine with relative density information for classifying imbalanced data," IEEE Transactions on Fuzzy Systems, vol. 27, no. 12, pp. 2353–2367, 2019.
13. X. Hong, S. Chen, and C. J. Harris, "A kernel-based two-class classifier for imbalanced data sets," IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 28–41, 2007.
14. Y. Tang, Y.-Q. Zhang, N. V. Chawla, and S. Krasser, "SVMs modeling for highly imbalanced classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 1, pp. 281–288, 2008.
15. A. Maratea, A. Petrosino, and M. Manzo, "Adjusted F-measure and kernel scaling for imbalanced data learning," Information Sciences, vol. 257, pp. 331–341, 2014.
16. J. Mathew, C. K. Pang, M. Luo, and W. H. Leong, "Classification of imbalanced data by oversampling in kernel space of support vector machines," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 9, pp. 4065–4076, 2017.
17. G. Wu and E. Y. Chang, "KBA: kernel boundary alignment considering imbalanced data distribution," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 6, pp. 786–795, 2005.
18. B. Tang and H. He, "KernelADASYN: kernel based adaptive synthetic data generation for imbalanced learning," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 664–671, IEEE, Sendai, Japan, May 2015.
19. C. Williams and M. Seeger, "Using the Nyström method to speed up kernel machines," Advances in Neural Information Processing Systems, vol. 13, pp. 682–688, 2000.
20. P. Drineas and M. W. Mahoney, "On the Nyström method for approximating a Gram matrix for improved kernel-based learning," Journal of Machine Learning Research, vol. 6, pp. 2153–2175, 2005.
21. C. Musco and C. Musco, "Recursive sampling for the Nyström method," in Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 3836–3848, Long Beach, CA, USA, December 2017.
22. S. Kumar, M. Mohri, and A. Talwalkar, "Ensemble Nyström method," Advances in Neural Information Processing Systems, vol. 22, pp. 1060–1068, 2009.
23. Z. Li, T. Yang, L. Zhang, and R. Jin, "Fast and accurate refined Nyström-based kernel SVM," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, Phoenix, AZ, USA, February 2016.
24. T. Hofmann, B. Schölkopf, and A. J. Smola, "Kernel methods in machine learning," The Annals of Statistics, vol. 36, no. 3, pp. 1171–1220, 2008.
25. V. Vapnik, The Nature of Statistical Learning Theory, Springer Science & Business Media, Berlin, Germany, 2013.
26. G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," Journal of Machine Learning Research, vol. 5, pp. 27–72, 2004.
27. A. Rakotomamonjy, F. R. Bach, S. Canu, and Y. Grandvalet, "SimpleMKL," Journal of Machine Learning Research, vol. 9, pp. 2491–2521, 2008.
28. M. Alioscha-Perez, M. C. Oveneke, and H. Sahli, "SVRG-MKL: a fast and scalable multiple kernel learning solution for features combination in multi-class classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 5, pp. 1710–1723, 2019.
29. S. Kumar, M. Mohri, and A. Talwalkar, "Sampling techniques for the Nyström method," in Proceedings of Artificial Intelligence and Statistics, pp. 304–311, Clearwater Beach, FL, USA, April 2009.
30. L. Wang, H. Wang, and G. Fu, "Multiple kernel learning with minority oversampling for classifying imbalanced data," IEEE Access, vol. 9, pp. 565–580, 2021.
31. M. Gönen and E. Alpaydın, "Multiple kernel learning algorithms," The Journal of Machine Learning Research, vol. 12, pp. 2211–2268, 2011.
32. S. Kumar, M. Mohri, and A. Talwalkar, "Sampling methods for the Nyström method," The Journal of Machine Learning Research, vol. 13, no. 1, pp. 981–1006, 2012.
33. C. Y. Deng, "A generalization of the Sherman-Morrison-Woodbury formula," Applied Mathematics Letters, vol. 24, no. 9, pp. 1561–1564, 2011.
34. P.-W. Wang and C.-J. Lin, "Iteration complexity of feasible descent methods for convex optimization," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1523–1548, 2014.
35. C.-J. Hsieh, S. Si, and I. S. Dhillon, "Fast prediction for large-scale kernel machines," in Proceedings of Neural Information Processing Systems, pp. 3689–3697, Montreal, Quebec, Canada, December 2014.
36. S. Barua, M. M. Islam, X. Yao, and K. Murase, "MWMOTE—majority weighted minority oversampling technique for imbalanced data set learning," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 2, pp. 405–425, 2012.

Copyright © 2021 Ling Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
