Complexity / 2021 / Article ID 6877284 / https://doi.org/10.1155/2021/6877284
Special Issue: Complex System Modelling in Engineering Under Industry 4.0

Research Article | Open Access

Liping Chen, Jiabao Jiang, Yong Zhang, "HSDP: A Hybrid Sampling Method for Imbalanced Big Data Based on Data Partition", Complexity, vol. 2021, Article ID 6877284, 9 pages, 2021. https://doi.org/10.1155/2021/6877284

HSDP: A Hybrid Sampling Method for Imbalanced Big Data Based on Data Partition

Academic Editor: Huihua Chen
Received: 30 Apr 2021
Accepted: 06 Jun 2021
Published: 22 Jun 2021

Abstract

Classical classifiers are ineffective at classifying imbalanced big datasets. Resampling the dataset to balance the sample distribution before training the classifier is one of the most popular approaches to this problem. An effective and simple hybrid sampling method based on data partition (HSDP) is proposed in this paper. First, all the data samples are partitioned into different data regions. Then, the samples in the noise minority samples region are removed, and the samples in the boundary minority samples region are selected as oversampling seeds used to generate synthetic samples. Finally, a weighted oversampling process is conducted, with each synthetic sample generated inside the same cluster as its oversampling seed. The weight of each selected minority class sample is computed as the ratio between the proportion of majority class samples among this sample's neighbors and the sum of all such proportions. Generating synthetic samples within the same cluster as the oversampling seed guarantees that the new synthetic samples lie inside the minority class area. Experiments conducted on eight datasets show that the proposed method, HSDP, is better than or comparable with typical sampling methods in terms of F-measure and G-mean.

1. Introduction

In the era of big data, the tremendous amount of data generated by various real-world applications brings challenges to data mining. Among these challenges, the classification of imbalanced datasets has drawn interest in various application areas. A dataset is imbalanced when the number of samples in one category is much smaller than the number of samples in the other categories. If the samples come from two classes, the class with more samples is called the majority class and the class with fewer samples is called the minority class. Our research focuses on the binary (two-class) classification problem, in which the prediction of minority samples is more important, because the cost of misclassifying a minority sample is greater than the cost of misclassifying a majority sample. Binary classification of imbalanced data arises in various applications, such as medical diagnosis.

Most classifiers aim at maximizing the overall classification accuracy on a dataset. Therefore, when classifying imbalanced data, the classifier is biased toward the classification accuracy of the majority samples, causing low classification accuracy on the minority class. In addition, an imbalanced dataset combined with other difficulty factors, such as class overlapping, the presence of outliers, and small disjuncts, is even harder for the classifier when predicting the minority class [1]. Figure 1(a) shows the skewed distribution between classes, Figure 1(b) shows class overlapping, and Figure 1(c) shows the small disjuncts of the minority class. Therefore, how to improve the classification accuracy on minority samples while maintaining the overall classification performance of the classifier on imbalanced data is an urgent problem to be solved.

The remainder of this paper is organized as follows. Section 2 presents related works. Section 3 describes the proposed HSDP method. Section 4 introduces the experimental settings. Section 5 presents the experimental results and compares our approach with some typical techniques. Finally, the conclusion is drawn in Section 6.

2. Related Work

The techniques proposed to improve classification of imbalanced data fall into two major groups: data-level methods and algorithm-level methods. Algorithm-level methods modify the classifier in order to improve accuracy on imbalanced data; they mainly include cost-sensitive methods and ensemble learning methods. Data-level methods mainly include undersampling of the majority class [2] and oversampling of the minority class. In contrast to algorithm-level methods, data-level methods help enhance the generalization ability of the model, and oversampling methods have the additional advantage of not losing sample information [3, 4]. SMOTE is one of the most popular oversampling algorithms [5]. SMOTE first selects a random seed x from the minority samples and then randomly selects a sample y among its k nearest neighbors in the same class. Finally, a new synthetic sample s is generated by linear interpolation:

s = x + gap × (y − x),

where gap is a random number between 0 and 1.
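The SMOTE interpolation step above can be sketched in a few lines of Python. This is a minimal illustration of the linear interpolation only (the function name `smote_interpolate` is ours, not part of any SMOTE implementation):

```python
import numpy as np

def smote_interpolate(x, y, rng=None):
    """Generate one synthetic sample on the segment between minority
    samples x and y: s = x + gap * (y - x), with gap ~ U(0, 1)."""
    rng = np.random.default_rng(rng)
    gap = rng.random()
    return x + gap * (y - x)

x = np.array([1.0, 2.0])
y = np.array([3.0, 6.0])
s = smote_interpolate(x, y, rng=0)
# s lies somewhere on the line segment between x and y
```

Because gap is drawn uniformly from [0, 1], every synthetic sample lies on the segment joining the seed and its chosen neighbor, which is exactly why a badly chosen neighbor can place the sample in the majority region.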

Although the SMOTE algorithm has shown successful performance in various classification scenarios, it also has some weaknesses: (1) if a noisy sample is selected as the seed, the oversampling process may generate more noisy samples; (2) it does not consider the data distribution when generating synthetic samples, thereby increasing the overlap between classes [6]; (3) it oversamples uninformative minority samples because it chooses seeds with uniform probability, although minority samples in the boundary area contain more information than those far from the boundary [7]. Therefore, researchers have proposed several improved versions of SMOTE. The Borderline-SMOTE algorithm [8] oversamples the borderline minority samples. However, Borderline-SMOTE sometimes generates new synthetic samples in unsuitable areas, such as noise regions and overlapped areas. The ADASYN algorithm [9] pays more attention to minority samples that are difficult to learn; it adaptively generates minority samples according to the ratio of majority class samples among each sample's neighbors. The K-means-SMOTE algorithm [10] combats both between-class and within-class imbalance, but it does not provide a strategy for determining the optimal number of clusters k, which has a great impact on the performance of oversampling. The MWMOTE technique [11] analyzes the hard-to-learn minority samples and assigns them weights according to their importance in learning.

In summary, the methods above mitigate some of the problems of SMOTE, but none of them effectively solves all three. The proposed hybrid sampling method based on data partition therefore attempts to overcome all three problems: it selects proper minority samples for oversampling and improves the synthetic sample generation scheme, which covers both the number of synthetic samples generated for each selected minority sample and the control of where the generated samples lie in the data space.

3. Proposed Method

3.1. Overview

Data samples present different distribution characteristics in the data space, and this distribution can be taken into account when undersampling or oversampling. Using different sampling methods in different regions may improve classification performance, so we propose the hybrid sampling method for imbalanced data based on data partitioning (HSDP). The method consists of four stages: (1) partitioning the input imbalanced data space into five regions; (2) removing the samples in the noise minority samples region; (3) clustering the minority samples with an agglomerative hierarchical clustering method; (4) the oversampling process. In the first stage, the data space is divided into five regions: the boundary minority samples region, the noise minority samples region, the safe minority samples region, the boundary majority samples region, and the safe majority samples region. The first two stages are performed because our aim is to oversample the borderline minority samples while ignoring the noisy ones; the basic idea is that borderline samples are apt to be misclassified. In the third stage, clustering the minority samples ensures that the generated samples lie inside the minority class regions. In the fourth stage, the oversampling process adaptively generates synthetic samples for borderline minority samples within the same cluster as the oversampling seed, which guarantees that the generated samples are located inside the minority class regions.

3.2. Data Partition

According to the proportion of minority samples in neighborhoods of each minority sample, the data space is divided into five regions [12]: the boundary minority samples region, the noise minority samples region, the safe minority samples region, the boundary majority samples region, and the safe majority samples region, as shown in Figure 2.

Given an imbalanced training dataset S and the minority class label class(min), the training dataset is first divided into a majority class set S_maj and a minority class set S_min. Then, for each sample x_i in S_min, its k nearest neighbors are found with the K-nearest-neighbor algorithm. Next, among these k neighbors, the number num of minority class samples is counted, and the majority class samples among the neighbors are put into the boundary majority samples region R_border_maj. Finally, each minority sample is assigned to a region by judging num (the thresholds below follow the usual borderline/noise convention): if num = 0, x_i is added to the noise minority samples region R_noise; if num ≥ k/2, x_i is added to the safe minority samples region R_safe_min; otherwise (0 < num < k/2), x_i is added to the boundary minority samples region R_border_min. The safe majority samples region R_safe_maj and the dataset S with the samples in R_noise removed are determined at the end. The DP algorithm for data partitioning is described as follows (Algorithm 1):

Input: imbalanced training dataset S, number of nearest neighbors k, minority class label class(min)
Output: five regions R_noise, R_safe_min, R_border_min, R_border_maj, R_safe_maj, and dataset S without the samples in R_noise
Procedure:
(1) for each sample x in S
(2)   if class(x) == class(min)            // label of x is the minority class
(3)     S_min = S_min ∪ {x}               // add x to the minority class set
(4)   else
(5)     S_maj = S_maj ∪ {x}               // add x to the majority class set
(6)   end if
(7) end for
(8) for each sample x_i in S_min
(9)   list = kNN(x_i, S, k)               // find the k nearest neighbors of x_i
(10)  num = 0                             // number of minority class neighbors
(11)  for each sample z in list
(12)    if class(z) == class(min)
(13)      num = num + 1
(14)    else
(15)      R_border_maj = R_border_maj ∪ {z}  // majority neighbor of a minority sample
(16)    end if
(17)  end for
(18)  if num >= k/2
(19)    R_safe_min = R_safe_min ∪ {x_i}      // safe minority samples region
(20)  else if num == 0
(21)    R_noise = R_noise ∪ {x_i}            // noise minority samples region
(22)  else
(23)    R_border_min = R_border_min ∪ {x_i}  // boundary minority samples region
(24)  end if
(25) end for
(26) for each sample x in S_maj
(27)   if x ∉ R_border_maj
(28)     R_safe_maj = R_safe_maj ∪ {x}       // safe majority samples region
(29)   end if
(30) end for
(31) for each sample x in S
(32)   if x ∈ R_noise
(33)     S = S \ {x}   // delete noise minority samples; retain samples from other regions
(34)   end if
(35) end for
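The partitioning of the minority samples can be sketched in Python. This is a hedged illustration, not the authors' code: it uses brute-force nearest neighbors and the borderline/noise thresholds described above (the function and region names are ours):

```python
import numpy as np

def partition_minority(X, y, minority_label, k=5):
    """Label each minority sample as 'noise' (no minority neighbors),
    'safe' (at least k/2 minority neighbors), or 'boundary' (otherwise),
    using brute-force k-nearest neighbors with Euclidean distance."""
    regions = {}
    for i in np.flatnonzero(y == minority_label):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the sample itself
        nbrs = np.argsort(d)[:k]
        n_min = int(np.sum(y[nbrs] == minority_label))
        if n_min == 0:
            regions[i] = "noise"
        elif n_min >= k / 2:
            regions[i] = "safe"
        else:
            regions[i] = "boundary"
    return regions

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],   # tight minority cluster
              [10, 10],                          # isolated minority sample
              [10, 11], [11, 10], [9, 10], [10, 9], [11, 11]])  # majority
y = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
regions = partition_minority(X, y, minority_label=1, k=4)
# the clustered minority points come out 'safe'; the isolated one 'noise'
```

Samples labeled "noise" would then be deleted from S, and the "boundary" samples become the oversampling seeds.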
3.3. Clustering Minority Class Samples Based on Hierarchical Clustering

Most existing oversampling methods are K-NN based. When generating a synthetic sample from the minority class sample B with k = 5, sample A may be chosen as the interpolation partner (as shown in Figure 3). In this way, the generated synthetic sample (shown as a square) may fall inside the majority class region.

Our proposed method instead chooses the partner sample from the same cluster (Cluster1) as B. This ensures that A will not be chosen, because A and B are not in the same cluster. Thus, the oversampling process is performed in a safe range, and the generated minority samples are guaranteed to lie inside the minority class region.

The agglomerative hierarchical clustering algorithm is used to cluster the minority class samples in this work. Its key steps are as follows:
(1) Initially assign each data sample to its own cluster.
(2) Find the two closest clusters and merge them into a single cluster, reducing the total number of clusters by one.
(3) Compute the distance between the newly formed cluster and all remaining clusters.
(4) Repeat steps 2-3 until a termination condition is reached: either a preset number of clusters or a distance threshold.
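The four steps above can be sketched as plain agglomerative clustering with a distance threshold. This is a minimal single-linkage illustration (a quadratic-time sketch, not the modified MDH algorithm and not the authors' implementation):

```python
import numpy as np

def agglomerative_cluster(X, threshold):
    """Agglomerative clustering sketch (single linkage): start with one
    cluster per sample and repeatedly merge the two closest clusters
    until the smallest inter-cluster distance exceeds the threshold.
    Returns a list of index lists, one per cluster."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > 1:
        best = (np.inf, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single-linkage distance between clusters a and b
                d = min(np.linalg.norm(X[i] - X[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        if d > threshold:                  # termination condition
            break
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
clusters = agglomerative_cluster(X, threshold=2.0)
# two well-separated groups remain as two clusters
```

The MDH modification described next adds one extra check before each merge: the distances from the majority clusters to the two candidate minority clusters.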

However, when using the agglomerative hierarchical clustering algorithm on the minority class samples, the decision of whether two minority clusters should be merged must consider not only the distance between the minority clusters but also the distribution of the majority class samples. Suppose the distance between two minority class clusters is d0, and the distances from a certain majority class cluster to these two minority clusters are d1 and d2, respectively, where d1 < d0 and d2 < d0. Then the majority cluster lies between the two minority clusters, and these two minority class clusters cannot be merged.

Therefore, we modify the agglomerative hierarchical clustering algorithm. First, agglomerative hierarchical clustering is applied to the majority class samples to obtain the majority cluster set C_maj. Then the minority class samples are clustered. The minority class clustering algorithm based on hierarchical clustering [13] (MDH) is described below (Algorithm 2).

Input: majority class cluster set C_maj, minority class set without the noise region samples, distance threshold T
Output: minority class cluster set C_min
Process:
(1) d = 0                      // minimum distance between minority clusters
(2) for each remaining minority sample x_i
(3)   C_i = {x_i}              // initialize each minority sample as its own cluster
(4) end for
(5) while d <= T
(6)   D = distance matrix between the minority clusters
(7)   (d, u, v) = the minimum entry of D and the corresponding cluster indices
(8)   merge = true
(9)   for each majority cluster C_j in C_maj
(10)    if dist(C_j, C_u) < d and dist(C_j, C_v) < d
(11)      merge = false        // a majority cluster lies between C_u and C_v;
                               // mark the pair (C_u, C_v) as unmergeable
(12)    end if
(13)  end for
(14)  if merge
(15)    C_u = C_u ∪ C_v        // merge C_u and C_v into a single cluster
(16)    remove C_v             // reduce the total number of clusters by one
(17)  end if
(18) end while

Because the distance threshold T is the termination condition of the clustering process, its setting is particularly critical. In this work, T is computed as

T = r × d_avg,

where d_avg is the average distance from each minority class sample to every other minority sample in the set S. The parameter r is used to tune the granularity of the clustering output; the specific value of r is analyzed in Section 5.
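Assuming the threshold is the product of r and the average pairwise distance among the minority samples, it can be computed as follows (a sketch under that assumption; the function name is ours):

```python
import numpy as np

def cluster_threshold(X_min, r):
    """Threshold T = r * d_avg, where d_avg is the average pairwise
    Euclidean distance among the minority samples."""
    n = len(X_min)
    diffs = X_min[:, None, :] - X_min[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # average over the n*(n-1) ordered pairs, excluding self-distances
    d_avg = dists.sum() / (n * (n - 1))
    return r * d_avg

X_min = np.array([[0.0, 0.0], [0.0, 2.0], [0.0, 4.0]])
T = cluster_threshold(X_min, r=1.0)   # d_avg = (2 + 4 + 2) * 2 / 6 = 8/3
```

A larger r yields a larger threshold, so the clustering merges further and produces fewer, larger clusters.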

3.4. Description of Hybrid Sampling Algorithm Based on Data Partition

We propose a hybrid sampling algorithm based on data partition. First, the boundary minority samples region is obtained by the DP algorithm. Then, the total number of synthetic samples to generate in the boundary region is calculated. Next, the weight of each sample in the boundary minority samples region is computed as the ratio between the proportion of majority class samples among this sample's neighbors and the sum of all such proportions. Finally, for each sample x_i in the boundary minority samples region, its share of synthetic samples is generated within the same cluster as x_i. The hybrid sampling algorithm based on data partition (HSDP) is implemented as follows (Algorithm 3):

Input: imbalanced dataset S
Output: balanced dataset S
Process:
Step 1: obtain R_noise, R_safe_min, R_border_min, R_border_maj, R_safe_maj by the DP algorithm.
Step 2: count the number m of majority samples (in R_border_maj and R_safe_maj) and the number n of remaining minority samples (in R_safe_min and R_border_min). Meanwhile, count the number s of samples in R_border_min.
Step 3: calculate the number of synthetic samples to generate for the minority class: G = (m − n) × b, where b ∈ (0, 1] is the synthesis scaling factor; b = 1 means a balanced dataset is obtained after the oversampling process.
Step 4: for each sample x_i in R_border_min, calculate the ratio of majority class samples among the k neighbors of x_i: r_i = Δ_i / k, where Δ_i is the number of majority class samples among the k neighbors.
Step 5: the weight of x_i is determined by w_i = r_i / Σ_j r_j.
Step 6: calculate the number of synthetic samples for each sample x_i in the boundary minority samples region: g_i = w_i × G.
Step 7: for each sample x_i in the boundary minority samples region, generate g_i synthetic samples according to the following steps:
 Do the loop from 1 to g_i:
(a)  Randomly select another sample y from the same cluster as x_i
(b)  Generate a synthetic sample:
  s = x_i + gap × (y − x_i),
    where gap is a random number between 0 and 1
 End loop
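The weighting and allocation in Steps 4-6 can be sketched as follows. This is an illustrative reading of the ratio-based weights (function name and rounding choice are ours; the paper does not specify how fractional g_i are rounded):

```python
import numpy as np

def allocate_synthetic(majority_ratios, G):
    """Weight each boundary minority sample by its share of the total
    majority-neighbor ratio, then allocate the G synthetic samples
    proportionally (rounded to the nearest integer)."""
    r = np.asarray(majority_ratios, dtype=float)   # r_i = Delta_i / k
    w = r / r.sum()                                # w_i = r_i / sum_j r_j
    g = np.rint(w * G).astype(int)                 # g_i = w_i * G
    return w, g

# three boundary seeds; the first has twice as many majority neighbors
w, g = allocate_synthetic([0.8, 0.4, 0.4], G=8)
# w = [0.5, 0.25, 0.25], g = [4, 2, 2]
```

Seeds surrounded by more majority samples (harder to learn) thus receive proportionally more synthetic samples, as in ADASYN-style adaptive generation.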
3.5. The Time Complexity Analysis of HSDP Algorithm

In the DP algorithm, suppose the number of minority samples is n_min and the number of majority samples is n_maj. Each minority sample needs its distances to all other samples in order to find its k nearest neighbors. Therefore, the time complexity of computing the distances between samples is O(n_min × (n_min + n_maj)).

In the MDH algorithm, the distances between each minority cluster and the majority clusters need to be calculated. Suppose the current number of minority clusters is p and the number of majority clusters is q. Computing the distances between the minority clusters takes O(p²), after which the two minority clusters with the smallest distance are found. Computing the distances from the majority clusters to these two candidate clusters takes O(q), and whether to merge the two minority clusters is then decided from these distances; the number of possible merges is at most p − 1. Therefore, the time complexity of the MDH algorithm is on the order of O(p × (p² + q)).

In the HSDP algorithm, suppose the number of boundary minority samples is n_b, the number of minority class samples is n_min, and the number of majority class samples is n_maj. The step of determining the k neighbors of the boundary minority samples has time complexity O(n_b × (n_min + n_maj)), and the sample generation step has time complexity O(G), where G is the number of synthetic samples.

According to the analysis of the above steps, the overall time complexity of the HSDP algorithm is dominated by the distance computations, O(n_min × (n_min + n_maj)), plus the MDH clustering cost.

4. Experiment Setup

4.1. Dataset Description

We test our algorithm on 8 imbalanced datasets from various fields. All these datasets are available from the KEEL Repository and the UCI Repository. Table 1 describes these datasets.


Dataset        Samples  Attributes  Classes                          Imbalance ratio
Pima           768      8           {negative, positive}             1.87
Yeast3         1484     8           {negative, positive}             8.1
Abalone19      4174     8           {negative, positive}             129.44
Segment0       2308     19          {negative, positive}             6.02
Page-blocks0   5472     10          {negative, positive}             8.79
Glass5         214      9           {negative, positive}             22.78
Ecoli4         336      7           {negative, positive}             15.8
Haberman       306      3           {negative: “1”, positive: “2”}   2.78

In this study, we investigate binary classification. In the two-class problem, the majority class samples are usually also labeled as negative samples, and the minority class samples as positive samples.

4.2. Evaluation Metrics

For the classification of imbalanced data, overall classification accuracy is not a suitable measure of classifier performance, because a classification algorithm with better overall accuracy may achieve it at the expense of large prediction error on the minority class. Therefore, F-measure and G-mean are usually used to evaluate the performance of imbalanced classification algorithms.

F-measure and G-mean are calculated based on the confusion matrix, as shown in Table 2.


                    Predicted as positive   Predicted as negative
Actually positive   TP                      FN
Actually negative   FP                      TN

Based on the confusion matrix, the following equations are derived:

Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
F-measure = ((1 + β²) × Recall × Precision) / (β² × Precision + Recall)   (6)
G-mean = sqrt((TP / (TP + FN)) × (TN / (TN + FP)))   (7)

F-measure, computed as shown in formula (6), is the harmonic mean of Recall and Precision; a higher F-measure ensures that both Recall and Precision are high. Here β is a coefficient indicating the relative importance of Recall versus Precision (usually β = 1). G-mean, calculated as shown in formula (7), is the geometric mean of the minority class accuracy and the majority class accuracy, assigning equal importance to the classifier's performance on the minority class and the majority class.
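The two metrics follow directly from the confusion matrix entries, as this small sketch shows (the function name and the example counts are ours, chosen only for illustration):

```python
def imbalance_metrics(tp, fn, fp, tn, beta=1.0):
    """Compute F-measure and G-mean from confusion matrix counts."""
    recall = tp / (tp + fn)              # minority (positive) class accuracy
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)         # majority (negative) class accuracy
    f_measure = ((1 + beta**2) * recall * precision
                 / (beta**2 * precision + recall))
    g_mean = (recall * specificity) ** 0.5
    return f_measure, g_mean

f, g = imbalance_metrics(tp=40, fn=10, fp=20, tn=130)
# recall = 0.8, precision = 2/3, so f = 8/11 ≈ 0.727; g ≈ 0.833
```

Note that the TN count never enters the F-measure, while G-mean penalizes a classifier that sacrifices either class, which is why both are preferred over accuracy here.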

5. Experiment and Result Analysis

The experiment platform is Anaconda. Since the purpose is to evaluate the proposed sampling method, we do not rely on any particular classifier; rather, we apply several of them, namely KNN and RF. To compare the performance of our proposed hybrid sampling method (HSDP) against other techniques, comparative experiments were carried out with SMOTE, ADASYN, and Borderline-SMOTE.

5.1. Analysis of Experimental Results

To guarantee a fair comparison, the experiments use 10-fold cross-validation. Tables 3–6, respectively, show the F-measure and G-mean values of the various algorithms on each dataset.
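A key detail of this protocol is that resampling must be applied only to the training portion of each fold, never to the held-out test fold. A minimal stratified-fold sketch (our own round-robin construction, not the authors' setup, which may use library routines):

```python
import numpy as np

def cv_folds(y, n_folds=10, rng=0):
    """Build stratified k-fold test indices: each fold keeps the class
    proportions of y. Resampling (e.g. HSDP) would then be applied to
    the complement of each fold before training."""
    rng = np.random.default_rng(rng)
    folds = [[] for _ in range(n_folds)]
    for label in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == label))
        for j, i in enumerate(idx):
            folds[j % n_folds].append(i)    # round-robin within each class
    return [np.array(sorted(f)) for f in folds]

y = np.array([0] * 90 + [1] * 10)           # 9:1 imbalance
folds = cv_folds(y, n_folds=10)
# each of the 10 folds holds 9 majority and 1 minority sample
```

Oversampling before splitting would leak synthetic copies of test-fold minority samples into the training data and inflate F-measure and G-mean.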


Dataset       KNN     SMOTE + KNN  ADASYN + KNN  Borderline-SMOTE + KNN  HSDP (proposed) + KNN
Pima          0.5543  0.5766       0.5824        0.5726                  0.6105
Yeast3        0.6343  0.6301       0.6294        0.6528                  0.6698
Abalone19     0.2019  0.3355       0.2801        0.2438                  0.3045
Segment0      0.8653  0.8734       0.8697        0.8843                  0.8928
Page-blocks0  0.7146  0.7301       0.7099        0.7401                  0.6988
Glass5        0.7132  0.7102       0.7129        0.7265                  0.6827
Ecoli4        0.5401  0.5268       0.6011        0.5701                  0.6698
Haberman      0.2922  0.3027       0.3417        0.3285                  0.3636


Dataset       KNN     SMOTE + KNN  ADASYN + KNN  Borderline-SMOTE + KNN  HSDP (proposed) + KNN
Pima          0.6503  0.6673       0.6721        0.6635                  0.6932
Yeast3        0.7536  0.7786       0.7798        0.7824                  0.8034
Abalone19     0.3165  0.6028       0.6037        0.4366                  0.5437
Segment0      0.9154  0.9379       0.9440        0.9261                  0.9528
Page-blocks0  0.8153  0.8629       0.8601        0.8421                  0.8757
Glass5        0.7887  0.7857       0.7898        0.7890                  0.7715
Ecoli4        0.7012  0.7401       0.7928        0.7429                  0.8438
Haberman      0.4502  0.4810       0.5211        0.4983                  0.5201


Dataset       RF      SMOTE + RF   ADASYN + RF   Borderline-SMOTE + RF   HSDP (proposed) + RF
Pima          0.5891  0.5625       0.5632        0.5710                  0.6000
Yeast3        0.7326  0.7411       0.7214        0.7366                  0.7611
Abalone19     0.3054  0.3123       0.3652        0.2864                  0.3912
Segment0      0.9102  0.9375       0.8865        0.9292                  0.9301
Page-blocks0  0.7443  0.7745       0.7586        0.7723                  0.7438
Glass5        0.8148  0.7826       0.7931        0.7725                  0.8201
Ecoli4        0.5701  0.5737       0.5723        0.5802                  0.5623
Haberman      0.3402  0.4011       0.4386        0.4154                  0.4489


Dataset       RF      SMOTE + RF   ADASYN + RF   Borderline-SMOTE + RF   HSDP (proposed) + RF
Pima          0.6173  0.6412       0.6315        0.6385                  0.6479
Yeast3        0.8243  0.8502       0.8473        0.8362                  0.8532
Abalone19     0.2782  0.6352       0.6529        0.4564                  0.7001
Segment0      0.9344  0.9501       0.9313        0.9421                  0.9567
Page-blocks0  0.8348  0.8990       0.9078        0.8846                  0.8815
Glass5        0.8366  0.8201       0.8302        0.8172                  0.8401
Ecoli4        0.6490  0.7442       0.7751        0.7403                  0.7239
Haberman      0.4659  0.5686       0.5982        0.5743                  0.5991

Comparing the best F-measure and G-mean results on each dataset in the above tables, it is evident that KNN and RF combined with a sampling method outperform the plain classifiers in most cases. On F-measure, the HSDP algorithm obtained 5 best results on the 8 datasets, and 6 best results on G-mean. This shows that the HSDP algorithm proposed in this paper can improve the classification of the minority class.

Compared with the SMOTE method and the ADASYN method, (1) the HSDP method does not oversample all minority class samples but focuses on the minority samples in the boundary area, which are more important for classification, and (2) the HSDP method removes the noise data, thus avoiding the generation of noisy samples.

In contrast to Borderline-SMOTE, our proposed HSDP method considers not only the importance of the minority class samples in the boundary area but also the distribution characteristics of the data samples, avoiding the generation of synthetic samples in unsuitable areas.

5.2. Analyzing the Influence of the Parameter Value Used in HSDP Algorithm

The parameters involved in the proposed method (HSDP) include the number of neighbor samples k and the distance adjustment factor r.

The value of k cannot be too small, because too small a k causes boundary minority class samples to be mistaken for noisy data and deleted.

The value of r controls the number of clusters. With a smaller r, the number of clusters increases and the number of samples in each cluster decreases, which reduces diversity when synthesizing samples.

To determine suitable value ranges for r and k, we use Pima, Glass5, and Yeast3 as test datasets. For k (k = 3, 5, 7, 9, and 11), the G-mean values are given in Table 7. The Pima dataset achieves its maximum G-mean when k is 5, Glass5 when k is 9, and Yeast3 when k is 7. It is evident that values of k in the range 5–9 are appropriate. For r (r = 0.6, 0.8, 1.0, 1.2, and 1.4), the G-mean values are given in Table 8. Pima achieves its maximum G-mean when r is 0.8, Glass5 when r is 1.0, and Yeast3 when r is 1.2. It can be seen that values of r in the range 0.8–1.2 are appropriate.


Dataset  k = 3   k = 5   k = 7   k = 9   k = 11
Pima     0.6015  0.6832  0.6371  0.6483  0.6144
Glass5   0.6986  0.7902  0.8209  0.8394  0.6857
Yeast3   0.8351  0.8587  0.8876  0.8129  0.7576


Dataset  r = 0.6  r = 0.8  r = 1.0  r = 1.2  r = 1.4
Pima     0.6254   0.6483   0.6409   0.6457   0.6347
Glass5   0.7678   0.8045   0.8246   0.7892   0.7754
Yeast3   0.7984   0.8269   0.8266   0.8548   0.8026

6. Conclusion

Data resampling is one of the effective approaches to imbalanced data classification. Aiming at the problems of existing undersampling and oversampling methods, this paper proposes a hybrid sampling method, HSDP, based on data partition. The method applies appropriate sampling to samples in different regions, assigns reasonable weights to the boundary minority samples, and oversamples the selected samples inside the minority class area of the data space. The effectiveness of the proposed method for imbalanced data classification was confirmed by experiments; however, the parameter values used in the algorithm were selected through repeated experiments. A future research direction is how to adapt the parameter values of HSDP to different datasets automatically.

Data Availability

We use datasets from KEEL Repository and UCI Repository; our method and related parameters are provided in our paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of Department of Education of Anhui Province, China (no. KJ2017A452).

References

  1. S. Das, S. Datta, and B. B. Chaudhuri, “Handling data irregularities in classification: foundations, trends, and future challenges,” Pattern Recognition, vol. 81, pp. 674–693, 2018.
  2. P. Vuttipittayamongkol and E. Elyan, “Neighbourhood-based undersampling approach for handling imbalanced and overlapped data,” Information Sciences, vol. 509, pp. 47–70, 2020.
  3. V. Garcia, J. S. Sanchez, A. I. Marqués, R. Florencia, and G. Rivera, “Understanding the apparent superiority of over-sampling through an analysis of local information for class-imbalanced data,” Expert Systems with Applications, vol. 158, 2020.
  4. H. Yin and K. Gai, “An empirical study on preprocessing high-dimensional class-imbalanced data for classification,” in Proceedings of the 2015 IEEE 12th International Conference on Embedded Software and Systems, pp. 1314–1319, New York, NY, USA, August 2015.
  5. N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: synthetic minority over-sampling technique,” Journal of Artificial Intelligence Research, vol. 16, no. 1, pp. 321–357, 2002.
  6. W. A. Rivera, “Noise reduction a priori synthetic over-sampling for class imbalanced data sets,” Information Sciences, vol. 408, pp. 146–161, 2017.
  7. J. A. Sáez, J. Luengo, J. Stefanowski, and F. Herrera, “SMOTE-IPF: addressing the noisy and borderline examples problem in imbalanced classification by a re-sampling method with filtering,” Information Sciences, vol. 291, pp. 184–203, 2015.
  8. H. Han, W.-Y. Wang, and B.-H. Mao, “Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning,” in Proceedings of the 2005 International Conference on Intelligent Computing, LNCS, vol. 3644, pp. 878–887, Hefei, China, August 2005.
  9. H. He, Y. Bai, E. A. Garcia, and S. Li, “ADASYN: adaptive synthetic sampling approach for imbalanced learning,” in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 1322–1328, IEEE, Hong Kong, China, June 2008.
  10. G. Douzas, F. Bacao, and F. Last, “Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE,” Information Sciences, vol. 465, pp. 1–20, 2018.
  11. S. Barua, M. M. Islam, X. Yao, and K. Murase, “MWMOTE: majority weighted minority oversampling technique for imbalanced data set learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 2, pp. 405–425, 2014.
  12. X. Gao, B. Ren, H. Zhang et al., “An ensemble imbalanced classification method based on model dynamic selection driven by data partition hybrid sampling,” Expert Systems with Applications, vol. 160, no. 113660, pp. 1–18, 2020.
  13. Y. Xia, L. J. Li, Z. Xu, and B. Hae-Young, “Weighted oversampling of imbalanced data based on hierarchical clustering,” Computer Science, vol. 46, no. 4, pp. 22–27, 2019.

Copyright © 2021 Liping Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

