Journal of Sensors
Volume 2018, Article ID 3419213, 12 pages
https://doi.org/10.1155/2018/3419213
Research Article

Evolutionary Multilabel Feature Selection Using Promising Feature Subset Generation

School of Computer Science and Engineering, Chung-Ang University, 221, Heukseok-Dong, Dongjak-Gu, Seoul 06974, Republic of Korea

Correspondence should be addressed to Dae-Won Kim; dwkim@cau.ac.kr

Received 8 June 2018; Accepted 7 August 2018; Published 18 September 2018

Academic Editor: Grigore Stamatescu

Copyright © 2018 Jaesung Lee et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Recent progress in the development of sensor devices improves information harvesting and allows complex but intelligent applications based on learning hidden relations between collected sensor data and objectives. In this scenario, multilabel feature selection can play an important role in achieving better learning accuracy when constrained with limited resources. However, existing multilabel feature selection methods are search-ineffective because generated feature subsets frequently include unimportant features. In addition, only a few feature subsets compared to the search space are considered, yielding feature subsets with low multilabel learning accuracy. In this study, we propose an effective multilabel feature selection method based on a novel feature subset generation procedure. Experimental results demonstrate that the proposed method can identify better feature subsets than conventional methods.

1. Introduction

Recent progress in the development of sensor networks improves the precision of continuous data sensing [1], which increases the coverage of ambient applications such as activity monitoring in daily routines that may involve the concurrent prediction of the activity level and caloric expenditure [2, 3]. Owing to limitations in computational and storage capability [4, 5] and redundant data sensing for denoising [6, 7], composing a strategy that would produce the best accuracy under given data collection conditions is considered one of the most important issues in this field [8]. Consequently, multilabel learning is considered to be a promising approach because it allows for improvements in accuracy by exploiting the dependency among labels [9, 10].

Let $W$ denote the set of patterns, described by a set of features $F$. Then, each pattern $w \in W$ is assigned a certain label subset $\lambda \subseteq L$, where $L$ represents a finite set of labels. To attain additional improvements in accuracy, the algorithm has to exploit useful dependencies among labels based on input feature values [11]. For this purpose, multilabel feature selection, which identifies a subset $S \subseteq F$ of at most $n$ features that provides the largest dependency on $L$, can be used as a promising preprocessing step because it simplifies the complicated relations among features and labels by selecting important features and discarding unnecessary ones [12, 13].

Basically, multilabel feature selection is a search problem [14]; it can be solved by identifying, among candidate feature subsets, the optimal feature subset that gives the best prediction accuracy [15]. Because the examination of all feature subsets is impractical, conventional methods employ a heuristic search method that identifies a feasible solution within limited computational cost by sacrificing optimality [16]. Of the many search methods, the evolutionary search method is considered a promising approach because it effectively narrows down the search space by examining neighbor solutions, i.e., feature subsets derived from the best solutions of past generations [17, 18].

In the evolutionary search method, the best solution is replaced whenever a newly created neighbor solution yields a better fitness value. Therefore, generating promising solutions determines the effectiveness of the search. Owing to the extremely wide search space and limited computational cost, a conventional strategy for tackling this difficulty is to employ a cheap evaluation method that measures the potential of possible solutions, filtering out unpromising solutions and then validating the exact fitness value of the remaining ones [19]. However, to the best of our knowledge, this direction has not been seriously investigated in the literature on intelligent sensor applications and multilabel feature selection.

In this study, we propose a novel, effective evolutionary search method for multilabel datasets. Previous studies on intelligent sensor applications involving multilabel feature selection did not tackle the generation of promising feature subsets, which degrades search effectiveness. Our contributions can be summarized as follows:
(i) The proposed method improves search effectiveness by producing a large number of feature subsets composed of important features and then filtering out unpromising feature subsets using a cheap evaluation method.
(ii) The cheap feature subset evaluation method filters out unpromising feature subsets without computing the fitness value, which demands expensive computational cost.
(iii) We compared the performance of conventional multilabel feature wrapper methods and the proposed method on 14 multilabel datasets and conducted 53 standard statistical tests to validate the superiority of the proposed method.

2. Related Work

Because multilabel feature selection can improve the learning accuracy as well as the efficiency of a subsequent learning algorithm, such as a multilabel classifier for concurrent prediction, by highlighting important features, it has gained significant attention from diverse fields [20, 21].

Feature selection methods come in two categories: filters and wrappers. Filter methods rank features by evaluating the importance of each feature according to their own criterion. For multilabel datasets, a simple strategy that transforms the label sets into a single label set, such as the label powerset [22], was often considered. This transformation is advantageous because it enables the use of conventional feature selection methods designed for single-label datasets. Several conventional filter methods have been reported [23]; however, filter methods commonly suffer from low multilabel classification accuracy, owing to the lack of interaction with multilabel classifiers or to subsequent problems such as class imbalance in the transformed single-label data. By contrast, wrapper methods evaluate created feature subsets and improve them. In detail, they locate promising feature subsets using an employed search method and then evaluate them using a subsequent learning algorithm [17]. Although the learning algorithm can differ according to the application, a recent review indicated that the most frequent choice for the search method is evolutionary search [24] because it is effective at finding feasible solutions from a global perspective. Zhang et al. [14] proposed a multilabel feature selection method based on the genetic algorithm. However, a major drawback of genetic algorithms is their premature convergence to unrefined solutions [17]. In addition, the nondominated sorting genetic algorithm-II [25], a genetic algorithm variant, and multiobjective particle swarm optimization [26] have been used for multilabel feature selection.
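The label powerset transformation mentioned above is easy to make concrete. The sketch below is purely illustrative (the function name and data layout are our own, not from the cited work): each distinct label subset becomes one class of a single-label problem, so single-label feature selection methods can then be applied.

```python
def label_powerset(Y):
    """Map each label subset to a single class ID (label powerset transform).

    Y: a list of label subsets, one per pattern. Returns a list of integer
    class IDs so that patterns sharing a label subset share a class.
    """
    class_ids = {}
    transformed = []
    for labels in Y:
        key = frozenset(labels)
        if key not in class_ids:
            class_ids[key] = len(class_ids)  # assign the next unused class ID
        transformed.append(class_ids[key])
    return transformed

# Three patterns; the first and third share a label subset and thus a class.
Y = [{"rock", "pop"}, {"jazz"}, {"rock", "pop"}]
print(label_powerset(Y))  # [0, 1, 0]
```

A known drawback, noted in the text, is that rare label combinations produce very small classes, i.e., imbalance in the transformed single-label data.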

Although most studies consider single-label sensory datasets, several have investigated feature selection because of its promising potential. To apply automatic view generation, a semisupervised feature selection method for features extracted from very high-resolution remote sensing images was proposed [27]. Specifically, features are categorized into a series of disjoint groups, and then important features in each group are selected by solving a norm-based minimization problem. Similarly, a refined feature subset from discrete wavelet transform coefficient features, extracted from artificial tongue sensor signals, was selected by using a dispersion ratio computation [12]. Activity recognition using accelerometers was also shown to be improved by feature selection [28]. There are several studies related to the identification of a set of important features based on the fitness or classification accuracy derived from the learning algorithm. For example, a feature subset can be obtained by iteratively including the best feature at each step, which is referred to as the sequential forward selection algorithm [29]. This technique was applied to chiller fault detection [30], an instance of the automatic fault detection problem in a smart factory [31]. The genetic algorithm, one of the most famous evolutionary search methods in the machine learning community, was also considered for selecting discriminative features for online bearing fault diagnosis [32]. In addition, the particle swarm optimization technique, another popular evolutionary search method, was used to find the optimal feature subset for intrusion detection [13]. Support vector machine recursive feature elimination has been used for the analysis of correlated gas sensor data [7]. Finally, energy consumption was minimized and classification accuracy improved by feature selection from sensor data [5].

3. Proposed Method

3.1. Preliminary

Of the various evolutionary search methods, the estimation of distribution algorithm (EDA) has proven effective for solving various problems [24, 33]. Unlike typical evolutionary search methods, EDAs do not use genetic operators to generate new feature subsets [19]. Instead, conventional EDAs generate new solutions, or candidates, using a probability model and update the probability model based on a statistical distribution estimated from the representation of solutions. Thus, they provide an opportunity to generate promising feature subsets by manipulating the probability model. The probability model can be implemented as follows [33, 34]:

$$p_i^{t+1} = (1 - \alpha) \cdot p_i^t + \alpha \cdot \hat{p}_i^t \quad (2)$$

where $p_i^t$ is the selection probability of the $i$-th feature in the $t$-th generation, $\hat{p}_i^t$ is the frequency of the $i$-th feature among the top 50% of feature subsets in the $t$-th generation, ranked in terms of their fitness values, and $\alpha$ is the learning rate, a user-defined parameter that controls the influence of $\hat{p}_i^t$ on the probability model in the next generation. Through (2), the probability of selecting each feature in the $(t+1)$-th generation, $p_i^{t+1}$, is calculated, and the feature subsets of the $(t+1)$-th generation are built accordingly. This process is repeated until the maximum allowed computational cost is exhausted. Although there are many possible stopping criteria, we set the number of spent fitness function calls (FFCs) as the termination condition for all evolutionary search methods employed in this study, for a fair comparison across diversified settings and implementations [35].
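A minimal sketch of this PBIL-style update may help. The function name and the list-based representation are assumptions of this illustration; the top-50% statistic is computed as the frequency of each feature among the best half of the generation, as described above.

```python
def update_probability_model(p, top_subsets, alpha=0.4):
    """PBIL-style update of the feature selection probability model.

    p: current selection probability per feature.
    top_subsets: the best half of the generation, each a set of feature indices.
    alpha: learning rate; how strongly new statistics override the old model.
    """
    n_features = len(p)
    # Fraction of the top subsets containing each feature (estimated distribution).
    p_hat = [sum(i in s for s in top_subsets) / len(top_subsets)
             for i in range(n_features)]
    # Blend old model and new statistics, as in the update rule above.
    return [(1 - alpha) * p[i] + alpha * p_hat[i] for i in range(n_features)]

p = [0.5, 0.5, 0.5]
top = [{0, 1}, {0, 2}]  # feature 0 appears in both top subsets
print(update_probability_model(p, top, alpha=0.4))  # [0.7, 0.5, 0.5]
```

With repeated updates, probabilities of consistently selected features drift toward 1, so later generations sample them more often.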

In the feature selection problem, the algorithm should be capable of searching a huge parametric space; thus, significant computational cost is associated with finding a promising solution. Although simple probability models are easy to implement, they can be insufficient for solving complicated problems, such as pinpointing promising feature subsets in a large search space [36]. For example, in the conventional EDA-based feature selection method, all features are initially assigned a selection probability of 0.5, which means that nonpromising features can also be present in feature subsets. To overcome this drawback, we devise a procedure for generating promising feature subsets. Specifically, when creating a feature subset, the algorithm considers important features more frequently by giving priority to features ranked highly by an individual feature filter.

After creating the feature subsets, the next step amounts to selecting promising feature subsets. Although good feature subsets can be created using filter methods, nonpromising feature subsets can still arise because the creation process is probabilistic and interactions among features can be detrimental. Nonpromising feature subsets consume FFCs and negatively affect the search efficiency. To overcome this problem, we propose a feature subset evaluation method with a cheap computational cost. Using the methods of information theory, the proposed method calculates, for each subset, the relevance and redundancy of the subset's features, and then selects feature subsets with maximal relevance and minimal redundancy. Because the selected subsets may be only locally promising, the proposed method uses roulette wheel selection as the selection algorithm [37]. Thus, nonpromising feature subsets are filtered out from the neighbor set without exact evaluation.

In the proposed method, there are two key functions for feature subset generation: the create function makes candidate feature subsets composed of relevant features, and the select function selects promising feature subsets among the created ones by roulette wheel selection, based on the potential given by a feature subset evaluation method. Figure 1 schematically shows the proposed method. In the first stage, the probability model is initialized, indicating that feature subsets containing randomly chosen features will be created frequently. The probability model is represented as a vector in which each element encodes the selection probability of one feature. In the next step, feature subsets are created using the create function. Each feature subset is first assigned a random integer, ranging from one to the maximum subset size, which determines how many features it will contain. Suppose one feature subset is determined to contain two features. The proposed method ranks the features in terms of their importance using a filter method; suppose the most important feature in the first iteration is $f_a$. The proposed method then draws a random number between 0 and 1 and compares it to the selection probability of $f_a$ in the probability model, $p_a$. If $p_a$ is greater than the random number, as in the example, $f_a$ is selected and added to the feature subset. In the second iteration, the features are ranked again using the filter method. Their importance is now measured in terms of relevance and redundancy given the selection of $f_a$, so the ranks can change; suppose $f_b$ is now the most important feature. Another random number is drawn and compared to the selection probability of $f_b$, $p_b$. If $p_b$ is lower than the random number, $f_b$ is not selected, and the second most important feature, say $f_c$, is considered next. In the example, $f_c$ is added to the subset, and the iteration terminates because the subset has reached its assigned size. Through this process, the proposed method creates a series of new feature subsets containing important features.
The next step amounts to selecting promising feature subsets among the created ones using the select function. The proposed method chooses promising feature subsets by roulette wheel selection, biased by the proposed feature subset evaluation. Finally, the probability model is updated using the selected feature subsets and (2), to reflect the presence of features in the best half of the new feature subsets ranked by fitness value.
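The create procedure walked through above can be sketched as follows. This is a simplified illustration, not the authors' implementation: `rank_features` stands in for the filter method that re-ranks features given the partial subset, and the loop stops early if no feature is accepted in a full pass.

```python
import random

def create_subset(rank_features, p, size, rng=None):
    """Create one candidate feature subset of the given size.

    rank_features(selected): returns all feature indices ordered by importance
    given the current partial subset (a stand-in for the filter method).
    p[i]: the probability model's selection probability for feature i.
    """
    rng = rng or random.Random(0)
    selected = set()
    while len(selected) < size:
        for f in rank_features(selected):
            if f in selected:
                continue
            if rng.random() < p[f]:   # accept feature f with probability p[f]
                selected.add(f)
                break                 # re-rank before choosing the next feature
        else:
            break  # no feature accepted in a full pass; stop to avoid looping forever
    return selected

# Feature 2 ranks highest but has zero selection probability, so it is skipped.
ranking = lambda selected: [2, 0, 1]
print(create_subset(ranking, [0.9, 0.9, 0.0], size=2))  # {0, 1}
```

Re-ranking after every acceptance is what lets the filter's redundancy term react to the features already in the subset.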

Figure 1: Schematic overview of the proposed method.
3.2. Proposed Search Procedure

The proposed algorithm creates feature subsets over a large search space and filters out nonpromising subsets using the proposed subset evaluation method, which does not incur exact evaluation. Algorithm 1 shows the proposed method. Given the population size $m$ and the maximal number of FFCs $\beta$, the method initializes $m$ feature subsets and the probability model $V$ (line 3). The initial set of feature subsets is generated through a random assignment of binary bits, up to the maximum subset size. The probability model $V$ is a vector of length $|F|$, in which each entry refers to the probability of choosing the corresponding feature; each entry is initialized from the feature distribution of the initial population. Then, the created set is evaluated (line 4), the number of consumed FFCs is set to 0 (line 5), and the global best feature subset is stored as $b$ (line 6). In each generation, $V$ is updated by (2) (line 8), and a set of neighbor feature subsets $N$ is created, based on a filter method, by the create function (line 9). Then, $m$ feature subsets are selected from $N$ by roulette wheel selection, weighted by the select function, to yield the new generation (line 10). The new feature subsets are evaluated (line 11), and the number of consumed FFCs is updated (line 12). The globally best feature subset $b$ is stored and replaced during the procedure (line 13). After all allowed FFCs are consumed, the algorithm returns the feature subset $b$.

Algorithm 1: Proposed method.

Algorithm 2 shows the create function, i.e., the process of creating feature subsets. Each feature subset is assigned a random number of features to select (line 5). To introduce important features more frequently, each feature is first ranked by its importance value. To achieve this, we evaluate the importance of each feature using the relevance criterion [20]:

$$\text{importance}(f_i) = \text{rel}(f_i) - \text{red}(f_i) \quad (3)$$

where $\text{rel}(f_i)$ and $\text{red}(f_i)$ denote the relevance and redundancy of the $i$-th feature $f_i$, and $\text{importance}(f_i)$ denotes the importance of $f_i$. Although both functions can be implemented differently according to the subject of each study, we use a recent filter method for measuring the importance of features: in [15], we proposed a filter method for multilabel datasets that was shown to outperform conventional filter methods, and for this reason we use it here. Accordingly, $\text{rel}(f_i)$ can be implemented as

$$\text{rel}(f_i) = \sum_{l \in L} \frac{I(f_i; l)}{H(f_i, l)} \quad (4)$$

where $I(x; y)$ indicates the mutual information between variables $x$ and $y$ and $H(x, y)$ is the joint entropy obtained from the probabilities $P(x)$, $P(y)$, and $P(x, y)$. Next, $\text{red}(f_i)$ can be implemented as

$$\text{red}(f_i) = \sum_{s \in S} \frac{I(f_i; s)}{H(f_i, s)} \quad (5)$$

where $S$ is the set of features selected so far.

Algorithm 2: create function.

Thus, the importance of feature $f_i$ is measured by

$$\text{importance}(f_i) = \sum_{l \in L} \frac{I(f_i; l)}{H(f_i, l)} - \sum_{s \in S} \frac{I(f_i; s)}{H(f_i, s)} \quad (6)$$

Then, the rank of each feature can be determined by using (6) and stored (line 8). Next, the function decides whether to choose a feature, proceeding from the most important one, by comparing a random number with the feature's selection probability (lines 9 to 13). If a feature is chosen, it is added to the subset (line 11). After a subset is created, it is added to the set of neighbor feature subsets (line 16).
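The entropy-based scoring in the text can be illustrated with a small sketch. The normalization of mutual information by joint entropy follows the description above; the function names and the discrete-sequence representation of features and labels are assumptions of this illustration.

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (in bits) of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def normalized_mi(x, y):
    """Mutual information I(x;y) divided by the joint entropy H(x,y)."""
    h_xy = entropy(list(zip(x, y)))  # joint entropy of the paired values
    if h_xy == 0:
        return 0.0
    return (entropy(x) + entropy(y) - h_xy) / h_xy

def importance(feature, labels, selected):
    """Relevance to all labels minus redundancy with already-selected features."""
    rel = sum(normalized_mi(feature, l) for l in labels)
    red = sum(normalized_mi(feature, s) for s in selected)
    return rel - red

f1 = [0, 0, 1, 1]
label = [0, 0, 1, 1]   # f1 determines this label perfectly
print(importance(f1, [label], []))  # 1.0
```

Because the redundancy term depends on the selected set, the same feature can score high when the subset is empty and low once a correlated feature has been added.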

It is a well-known fact in the feature selection community that a set of individually good features is not necessarily a good feature subset, owing to the interactions among features. This means that a created feature subset can be unpromising even though, by (6), it includes only important features. To address this, the select function described in Algorithm 3, which selects promising feature subsets from the neighbor set, is necessary. In the select function, a feature subset filter method is employed [38]. Specifically, it evaluates the fitness of a feature subset $S$ as

$$\text{fitness}(S) = \frac{1}{|S|} \sum_{f \in S} \text{rel}(f) - \frac{1}{|S|^2} \sum_{f \in S} \sum_{g \in S} \frac{I(f; g)}{H(f, g)} \quad (7)$$

Algorithm 3: select function.

By using (7), the select function ranks the feature subsets in the neighbor set (line 3). Next, the algorithm selects feature subsets using roulette wheel selection [37], with selection weights biased by (7) (line 4).
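Roulette wheel selection itself is standard and can be sketched briefly. We assume nonnegative scores here (e.g., filter scores shifted to be positive), since fitness-proportionate sampling requires nonnegative weights; the function name is our own.

```python
import random

def roulette_select(subsets, scores, m, rng=None):
    """Fitness-proportionate (roulette wheel) selection of m feature subsets.

    scores: nonnegative weights, one per subset. Higher-scoring subsets get
    proportionally larger slices of the wheel, but any subset with a nonzero
    score keeps some chance of surviving, which preserves diversity.
    """
    rng = rng or random.Random(0)
    # random.choices samples with replacement, proportionally to the weights.
    return rng.choices(subsets, weights=scores, k=m)

picked = roulette_select([{0, 1}, {0, 2}, {1, 3}], [5.0, 1.0, 1.0], m=2)
print(len(picked))  # 2
```

This probabilistic choice is what lets locally mediocre subsets occasionally pass through, as the text motivates.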

In summary, when generating a feature subset, the algorithm ranks the importance of the features using the filter method and considers the most important feature first, selecting it with the probability given by the model, conditioned on the subset selected so far. If the $i$-th feature is not chosen, the next most important feature can be selected with its own probability, and the process repeats until a feature is selected. Then, for each neighbor feature subset, (7) ranks the importance of the feature subsets, and the subsets with the highest values are likely to be selected.

4. Experimental Results

We conducted experiments on 14 datasets from various domains. The Birds dataset is audio data containing samples of multiple bird calls. The Enron and Language Log (Llog) datasets are generated from text mining applications, where each feature corresponds to the presence of a word and each label represents the relevance of each text pattern to a specific subject. The Mediamill dataset contains video data from an automatic detection system. The Medical dataset is sampled from a large corpus of suicide letters obtained through the natural language processing of clinical free texts. The TMC2007 dataset contains safety reports concerning a complex space system. The remaining eight datasets come from the Yahoo dataset collection. We performed unsupervised dimensionality reduction on the datasets consisting of more than 10,000 features, including TMC2007 and the Yahoo collection. Because our algorithm uses information theory, we discretized numeric features using the supervised discretization method [39]. Table 1 shows the standard characteristics of the multilabel datasets used in our experiments, including the number of patterns, the number of features, the type of features, and the number of labels. The label cardinality measure represents the average number of labels per instance, and the label density measure is the label cardinality divided by the total number of labels. The number of distinct label sets indicates the number of unique label subsets in the dataset. Domain represents the application from which each dataset was extracted.

Table 1: Standard characteristics of employed datasets.

We compared the proposed method with conventional methods, including the genetic algorithm (GA) [14], the nondominated sorting genetic algorithm-II (NSGA-II) [25], and multiobjective particle swarm optimization feature selection (MPSOFS) [26]. We considered a conventional multilabel classifier, namely, the multilabel naïve Bayes (MLNB) classifier [14]. We used conventional hold-out cross-validation for each dataset: 80% of the patterns were randomly chosen as a training set and the remaining 20% as a test set. We set the size of the population to 20, and the maximal number of FFCs was limited to 100. In our proposed method, we created 500 feature subsets using the probability model and set the learning rate $\alpha$ to 0.4. The GA and NSGA-II created two offspring feature subsets and one mutated feature subset in each generation. The MPSOFS preserved the global best particle solutions and each particle's best solutions and thereafter updated the velocity values. All experiments were repeated 10 times, and the average measured values were used to compare the performances of the methods.

To measure the methods' performances, we employed the following four evaluation metrics: multilabel accuracy, Hamming loss, ranking loss, and normalized coverage. Multilabel accuracy is defined as

$$\text{Multilabel accuracy} = \frac{1}{|T|} \sum_{i=1}^{|T|} \frac{|Y_i \cap Z_i|}{|Y_i \cup Z_i|} \quad (8)$$

where $T$ is a given test set, $Y_i$ is the correct label subset of the $i$-th test pattern, and $Z_i$ is the predicted label subset. Hamming loss is defined by

$$\text{Hamming loss} = \frac{1}{|T|} \sum_{i=1}^{|T|} \frac{|Y_i \,\triangle\, Z_i|}{|L|} \quad (9)$$

where $\triangle$ denotes the symmetric difference between the two sets. Ranking loss is defined by

$$\text{Ranking loss} = \frac{1}{|T|} \sum_{i=1}^{|T|} \frac{\left| \left\{ (l_a, l_b) \mid r_i(l_a) > r_i(l_b),\ (l_a, l_b) \in Y_i \times \bar{Y}_i \right\} \right|}{|Y_i| \, |\bar{Y}_i|} \quad (10)$$

where $\bar{Y}_i$ is the complementary set of $Y_i$. Ranking loss measures the average fraction of label pairs in which an irrelevant label is ranked above a relevant one, over all possible relevant and irrelevant label pairs. Finally, normalized coverage is defined as

$$\text{Coverage} = \frac{1}{|T|} \sum_{i=1}^{|T|} \frac{1}{|L|} \max_{l \in Y_i} r_i(l) \quad (11)$$

where $r_i(l)$ returns the rank of the corresponding relevant label according to the predicted confidence, in nonincreasing order. Therefore, normalized coverage measures how many labels must be marked as positive for all relevant labels to be positive. Higher values of multilabel accuracy and lower values of Hamming loss, ranking loss, and normalized coverage indicate good classification performance.
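Two of these metrics are simple enough to sketch directly from their definitions; the set-based representation of label subsets is an assumption of this illustration, and both functions assume every pattern has at least one true or predicted label.

```python
def multilabel_accuracy(Y_true, Y_pred):
    """Average Jaccard similarity between true and predicted label subsets."""
    return sum(len(y & z) / len(y | z)
               for y, z in zip(Y_true, Y_pred)) / len(Y_true)

def hamming_loss(Y_true, Y_pred, n_labels):
    """Average fraction of mispredicted labels (symmetric difference / |L|)."""
    return sum(len(y ^ z)
               for y, z in zip(Y_true, Y_pred)) / (len(Y_true) * n_labels)

Y_true = [{0, 1}, {2}]   # correct label subsets
Y_pred = [{1}, {2, 3}]   # predicted label subsets
print(multilabel_accuracy(Y_true, Y_pred))      # (1/2 + 1/2) / 2 = 0.5
print(hamming_loss(Y_true, Y_pred, n_labels=4)) # (1 + 1) / (2 * 4) = 0.25
```

Ranking loss and coverage additionally require per-label confidence scores from the classifier, so they cannot be computed from hard label subsets alone.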

Tables 2, 3, 4, and 5 list the experimental results for the different performance measures as averages over the experiments on the employed datasets. The best performance of each dataset is indicated by a bold font. In each table, the last column shows the average rank (Avg. rank) of each comparison method over all the multilabel datasets. In terms of the multilabel accuracy and ranking loss measures, the proposed method outperformed the GA, NSGA-II, and MPSOFS, on all datasets. In terms of the hamming loss, the proposed method outperformed conventional methods on all datasets except TMC2007. In terms of the normalized coverage, the proposed method outperformed the conventional methods on all datasets except Llog.

Table 2: Comparison results in terms of multilabel accuracy.
Table 3: Comparison results in terms of hamming loss.
Table 4: Comparison results in terms of ranking loss.
Table 5: Comparison results in terms of normalized coverage.

After measuring the performance of the methods on all datasets, we analyzed the results using statistical tools. We employed the Friedman test, a widely used statistical test for comparing multiple methods over a number of datasets [40]. Suppose there are $k$ methods and $N$ datasets, and let $R_j$ denote the average rank of the $j$-th method under the null hypothesis (i.e., when all of the methods perform equally well). Then, the following Friedman statistic $F_F$ is distributed according to the $F$-distribution with $(k-1)$ numerator degrees of freedom and $(k-1)(N-1)$ denominator degrees of freedom:

$$F_F = \frac{(N-1)\,\chi_F^2}{N(k-1) - \chi_F^2} \quad (12)$$

where $\chi_F^2$ is defined as

$$\chi_F^2 = \frac{12N}{k(k+1)} \left[ \sum_{j=1}^{k} R_j^2 - \frac{k(k+1)^2}{4} \right] \quad (13)$$

If $F_F$ is larger than the critical value at a significance level $\alpha$, the null hypothesis is rejected, implying that the compared methods have different performances. After the null hypothesis is rejected, we perform a post hoc test to analyze whether the proposed method performs significantly better than the other methods; the Bonferroni–Dunn test is employed [41]. The critical difference (CD) is used to compare the proposed method with one comparison method at a time. CD is defined as

$$\text{CD} = q_\alpha \sqrt{\frac{k(k+1)}{6N}} \quad (14)$$

where the critical value $q_\alpha$ is a constant determined by the number of methods and the significance level. If the difference between the average ranks of two compared methods is greater than CD, the better-ranking method is concluded to perform significantly better than the other. Because our experiment used four methods, including the proposed method, and 14 datasets, we set $k = 4$ and $N = 14$. We employed the Friedman test at the significance level of 0.05. Table 6 shows the summary of the employed Friedman test. The critical value of the $F$-distribution with 3 and 39 degrees of freedom was 2.845. The Friedman statistic for all performance measures was above the critical value. Thus, the null hypothesis that the compared methods perform equally well was rejected.
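Both statistics can be computed in a few lines. This is a sketch under the assumptions stated in the text: four methods, 14 datasets, and $q_{0.05} = 2.394$ as the Bonferroni–Dunn critical value for four methods at the 0.05 level.

```python
def friedman_statistic(avg_ranks, n_datasets):
    """F-distributed Friedman statistic from average ranks over N datasets."""
    k, N = len(avg_ranks), n_datasets
    chi2 = (12 * N / (k * (k + 1))) * (sum(r * r for r in avg_ranks)
                                       - k * (k + 1) ** 2 / 4)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)

def critical_difference(k, n_datasets, q_alpha=2.394):
    """Bonferroni-Dunn critical difference for k methods over N datasets."""
    return q_alpha * (k * (k + 1) / (6 * n_datasets)) ** 0.5

# The setting used in this section: four methods, 14 datasets.
print(round(critical_difference(4, 14), 3))  # 1.168
```

The printed CD of 1.168 matches the value reported for the Bonferroni–Dunn analysis in the text, which serves as a quick sanity check of the formula.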

Table 6: Summary of the Friedman statistics $F_F$ ($k = 4$, $N = 14$) and the critical value in terms of each evaluation measure.

For the Bonferroni–Dunn test, the calculated CD was 1.168, since $q_{0.05} = 2.394$ at the significance level of 0.05. Figure 2 shows the CD diagrams for all evaluation measures, where the average rank of each method is marked along the top of each diagram. Our proposed method significantly outperforms the other, conventional, methods on all evaluation measures.

Figure 2: Bonferroni–Dunn test results of the four comparison methods with four evaluation measures (significance level of 0.05).

5. Conclusion

To handle multilabel sensor datasets, we proposed an effective search method based on promising feature subset generation for the multilabel feature selection problem. The main contribution of this work is to propose and validate a new feature subset generation method. Specifically, the proposed method generates candidate feature subsets using important features and chooses promising subsets of features without consuming significant computational cost. Experimental results show that our method converges faster than conventional methods. In the future, we would like to investigate a more effective feature subset generation procedure, because the proposed one is strongly dependent on the employed filter method and may produce redundant feature subsets during the search process. In addition, we would like to apply the proposed method to various sensor datasets and compare its performance with that of conventional feature selection methods used in sensory data analysis.

Data Availability

All the employed data used to support the findings of this study have been deposited in the MULAN repository (http://mulan.sourceforge.net/datasets-mlc.html).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2016R1C1B1014774).

References

  1. S. Qadri, D. M. Khan, S. F. Qadri et al., “Multisource data fusion framework for land use/land cover classification using machine vision,” Journal of Sensors, vol. 2017, Article ID 3515418, 8 pages, 2017.
  2. A. Alhamoud, V. Muradi, D. Bohnstedt, and R. Steinmetz, “Activity recognition in multi-user environments using techniques of multi-label classification,” in Proceedings of the 6th International Conference on the Internet of Things - IoT'16, pp. 15–23, Stuttgart, Germany, 2016.
  3. R. Gravina, P. Alinia, H. Ghasemzadeh, and G. Fortino, “Multi-sensor fusion in body sensor networks: state-of-the-art and research challenges,” Information Fusion, vol. 35, pp. 68–80, 2017.
  4. B. Du, Z. Wang, L. Zhang, L. Zhang, and D. Tao, “Robust and discriminative labeling for multi-label active learning based on maximum correntropy criterion,” IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1694–1707, 2017.
  5. H. Ghasemzadeh, N. Amini, R. Saeedi, and M. Sarrafzadeh, “Power-aware computing in wearable sensor networks: an optimal feature selection,” IEEE Transactions on Mobile Computing, vol. 14, no. 4, pp. 800–812, 2015.
  6. D. Li, Y. Zhou, G. Hu, and C. J. Spanos, “Optimal sensor configuration and feature selection for AHU fault detection and diagnosis,” IEEE Transactions on Industrial Informatics, vol. 13, no. 3, pp. 1369–1380, 2017.
  7. K. Yan and D. Zhang, “Feature selection and analysis on correlated gas sensor data with recursive feature elimination,” Sensors and Actuators B: Chemical, vol. 212, pp. 353–363, 2015.
  8. S. Cheng, Z. Cai, J. Li, and H. Gao, “Extracting kernel dataset from big sensory data in wireless sensor networks,” IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 4, pp. 813–827, 2017.
  9. R. Kumar, I. Qamar, J. S. Virdi, and N. C. Krishnan, “Multi-label learning for activity recognition,” in 2015 International Conference on Intelligent Environments, pp. 152–155, Prague, Czech Republic, 2015.
  10. J. Read, L. Martino, P. M. Olmos, and D. Luengo, “Scalable multi-output label prediction: from classifier chains to classifier trellises,” Pattern Recognition, vol. 48, no. 6, pp. 2096–2109, 2015.
  11. M.-L. Zhang and Z.-H. Zhou, “A review on multi-label learning algorithms,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 8, pp. 1819–1837, 2014.
  12. T. Liu, Y. Chen, D. Li, and M. Wu, “An active feature selection strategy for DWT in artificial taste,” Journal of Sensors, vol. 2018, Article ID 9709505, 11 pages, 2018.
  13. X. Teng, H. Dong, and X. Zhou, “Adaptive feature selection using v-shaped binary particle swarm optimization,” PLoS One, vol. 12, no. 3, article e0173907, 2017.
  14. M.-L. Zhang, J. M. Pena, and V. Robles, “Feature selection for multi-label naive Bayes classification,” Information Sciences, vol. 179, no. 19, pp. 3218–3229, 2009.
  15. J. Lee and D.-W. Kim, “SCLS: multi-label feature selection based on scalable criterion for large label set,” Pattern Recognition, vol. 66, pp. 342–352, 2017.
  16. Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics, Springer Science & Business Media, 2013.
  17. J. Lee and D.-W. Kim, “Memetic feature selection algorithm for multi-label classification,” Information Sciences, vol. 293, pp. 80–96, 2015.
  18. J. Lee, W. Seo, and D.-W. Kim, “Effective evolutionary multilabel feature selection under a budget constraint,” Complexity, vol. 2018, Article ID 3241489, 14 pages, 2018.
  19. A. Zhou, J. Sun, and Q. Zhang, “An estimation of distribution algorithm with cheap and expensive local search methods,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 6, pp. 807–822, 2015.
  20. J. Lee and D.-W. Kim, “Feature selection for multi-label classification using multivariate mutual information,” Pattern Recognition Letters, vol. 34, no. 3, pp. 349–357, 2013.
  21. J. Lee and D.-W. Kim, “Mutual information-based multi-label feature selection using interaction information,” Expert Systems with Applications, vol. 42, no. 4, pp. 2013–2025, 2015.
  22. J. Read, “A pruned problem transformation method for multi-label classification,” in Proceedings of New Zealand Computer Science Research Student Conference, pp. 143–150, Christchurch, New Zealand, 2008.
  23. N. Spolaor, E. A. Cherman, M. C. Monard, and H. D. Lee, “A comparison of multi-label feature selection methods using the problem transformation approach,” Electronic Notes in Theoretical Computer Science, vol. 292, pp. 135–151, 2013.
  24. B. Xue, M. Zhang, W. N. Browne, and X. Yao, “A survey on evolutionary computation approaches to feature selection,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 4, pp. 606–626, 2016.
  25. J. Yin, T. Tao, and J. Xu, “A multi-label feature selection algorithm based on multi-objective optimization,” in 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1–7, Killarney, Ireland, 2015. View at Publisher · View at Google Scholar · View at Scopus
  26. Y. Zhang, D.-w. Gong, X.-y. Sun, and Y.-n. Guo, “A PSO-based multi-objective multi-label feature selection method in classification,” Scientific Reports, vol. 7, no. 1, p. 376, 2017. View at Publisher · View at Google Scholar · View at Scopus
  27. X. Chen, W. Liu, F. Su, and G. Zhou, “Semisupervised multiview feature selection for VHR remote sensing images with label learning and automatic view generation,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 6, pp. 2876–2888, 2017. View at Publisher · View at Google Scholar · View at Scopus
  28. P. Gupta and T. Dallas, “Feature selection and activity recognition system using a single triaxial accelerometer,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 6, pp. 1780–1786, 2014. View at Publisher · View at Google Scholar · View at Scopus
  29. A. Jain and D. Zongker, “Feature selection: evaluation, application, and small sample performance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 2, pp. 153–158, 1997. View at Publisher · View at Google Scholar · View at Scopus
  30. K. Yan, L. Ma, Y. Dai, W. Shen, Z. Ji, and D. Xie, “Cost-sensitive and sequential feature selection for chiller fault detection and diagnosis,” International Journal of Refrigeration, vol. 86, pp. 401–409, 2018. View at Publisher · View at Google Scholar · View at Scopus
  31. Y. Luo, Y. Duan, W. Li, P. Pace, and G. Fortino, “A novel mobile and hierarchical data transmission architecture for smart factories,” IEEE Transactions on Industrial Informatics, vol. 14, no. 8, pp. 3534–3546, 2018. View at Publisher · View at Google Scholar · View at Scopus
  32. R. Islam, S. A. Khan, and J.-m. Kim, “Discriminant feature distribution analysis-based hybrid feature selection for online bearing fault diagnosis in induction motors,” Journal of Sensors, vol. 2016, Article ID 7145715, 16 pages, 2016. View at Publisher · View at Google Scholar · View at Scopus
  33. M. Perez, D. M. Rubin, T. Marwala, L. E. Scott, and W. Stevens, “A population-based incremental learning approach to microarray gene expression feature selection,” in 2010 IEEE 26-th Convention of Electrical and Electronics Engineers in Israel, pp. 10–14, Eilat, Israel, 2010. View at Publisher · View at Google Scholar · View at Scopus
  34. S. Baluja, “Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning,” Tech. Rep., Technical Report Carnegie-Mellon University Pittsburgh PA Department of Computer Science, 1994. View at Google Scholar
  35. Z. Zhu, S. Jia, and Z. Ji, “Towards a memetic feature selection paradigm [application notes],” IEEE Computational Intelligence Magazine, vol. 5, no. 2, pp. 41–53, 2010. View at Publisher · View at Google Scholar · View at Scopus
  36. M. Pelikan, D. E. Goldberg, and F. G. Lobo, “A survey of optimization by building and using probabilistic models,” Computational Optimization and Applications, vol. 21, no. 1, pp. 5–20, 2002. View at Publisher · View at Google Scholar · View at Scopus
  37. A. Lipowski and D. Lipowska, “Roulette-wheel selection via stochastic acceptance,” Physica A: Statistical Mechanics and its Applications, vol. 391, no. 6, pp. 2193–2196, 2012. View at Publisher · View at Google Scholar · View at Scopus
  38. J. Lee, H. Lim, and D.-W. Kim, “Approximating mutual information for multi-label feature selection,” Electronics Letters, vol. 48, no. 15, pp. 929-930, 2012. View at Publisher · View at Google Scholar · View at Scopus
  39. A. Cano, J. M. Luna, E. L. Gibaja, and S. Ventura, “LAIM discretization for multi-label data,” Information Sciences, vol. 330, pp. 370–384, 2016. View at Publisher · View at Google Scholar · View at Scopus
  40. J. Demsar, “Statistical comparisons of classifier over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006. View at Google Scholar
  41. O. J. Dunn, “Multiple comparisons among means,” Journal of the American Statistical Association, vol. 56, no. 293, pp. 52–64, 1961. View at Publisher · View at Google Scholar · View at Scopus