Abstract

Nowadays, many disciplines have to deal with big datasets that additionally involve a high number of features. Feature selection methods aim at eliminating noisy, redundant, or irrelevant features that may deteriorate the classification performance. However, traditional methods lack the scalability to cope with datasets of millions of instances and to extract useful results within a reasonable time. This paper presents a feature selection algorithm based on evolutionary computation that uses the MapReduce paradigm to obtain subsets of features from big datasets. The algorithm decomposes the original dataset into blocks of instances to learn from them in the map phase; then, the reduce phase merges the obtained partial results into a final vector of feature weights, which allows a flexible application of the feature selection procedure using a threshold to determine the selected subset of features. The feature selection method is evaluated by using three well-known classifiers (SVM, Logistic Regression, and Naive Bayes) implemented within the Spark framework to address big data problems. In the experiments, datasets with up to 67 million instances and up to 2000 attributes have been managed, showing that this is a suitable framework for evolutionary feature selection, improving both the classification accuracy and the runtime when dealing with big data problems.

1. Introduction

Learning from very large databases is a major issue for most of the current data mining and machine learning algorithms [1]. This problem is commonly referred to as “big data,” a term that reflects the difficulties and disadvantages of processing and analyzing huge amounts of data [2–4]. It has attracted much attention in a great number of areas such as bioinformatics, medicine, marketing, or financial businesses [5], because of the enormous collections of raw data that are stored in them. Recent advances in Cloud Computing technologies make it possible to adapt standard data mining techniques so that they can be successfully applied over massive amounts of data [4, 6, 7].

The adaptation of data mining tools to big data problems may require redesigning the algorithms and including them in parallel environments. Among the different alternatives, the MapReduce paradigm [8, 9] and its distributed file system [10], originally introduced by Google, offer an effective and robust framework to address the analysis of big datasets. This approach is currently favored in data mining over other parallelization schemes such as MPI (Message Passing Interface) [11], because of its fault-tolerant mechanism and its simplicity. Many recent works have focused on the parallelization of machine learning tools using the MapReduce approach [12, 13].

Recently, new and more flexible workflows have appeared to extend the standard MapReduce approach, such as Apache Spark [14], which has been successfully applied to various data mining and machine learning problems [15–17].

Data preprocessing methods, and more concretely data reduction models, are intended to clean and simplify the input data [18]. Thus, they attempt to accelerate data mining algorithms and to improve their accuracy by eliminating noisy and redundant data. The specialized literature describes two main types of data reduction models. On the one hand, instance selection [19, 20] and instance generation [21] processes work at the instance level. On the other hand, feature selection [22–25] and feature extraction [26] models work at the level of characteristics.

Among the existing techniques, evolutionary approaches have been successfully used for feature selection [27]. Nevertheless, an excessive growth of the individual size can limit their applicability, making them unable to provide a preprocessed dataset in a reasonable time when dealing with very large problems. In the current literature, there are no approaches that tackle the feature space with evolutionary big data models.

The main objective of this paper is to enable Evolutionary Feature Selection (EFS) models to be applied on big data. To do this, a MapReduce algorithm has been developed, which splits the data and performs a bunch of EFS processes in parallel in the map phase and then combines the solutions in the reduce phase to get the most interesting features. This algorithm will be denoted “MapReduce for Evolutionary Feature Selection” (MR-EFS).

More specifically, the purposes of this paper are
(i) to design an EFS technique over the MapReduce paradigm for big data,
(ii) to analyze and illustrate the scalability of the proposed scheme in terms of classification accuracy and the time necessary to build the classifiers.

To analyze the proposed approach, experiments on two big data classification datasets with up to 67 million instances and up to 2000 features will be carried out, focusing on the CHC algorithm [28] as the EFS method. The influence of the characteristics selected by this model on the classification performance of the Spark implementations of three different algorithms (Support Vector Machine, Logistic Regression, and Naive Bayes), available in MLlib [29], will then be analyzed.

The rest of the paper is organized as follows. Section 2 provides some background information about EFS and MapReduce. Section 3 describes the MapReduce algorithm proposed for EFS. The empirical results are discussed and analyzed in Section 4. Finally, Section 5 summarizes the conclusions of the paper.

2. Background

This section describes the topics used in this paper. Section 2.1 presents some preliminaries about EFS and its main drawbacks to deal with big data classification problems. Section 2.2 introduces the MapReduce paradigm, as well as two of the main frameworks for big data: Hadoop and Spark.

2.1. Feature Selection: Problems with Big Datasets

Feature selection models attempt to reduce a dataset by removing irrelevant or redundant features. The feature selection process seeks to obtain a minimum set of attributes, such that the results of the data mining techniques applied over the reduced dataset are as close as possible to (or even better than) the results obtained using all attributes [25]. This reduction facilitates the understanding of the extracted patterns and increases the speed of subsequent learning stages.

Feature selection methods can be classified into three categories:
(i) Wrapper methods: The selection criterion is part of the fitness function and therefore depends on the learning algorithm [30].
(ii) Filtering methods: The selection is based on data-related measures, such as separability or crowding [22].
(iii) Embedded methods: The optimal subset of features is built within the classifier construction [24].

For more information about specific feature selection methods, the reader can refer to the published surveys on the topic [22–24].

A recent, interesting proposal for applying feature selection to big datasets is presented in [31]. In that paper, the authors describe an algorithm that is able to efficiently cope with ultrahigh-dimensional datasets and select a small subset of interesting features from them. However, the number of selected features is assumed to be several orders of magnitude lower than the total number of features, and the algorithm is designed to be executed on a single machine. Therefore, this approach is not scalable to arbitrarily large datasets.

A particular way of tackling feature selection is by using evolutionary algorithms [27]. Usually, the set of features is encoded as a binary vector, where each position determines whether a feature is selected or not. This allows feature selection to be performed with the exploration capabilities of evolutionary algorithms. However, they lack the scalability necessary to address big datasets (from millions of instances onwards). The main problems found when dealing with big data are as follows:
(i) Runtime: The complexity of EFS models is at least O(n² · D), where n is the number of instances and D the number of features. When either of these variables becomes too large, the application of EFS may be too time-consuming for real situations.
(ii) Memory consumption: Most EFS methods need to store the entire training dataset in memory, along with additional computation data and results. When these data are too big, their size can easily exceed the available RAM.

In order to overcome these weaknesses, a distributed partitioning procedure is used, within the MapReduce paradigm, that divides the dataset into disjoint subsets that are manageable by EFS methods.

2.2. Big Data: MapReduce, Hadoop, and Spark

This section describes the main solutions for big data processing. Section 2.2.1 focuses on the MapReduce programming model, whilst Section 2.2.2 introduces two of the main frameworks to deal with big data.

2.2.1. MapReduce

MapReduce [8, 9] is one of the most popular programming models to deal with big data. It was proposed by Google in 2004 and designed for processing huge amounts of data using a cluster of machines. The MapReduce paradigm is composed of two phases: map and reduce. In general terms, in the map phase, the input dataset is processed producing some intermediate results. Then, the reduce phase combines them in some way to form the final output.

The MapReduce model is based on a basic data structure: the <key, value> pair. In the map phase, each application of the map function receives a single <key, value> pair as input and generates a list of intermediate <key, value> pairs as output. This is represented by the following form:

map(<key1, value1>) → list(<key2, value2>)

Then, the MapReduce library groups all intermediate pairs by key. Finally, the reduce function takes the aggregated pairs and generates a new <key, value> pair as output. This is depicted by the following form:

reduce(<key2, list(value2)>) → <key3, value3>
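To make the data flow concrete, the following minimal Python sketch (purely illustrative; the frameworks discussed below implement this model at scale) emulates the map, group-by-key, and reduce steps on a toy word-count example:

from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    # Map phase: each input pair produces a list of intermediate pairs.
    intermediate = []
    for key, value in records:
        intermediate.extend(map_fn(key, value))

    # Shuffle: group all intermediate values by key.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)

    # Reduce phase: each (key, list of values) produces one output pair.
    return [reduce_fn(key, values) for key, values in groups.items()]

# Toy word count: map emits (word, 1), reduce sums the counts.
records = [(0, "big data"), (1, "big datasets")]
wordcount = run_mapreduce(
    records,
    map_fn=lambda _, line: [(w, 1) for w in line.split()],
    reduce_fn=lambda word, counts: (word, sum(counts)),
)
print(wordcount)  # [('big', 2), ('data', 1), ('datasets', 1)]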

A flowchart of the MapReduce framework is presented in Figure 1.

2.2.2. Hadoop and Spark

Different implementations of the MapReduce programming model have appeared in recent years. The most popular one is Apache Hadoop [32], an open-source framework written in Java that allows the processing and management of large datasets in a distributed computing environment. In addition, Hadoop works on top of the Hadoop Distributed File System (HDFS), which replicates the data files over many storage nodes, facilitating rapid data transfer among nodes and allowing the system to continue operating without interruption when one or several nodes fail.

In this paper, Apache Hadoop is used to implement the proposal, MR-EFS, as described in Section 3.2.

Another Apache project that is closely related to Hadoop is Spark [14]. It is a cluster computing framework, originally developed at the UC Berkeley AMP Lab for large-scale data processing, that improves efficiency through intensive use of main memory. Spark uses HDFS and provides high-level libraries for stream processing, machine learning, and graph processing, such as MLlib [29].

For this work, several classifiers included in MLlib are used to test the MR-EFS algorithm: SVM, Naive Bayes, and Logistic Regression. Their parameters are specified in Section 4.1.

3. MR-EFS: MapReduce for Evolutionary Feature Selection

This section describes the proposed MapReduce approach for EFS, as well as its integration in a generic classification process. In particular, the MR-EFS algorithm is based on the CHC algorithm to perform feature selection, as described in Section 3.1.

First, MR-EFS is applied over the original dataset to obtain a vector of weights that indicates the relevance of each attribute (Section 3.2). Then, this vector is used within another MapReduce process to produce the resulting reduced dataset (Section 3.3). Finally, the reduced dataset is used by a classification algorithm.

3.1. CHC Algorithm for Feature Selection

The CHC algorithm [28] is a binary-coded genetic algorithm that combines a very high selective pressure with an elitist selection strategy, along with several components that introduce diversity. The main components of CHC are the following (a code sketch of the crossover and the mating restriction is given after this list):
(i) Half Uniform Crossover (HUX): This crossover operator aims at enforcing a high diversity and reducing the risk of premature convergence. It selects at random half of the bits that differ between both parents and exchanges them, obtaining two offspring that are at the maximum Hamming distance from their parents.
(ii) Elitist selection: In each generation, the new population is composed of the best individuals (those with the best values of the fitness function) among both the current and the offspring populations. In case of a tie between a parent and an offspring, the parent is selected.
(iii) Incest prevention: Two parents are allowed to mate only if they are different enough, that is, if half of the Hamming distance between them exceeds a threshold d (usually initialized to L/4, where L is the chromosome length). The threshold is decremented by one when no offspring is obtained in one generation, which indicates that the algorithm is converging.
(iv) Restarting process: When d drops to zero (which happens after several generations without any new offspring), the population is considered to be stagnated. In such a case, a new population is generated: the best individual is kept, and the remaining individuals have a certain percentage of their bits flipped.
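The following Python fragment sketches the HUX crossover and the incest prevention test for binary chromosomes. It is a simplified illustration of the standard CHC operators, not the exact implementation used in the paper.

import random

def hamming(a, b):
    """Number of positions in which two binary chromosomes differ."""
    return sum(x != y for x, y in zip(a, b))

def hux_crossover(parent1, parent2, rng=random):
    """HUX: exchange exactly half of the differing bits, chosen at random."""
    diff = [i for i in range(len(parent1)) if parent1[i] != parent2[i]]
    to_swap = rng.sample(diff, len(diff) // 2)
    child1, child2 = list(parent1), list(parent2)
    for i in to_swap:
        child1[i], child2[i] = parent2[i], parent1[i]
    return child1, child2

def mate_if_allowed(parent1, parent2, threshold, rng=random):
    """Incest prevention: mate only if the parents are different enough."""
    if hamming(parent1, parent2) / 2 > threshold:
        return hux_crossover(parent1, parent2, rng)
    return None  # too similar for the current threshold; no offspring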

The basic execution scheme of CHC is shown in Figure 2. This algorithm naturally adapts to a feature selection problem, as each feature can be represented as a bit in the solution vector. Thus, each position of the vector indicates if the corresponding feature is selected or not. Therefore, this approach falls within the wrapper method category, according to the classification established in Section 2.1. The fitness function used to evaluate new individuals applies a k-Nearest Neighbors classifier (k-NN) [33] over the dataset that would be obtained after removing the corresponding features. The fitness value is the weighted sum of the k-NN accuracy and the feature reduction rate.
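As an illustration of this wrapper evaluation, the sketch below scores a candidate chromosome as a weighted sum of k-NN accuracy and feature reduction rate. The use of scikit-learn cross-validation, the value of k, and the weight alpha are assumptions made for the example; the actual parameter values used in the paper are those listed in Table 2.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def chc_fitness(chromosome, X, y, k=1, alpha=0.5):
    """Weighted sum of k-NN accuracy and feature reduction rate.

    chromosome: binary vector; a 1 keeps the corresponding feature.
    alpha: weight of the accuracy term (hypothetical value).
    """
    selected = np.flatnonzero(chromosome)
    if selected.size == 0:
        return 0.0  # an empty feature subset is worthless
    acc = cross_val_score(
        KNeighborsClassifier(n_neighbors=k), X[:, selected], y, cv=3
    ).mean()
    reduction = 1.0 - selected.size / len(chromosome)
    return alpha * acc + (1.0 - alpha) * reduction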

3.2. MR-EFS Algorithm

This section describes the parallelization of the CHC algorithm, by using a MapReduce procedure to obtain a vector of weights.

Let T be a training set, stored in HDFS and randomized as described in [34]. Let m be the number of map tasks. The splitting procedure of MapReduce divides T into m disjoint subsets of instances. Then, each subset T_i (i = 1, …, m) is processed by the corresponding map task. As this partitioning is performed sequentially, all subsets will have approximately the same number of instances, and the randomization of the file ensures an adequate balance of the classes.

The map phase over each T_i consists of the EFS algorithm (in this case, based on CHC) as described in Section 3.1. Therefore, the output of each map task is a binary vector f_i = (f_i1, …, f_iD), where D is the number of features, that indicates which features were selected by the CHC algorithm. The reduce phase averages all the binary vectors, obtaining a vector x as defined in (3), where x_j is the proportion of EFS applications that include feature j in their result. This vector is the result of the overall EFS process and is used to build the reduced dataset that will be used for further machine learning purposes:

x_j = (1/m) Σ_{i=1}^{m} f_ij,   j = 1, …, D.   (3)

In the implementation used for the experiments, the reduce phase is carried out by a single task, which reduces the runtime by decreasing the MapReduce overhead [35]. The whole MapReduce process for EFS is depicted in Figure 3. It is noteworthy that the whole procedure is performed within a single iteration of the MapReduce workflow, avoiding additional disk accesses.
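A minimal Python sketch of this map/reduce scheme is given below; run_chc stands for the CHC-based EFS of Section 3.1 and is assumed to return a binary selection vector, and the sequential block splitting only mimics the behavior described above (the actual implementation relies on the HDFS splitting and Hadoop map tasks).

import numpy as np

def mr_efs(instances, labels, n_splits, run_chc):
    """Map: run CHC on each disjoint block. Reduce: average the binary vectors."""
    # Sequential splitting into (roughly) equally sized disjoint blocks.
    X_blocks = np.array_split(instances, n_splits)
    y_blocks = np.array_split(labels, n_splits)

    # Map phase: one binary selection vector per block.
    binary_vectors = [run_chc(Xb, yb) for Xb, yb in zip(X_blocks, y_blocks)]

    # Reduce phase (single reducer): proportion of maps that selected each feature.
    return np.mean(binary_vectors, axis=0)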

3.3. Dataset Reduction with MapReduce

Once vector x is calculated, the objective is to remove the less promising features from the original dataset. To do so in a scalable manner, an additional MapReduce process was designed. First, vector x is binarized using a threshold θ:

b_j = 1 if x_j ≥ θ, and b_j = 0 otherwise,   j = 1, …, D.   (4)

Vector b = (b_1, …, b_D) indicates which features will be selected for the reduced dataset. The number of selected features (the number of ones in b) can be controlled with θ: with a high threshold, only a few features will be selected, while a lower threshold allows more features to be picked.

The MapReduce process for the dataset reduction works as follows. Each map task processes an instance and generates a new one that only contains the features selected in b. Finally, the generated instances are concatenated to form the final reduced dataset, without the need for a reduce step. The dataset reduction process, using the result of MR-EFS and an arbitrary threshold, is depicted in Figure 4.
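Both steps can be summarized with the following Python sketch, which binarizes the weight vector according to (4) and then projects every instance onto the selected features (a map-only operation); the numerical values in the example are hypothetical.

import numpy as np

def binarize_weights(x, threshold):
    """b_j = 1 if x_j >= threshold, 0 otherwise (equation (4))."""
    return (np.asarray(x) >= threshold).astype(int)

def reduce_dataset(instances, b):
    """Map-only step: keep, for every instance, only the selected features."""
    selected = np.flatnonzero(b)
    return [np.asarray(instance)[selected] for instance in instances]

# Example: with threshold 0.55, features selected by at least 55% of the maps survive.
x = np.array([0.80, 0.30, 0.55, 0.10])
b = binarize_weights(x, threshold=0.55)   # -> [1, 0, 1, 0]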

4. Experimental Framework and Analysis

This section describes the performed experiments and their results. First, Section 4.1 describes the datasets and the methods used for the experiments. Section 4.2 details the underlying hardware and software support. Finally, Sections 4.3 and 4.4 present the results obtained using two different datasets.

4.1. Datasets and Methods

This experimental study uses two large binary classification datasets in order to analyze the quality of the solutions provided by the MR-EFS algorithm.

First, the epsilon dataset was used, which is composed of 500 000 instances with 2000 numerical features. This dataset was artificially created for the Pascal Large Scale Learning Challenge [36] in 2008. The version provided by LIBSVM [37] was used.

Additionally, this study includes the dataset used at the data mining competition of the Evolutionary Computation for Big Data and Big Learning held on July 14, 2014, in Vancouver (Canada), under the international conference GECCO-2014 (from now on, it is referred to as ECBDL14) [38]. This dataset has 631 features (including both numerical and categorical attributes), and it is composed of approximately 32 million instances. Moreover, the class distribution is not balanced: 98% of the instances belong to the negative class.

In order to deal with the imbalance problem, the MapReduce approach of the Random Oversampling (ROS) algorithm presented in [39] was applied over the original training set for ECBDL14. The aim of ROS is to replicate the minority class instances from the original dataset until the number of instances from both classes is the same.

Despite the inconvenience of increasing the size of the dataset, this technique was shown in [39] to yield better performance than other common approaches for dealing with imbalanced problems, such as undersampling and cost-sensitive methods, which suffer from the small sample size of the minority class when they are used within a MapReduce model.
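As an illustration of the idea (the actual procedure in [39] is itself a MapReduce algorithm and is not reproduced here), the following single-machine Python sketch replicates minority-class instances at random until both classes have the same number of instances:

import random

def random_oversampling(data, minority_label, seed=42):
    """data: list of (features, label) pairs.

    Replicates minority instances with replacement until both classes
    have the same number of instances.
    """
    rng = random.Random(seed)
    minority = [d for d in data if d[1] == minority_label]
    majority = [d for d in data if d[1] != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority + minority + extra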

The main characteristics of these datasets are summarized in Table 1. For each dataset, the number of instances for both training and test sets and the number of attributes are shown, along with the number of splits in which MR-EFS divided each dataset. Note that the imbalanced version of ECBDL14 is not used in the experiments, as only the balanced ECBDL14-ROS version is considered.

The parameters for the CHC algorithm are presented in Table 2.

After applying MR-EFS over the described datasets, the behavior of the obtained reduced datasets was tested using three different classifiers implemented in Spark, available in MLlib: SVM [40], Logistic Regression [41], and Naive Bayes [42]. The reader may refer to the provided references or to the MLlib guide [43] for further details about their internal functioning. The parameters used for these classifiers are listed in Table 2. In particular, two variants of SVM were used, differing in the regularization parameter, which penalizes complex models in the objective function and thus leads the algorithm to calculate simpler models.
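A hedged PySpark sketch of how such classifiers can be trained on a reduced dataset is shown below; the input path, the file format, and the parameter values are illustrative assumptions (the actual values are those in Table 2), and the paper does not state whether the Scala or the Python API of MLlib was used.

from pyspark import SparkContext
from pyspark.mllib.classification import SVMWithSGD, LogisticRegressionWithSGD, NaiveBayes
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext(appName="ReducedDatasetClassifiers")

# Assumed layout of the reduced dataset: label followed by the selected features (CSV).
def parse_line(line):
    values = [float(v) for v in line.split(",")]
    return LabeledPoint(values[0], values[1:])

train = sc.textFile("hdfs:///user/example/epsilon_reduced_train.csv").map(parse_line).cache()

# The two SVM variants differ only in the regularization parameter (0.0 vs 0.5).
svm_00 = SVMWithSGD.train(train, iterations=100, regParam=0.0)
svm_05 = SVMWithSGD.train(train, iterations=100, regParam=0.5)
logreg = LogisticRegressionWithSGD.train(train, iterations=100)
bayes = NaiveBayes.train(train, lambda_=1.0)  # Naive Bayes in MLlib expects nonnegative features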

In the remainder of this paper, two metrics are used to evaluate the performance of the three classifiers when applied over the obtained reduced datasets:
(i) Area Under the Curve (AUC): This measure is defined as the area under the Receiver Operating Characteristic (ROC) curve. In this work, this value is approximated with the formula in (5), where TPR is the True Positive Rate and TNR is the True Negative Rate. These values can be obtained directly from the confusion matrix and are not affected by the imbalance of the dataset (a minimal helper is sketched after this list):

AUC = (TPR + TNR) / 2.   (5)

(ii) Training runtime: The time (in seconds) used to train or build the classifier.
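The helper below implements this approximation from the entries of a binary confusion matrix:

def auc_approx(tp, fn, tn, fp):
    """AUC approximated as the average of the true positive and true negative rates."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return (tpr + tnr) / 2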

Note that for this study the test runtime is much less affected by the feature selection process, because at that point the classifier has already been built. For the sake of simplicity, only training runtimes are reported.

4.2. Hardware and Software Used

The experiments for this paper were carried out on a cluster of twenty computing nodes, plus a master node. Each computing node has the following features:
(i) Processors: 2 x Intel Xeon CPU E5-2620.
(ii) Cores: 6 per processor (12 threads).
(iii) Clock speed: 2.00 GHz.
(iv) Cache: 15 MB.
(v) Network: QDR InfiniBand (40 Gbps).
(vi) Hard drive: 2 TB.
(vii) RAM: 64 GB.

Both Hadoop master processes (the NameNode and the JobTracker) are hosted on the master node. The former controls the HDFS, coordinating the slave machines by means of their respective DataNode processes, while the latter is in charge of the TaskTrackers of each computing node, which execute the MapReduce framework. Spark follows a similar configuration: the master process is located on the master node, and the worker processes are executed on the slave machines. Both frameworks share the underlying HDFS file system.

These are the details of the software used for the experiments:
(i) MapReduce implementation: Hadoop 2.0.0-cdh4.7.1, MapReduce 1 (Cloudera’s open-source Apache Hadoop distribution).
(ii) Spark version: Apache Spark 1.0.0.
(iii) Maximum map tasks: 320 (16 per node).
(iv) Maximum reduce tasks: 20 (1 per node).
(v) Operating system: CentOS 6.6.

Note that the total number of cores in the cluster is 240. However, a higher number of map tasks was set in order to maximize the use of the cluster, allowing higher parallelism and better data locality, thereby reducing the network overload.

4.3. Experiments with the Epsilon Dataset

This section explains the results obtained for the epsilon dataset. First, Section 4.3.1 describes the performance of the feature selection procedure and compares it with a sequential approach. Then, Section 4.3.2 describes the results obtained in the classification.

4.3.1. Feature Selection Performance

The complexity order of the CHC algorithm is approximately O(e · n² · D), where e is the number of evaluations of the fitness function (a k-NN classifier in this case), n is the number of instances, and D is the number of features. Therefore, the algorithm is quadratic with respect to the number of instances.

When the dataset is divided into m splits within MR-EFS, each one of the map tasks has complexity order O(e · (n/m)² · D), which is m² times faster than applying CHC over the whole dataset. If c cores are available for the map tasks, the complexity of the map phase within the MR-EFS procedure is approximately O(⌈m/c⌉ · e · (n/m)² · D). This demonstrates the scalability of the approach presented in this paper: even if the maps are executed sequentially (c = 1), the procedure is still m times faster than feeding a single CHC with all the instances at once.
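As a back-of-the-envelope illustration of these complexity orders (constants are ignored, so only the ratio is meaningful), the following snippet compares a single CHC run over the full epsilon training set with the MR-EFS map phase used later in the experiments (n = 400 000 instances, D = 2000 features, m = 512 splits, c = 240 cores):

import math

def efs_cost(n, D, e=1.0):
    """Theoretical cost of a sequential CHC run over n instances."""
    return e * n**2 * D

def mr_efs_map_cost(n, D, m, c, e=1.0):
    """Cost of the map phase: the m tasks run in ceil(m/c) waves of CHC on n/m instances."""
    waves = math.ceil(m / c)
    return waves * efs_cost(n / m, D, e)

n, D, m, c = 400_000, 2000, 512, 240
speedup = efs_cost(n, D) / mr_efs_map_cost(n, D, m, c)
print(f"theoretical map-phase speedup: {speedup:,.0f}x")  # about 87,381x (= m**2 / ceil(m/c))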

In order to verify the performance of MR-EFS with respect to the sequential approach, a set of experiments were performed using subsets of the epsilon dataset. Both a sequential CHC algorithm and the parallel MR-EFS (with 1000 instances per split) were applied over those subsets. The obtained execution times are presented in Table 3 and Figure 5, along with the runtime of MR-EFS over the whole dataset.

The sequential runtimes followed a quadratic shape, in concordance with the complexity order of CHC, which clearly indicates that the time needed to tackle the whole dataset would be impractical. In contrast, the runtime of MR-EFS for the small datasets was nearly constant. The case with 1000 instances is special, in the sense that MR-EFS only executed one map task; therefore, it executed a single CHC with 1000 instances. The time difference between CHC and MR-EFS in this case reflects the overhead introduced by the latter. Even though this overhead increased slightly as the number of map tasks grew, it represented a minor part of the overall runtime.

As for the full dataset, with 512 splits, the number of instances for each map task in MR-EFS is around 780. As the number of cores used for the experiments was 240, the map phase of MR-EFS should be roughly three times slower than the sequential CHC with 1000 instances, according to the complexity orders previously detailed. The times in Table 3 show a larger gap, because MR-EFS also includes the other phases of the MapReduce framework (namely, splitting, shuffle, and reduce), which are nonnegligible for a dataset of this size. Nevertheless, the overall MR-EFS execution time is highly scalable, as shown by the fact that MR-EFS was able to process the 400 000 instances faster than CHC could process 5000 instances.

4.3.2. Classification Results

This section presents the results obtained by applying several classifiers over the full epsilon dataset with its features previously selected by using MR-EFS. The dataset was split among 512 map tasks, each of which computed around 780 instances.

The AUC values are shown in Table 4 and Figure 6, both for the training and test sets, using three different thresholds for the dataset reduction step. Note that the zero threshold corresponds to the original dataset, without any feature selection. The best results for each method are highlighted in boldface. The table shows that the accuracy was improved by the removal of the adequate features, as the threshold 0.55 allowed higher AUC values to be obtained. The accuracy gain was especially large for SVM. Moreover, more than half of the features were removed, which significantly reduces the size of the dataset and therefore the complexity of the resulting classifier.

A threshold of 0.60 also improved the accuracy results, while reducing the size of the dataset even further. Finally, for the 0.65 threshold, only SVM saw its AUC improved.

It is also noteworthy that the two variants of SVM obtained the same results for all the tested thresholds. This fact indicates that the complexity of the obtained SVM model is relatively low.

The training runtimes of the different algorithms and datasets are shown in Table 5 and Figure 7. The obtained results were seemingly the opposite of what was expected: except for Naive Bayes, the classifiers needed more time to process the datasets as their number of features decreased.

However, this behavior can be explained: as the dataset gets smaller, it occupies fewer HDFS blocks, and therefore the full parallel capacity of the cluster is not exploited. The size of each version of the epsilon dataset and the number of HDFS blocks needed to store it are shown in Table 6. The cluster is composed of 20 computing machines; therefore, when the number of blocks is lower than 20, some of the machines remain idle because they have no HDFS block to process. Moreover, even if the number of blocks is slightly above 20, the HDFS blocks may not be evenly distributed among the computing nodes. This demonstrates the capacity of Spark to deal with big databases: as the size of the database (more concretely, the number of HDFS blocks) increases, the framework is able to distribute the processes more evenly, exploiting data locality, increasing the parallelism, and reducing the network overhead.

In order to deal with this problem, the same experiments were repeated over the epsilon dataset after reorganizing the files with a smaller block size. For each of the eight sets, the block size was calculated according to (6), where s is the size of the dataset in bytes and c is the number of cores in the cluster, so that the dataset spans roughly one block per core:

blockSize = s / c.   (6)
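Assuming the simple quotient form reconstructed above for (6), the calculation can be sketched as follows; the 2.5 GB figure is a hypothetical example, not a value taken from Table 6.

def custom_block_size(dataset_bytes, n_cores):
    """Block size chosen so that the dataset spans roughly one HDFS block per core."""
    return dataset_bytes // n_cores

# Hypothetical example: a 2.5 GB reduced dataset on the 240-core cluster.
size_bytes = 2_500_000_000
print(custom_block_size(size_bytes, 240))  # 10416666 bytes, i.e., about 10.4 MB per block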

The runtimes for the datasets with the block size customized for each subset are displayed in Table 7 and Figure 8. They were smaller than those obtained with the default block size. Furthermore, the curves show the expected behavior: as the number of features of the dataset was reduced, the runtime decreased. In the extreme case (threshold 0.65), the runtime increased again, because with such a small dataset the synchronization times of Spark become larger than the computing times, even with the customized block size.

In the next section, MR-EFS is tested over a very large dataset, validating these observations.

4.4. Experiments with the ECBDL14-ROS Dataset

This section presents the classification accuracy and runtime results obtained with the ECBDL14-ROS dataset. As described in Section 4.1, a random oversampling technique [39] was previously applied over the original ECBDL14 dataset to overcome the problems originated by its imbalance. The MR-EFS method was applied using 32 768 map tasks; therefore, each map task computed around 1984 instances.

The obtained accuracy results are shown in Table 8 and Figure 9. MR-EFS improved the results in all cases, for the different thresholds. The accuracy gain was especially important for the Logistic Regression and SVM algorithms. As expected, the SVM with regularization parameter 0.0 (SVM-0.0) obtained better results than the one with 0.5 (SVM-0.5), as the latter attempts to reduce the complexity of the obtained model. However, it is noteworthy that, for 119 features, SVM-0.5 was able to outperform SVM-0.0. This hints that, after removing noisy features, the simpler models obtained represented better the true knowledge underlying the data. With even fewer features, both variants of SVM obtained roughly the same accuracy.

To conclude this study, the runtime necessary to train the classifiers with all variants of ECBDL14-ROS is presented in Table 9 and Figure 10. In this case, the runtime behaved as expected: the time was roughly linear with respect to the number of features, for all tested models. This means that MR-EFS was able to improve both the runtime and the accuracy for all those classifiers.

5. Concluding Remarks

This paper presents MR-EFS, an Evolutionary Feature Selection algorithm designed upon the MapReduce paradigm, intended to preprocess big datasets so that they become affordable for other machine learning techniques, such as classification algorithms, which are currently not scalable enough to deal with such datasets. The algorithm has been implemented using Apache Hadoop, and it has been applied over two different large datasets. The resulting reduced datasets have been tested using three different classifiers, implemented in Apache Spark, over a cluster of 20 computers.

The theoretical evaluation of the model highlights the full scalability of MR-EFS with respect to the number of features in the dataset, in comparison with a sequential approach. This behavior has been further confirmed after the empirical procedures.

According to the obtained classification results, it can be claimed that MR-EFS is able to adequately reduce the number of features of large datasets, leading to reduced versions of them that are at the same time smaller to store, faster to compute, and easier to classify. These facts have been observed with the two different datasets and for all tested classifiers.

For the epsilon dataset, the relation between the size of the reduced datasets and the number of nodes made it necessary to modify the HDFS block size, proving that the hardware resources can be optimally used by Hadoop and Spark with the correct design. One of the obtained reduced ECBDL14-ROS datasets, with more than 67 million instances and several hundred features, could be processed by the classifiers in less than half the time needed for the original dataset, and with an improvement of around 5% in terms of AUC.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the Research Projects TIN2014-57251-P, P10-TIC-6858, P11-TIC-7765, P12-TIC-2958, and TIN2013-47210-P. D. Peralta and S. Ramírez-Gallego hold two FPU scholarships from the Spanish Ministry of Education and Science (FPU12/04902, FPU13/00047). I. Triguero holds a BOF postdoctoral fellowship from the Ghent University.