Research Article | Open Access
Congwei Sun, Zhijun Dai, Hongyan Zhang, Lanzhi Li, Zheming Yuan, "Binary Matrix Shuffling Filter for Feature Selection in Neuronal Morphology Classification", Computational and Mathematical Methods in Medicine, vol. 2015, Article ID 626975, 9 pages, 2015. https://doi.org/10.1155/2015/626975
Binary Matrix Shuffling Filter for Feature Selection in Neuronal Morphology Classification
A prerequisite to understanding neuronal function and characteristics is to classify neurons correctly. Existing classification techniques are usually based on structural characteristics and employ principal component analysis to reduce feature dimensionality. In this work, we classify neurons based on neuronal morphology. A new feature selection method named binary matrix shuffling filter was applied to neuronal morphology classification. This method, coupled with support vector machines for implementation, usually selects a small number of features, which eases interpretation. The retained features were used to build classification models with support vector classification and two other commonly used classifiers. Compared with the reference feature selection methods, the binary matrix shuffling filter showed optimal performance and exhibited broad generalization ability across five random replications of the neuron datasets. In addition, the binary matrix shuffling filter was able to distinguish each neuron type from the other types correctly; for each neuron type, private features were also obtained.
To accelerate the understanding of neuronal characteristics in the brain, the prerequisite is to classify neurons correctly, and it is therefore necessary to develop a uniform methodology for their classification. Existing classification techniques are usually based on structural functions and the numbers of dendrites to fit the models. As neuronal morphology is closely related to neuronal characteristics and functions, neuroscientists have made great efforts to study neurons from the perspective of neuronal morphology. Renehan et al. employed intracellular recording and labeling techniques to examine potential relationships between the physiology and morphology of brainstem gustatory neurons and demonstrated a positive correlation between the breadth of responsiveness and the number of dendritic branch points. In the study by Badea and Nathans, detailed morphologies of all major classes of retinal neurons in the adult mouse were visualized; after analyzing the multidimensional parametric space, the neurons were clustered into subgroups using Ward's and k-means algorithms. In the study by Kong et al., retinal ganglion cells were imaged in three dimensions and the morphologies of a series of 219 cells were analyzed. A total of 26 parameters were studied, of which three, the level of stratification, the extent of the dendritic field, and the density of branching, were used to obtain an effective clustering, and the neurons could often be matched to ganglion cell types defined in previous studies. In addition, a quantitative analysis based on topology and seven morphometric parameters was performed by Ristanović et al. in the adult human dentate nucleus, and neurons in this region were classified into four types. A number of neuronal morphologic indices, such as soma surface, number of stems, length, and diameter, have been designed, which makes it possible to classify neurons based on morphological characteristics.
In the study by Li et al., a total of 60 neurons were selected randomly and five of the twenty morphologic characteristics were extracted by principal component analysis (PCA), after which the neurons were clustered into four types. Jiang et al. extracted four principal components of neuronal morphology by PCA and employed a back propagation neural network (BPNN) to distinguish the same kinds of neurons in different species. However, the above studies [2–5] focused only on a particular neuronal type or a specific region of the brain, aiming to solve specific issues rather than classify neurons systematically. Moreover, only a few samples were selected and the classification results were not independently tested, which limits their persuasiveness. The methodologies used in previous studies [7, 8] were mainly based on PCA and cluster analysis. PCA is the optimal linear transformation for minimizing the mean square reconstruction error, but it considers only second-order statistics; if the data have nonlinear dependencies, higher-order statistics should be taken into account. Besides, each principal component is a compression of the original attributes, making it hard to interpret the respective contribution of each feature. Therefore, feature selection (FS), which simplifies the model by removing redundant and irrelevant features, is necessary.
Available feature selection methods fall into three categories. (i) In filter methods, inherent properties of the datasets are used to rank variables, and the algorithmic complexity is low; however, redundancy is usually present among the selected features, which may result in low classification accuracy. Univariate filter methods include the t-test, correlation coefficient, chi-square statistics, information gain, relief, signal-to-noise ratio, Wilcoxon rank sum, and entropy. Multivariate filter methods include mRMR, correlation-based feature selection, and the Markov blanket filter. (ii) In wrapper methods, the training precision and algorithmic complexity are high, which often leads to overfitting. Representative methods include sequential forward selection, sequential backward selection, sequential floating selection, the genetic algorithm, and the ant colony algorithm; SVM and ANN are usually used for implementation. (iii) Embedded methods, including support vector machine recursive feature elimination (SVM-RFE), support vector machine with RBF kernel based on recursive feature elimination (SVM-RBF-RFE), support vector machine and T statistics recursive feature elimination (SVM-T-RFE), and random forest, use internal information of the classification model to evaluate the selected features.
In this work, a new feature selection method named BMSF was used. It not only overcomes the overfitting problem in a large-dimensional search space but also takes potential feature interactions into account during feature selection. Seven types of neurons with different characteristics and functions, pyramidal neurons, Purkinje neurons, sensory neurons, motoneurons, bipolar interneurons, tripolar interneurons, and multipolar interneurons, were selected from the NeuroMorpho.org database, drawn from all species and brain regions available (up to version 6.0). BMSF was used to reduce the features nonlinearly, and a support vector classification (SVC) model was built to classify neurons based on the retained morphological characteristics. SVM-RFE and rough set theory were used for comparison with the introduced feature selection method, while two other classifiers widely used in the pattern recognition field, the back propagation neural network (BPNN) and Naïve Bayes (NB), were employed to test the robustness of BMSF. A systematic classification of neurons should facilitate the understanding of neuronal structure and function.
2. Materials and Methods
2.1. Data Sources
The datasets used in this work were downloaded from the NeuroMorpho.org database [6, 29]. NeuroMorpho.org is a web-based inventory dedicated to densely archiving and organizing all publicly shared digital reconstructions of neuronal morphology. It was started and is maintained by the Computational Neuroanatomy Group at the Krasnow Institute for Advanced Study, George Mason University. This project is part of a consortium for the creation of a "neuroscience information framework," endorsed by the Society for Neuroscience, funded by the National Institutes of Health, led by Cornell University (Dr. Daniel Gardner), and including numerous academic institutions such as Yale University, Stanford University, and the University of California, San Diego (http://neuromorpho.org/neuroMorpho/myfaq.jsp). The datasets used in this study are documented in Table 1. A total of 5862 neurons were selected, and training and test sets were divided randomly at a ratio of 2:1 within each neuron type. Finally, we obtained five pairs of datasets, each containing randomly drawn samples.
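The 2:1 stratified split described above can be sketched as follows. This is a minimal illustration using scikit-learn; the feature matrix, class labels, and sample counts are toy stand-ins, not the actual Table 1 datasets:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy stand-in for the morphology table: 300 neurons, 43 features,
# three of the seven classes (illustrative only; real counts are in Table 1).
X = rng.normal(size=(300, 43))
y = rng.choice(["pyramidal", "motoneuron", "sensory"], size=300)

# 2:1 train/test split; stratify=y keeps the ratio within each neuron type.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=1 / 3, stratify=y, random_state=42)

print(len(X_tr), len(X_te))  # 200 100
```

Repeating this with five different random states yields the five pairs of datasets used in the replications.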
2.2. Feature Extraction and Selection
Neuronal reconstructions in the NeuroMorpho.org database are stored as a series of compartments, each characterized by an identification number, a type, the spatial coordinates of the cylinder end point, the radius value, and the identification number of its "parent" compartment. Although this digital description constitutes a completely accurate mapping of dendritic morphology, it bears little intuitive information. In this work, 43 attributes that hold more intuitive information were extracted with the L-measure software, and the related morphological indices and descriptions are shown in Table 2. For convenience, we give an abbreviation for each neuronal morphological index, as listed in the second column of Table 2.
Redundancy was expected among these attributes. Feature selection saves computational time and storage and simplifies models when dealing with high-dimensional datasets, and it is also useful for improving classification accuracy by removing redundant and irrelevant features.
2.2.1. Binary Matrix Shuffling Filter
For rapid and efficient selection among high-dimensional features, we previously reported a novel method named binary matrix shuffling filter (BMSF) based on support vector classification (SVC). The method was successfully applied to the classification of nine cancer datasets and obtained excellent results. The outline of the algorithm is as follows.
Firstly, denote the original training set as T, which includes n samples and p features. We randomly generate an m × p matrix with entries of 1 or 0, each entry indicating whether the feature in that column is included in the model, where m is the given number of feature combinations (a fixed value in this paper); the numbers of 1s and 0s in each column (each feature) are equal.

Secondly, for each of the m combinations, a reduced training set is formed from the original training set according to the subscripts of the selected features, and a classification accuracy is obtained through tenfold cross validation. Repeating this process m times yields m accuracy values.

Thirdly, taking the m accuracy values as the new dependent variable and the random 0/1 matrix as the independent variable matrix, a new training set is constructed. To evaluate the contribution of a single feature to the model, we change all the 1s in the jth column to 0 and all the 0s in that column to 1 (keeping the other columns unchanged), producing two test sets whose jth column consists entirely of 1s or entirely of 0s, respectively. A model built on the newly constructed training set predicts these two test sets, giving two predictive vectors, a1 (feature j forced in) and a0 (feature j forced out).

If the mean value of a1 is greater than that of a0, the feature corresponding to this column tends to improve classification performance; otherwise, the feature should be excluded. Repeating this process, the features are screened in multiple rounds until no more can be deleted.
Detailed procedures can be found in our previous study. This method is able to find a parsimonious set of features with high joint prediction power.
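A single screening round of the procedure above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the helper name `bmsf_round`, the choice of support vector regression as the meta-model, and the toy parameter values are our assumptions.

```python
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.model_selection import cross_val_score

def bmsf_round(X, y, n_comb=50, cv=10, seed=0):
    """One filtering round of a BMSF-style screen (sketch only).
    Returns the column indices of the retained features."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    # One balanced 0/1 column per feature, stacked into an n_comb x p matrix.
    cols = [rng.permutation(np.r_[np.ones(n_comb // 2, int),
                                  np.zeros(n_comb - n_comb // 2, int)])
            for _ in range(p)]
    M = np.column_stack(cols)
    # Cross-validated accuracy of an SVC on each feature combination;
    # a row that selects no features gets chance-level accuracy.
    chance = 1.0 / len(np.unique(y))
    acc = np.array([cross_val_score(SVC(), X[:, M[i] == 1], y, cv=cv).mean()
                    if M[i].any() else chance
                    for i in range(n_comb)])
    # Meta-model: predict accuracy from the 0/1 inclusion pattern.
    meta = SVR().fit(M, acc)
    keep = []
    for j in range(p):
        on, off = M.copy(), M.copy()
        on[:, j], off[:, j] = 1, 0          # force feature j in / out
        if meta.predict(on).mean() > meta.predict(off).mean():
            keep.append(j)                   # feature j tends to help
    return keep

# Demo on toy data in which only feature 0 separates the two classes.
rng = np.random.default_rng(7)
y = np.repeat([0, 1], 40)
X = rng.normal(size=(80, 6))
X[:, 0] += y * 3.0
print(bmsf_round(X, y, n_comb=20, cv=3, seed=1))
```

In the full method this round is repeated on the surviving columns until no feature can be deleted, which is what yields the parsimonious final subset.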
2.2.2. Support Vector Machine Recursive Feature Elimination
SVM-RFE is an application of recursive feature elimination (RFE) that uses the weight magnitude as the ranking criterion. It eliminates redundant features and yields more compact feature subsets. Features are eliminated according to a criterion related to their support of the discrimination function, and the SVM is retrained at each step. This method was first used successfully for gene feature selection and afterwards in the fields of bioinformatics, genomics, transcriptomics, and proteomics. For the technical details of the method, refer to the original study by Guyon et al.
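As a concrete illustration, scikit-learn's `RFE` wrapper implements this weight-magnitude elimination loop around a linear SVM. The toy dataset, in which only the first three of ten features carry class signal, is our own construction:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
# Toy data: only the first 3 of 10 features carry class signal.
y = rng.integers(0, 2, size=120)
X = rng.normal(size=(120, 10))
X[:, :3] += y[:, None] * 4.0   # strong shift on the informative features

# SVM-RFE: rank features by the weight magnitude |w| of a linear SVM,
# drop the weakest feature, and retrain at each step.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=3, step=1).fit(X, y)
print(sorted(np.flatnonzero(rfe.support_)))
```

On this construction the surviving subset is the three informative columns, mirroring how SVM-RFE discards redundant features.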
2.2.3. Rough Set Theory
Rough set theory, introduced by Pawlak  in the early 1980s, is a tool for representing and reasoning about imprecise and uncertain data. It constitutes a mathematical framework for inducing minimal decision rules from training examples. Each rule induced from the decision table identifies a minimal set of features discriminating one particular example from other classes. The set of rules induced from all the training examples constitutes a classificatory model capable of classifying new objects. The selected feature subset not only retains the representational power but also has minimal redundancy. A typical application of the rough set method usually includes three steps: construction of decision tables, model induction, and model evaluation . The algorithm used in this work is derived from the study by Hu et al. [35–37].
2.3. Classification Techniques
2.3.1. Support Vector Classification
Support vector classification, based on statistical learning theory, is widely used in the machine learning field. In SVM, structural risk minimization substitutes for traditional empirical risk minimization, making the method particularly suitable for problems with small sample sizes, high dimensionality, nonlinearity, overfitting, the curse of dimensionality, local minima, and strong collinearity; it also exhibits excellent generalization ability. In this work the nonlinear radial basis function (RBF) kernel was selected, and the optimization ranges of the parameters C and gamma were −5 to 15 and 3 to −15 (base-2 logarithm), respectively. The cross validation and independent tests were carried out using in-house programs written in MATLAB (version R2012a).
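The parameter search described above can be sketched as a base-2 grid over C and gamma. This is a hedged illustration with scikit-learn rather than the authors' MATLAB programs, on synthetic stand-in data, with coarse exponent steps to keep the example quick:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

# Toy stand-in for the morphology features (not the Table 1 data).
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Base-2 grid as described in the text: log2(C) from -5 to 15 and
# log2(gamma) from 3 down to -15 (coarse steps for speed).
grid = {"C": 2.0 ** np.arange(-5, 16, 4),
        "gamma": 2.0 ** np.arange(-15, 4, 4)}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=10).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The tenfold cross validation inside `GridSearchCV` matches the model-selection protocol stated in the text; only the grid granularity is an assumption here.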
2.3.2. Back Propagation Neural Network
BPNN is one of the most widely employed techniques among artificial neural network (ANN) models. The general structure of the network consists of an input layer, a variable number of hidden layers containing any number of nodes, and an output layer. The back propagation learning algorithm modifies the feed-forward connections between the input and hidden units and between the hidden and output units, adjusting the connection weights to minimize the error. The Java-based software WEKA was used to fit the model.
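A minimal analogue of such a network (one hidden layer trained by back propagation) can be sketched with scikit-learn's `MLPClassifier`. The original models were fitted in WEKA, so this is only an illustrative substitute with an assumed hidden-layer size and toy data:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Input layer -> one hidden layer of 20 nodes -> output layer; the back
# propagation algorithm adjusts the feed-forward connection weights to
# minimize the training error.
bpnn = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print(round(bpnn.score(X_te, y_te), 3))
```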
2.3.3. Naïve Bayes
Naïve Bayes is a classification technique obtained by applying a relatively simple method to a training dataset. A Naïve Bayes classifier calculates the probability that a given instance belongs to a certain class. Despite its simple structure and ease of implementation, Naïve Bayes often performs well. The Naïve Bayes models were also implemented in the WEKA software, with all parameters set to their defaults.
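The class-probability calculation can be illustrated with a Gaussian Naïve Bayes model. scikit-learn is used here purely for illustration; the original experiments used WEKA's implementation with default parameters:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=6, random_state=1)

# The classifier estimates P(class | features) under the "naive" assumption
# that features are conditionally independent given the class.
nb = GaussianNB().fit(X, y)
proba = nb.predict_proba(X[:5])
print(proba.round(3))   # one row per instance, one column per class
```

Each row of `proba` sums to 1, and the predicted label is simply the class with the larger posterior.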
3. Results and Discussion
3.1. Selected Feature Subsets
The feature selection methods were applied to the training sets to obtain optimal feature subsets. For each method, five sets of features were obtained. Table 3 shows the retained feature subsets derived from BMSF, SVM-RFE, and rough set theory. The five feature subsets are numbered with Roman numerals I to V, one for each of the five replications, and the number of selected features is also listed in Table 3.
As shown in Table 3, approximately eight features on average were retained by BMSF, while the number of features derived from SVM-RFE and rough set theory was more than ten. BMSF retained fewer features, which were more informative and easier to interpret. The feature ranking list reflects the importance of each feature. In the feature subsets of BMSF and rough set, the same feature ranked first in all five replications, indicating that it had a strong ability to discriminate neuron types. We calculated the frequency of each selected feature across the five replications. Besides this top-ranked feature, the features NW, HT, and Ta2 were also retained in all five random replications, and their rankings were similar across the five BMSF subsets.
3.2. Classification Performance
3.2.1. Comparison of Independent Test Accuracies Using Different Models
To evaluate the performance of BMSF and compare it with SVM-RFE and rough set, three classifiers were employed for the independent tests. Including the classification performance without feature selection, twelve classification accuracies were obtained. The average accuracies over the five random datasets are presented in Table 4.
The independent classification accuracy is the ratio of the correctly classified samples to the total test samples. As shown in Table 4, of the twelve results obtained, the optimal classification model over the five datasets is BMSF-SVC (97.84%), followed by SVC without feature selection (97.1%). The excellent classification results of the SVC classifier indicate that all the extracted features were useful in identifying neurons and that few irrelevant features were extracted. Furthermore, after feature selection by BMSF, the classification accuracy of SVC increased, suggesting that BMSF successfully deleted redundant features and simplified the models with fewer features. On the other hand, the feature subsets derived from SVM-RFE and rough set did not increase the accuracies on SVC; in fact, the accuracies decreased sharply. A similar finding holds for Naïve Bayes: the two feature selection methods decreased its performance, while BMSF improved it. The BPNN classifier showed little sensitivity to the feature subsets, with classification performance at similar levels throughout. With fewer features, BMSF also obtained good accuracy on BPNN, and a simplified model may be useful for further interpretation.
The above independent accuracies indicated that BMSF has an excellent generalization ability and robustness on the three classifiers. We also calculated the average performance of each feature selection method on the three classifiers and the classification performance based on the three different feature selection methods. The results are listed in the last row and column of Table 4. The average classification accuracy based on BMSF was also the best.
As the datasets used in this work are unbalanced (as shown in Table 1), it is necessary to break down the independent test accuracy to obtain the classification performance of each cell type. Based on the predicted labels, the sensitivities of each cell type in the five replications are presented in Table 5.
Among the seven neuron types, BMSF-SVC exhibited the best performance on pyramidal neurons, motoneurons, sensory neurons, and bipolar neurons. Although Naïve Bayes performed excellently on tripolar and multipolar neurons, it did not do well on the other neuron types. The classification result for multipolar neurons was poor; however, SVM-RFE and rough set also performed less well on SVC for this type. We found that the predicted labels of multipolar neurons were almost the same as those of pyramidal neurons in all the models, which indicated that the unbalanced datasets affected the prediction of multipolar neurons.
3.2.2. Distinguishing a Certain Neuron Type from Others by BMSF-SVC
To evaluate whether a certain feature subset is useful in identifying a single cell type, the optimal model in this study (BMSF-SVC) was employed. For the seven neuron types, six hierarchy models were established, each of which is a binary classification problem. Because of the imbalanced datasets in this paper, accuracy and the Matthews correlation coefficient (MCC) were used to evaluate the established models, and recall was used to evaluate the classification performance for a single neuron type:

Accuracy = (TP + TN) / (TP + TN + FP + FN),
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)),
Recall = TP / (TP + FN),

where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively, derived from the confusion matrix. In this paper, the positive samples were a certain neuron type and all the remaining neuron types were negative samples. Positive samples were selected according to the number of samples in each type, and the datasets in each hierarchy are presented in Table 6. For each neuron type, a private feature subset was obtained.
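The three evaluation measures can be computed directly from the confusion-matrix counts. The following self-contained sketch (the function name is our own) implements the standard definitions of accuracy, MCC, and recall:

```python
import math

def binary_metrics(tp, tn, fp, fn):
    """Accuracy, Matthews correlation coefficient, and recall for a
    binary confusion matrix (standard definitions)."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    recall = tp / (tp + fn)
    return acc, mcc, recall

# Example: 40 true positives, 50 true negatives, 5 of each error type.
print(binary_metrics(40, 50, 5, 5))
```

Note that MCC, unlike raw accuracy, stays informative when the positive class is rare, which is why it is the appropriate companion metric for these imbalanced hierarchy models.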
As shown in Table 6, the accuracies and MCC in each hierarchy indicated the effectiveness of the models. We obtained private feature subsets for each neuron type. These features were useful in identifying the corresponding neurons, and the perfect recall may support our conclusion. The above finding suggested that BMSF was not only useful in identifying all seven cell types but also able to discriminate specific neuron types.
In this paper, we used a new feature selection method named BMSF for neuronal morphology classification. Feature interactions are taken into consideration to obtain highly accurate classification of neurons, and the method usually selects a small number of features, which eases interpretation. As shown in Table 3, approximately eight features were retained by BMSF, fewer than the numbers obtained by the other two feature selection methods. BMSF automatically conducts multiple rounds of filtering and guided random search in the large feature-subset space and reports the final list of features. Although this process is wrapped around SVC, the selected features have general applicability to multiple classification algorithms, as demonstrated by the classification performance shown in Table 4.
We should point out that different runs of BMSF may produce different feature subsets. This phenomenon arises because many possible characteristics may be used to distinguish neurons; for example, the feature subsets derived from rough set theory and from BMSF achieve similar classification accuracy when applied to the SVC classifier. Our goal is to find a minimal set of features whose combination can well differentiate the dependent variable.
The feature subsets retained on the same dataset by different feature selection methods differed greatly. Li et al. and Jiang et al. selected features from only the first twenty attributes of Table 2, so they inevitably ignored attributes that were retained by BMSF; feature extraction with the L-measure software was therefore necessary. Another drawback of their feature selection methods is that they did not reduce the variables in a nonlinear manner; for example, PCA considers only second-order statistics, so interactions cannot be taken into account.
Conventional classification techniques are built on the premise that the input datasets are balanced; if not, classification performance decreases sharply. There were 3908 neurons in the training set, but the number of neurons in each type differed greatly (Table 1). For example, there were only 24 multipolar interneurons and 11 Purkinje neurons, whereas the number of pyramidal neurons was 3172, and these unbalanced datasets had a negative effect on the classification results (Table 5). Therefore, we constructed a hierarchy model for each neuron type, and BMSF was demonstrated to be useful in distinguishing specific neuron types from the others.
We introduced a new feature selection method named BMSF for neuronal morphology classification, obtained satisfactory accuracy on all the datasets and in each hierarchy model, and selected private, parsimonious feature subsets for each neuron type. However, classification based solely on neuronal morphology is clearly inadequate. Over time, dendrites may continue to grow and axons may generate additional terminals, which will undoubtedly change the vital parameters. Therefore, combining biophysical characteristics with functional characteristics to investigate the neuronal classification problem will be a productive direction in the future.
Conflict of Interests
All the authors declare that they have no conflict of interests regarding the publication of this paper.
This work was supported by the National Natural Science Foundation of China (nos. 31000666 and 61300130) and by the China Postdoctoral Science Foundation (nos. 2012M511722 and 2014T70769).
- M. Bota and L. W. Swanson, “The neuron classification problem,” Brain Research Reviews, vol. 56, no. 1, pp. 79–88, 2007.
- W. E. Renehan, Z. Jin, X. Zhang, and L. Schweitzer, “Structure and function of gustatory neurons in the nucleus of the solitary tract. II. Relationships between neuronal morphology and physiology,” The Journal of Comparative Neurology, vol. 367, no. 2, pp. 205–221, 1996.
- T. C. Badea and J. Nathans, “Quantitative analysis of neuronal morphologies in the mouse retina visualized by using a genetically directed reporter,” Journal of Comparative Neurology, vol. 480, no. 4, pp. 331–351, 2004.
- J. H. Kong, D. R. Fish, R. L. Rockhill, and R. H. Masland, “Diversity of ganglion cells in the mouse retina: unsupervised morphological classification and its limits,” Journal of Comparative Neurology, vol. 489, no. 3, pp. 293–310, 2005.
- D. Ristanović, N. T. Milošević, B. D. Stefanović, D. L. Marić, and K. Rajković, “Morphology and classification of large neurons in the adult human dentate nucleus: a qualitative and quantitative analysis of 2D images,” Neuroscience Research, vol. 67, no. 1, pp. 1–7, 2010.
- G. A. Ascoli, D. E. Donohue, and M. Halavi, “NeuroMorpho.Org: a central resource for neuronal morphologies,” The Journal of Neuroscience, vol. 27, no. 35, pp. 9247–9251, 2007.
- C. Li, X. Xie, and X. Wu, “A universal neuronal classification and naming scheme based on the neuronal morphology,” in Proceedings of the IEEE International Conference on Computer Science and Network Technology (ICCSNT '11), vol. 3, pp. 2083–2087, December 2011.
- R. Jiang, Q. Liu, and S. Liu, “A proposal for the morphological classification and nomenclature of neurons,” Neural Regeneration Research, vol. 6, no. 25, pp. 1925–1930, 2011.
- G. Kerschen and J. C. Golinval, “Non-linear generalization of principal component analysis: from a global to a local approach,” Journal of Sound and Vibration, vol. 254, no. 5, pp. 867–876, 2002.
- I. Hedenfalk, D. Duggan, Y. Chen et al., “Gene-expression profiles in hereditary breast cancer,” The New England Journal of Medicine, vol. 344, no. 8, pp. 539–548, 2001.
- V. R. Iyer, M. B. Eisen, D. T. Ross et al., “The transcriptional program in the response of human fibroblasts to serum,” Science, vol. 283, no. 5398, pp. 83–87, 1999.
- X. Jin, A. Xu, R. Bie, and P. Guo, “Machine learning techniques and chi-square feature selection for cancer classification using SAGE gene expression profiles,” in Data Mining for Biomedical Applications, vol. 3916 of Lecture Notes in Computer Science, pp. 106–115, Springer, Berlin, Germany, 2006.
- M. Dash and H. Liu, “Feature selection for classification,” Intelligent Data Analysis, vol. 1, no. 1–4, pp. 131–156, 1997.
- K. Kenji and A. R. Larry, “The feature selection problem: traditional methods and a new algorithm,” in Proceedings of the 10th National Conference on Artificial Intelligence, W. Swartout, Ed., pp. 129–134, AAAI Press/The MIT Press, San Jose, Calif, USA, July 1992.
- T. R. Golub, D. K. Slonim, P. Tamayo et al., “Molecular classification of cancer: class discovery and class prediction by gene expression monitoring,” Science, vol. 286, no. 5439, pp. 531–537, 1999.
- Z. Fang, R. Du, and X. Cui, “Uniform approximation is more appropriate for Wilcoxon rank-sum test in gene set analysis,” PLoS ONE, vol. 7, no. 2, Article ID e31505, 2012.
- S. Zhu, D. Wang, K. Yu, T. Li, and Y. Gong, “Feature selection for gene expression using model-based entropy,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 7, no. 1, pp. 25–36, 2010.
- H. Peng, F. Long, and C. Ding, “Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226–1238, 2005.
- Y. Wang, I. V. Tetko, M. A. Hall et al., “Gene selection from microarray data for cancer classification—a machine learning approach,” Computational Biology and Chemistry, vol. 29, no. 1, pp. 37–46, 2005.
- M. Han and X. Liu, “Forward feature selection based on approximate Markov blanket,” in Advances in Neural Networks—ISNN 2012, vol. 7368 of Lecture Notes in Computer Science, pp. 64–72, Springer, Berlin, Germany, 2012.
- J. Kittler, “Feature set search algorithms,” in Pattern Recognition and Signal Processing, C. H. Chen, Ed., pp. 41–60, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1978.
- P. Pudil, J. Novovičová, and J. Kittler, “Floating search methods in feature selection,” Pattern Recognition Letters, vol. 15, no. 11, pp. 1119–1125, 1994.
- B. Q. Hu, R. Chen, D. X. Zhang, G. Jiang, and C. Y. Pang, “Ant colony optimization vs genetic algorithm to calculate gene order of gene expression level of Alzheimer's disease,” in Proceedings of the IEEE International Conference on Granular Computing (GrC '12), pp. 169–172, Hangzhou, China, August 2012.
- L. J. Cai, L. B. Jiang, and Y. Q. Yi, “Gene selection based on ACO algorithm,” Application Research of Computers, vol. 25, no. 9, pp. 2754–2757, 2008.
- I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, “Gene selection for cancer classification using support vector machines,” Machine Learning, vol. 46, no. 1–3, pp. 389–422, 2002.
- Q. Liu, A. H. Sung, Z. Chen et al., “Gene selection and classification for cancer microarray data based on machine learning and similarity measures,” BMC Genomics, vol. 12, no. 5, article S1, 2011.
- X. Li, S. Peng, J. Chen, B. Lü, H. Zhang, and M. Lai, “SVM-T-RFE: a novel gene selection algorithm for identifying metastasis-related genes in colorectal cancer using gene expression profiles,” Biochemical and Biophysical Research Communications, vol. 419, no. 2, pp. 148–153, 2012.
- K. K. Kandaswamy, K. C. Chou, T. Martinetz et al., “AFP-Pred: a random forest approach for predicting antifreeze proteins from sequence-derived properties,” Journal of Theoretical Biology, vol. 270, no. 1, pp. 56–62, 2011.
- G. A. Ascoli, Computational Neuroanatomy: Principles and Methods, Humana Press, Totowa, NJ, USA, 2002.
- G. A. Ascoli, J. L. Krichmar, S. J. Nasuto, and S. L. Senft, “Generation, description and storage of dendritic morphology data,” Philosophical Transactions of the Royal Society, Series B: Biological Sciences, vol. 356, no. 1412, pp. 1131–1145, 2001.
- R. Scorcioni, S. Polavaram, and G. A. Ascoli, “L-measure: a web-accessible tool for the analysis, comparison and search of digital reconstructions of neuronal morphologies,” Nature Protocols, vol. 3, no. 5, pp. 866–876, 2008.
- H. Zhang, H. Wang, Z. Dai, M. S. Chen, and Z. Yuan, “Improving accuracy for cancer classification with a new algorithm for genes selection,” BMC Bioinformatics, vol. 13, no. 1, article 298, 2012.
- Z. Pawlak, Rough Set: Theoretical Aspects of Reasoning about Data, Kluwer Academic Publishers, Boston, Mass, USA, 1991.
- Y. Cao, S. Liu, L. Zhang, J. Qin, J. Wang, and K. Tang, “Prediction of protein structural class with Rough Sets,” BMC Bioinformatics, vol. 7, article 20, 2006.
- Q. Hu, D. Yu, and Z. Xie, “Information-preserving hybrid data reduction based on fuzzy-rough techniques,” Pattern Recognition Letters, vol. 27, no. 5, pp. 414–423, 2006.
- Q. Hu, D. Yu, Z. Xie, and J. Liu, “Fuzzy probabilistic approximation spaces and their information measures,” IEEE Transactions on Fuzzy Systems, vol. 14, no. 2, pp. 191–201, 2006.
- Q. Hu and D. Yu, “Entropies of fuzzy indiscernibility relation and its operations,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 12, no. 5, pp. 575–589, 2004.
- V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 2000.
- R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '89), pp. 593–605, June 1989.
- M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, “The WEKA data mining software,” ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009.
- T. M. Mitchell, Machine Learning, McGraw-Hill, 1997.
- Y. Tang, Y. Q. Zhang, N. V. Chawla, and S. Krasser, “SVMs modeling for highly imbalanced classification,” IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 39, no. 1, pp. 281–288, 2009.
Copyright © 2015 Congwei Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.