| Classifier | Advantages | Limitations |
| --- | --- | --- |
| Support vector machine (SVM) | (1) Suitable for small, clean datasets. (2) Effective in high-dimensional spaces. | (1) Less efficient on noisy datasets. (2) Unsuitable for large datasets. (3) Choosing a suitable kernel function is hard, and the result is difficult to interpret. |
| Dynamic time warping (DTW) | (1) Time-series averaging makes classification faster and more accurate. (2) Suitable for a small number of templates. | (1) The number of templates is restricted. (2) Actual training samples are required. |
| Deep learning | (1) Not limited by computation power. (2) Handles high-dimensional data. (3) Can adapt automatically to all data. (4) Obtains results faster. (5) Works on big, complex datasets. | (1) Hard to interpret. (2) Requires a large quantity of training data. (3) Requires large memory and computing resources. (4) More costly. (5) Higher error rates. |
| K-nearest neighbor (K-NN) | (1) The complete dataset is searched to find the K nearest neighbors. (2) Suitable for multi-class classification and regression problems. | (1) Sensitive to outliers. (2) Cannot handle missing values. (3) Computationally costly. (4) Requires large memory. (5) Requires homogeneous features. |
| Probabilistic neural network (PNN) | (1) Faster and more accurate than MLPs. (2) Insensitive to outliers. | (1) Needs more memory. (2) Slower than an MLP when classifying new samples. (3) Requires a representative training set. |
| Euclidean distance | (1) Very popular. (2) Easy to compute. (3) Works well with compact or isolated clusters. | Sensitive to outliers. |
| Manhattan distance | Works well with compact or isolated clusters. | Sensitive to outliers. |
| Hidden Markov model (HMM) | Can handle variable-length inputs. | Requires more memory and time. |
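To make the DTW and K-NN rows concrete, here is a minimal sketch of a 1-nearest-neighbour classifier that uses dynamic time warping as its distance. The template series and class labels are made-up illustrative data, not from the source; a real system would use recorded training samples as templates, which is why the table notes that actual training samples are required.

```python
# Minimal DTW sketch: classic O(n*m) dynamic programme with an
# absolute-difference local cost, plus a 1-NN classifier on top.

def dtw_distance(a, b):
    """DTW distance between two 1-D series a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between the prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

def classify_1nn(query, templates):
    """Return the label of the template nearest to `query` under DTW.

    `templates` is a list of (series, label) pairs.
    """
    return min(templates, key=lambda t: dtw_distance(query, t[0]))[1]

# Illustrative templates (hypothetical data).
templates = [([0, 1, 2, 3], "rising"), ([3, 2, 1, 0], "falling")]
```

For example, `classify_1nn([0, 1, 1, 2, 3, 3], templates)` returns `"rising"`: DTW warps the repeated samples onto the shorter template at zero cost, which is exactly the variable-length tolerance the table credits to DTW (and to HMMs).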
|
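The two distance rows can likewise be illustrated side by side. The sketch below, using illustrative points of my own choosing, shows why both metrics are flagged as outlier-sensitive: one aberrant coordinate inflates both distances, and the squaring in the Euclidean metric amplifies it further.

```python
import math

def euclidean(p, q):
    # Square root of the summed squared per-coordinate differences.
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def manhattan(p, q):
    # Sum of the absolute per-coordinate differences.
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

# Hypothetical points that agree closely except in one outlying coordinate.
p = (0.0, 0.0, 0.0)
q = (1.0, 1.0, 10.0)
# The outlier dimension contributes 100/102 of the squared Euclidean
# distance but only 10/12 of the Manhattan distance, so a single bad
# feature dominates Euclidean comparisons even more than Manhattan ones.
```

Both metrics are cheap to compute and behave well on compact or isolated clusters, matching the table; the difference is only in how strongly a single deviant feature is weighted.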