**Hyperparameter search space for the machine learning algorithms**

| Model | Parameter | Chosen values |
|---|---|---|
| SVM | C (penalty parameter of the error term) | 0.001, 0.01, 0.1, 1, and 10 |
| SVM | Gamma (how far the influence of a single training example reaches) | 0.001, 0.01, 0.1, and 1 |
| SVM | Kernel (pattern analysis method) | RBF, linear |
| KNN | k (number of neighbors) | From 1 to 30 with a step of 1 |
| KNN | Weight (how weights are initialized) | Uniform, distance |
| KNN | Leaf size (brute-force searches between nodes) | 50, 100, 200, 300, and 400 |
| KNN | Algorithm (how it will look for neighbors) | Auto, ball_tree, kd_tree, brute |
| KNN | p (power parameter for the Minkowski metric; defines how the distance is calculated) | 1, 2, and 3 |
| KNN | Metric (the distance metric to use for nodes of the tree) | Minkowski, Euclidean, Manhattan, and Chebyshev |
| CART | Criterion (measures the quality of a split) | Gini, entropy |
| CART | Splitter (method of the split at each node) | Best, random |
| CART | Number of features (number of features to consider when looking for the best split) | Auto, sqrt, log2 |
| CART | Minimum samples split (minimum number of samples required to split an internal node) | From 2 to 15 with a step of 1 |
| CART | Minimum leaf samples (minimum number of samples required to be at a leaf node) | From 2 to 10 with a step of 1 |
| CART | Maximum depth (maximum depth of the tree) | From 1 to 15 with a step of 1 |
| CART | Maximum features (number of features to consider when looking for the best split) | From 1 to 36 with a step of 1 |

**Hyperparameter search space for the deep learning architectures**

| Architecture | Parameter | Chosen values |
|---|---|---|
| MLP | Units (number of neurons in a layer) | 32, 64, 128, 256, and 512 |
| MLP | Batch size (how many samples to process before adjusting weights) | 32, 64, 96, 128, and 256 |
| MLP | Epochs (iterations for training the model) | 50, 100, 250, 500, and 1000 |
| CNN | Filter number (number of matrix filters for the convolutional calculation) | 5, 10, 15, 20, 25, and 30 |
| CNN | Filter length (size of each convolution filter) | 1, 2, 3, 4, 5, and 6 |
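Grids like the ones above are typically explored with an exhaustive search over all parameter combinations. A minimal sketch using scikit-learn's `GridSearchCV` on the SVM rows (C, gamma, kernel), evaluated on the Iris dataset purely for illustration — the dataset and cross-validation setup are assumptions, not taken from the original work:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# SVM search space from the table: C, gamma, and kernel.
param_grid = {
    "C": [0.001, 0.01, 0.1, 1, 10],
    "gamma": [0.001, 0.01, 0.1, 1],
    "kernel": ["rbf", "linear"],
}

# Exhaustive search over all 5 * 4 * 2 = 40 combinations,
# each scored with 5-fold cross-validation.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the KNN and CART grids: only the estimator and the `param_grid` dictionary change.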
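The deep learning grids can be searched the same way. A reduced sketch of the MLP rows using scikit-learn's `MLPClassifier` — the mapping of units to `hidden_layer_sizes` and epochs to `max_iter` is an assumption for illustration (the original architectures may have been built with a framework such as Keras), and the grid is deliberately truncated to keep the run short:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Truncated MLP search space: units -> hidden_layer_sizes,
# batch size -> batch_size, epochs -> max_iter.
param_grid = {
    "hidden_layer_sizes": [(32,), (64,)],
    "batch_size": [32, 64],
    "max_iter": [100],
}

search = GridSearchCV(MLPClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

In practice the full grid (five unit counts, five batch sizes, five epoch counts) multiplies quickly, which is why random or Bayesian search is often preferred for neural-network hyperparameters.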