| Algorithm | Introduction |
| --- | --- |
| Support vector machine, SVM | A class of generalized linear classifiers for binary classification in supervised learning, whose decision boundary is the maximum-margin hyperplane solved from the training samples |
| Naïve Bayes, NB | The naïve Bayes classifier (NBC) is based on Bayes' theorem and assumes that the features are conditionally independent of each other. Given a training set, and taking feature independence as the premise, it learns the joint probability distribution of inputs and outputs; then, using the learned model, it finds for an input X the output Y that maximizes the posterior probability |
| k-nearest neighbors, KNN | KNN's principle is that, to predict the label of a new point X, it assigns X to the category most common among its K nearest training points; the distance is usually computed as the Euclidean distance |
| Radial basis function neural network, RBF | The network has one hidden layer, with "n" input nodes, "p" hidden nodes, and "i" output nodes. The number of hidden nodes in the network equals the number of input samples. The activation function of the hidden nodes is usually a Gaussian radial basis function. Each input sample is set as the center of a radial basis function, and each radial basis function has a corresponding spread constant |
| Convolutional neural networks, CNNs | A convolutional neural network (CNN) is a type of feedforward neural network that contains convolutional computations and has a deep structure; it is one of the representative algorithms of deep learning. A CNN can perform representation learning and classify input information in a shift-invariant manner according to its hierarchical structure (shift-invariant classification), so it is also called a "shift-invariant artificial neural network (SIANN)" |
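The KNN row above describes majority voting among the K closest points under the Euclidean distance. The following is a minimal from-scratch sketch of that idea, not taken from the paper; the function name `knn_predict`, the toy data, and the choice of K are illustrative assumptions.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Classify x by majority vote among its k nearest training points.

    train: list of (point, label) pairs; points are equal-length tuples.
    Uses the Euclidean distance, as mentioned in the table.
    """
    def dist(a, b):
        # Euclidean distance between two points
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    # Sort training points by distance to x and keep the k closest.
    neighbors = sorted(train, key=lambda pl: dist(pl[0], x))[:k]
    # Majority vote over the neighbors' labels.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters.
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_predict(train, (0.5, 0.5)))  # → A
print(knn_predict(train, (5.5, 5.5)))  # → B
```

In practice K is chosen by cross-validation; very small K is noise-sensitive, while very large K blurs class boundaries.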
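The naïve Bayes row describes choosing the output Y that maximizes the posterior probability under the feature-independence assumption. The sketch below illustrates that computation for categorical features; it is not the paper's implementation, and the function names, the Laplace smoothing choice, and the toy weather data are assumptions made for illustration.

```python
from collections import Counter, defaultdict

def nb_fit(samples):
    """samples: list of (feature_tuple, label) pairs.
    Returns class counts and per-(feature, class) value counts."""
    labels = Counter(y for _, y in samples)
    counts = defaultdict(Counter)  # (feature_index, label) -> value counts
    for x, y in samples:
        for i, v in enumerate(x):
            counts[(i, y)][v] += 1
    return labels, counts

def nb_predict(model, x):
    """Return the label y maximizing P(y) * prod_i P(x_i | y)."""
    labels, counts = model
    n = sum(labels.values())
    best, best_p = None, -1.0
    for y, cy in labels.items():
        p = cy / n  # prior P(y)
        for i, v in enumerate(x):
            c = counts[(i, y)]
            # Laplace smoothing so unseen values don't zero the product.
            p *= (c[v] + 1) / (cy + len(c) + 1)
        if p > best_p:
            best, best_p = y, p
    return best

# Toy weather data: (outlook, temperature) -> play decision.
samples = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
           (("rain", "mild"), "yes"), (("rain", "cool"), "yes")]
model = nb_fit(samples)
print(nb_predict(model, ("rain", "mild")))  # → yes
```

Because only the argmax matters, production implementations sum log-probabilities instead of multiplying raw probabilities, which avoids numerical underflow on long feature vectors.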