Abstract

Most existing image recognition methods not only involve cumbersome processing pipelines but also require manually designed features, which leads to poor recognition performance and time-consuming training. This study explores image recognition algorithms based on artificial intelligence and machine learning, which simulate the hierarchical structure of the human brain and nervous system, extract complex features automatically, and offer powerful data representation capabilities. The structure of the artificial neuron is introduced first, followed by a detailed description of the neural network training process. The traditional training algorithm is then improved, and two new machine learning models are proposed for different application scenarios, which effectively improve the performance of the best model; in the relevant tests, the effect is significant. On the basis of these improved algorithms, an online recognition system is designed and implemented. The system's recognition accuracy for the hidden layer reaches more than 90%, which verifies the effectiveness of the technology. Repeated experiments also show that the proposed multiview algorithm effectively solves the problem that the recognition results of the traditional multiview algorithm are affected by the size of the target contour.

1. Introduction

Image recognition takes an input image obtained from vision and, through a series of computation, analysis, and learning processes, recognizes the objects in the scene, describes the relationships between those objects, and identifies the scene itself [1]. In short, image recognition uses computers to realize human-like visual understanding of images. Examples include fingerprint recognition, face detection, and recognition technology in embedded systems such as smartphones and tablet computers [2]; surface defect detection and product shape recognition in industrial automated inspection systems [3]; robot vision in artificial intelligence, autonomous driving of unmanned cars, and automatic human-machine tracking [4]; and military target tracking, missile guidance, and aerial remote sensing [5]. Nowadays, the demand for intelligent automated equipment and instruments is becoming more and more urgent [6].

Generally speaking, existing image recognition methods can be divided into methods based on expert knowledge and methods based on automatic machine learning [7, 8]. In contrast to the former, automatic machine learning methods rely on the “machine’s” self-learning ability [9]. The idea is to design machines with self-learning capabilities while relying as little as possible on “human” experience [10]. For example, given only the structure of the machine and some simple operating rules, the machine can automatically learn the specific regularities hidden in the image data [11, 12]. Using “machines” instead of “humans” to learn reduces the workload of manual design [13, 14]. It is therefore increasingly common to rely on the computational speed of “machines” to solve these problems.

This study proposes a novel research direction for image recognition algorithms, which can effectively improve the quality and effect of image recognition and also provide new ideas for research on artificial intelligence and machine learning technology.

2. Proposed Method

2.1. The Structure of Artificial Neurons

The artificial neural network can be regarded as a directed graph connected by directed weighted arcs with artificial neurons as nodes. The structure of the artificial neuron is shown in Figure 1.
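As a minimal sketch (not the exact formulation used in this study), an artificial neuron computes a weighted sum of its inputs plus a bias and passes the result through a nonlinear activation; the tanh activation below is an illustrative choice.

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """A single artificial neuron: weighted sum of the inputs plus a bias,
    followed by a nonlinear activation function."""
    z = np.dot(w, x) + b   # pre-activation: w . x + b
    return activation(z)   # neuron output a = f(z)

# Example: three inputs feeding one neuron
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.2, 0.4, -0.1])
print(neuron(x, w, b=0.1))
```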

If a local feature appears in one location, it may also appear in any other location, so the same kernel weights can be shared across the whole image. Mathematically, computing a feature map with a convolution kernel corresponds to discrete convolution.
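To make the weight-sharing idea concrete, the sketch below computes one feature map by sliding a single kernel over a grayscale image with valid padding and stride 1; the image and kernel are illustrative, not taken from this study.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Discrete 2D convolution (valid padding, stride 1): the same kernel is
    applied at every spatial location, so a feature detected in one place
    can be detected anywhere else in the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge kernel
print(conv2d_valid(image, edge_kernel).shape)    # (6, 6)
```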

2.2. The Training Process of the Neural Network

Using the mini-batch method, back propagation is performed to update the parameters of the neurons.
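A minimal sketch of mini-batch training for a single sigmoid neuron (logistic regression), shown only to illustrate the forward pass, the back propagation of the loss gradient, and the per-batch parameter update; it is not the network trained in this study.

```python
import numpy as np

def train_minibatch(X, y, epochs=10, batch_size=32, lr=0.1):
    """Mini-batch training: each batch is forward-propagated, the gradient of
    the cross-entropy loss is back-propagated, and the parameters are updated
    immediately before the next batch is drawn."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        order = np.random.permutation(n)              # reshuffle every epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            xb, yb = X[idx], y[idx]
            p = 1.0 / (1.0 + np.exp(-(xb @ w + b)))   # forward pass (sigmoid)
            err = p - yb                              # dLoss/dz for cross-entropy
            w -= lr * xb.T @ err / len(idx)           # backward pass + update
            b -= lr * err.mean()
    return w, b

# Toy usage with random data
X = np.random.rand(200, 5)
y = (X.sum(axis=1) > 2.5).astype(float)
w, b = train_minibatch(X, y)
```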

The whole training process and training results are shown in Figure 2.

2.3. Neural Network Training Result Analysis

Based on the above learning model, the convolution kernel of the first layer of the neural network is obtained, as shown in Figure 3.

Segmented, layer-by-layer training of the convolutional layers is not performed; even so, the overall performance of the model in this study is better. This study integrates multiple models and merges their outputs in a post-processing step, as given in Table 1.
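The exact merging rule for the ensemble outputs is not specified here; a common minimal variant, shown below purely as an assumed stand-in, averages the class-probability outputs of the individual models and takes the arg-max.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Merge several models by averaging their predicted class-probability
    matrices (each of shape n_samples x n_classes) and returning the most
    probable class per sample. An illustrative merging rule only."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return np.argmax(avg, axis=1)
```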

2.4. Improvement of the Model
2.4.1. Optimization with an Optimizer

At the beginning of training, the accumulators of the gradients and of the squared gradients are initialized.

In the t-th training round, the two basic momentum terms are calculated first.

Because the exponential moving averages start from zero, their estimates at the beginning of the iterations deviate strongly from the true values, so the above quantities need to be bias-corrected.

Through the above formulas, the bias-corrected accumulator estimates are obtained in the t-th iteration, and the weights and biases are then updated.
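The update formulas referred to above are not reproduced in this text. The procedure described (accumulators of the gradient and the squared gradient, followed by bias correction) matches the standard Adam rule, sketched below under that assumption; the hyperparameter values are illustrative defaults, not those used in this study.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style update: exponential moving averages of the gradient (m)
    and squared gradient (v) are accumulated, bias-corrected to offset their
    zero initialization, and then used to scale the parameter step."""
    m = beta1 * m + (1 - beta1) * grad             # first-moment accumulator
    v = beta2 * v + (1 - beta2) * grad ** 2        # second-moment accumulator
    m_hat = m / (1 - beta1 ** t)                   # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)    # weight update
    return w, m, v
```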

In summary, the learning rate decay method lets the network converge quickly in the early stage of training and, in the later stage, converge as closely as possible to the optimum of the objective function.
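The decay schedule itself is not specified in this text; exponential decay of the form lr_t = lr_0 · γ^t is one simple choice, shown here only as an illustrative example.

```python
def decayed_lr(initial_lr, decay_rate, epoch):
    """Exponential learning-rate decay: a large rate early on for fast
    convergence, a small rate later for fine convergence to the optimum."""
    return initial_lr * decay_rate ** epoch

# Example: the rate shrinks from 0.1 toward 0 over training
print([round(decayed_lr(0.1, 0.9, e), 4) for e in range(5)])
```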

2.4.2. Improvement of the Multiview Method

By comparing the recognition error rates of different methods, Table 2 is obtained.

Observing the data in the table, we find that the error rate of the improved algorithm is lower (and hence its accuracy is higher) than that of the previous and traditional test methods. Of course, the improved algorithm is not perfect; it still has some problems: the number of images that must be processed is large, and frequent testing is unavoidable.

2.5. Level Classification Algorithm

The probability of misclassification between classes m and K is calculated by the following formulas, where X denotes the number of objects to be identified and the counts record how often objects of class m are misclassified as class K (and vice versa).
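The formulas themselves are not reproduced in this text; one plausible symmetric form, assuming that $n_{m \to K}$ counts the objects of class $m$ misclassified as class $K$ out of the $X$ objects in total, is
$$P(m, K) = \frac{n_{m \to K} + n_{K \to m}}{X},$$
which is then used as the edge weight between classes $m$ and $K$ in the graph described below.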

The principle of an undirected graph G = (V, E) is applied: each class is a node, V is the set of nodes, and E is the set of edges. The weight of the edge between two nodes is the misclassification probability between the two corresponding classes. The global minimum cut algorithm is then applied repeatedly to this graph, yielding the class groupings that are easiest to separate, as given in Table 3.
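A minimal sketch of this procedure, assuming a confusion matrix as input and using the Stoer-Wagner global minimum cut from networkx (an assumed implementation choice, not necessarily the one used in the study):

```python
import numpy as np
import networkx as nx

def easily_confused_groups(conf_matrix, classes):
    """Build an undirected graph whose edge weights are pairwise
    misclassification probabilities, then apply one Stoer-Wagner global
    minimum cut: the two returned groups have minimal cross-group confusion,
    so the classes that remain grouped together are the mutually confusable
    ones. Illustrative sketch only."""
    X = conf_matrix.sum()                    # total number of objects identified
    G = nx.Graph()
    G.add_nodes_from(classes)
    for i, m in enumerate(classes):
        for j, k in enumerate(classes):
            if j <= i:
                continue
            weight = (conf_matrix[i, j] + conf_matrix[j, i]) / X
            G.add_edge(m, k, weight=weight)  # symmetric confusion probability
    cut_value, (part_a, part_b) = nx.stoer_wagner(G)
    return cut_value, part_a, part_b

# Toy 4-class confusion matrix (rows: true class, columns: predicted class)
cm = np.array([[40, 5, 1, 0],
               [6, 38, 2, 0],
               [0, 1, 45, 3],
               [0, 0, 4, 44]])
print(easily_confused_groups(cm, ['a', 'b', 'c', 'd']))
```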

3. Experiments

The platform used in the experiment is Python + Theano. SVM is used as the baseline: each pixel is fed directly to the SVM classifier, so that the effectiveness of the proposed method can be assessed. The SVM tests use LIBSVM on the Java platform.
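The study runs its baseline with LIBSVM under Java; as an assumed stand-in, the sketch below reproduces the same pixel-level SVM baseline with scikit-learn on placeholder data (the dataset, kernel, and parameters are illustrative, not those of the study).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: every image is flattened so each pixel is one input feature.
X_train = np.random.rand(1000, 28 * 28)
y_train = np.random.randint(0, 10, 1000)
X_test = np.random.rand(200, 28 * 28)
y_test = np.random.randint(0, 10, 200)

clf = SVC(kernel='rbf', C=1.0)    # RBF kernel chosen as an assumed default
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```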

3.1. The Number of Samples, the Number of Nodes, and the Number of Layers

This part of the experiment is based on the MATLAB R2009a computing platform, and the SVM classifier uses the LIBSVM toolbox. The number of training samples used in the experiment is 1000 and the number of test samples is 200, as given in Table 4. The image recognition network model is shown in Figure 4.

Two sets of training and test samples are taken separately, and different numbers of hidden nodes and layers are selected to compare the effects of the number of training samples, nodes, and layers. For Figure 5, the training set contains 1000 samples and the test set contains 200.

Figure 6 compares the prediction accuracy of this method and the CNN method for different numbers of hidden nodes when there are two hidden layers. The training set contains 5000 samples, the test set contains 1000, and the first hidden layer has 450 nodes.
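As a hedged illustration of how such a layer/node sweep could be set up (scikit-learn's MLPClassifier and the placeholder data are assumptions, not the implementation used in this study):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for the train/test split used in Figure 6
# (sizes reduced only to keep the sketch fast to run).
X_train, y_train = np.random.rand(500, 64), np.random.randint(0, 10, 500)
X_test, y_test = np.random.rand(100, 64), np.random.randint(0, 10, 100)

# First hidden layer fixed at 450 nodes, second hidden layer swept,
# mirroring the kind of comparison shown in Figure 6.
for second_layer in (50, 150, 300, 450):
    model = MLPClassifier(hidden_layer_sizes=(450, second_layer), max_iter=50)
    model.fit(X_train, y_train)
    print(second_layer, model.score(X_test, y_test))
```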

4. Discussion

4.1. Experimental Comparison with Nondeep Learning Algorithms

Figure 7 shows the per-class prediction probabilities of the multiclass DNN. The multiclass DNN performs worst when the data are unevenly distributed.

The results in Figures 8 and 9 (images taken from an online image library) show that the performance of the multitask DNN based on ring training is stable, and the final results remain essentially unchanged across different datasets.

5. Conclusions

Compared with traditional algorithms, artificial intelligence and machine learning methods for image recognition offer parallel processing, self-learning capability, and flexible nonlinear expressive power. This study proposes a multiview algorithm based on artificial intelligence and machine learning. Repeated experiments show that the multiview algorithm effectively solves the problem that the recognition result of the traditional multiview algorithm is affected by the size of the target contour. The structure of the traditional model is optimized, and some problems in the field of traditional image recognition are solved. Through these algorithmic improvements, the image recognition error rate is reduced, which provides a useful reference for the design and development of related systems in the future.

There are still many deficiencies in this study. The depth and breadth of the research are insufficient, some interference factors encountered in image recognition practice are not considered, and the evaluation of the algorithm is constrained by many factors. The level of the research is also limited, and the work on image recognition algorithms is still at a preliminary stage. In future work, building on the existing technology, we will improve recognition accuracy from more angles and continuously optimize the research method.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Science and Technology Program of Shaanxi Province (2017XT-02) and the International S&T Cooperation Program (2021KW-07).