Abstract

Electronic component recognition plays an important role in industrial production, electronic manufacturing, and testing. To address the low recognition recall and accuracy of traditional image recognition technologies (such as principal component analysis (PCA) and the support vector machine (SVM)), this paper selects multiple deep learning networks for testing and optimizes the SqueezeNet network. The paper then presents an electronic component recognition algorithm based on the Faster SqueezeNet network. This structure reduces the size of the network parameters and the computational complexity without deteriorating the performance of the network. The results show that the proposed algorithm performs well: the area under the receiver operating characteristic (ROC) curve (AUC) for capacitors and inductors reaches 1.0. When the false positive rate (FPR) is less than or equal to 10e−6, the true positive rate (TPR) is greater than or equal to 0.99, and the inference time is about 2.67 ms, reaching the industrial application level in terms of both time consumption and performance.

1. Introduction

The scientific and standardized classification of electronic components is important for enterprise informatization and for improving the management of electronic components. It is not only essential for designers, management departments, and purchasing departments, who need to investigate and use products accurately and efficiently, but also helpful for the management of engineering components. Many institutions attach great importance to the procurement of components and, based on their own needs, to the establishment of a component classification system.

As an example, NASA’s NPSL (NASA Parts Selection List) [1] divides electronic components into 10 major categories and 28 minor categories based on their function. The U.S. military divides its QPL list into 264 subcategories based on the specific functions of the electronic components. Electronic components published by the U.S. Defense Supply Center (DSCC) are classified into 37 categories according to their military drawing component numbers [2]. The European Space Agency’s (ESA) PPL list covers 15 major categories and 67 minor categories of electronic components, while its QPL list covers 14 major categories and 37 minor categories. In the classification table of GB/T 17564.4-2001, components are divided into three parts: Electric-Electronic (EE), Electro-Mechanical (EM), and magnetic parts, comprising amplifiers, antennas, batteries, capacitors, conductors, and 31 other categories [3–5].

Electronic components play an important role in the development of industry. There are many kinds of electronic components, and they are developing towards miniaturization and chip-scale packaging. In the production, scientific research, application, and recycling of electronic components, classification is a very important fundamental task. Therefore, it is of great practical significance to construct an automatic identification system for electronic components that can operate in real time [6].

Because traditional image recognition technologies (such as principal component analysis (PCA) and the support vector machine (SVM)) suffer from low recall and accuracy, this paper selects lightweight deep learning networks for testing and draws on the structures of DenseNet and ResNet to optimize the SqueezeNet network. An electronic component identification system based on the resulting Faster SqueezeNet algorithm is proposed. This structure reduces the size of the network parameters and the computational complexity without reducing network performance. Experimental results show that the proposed Faster SqueezeNet algorithm has excellent performance in terms of model parameters, inference time, and portability and is suitable for engineering applications of electronic component recognition in electronic manufacturing [7–9].

2. Related Work

Current image classification methods are mostly divided into two categories [10]. The first category classifies images based mainly on image spatial domains and transform domains. The second category classifies images using deep learning networks, such as convolutional neural networks (CNNs), which automatically learn image features.

2.1. Traditional Image Classification Methods

Traditional image classification methods have been studied and developed for many years. These methods first extract image features through a series of complex image preprocessing steps (such as morphological transforms [11]) and then classify the images according to those features [12–14]. For example, Du et al. [15] combined the least squares method with the Hough transform and classified components by extracting their edge features [16–18]. Although the classification accuracy is relatively high, traditional image classification methods cannot process very large images [19], their computational complexity is prohibitive, and it is difficult for them to classify multiple components at the same time.

2.2. Deep Learning Network-Based Recognition Methods

Convolutional neural networks [20–23], which integrate convolution, backpropagation, weight sharing, sparse connectivity, pooling, and other ideas, can automatically extract image features. This addresses the problem of processing very large images and reaches the level of industrial application for multiclass classification [24–26]. However, general deep learning networks have complex structures and require a large number of calculations, which makes it difficult to handle large amounts of data while maintaining reasonable real-time performance.

In this paper, we study deep learning network models such as ResNet, SqueezeNet, DenseNet, MobileNetV2, and EfficientNet [27–30]. Based on an analysis of traditional methods and deep learning methods, the improved Faster SqueezeNet is chosen as the network model for electronic component classification. After further modification of its network structure, Faster SqueezeNet can meet the real-time and accuracy requirements of industrial electronic component identification: the TPR is sufficiently high while an acceptable FPR is guaranteed, that is, the AUC is large enough. At the same time, the number of parameters of Faster SqueezeNet is further reduced and the processing time is shortened. Figure 1 shows a diagram of electronic component classification methods.

3. The Dataset

All the training, validation, and test images in this paper are based on the printed circuit boards of cell phones, computers, air conditioners, automobiles, etc. All the data are obtained by template matching, Photoshop matting, circuit board CAD information, and other approaches. The images are captured by solder paste inspection (SPI) equipment, and the dataset includes more than 30 thousand pictures. Partial images of whole boards are shown in Figures 2 and 3.

The imaging environment of the circuit board is uniform and closed, so no complex image preprocessing is required in this paper [31–36]. The main preprocessing consists of data augmentation: left-right mirroring, top-bottom mirroring, random noise, random brightness, random rotations of 30, 90, 120, and 180 degrees, and boundary filling, as shown in Figure 4.
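As a concrete illustration, the following is a minimal sketch of such an augmentation pipeline using torchvision; the probabilities and magnitudes are assumptions for illustration, not values taken from the paper.

    import torch
    import torchvision.transforms as T

    # A possible augmentation pipeline for the preprocessing described above.
    # All probabilities and magnitudes are illustrative assumptions.
    augment = T.Compose([
        T.RandomHorizontalFlip(p=0.5),                      # left-right mirror
        T.RandomVerticalFlip(p=0.5),                        # top-bottom mirror
        T.RandomChoice([T.RandomRotation((d, d), fill=0)    # fixed-angle rotation
                        for d in (30, 90, 120, 180)]),      # with boundary filling
        T.ColorJitter(brightness=0.3),                      # random brightness
        T.ToTensor(),
        T.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0.0, 1.0)),  # random noise
    ])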

4. The Architecture

A CNN is a feedforward neural network, which can extract features from a two-dimensional image and optimize parameters in the network using backpropagation. Many classification network models are considered in this paper, of which SqueezeNet and Faster SqueezeNet are superior.

4.1. SqueezeNet

Because network models such as AlexNet and VGGNet have ever larger numbers of parameters, the SqueezeNet network model was proposed, which has very few parameters while preserving accuracy [37, 38]. The Fire module is the core building block of SqueezeNet, and its structure is shown in Figure 5. The module is divided into Squeeze and Expand structures. The Squeeze layer contains S 1 × 1 convolution kernels. The Expand layer contains both 1 × 1 and 3 × 3 convolution kernels; the number of 1 × 1 kernels is E1×1 and the number of 3 × 3 kernels is E3×3. The model must satisfy S < (E1×1 + E3×3).
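A minimal PyTorch sketch of the Fire module follows; the channel counts in the usage example are illustrative and are not taken from the paper.

    import torch
    import torch.nn as nn

    class Fire(nn.Module):
        """SqueezeNet Fire module: a 1x1 squeeze followed by parallel
        1x1 and 3x3 expand convolutions whose outputs are concatenated.
        The constraint S < (E1x1 + E3x3) must hold."""

        def __init__(self, in_channels, squeeze, expand1x1, expand3x3):
            super().__init__()
            self.squeeze = nn.Conv2d(in_channels, squeeze, kernel_size=1)             # S kernels
            self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)             # E1x1 kernels
            self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)  # E3x3 kernels
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.relu(self.squeeze(x))
            return torch.cat([self.relu(self.expand1x1(x)),
                              self.relu(self.expand3x3(x))], dim=1)

    # Example: 96 input channels squeezed to 16, expanded to 64 + 64 = 128.
    fire = Fire(96, squeeze=16, expand1x1=64, expand3x3=64)
    out = fire(torch.randn(1, 96, 54, 54))   # -> torch.Size([1, 128, 54, 54])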

The 1 × 1 convolutional layer has attracted much attention in the design of network structures. Lin et al. used a multilayer perceptron (MLP) instead of the traditional linear convolution kernel to improve the expressive power of the network [39]. That work also explains, from the perspective of cross-channel pooling, that the MLP is equivalent to a cascaded cross-channel parametric pooling layer placed after a traditional convolution kernel, thus achieving a linear combination of multiple feature maps and integrating information across channels.

When the numbers of input and output channels are large, the number of convolution kernel parameters becomes large. Adding a 1 × 1 convolution to every inception module reduces the number of input channels, so the convolution kernel parameters and the computational complexity also decrease [40]. At the end of the structure, another 1 × 1 convolution is added to increase the number of channels and enhance feature extraction [41].
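The saving can be made concrete with a quick calculation; the channel counts below are illustrative assumptions, not values from the paper.

    # Illustrative channel counts: a 3x3 convolution mapping 256 -> 256 channels.
    c_in, c_out, c_mid = 256, 256, 64

    # Direct 3x3 convolution: every input channel meets every output channel.
    direct = c_in * c_out * 3 * 3                                # 589,824 weights

    # A 1x1 bottleneck first reduces 256 channels to 64, then applies the 3x3.
    bottleneck = c_in * c_mid * 1 * 1 + c_mid * c_out * 3 * 3    # 163,840 weights

    print(direct / bottleneck)   # 3.6 -> roughly 3.6x fewer parameters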

SqueezeNet also replaces 3 × 3 convolutions with 1 × 1 convolutions, reducing the number of parameters of those kernels to one-ninth. In addition, delaying the downsampling operations provides larger activation maps to the convolutional layers; these larger activation maps retain more information, which can yield higher classification accuracy.

4.2. Faster SqueezeNet

To improve the accuracy and real-time performance of electronic component classification, Faster SqueezeNet is proposed. To prevent overfitting, we add BatchNorm layers and residual structures. At the same time, as in DenseNet, we use concatenation to connect different layers, enhancing the expressiveness of the first few layers of the network.

Faster SqueezeNet consists of 1 BatchNorm layer, 3 block layers, 4 convolution layers, and a global average pooling layer. The model is shown in Figure 6.

Faster SqueezeNet is mainly improved in the following ways:

(1) In order to further improve the information flow between layers, we imitate the DenseNet structure and propose a different connection mode [42]. The block consists of a pooling layer and a Fire module, and the two branches are concatenated and connected to the next convolution layer; its structure is shown in the green dotted box in Figure 6. The current layer receives all the feature maps of the previous layer; using x_{l−1} as input, the output x_l is given by

x_l = C(M(x_{l−1}), F(x_{l−1})),

where x_l denotes the feature maps generated in layer l, M(·) represents the max pooling layer, F(·) represents the Fire layer, and C(·) is the concat layer, which concatenates multiple inputs. Without excessively increasing the number of network parameters, this enhances the performance of the early stages of the network and allows any two layers to exchange information directly; a sketch of this block is given after this list.

(2) In order to ensure better network convergence, we learn from the ResNet structure and propose a different building block, which also consists of a pooling layer and a Fire module. Here, the outputs of the two branches are summed and then connected to the next convolutional layer; its structure is shown in the green dotted box in Figure 6.
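The following PyTorch sketch shows one plausible wiring of the dense-style block in item (1), reusing the Fire class from the earlier sketch; stride-1 pooling is an assumption made here so that both branches keep the same spatial size, and the actual strides follow Figure 6.

    import torch
    import torch.nn as nn

    class ConcatFireBlock(nn.Module):
        """Dense-style block x_l = C(M(x_{l-1}), F(x_{l-1})).
        Assumes the Fire class from the earlier sketch is defined.
        Stride-1 pooling keeps both branches the same spatial size."""

        def __init__(self, in_channels, squeeze, e1x1, e3x3):
            super().__init__()
            self.pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)  # M
            self.fire = Fire(in_channels, squeeze, e1x1, e3x3)            # F

        def forward(self, x):
            # C: concatenate the pooled input with the Fire output on channels.
            return torch.cat([self.pool(x), self.fire(x)], dim=1)

    block = ConcatFireBlock(96, squeeze=16, e1x1=64, e3x3=64)
    out = block(torch.randn(1, 96, 54, 54))   # -> torch.Size([1, 224, 54, 54])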

In ResNet, the shortcut connection directly uses an identity mapping, which means that the input of a convolution stack is directly added to the output of the stack [39]. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping F(x) := H(x) − x. The original mapping is then recast as F(x) + x, which can be realized by a structure called a shortcut connection in the actual encoding process. The shortcut connection usually skips one or more layers. We therefore use the residual structure of ResNet to address the problems of vanishing gradients and degradation without increasing the number of network parameters. The residual structure is shown in Figure 7.
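A minimal sketch of such a residual block follows, again reusing the Fire class from the earlier sketch; the channel counts are illustrative, chosen so that the Fire output matches the input shape and the element-wise sum is valid.

    import torch
    import torch.nn as nn

    class ResidualFireBlock(nn.Module):
        """ResNet-style block: output = F(x) + x (identity shortcut).
        Assumes the Fire class from the earlier sketch is defined."""

        def __init__(self, channels, squeeze):
            super().__init__()
            # e1x1 + e3x3 == channels, so shapes match for the shortcut sum.
            self.fire = Fire(channels, squeeze, channels // 2, channels // 2)

        def forward(self, x):
            return self.fire(x) + x   # H(x) = F(x) + x

    block = ResidualFireBlock(128, squeeze=16)
    out = block(torch.randn(1, 128, 28, 28))   # -> torch.Size([1, 128, 28, 28])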

The model parameters for Faster SqueezeNet are shown in Table 1.

5. Experiments

All experimentation in this paper is conducted on the Windows platform. The machine uses an Intel Core i5-7300H 2.5 GHz CPU, an NVIDIA GTX 1050 (2 GB) GPU, and 8 GB of memory. In addition to selecting several deep learning networks, namely, ResNet [23], SqueezeNet, MobileNet V2 [24, 27], DenseNet [28], and EfficientNet [29], this paper also uses the traditional PCA + SVM algorithm for comparison.

This paper mainly classifies the resistors, capacitors, and inductors among electronic components. Specifically, classes 0–6 and 11–17 are 14 types of capacitors; classes 7–9 and 18–20 are six kinds of resistors; and classes 10 and 21 are two types of inductors, giving a total of 22 subclasses. In the experimentation, the training and validation sets totaled 40,000 pictures, with 50% used for training and 50% for validation. Each category (resistor, capacitor, and inductor) has 100–1000 pictures, and the test set has 10,264 pictures.

5.1. PCA + SVM

When the principal component dimension is 32, 128, 256, or 512, the average value of each index increases as the number of principal components increases. When the principal component dimension is 128, the ROC curve is the best, as shown in Figure 8.

For the ROC curve, the false positive rate (FPR) is the horizontal axis and represents the proportion of actually negative samples that are falsely judged as positive. The true positive rate (TPR) is the vertical axis and represents the proportion of actually positive samples that are correctly judged as positive. The different colored lines represent the ROC performance for the different electronic components.
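For reference, FPR, TPR, and AUC can be computed as follows; the scores are toy one-vs-rest values for a single component class, not data from the paper.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # Toy one-vs-rest scores for a single component class (illustrative values).
    y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0])        # 1 = this component class
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3])

    fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points on the ROC curve
    print(auc(fpr, tpr))                                # area under the curve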

As can be seen from Figure 8, when FPR ≤ 0.01, the TPR declines rapidly. In application, when the number of components increases to a certain scale, the performance of the model will be seriously degraded, and various misjudgments will often occur.

5.2. Deep Learning Algorithm Results

When the backbone network uses EfficientNet, MobileNet V2, SqueezeNet, and Faster SqueezeNet (our approach), the obtained ROC curves are shown in Figures 9–12.

The parameters and runtimes of these models are shown in Table 2.

As can be seen from Table 2, when the FPR is 10e−6, the performance of PCA + SVM decreases linearly, while the deep learning methods still perform well. With the further improvements in Faster SqueezeNet, our model size is substantially reduced.

At the same time, TensorRT-based applications can perform inference up to 40 times faster than CPU-only platforms [43]. TensorRT can optimize neural network models trained in all major frameworks and calibrate them for lower-precision inference while maintaining high accuracy [44]. It can also be deployed to large-scale data centers, embedded or automotive products, and other platforms. Therefore, model inference is further accelerated with TensorRT to meet the real-time requirements of electronic component identification.
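A typical deployment path is to export the trained model to ONNX and then build a TensorRT engine from it. The sketch below uses the stock torchvision SqueezeNet as a stand-in for the trained Faster SqueezeNet; the input resolution and file names are assumptions for illustration.

    import torch
    import torchvision

    # Stand-in model: the stock torchvision SqueezeNet, used here only for
    # illustration in place of the trained Faster SqueezeNet.
    model = torchvision.models.squeezenet1_1().eval()

    dummy = torch.randn(1, 3, 224, 224)   # assumed input resolution
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["input"], output_names=["logits"])

    # A TensorRT engine can then be built from the ONNX file, for example
    # with the trtexec command-line tool:
    #   trtexec --onnx=model.onnx --saveEngine=model.trt --fp16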

6. Evaluation

Through the above experiments, we can summarize the performance of the traditional algorithm and deep learning algorithms.

6.1. PCA + SVM

Du et al. [15] used the least squares algorithm and the Hough transform to extract the edges of components. The classification accuracy for the three categories of resistor, capacitor, and inductor was 87%, 82.6%, and 100%, respectively. Although the result is good, it still falls short of industrial application. PCA combined with SVM is a well-established and commonly used traditional method for the classification of electronic components. However, when the dimension of the eigenvectors increased from 22 to 880, the precision of some categories decreased. Moreover, when the number of eigenvector dimensions equals the number of images in the training set, the time consumption also doubles (compared with 1024 eigenvector dimensions). Therefore, it is difficult for this method to reach the industrial application level.

6.2. Deep Learning Networks

For each category, the area under the ROC curve for the Faster SqueezeNet algorithm always approaches 1.0, and the TPR stays above 0.999996 when the FPR is below 10e−6. Although its time consumption (2.57 ms to 4.85 ms) is relatively high compared with the traditional method, the average time over 1000 inference runs is about 0.68 ms after TensorRT acceleration, which can meet the requirements of practical applications. The graphics card is a GTX 1050 (2 GB memory), which is available on most computers.

The experimental results show that the proposed Faster SqueezeNet algorithm can identify the electronic components on a circuit board, and the improved model has good robustness and application potential. Compared with the traditional method and other deep learning models, the Faster SqueezeNet proposed in this paper has excellent performance in terms of model parameters, inference time, and portability, making it suitable for engineering applications of electronic component identification in electronic manufacturing.

7. Discussion

In this paper, electronic components are classified into 22 subcategories of resistors, capacitors, and inductors via an improved SqueezeNet. The improved SqueezeNet draws on the structures of DenseNet and ResNet, reducing the number of parameters and increasing inference speed.

However, there are thousands of other types of resistors, capacitors, and inductors, as well as other kinds of components, such as integrated circuits. When all components must be classified, handling this large number of categories becomes very difficult. One solution is to classify the main categories of components first and then divide each main category into smaller subcategories. It is hoped that more researchers will participate in research on electronic component classification.

Data Availability

The data used to support the findings of the study are available from the corresponding author via [email protected].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Key R&D Program of China (2017YFA60700602 and 2018YFC0809200) and Ministry of Industry and Information Technology of China under Project “Industrial Internet Platform Test Bed For Optimizing the Operation of Motor and Driving Equipment” (2018-06-46).