Mathematical Problems in Engineering

Volume 2018, Article ID 9298017, 8 pages

https://doi.org/10.1155/2018/9298017

## Strip Steel Surface Defects Recognition Based on SOCP Optimized Multiple Kernel RVM

Correspondence should be addressed to Xia Kewen; kwxia@hebut.edu.cn

Received 10 September 2017; Accepted 8 February 2018; Published 29 March 2018

Academic Editor: Ningde Jin

Copyright © 2018 Hou Jingzhong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Strip steel surface defect recognition is a pattern recognition problem with wide applications. Previous work on strip surface defect recognition has mainly focused on feature selection and dimension reduction. There are also real-time approaches that mainly exploit the autocorrelation within a given image. However, these approaches cannot be used in practical applications because of poor recognition rates and low efficiency. In this paper, we study an intelligent algorithm for strip steel surface defect recognition, where the goal is to improve the accuracy and reduce running time. This problem is very important in various applications, especially the process testing of steel manufacturing. We propose an approach called the second-order cone programming (SOCP) optimized multiple kernel relevance vector machine (MKRVM), which recognizes strip surface defects much better than other methods. The method includes model parameter estimation, training and optimization of the model based on SOCP, and the classification test. We compare our approach with existing methods on strip surface defect recognition. The results demonstrate that our proposed approach improves the recognition accuracy and reduces the time cost of strip surface defect recognition.

#### 1. Introduction

There are different types of defects in strip steel production due to various causes. These defects affect the appearance of the products and cause stress concentration and weak spots. Moreover, these defects degrade the product's properties to varying degrees, such as corrosion resistance and wear resistance [1]. Therefore, it is very important for enterprises to strengthen the inspection and control of strip surface defects and improve the strip surface quality [2]. To improve the strip surface quality, the detection and classification problems of strip surface defects must be solved. In recent years, many methods have been studied for strip surface defect recognition, including the exhaustive method and the branch and bound method, as well as the support vector machine (SVM) method, the principal component analysis method, the neural network method, the magnetic flux leakage detection method, the infrared detection method, and the eddy current testing method. However, some of these methods may damage the topological structure of the data, and some may converge to local optima. The particle swarm optimization algorithm is an efficient optimization method, but its complex operations make it difficult to meet the expected recognition speed and accuracy. None of these algorithms balances recognition accuracy and efficiency for defect recognition.

Therefore, further study is necessary to improve the accuracy of strip surface defect recognition. Compared with the SVM, the relevance vector machine (RVM) compensates for the disadvantages of the SVM [3], including its lack of sparsity, its lack of probabilistic outputs, and the restriction of Mercer's condition on the kernel. To improve the universal applicability of the RVM algorithm, multiple kernel learning (MKL) has been applied instead of one specific kernel function. However, the training time of the multiple kernel RVM (MKRVM) is long because of the matrix inversion in the algorithm. If the number of training samples and iterations increases, the training process becomes very slow.

Second-order cone programming (SOCP) [4, 5] is a convex optimization framework in which a linear function is minimized over second-order cone constraints. Linear programs and convex quadratic programs can all be formulated as SOCP problems. Therefore, a SOCP optimized MKRVM algorithm is proposed, which optimizes the MKRVM parameters using the SOCP model. Through the SOCP optimization, the classification accuracy and time cost of the MKRVM algorithm can be largely improved. First, typical strip defect images are collected for model training and building the data set. Second, the simplified information and the key indicators that reflect the defects are obtained after the strip defect images are preprocessed. This process reduces the learning time of the MKRVM. Third, the SOCP-based MKRVM model is trained and used to recognize the defect image data set. To improve the recognition accuracy, the design and construction of the multikernel RVM classifier are completed, and multikernel learning is transformed into a SOCP problem. Finally, the proposed SOCP-MKRVM [6] model is used for classification and identification, and the effectiveness of the proposed method is verified.

#### 2. Principle of Identification Method

##### 2.1. Traditional RVM Method

Assuming that the training set is $\{x_n, t_n\}_{n=1}^{N}$, where the target values $t_n$ and the input values $x_n$ are independently distributed samples, the output of the RVM [7, 8] can be expressed as

$$y(x) = \sum_{i=1}^{N} w_i K(x, x_i) + w_0. \tag{1}$$

Here, $K(x, x_i)$ is the kernel function. The Gauss kernel function is used as the kernel function in this paper:

$$K(x, x_i) = \exp\left(-\frac{\|x - x_i\|^2}{2\sigma^2}\right). \tag{2}$$
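As a concrete illustration, the Gauss kernel and the corresponding Gram matrix over a sample set can be computed as follows (a minimal NumPy sketch; the function names are ours, not from the paper):

```python
import numpy as np

def gauss_kernel(x, xi, sigma=1.0):
    """Gaussian kernel K(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2))."""
    x, xi = np.asarray(x, float), np.asarray(xi, float)
    return np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2))

def gauss_gram(X, sigma=1.0):
    """Gram matrix of the Gaussian kernel over an (n x d) sample matrix X."""
    X = np.asarray(X, float)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
```

The Gram matrix is symmetric with a unit diagonal, which is a quick sanity check when building the design matrix.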

Here, $w = (w_0, w_1, \ldots, w_N)^{T}$ is the weight vector of the model, and $\varepsilon_n$ is the additive noise that satisfies a zero-mean Gaussian distribution:

$$t_n = y(x_n; w) + \varepsilon_n, \quad \varepsilon_n \sim N(0, \sigma^2). \tag{3}$$

Deriving from formula (1) and formula (3), we get the likelihood of the full data set:

$$p(t \mid w, \sigma^2) = (2\pi\sigma^2)^{-N/2} \exp\left(-\frac{\|t - \Phi w\|^2}{2\sigma^2}\right). \tag{4}$$

In the formula, $\Phi$ is the $N \times (N+1)$ design matrix consisting of kernel functions such that $\Phi = [\phi(x_1), \ldots, \phi(x_N)]^{T}$ and $\phi(x_n) = [1, K(x_n, x_1), \ldots, K(x_n, x_N)]^{T}$. The ARD Gaussian prior probability distribution of the weights is defined as

$$p(w \mid \alpha) = \prod_{i=0}^{N} N\left(w_i \mid 0, \alpha_i^{-1}\right). \tag{5}$$

Here, $\alpha = (\alpha_0, \alpha_1, \ldots, \alpha_N)^{T}$ is the $(N+1)$-dimensional vector composed of hyperparameters that control the distribution of the weights $w_i$, $i = 0, 1, \ldots, N$.

According to Bayesian theory, the posterior probability distribution of $w$ can be decomposed as

$$p(w \mid t, \alpha, \sigma^2) = \frac{p(t \mid w, \sigma^2)\, p(w \mid \alpha)}{p(t \mid \alpha, \sigma^2)} \tag{6}$$

$$p(w \mid t, \alpha, \sigma^2) = N(w \mid \mu, \Sigma), \tag{7}$$

where

$$\Sigma = \left(\sigma^{-2}\Phi^{T}\Phi + A\right)^{-1}, \quad A = \mathrm{diag}(\alpha_0, \ldots, \alpha_N), \tag{8}$$

$$\mu = \sigma^{-2}\Sigma\Phi^{T}t. \tag{9}$$

Here, $\mu$ and $\Sigma$ are the mean vector and covariance matrix of the posterior distribution of $w$, respectively. It can be seen from formula (9) that the parameter $\sigma^2$ and the hyperparameter $\alpha$ need to be estimated. The posterior probability distribution of the parameter $\sigma^2$ and the hyperparameter $\alpha$ is obtained according to Bayesian theory as follows:

$$p(\alpha, \sigma^2 \mid t) \propto p(t \mid \alpha, \sigma^2)\, p(\alpha)\, p(\sigma^2). \tag{10}$$

The estimation of the parameter $\sigma^2$ and the hyperparameter $\alpha$ amounts to maximizing this posterior probability distribution. It can be seen from (10) that the value of the posterior probability distribution of the two parameters is mainly determined by the likelihood function $p(t \mid \alpha, \sigma^2)$:

$$p(t \mid \alpha, \sigma^2) = (2\pi)^{-N/2}\, |C|^{-1/2} \exp\left(-\tfrac{1}{2}\, t^{T} C^{-1} t\right). \tag{11}$$

Here, $C = \sigma^2 I + \Phi A^{-1} \Phi^{T}$.

The maximum likelihood estimation method is used to calculate the maximum value of (11). The hyperparameter $\alpha$ and the noise variance $\sigma^2$ are obtained iteratively as

$$\alpha_i^{\mathrm{new}} = \frac{\gamma_i}{\mu_i^2}, \quad \left(\sigma^2\right)^{\mathrm{new}} = \frac{\|t - \Phi\mu\|^2}{N - \sum_i \gamma_i}, \quad \gamma_i = 1 - \alpha_i \Sigma_{ii}. \tag{12}$$

In the iterative operation, many of the hyperparameters $\alpha_i$ tend to infinity, and the corresponding weights $\mu_i$ obtained from formula (9) tend to zero. Therefore, only a small number of sample points remain active, which achieves the RVM's sparsity [9]; these sample points are the relevance vectors.
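The iterative re-estimation of the hyperparameters and noise variance described above can be sketched as the following simplified regression-style loop (illustrative only; the cap `alpha_cap` and the small jitter constants are our own numerical safeguards, not part of the paper's method):

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=100, alpha_cap=1e9):
    """Sketch of RVM evidence-maximisation updates:
    alpha_i <- gamma_i / mu_i^2,  sigma^2 <- ||t - Phi mu||^2 / (N - sum gamma)."""
    N, M = Phi.shape
    alpha = np.ones(M)                      # ARD hyperparameters
    sigma2 = 0.1 * np.var(t) + 1e-6         # initial noise variance
    for _ in range(n_iter):
        A = np.diag(alpha)
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + A)   # posterior covariance
        mu = Sigma @ Phi.T @ t / sigma2                   # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)              # well-determinedness
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), alpha_cap)
        sigma2 = np.sum((t - Phi @ mu) ** 2) / max(N - gamma.sum(), 1e-6)
    relevant = alpha < alpha_cap * 0.9      # surviving basis functions
    return mu, alpha, sigma2, relevant
```

As the loop runs, most `alpha` entries grow without bound and the corresponding weights are pruned, which is exactly the sparsity mechanism described above.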

##### 2.2. Multiple Kernel Learning (MKL) Algorithms

To improve the universal applicability of the RVM algorithm, MKL has been applied instead of one specific kernel function:

$$K(x, x_i) = \sum_{m=1}^{M} \beta_m K_m(x, x_i), \quad \beta_m \ge 0, \quad \sum_{m=1}^{M} \beta_m = 1, \tag{13}$$

where $K(x, x_i)$ is the multikernel function. The multiple kernel can be obtained by combining different basis kernels $K_m$, and $\beta_m$ is the proportion parameter of the $m$th kernel.
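A weighted combination of basis kernels of this kind can be sketched as follows (illustrative NumPy code; the helper names are ours):

```python
import numpy as np

def gauss_gram(X, sigma=1.0):
    """Gram matrix of the Gaussian kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def poly_gram(X, degree=2):
    """Gram matrix of the polynomial kernel (x . x' + 1)^d."""
    return (X @ X.T + 1.0) ** degree

def multi_kernel(grams, beta):
    """K = sum_m beta_m K_m with beta_m >= 0 and sum_m beta_m = 1."""
    beta = np.asarray(beta, float)
    assert np.all(beta >= 0) and abs(beta.sum() - 1.0) < 1e-9
    return sum(b * K for b, K in zip(beta, grams))
```

Because each basis Gram matrix is positive semidefinite and the weights are nonnegative, the combined matrix is again a valid kernel matrix.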

The kernel weights $\beta$ are regarded as a regularization item $r(\beta)$ in the learning objective. Thus, the objective function and its gradient are transformed accordingly.

The combined kernel function then takes the form of this weighted sum of basis kernels.

##### 2.3. Second-Order Cone Programming (SOCP)

The above classification algorithm is based on the marginal likelihood estimation method. However, this algorithm requires extensive matrix inversion, and thus the training time is long. To solve this problem, the MKRVM method based on the SOCP algorithm is proposed in this paper.

The standard form of second-order cone programming is

$$\min_{x}\; f^{T}x \quad \text{subject to} \quad \left(A_i x + b_i,\; c_i^{T}x + d_i\right) \in \mathcal{K}_{n_i}, \quad i = 1, \ldots, m. \tag{17}$$

Here, $x \in \mathbb{R}^{n}$ is the variable; $f \in \mathbb{R}^{n}$, $A_i \in \mathbb{R}^{(n_i - 1) \times n}$, $b_i \in \mathbb{R}^{n_i - 1}$, $c_i \in \mathbb{R}^{n}$, and $d_i \in \mathbb{R}$ are the known quantities; $\mathcal{K}_{n_i}$ is the second-order cone of dimension $n_i$. Thus

$$\mathcal{K}_{n_i} = \left\{ (u_0, u) \in \mathbb{R} \times \mathbb{R}^{n_i - 1} : \|u\|_2 \le u_0 \right\}. \tag{18}$$

In this, $\|\cdot\|_2$ is the Euclidean norm of the vectors. It is easy to verify that $\mathcal{K}_{n_i}$ is self-dual:

$$\mathcal{K}_{n_i}^{*} = \left\{ v : v^{T}u \ge 0 \;\; \forall u \in \mathcal{K}_{n_i} \right\} = \mathcal{K}_{n_i}.$$
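The cone definition and its self-duality can be checked numerically (a small NumPy sketch; `in_soc` is our own helper name):

```python
import numpy as np

def in_soc(x, tol=1e-9):
    """True iff x = (x0, xbar) lies in the second-order cone, i.e. ||xbar||_2 <= x0."""
    x = np.asarray(x, float)
    return bool(np.linalg.norm(x[1:]) <= x[0] + tol)

# Numerical spot check of self-duality: inner products of cone points are >= 0.
rng = np.random.default_rng(0)
for _ in range(100):
    u_bar, v_bar = rng.normal(size=3), rng.normal(size=3)
    u = np.r_[np.linalg.norm(u_bar), u_bar]   # boundary points of the cone
    v = np.r_[np.linalg.norm(v_bar), v_bar]
    assert u @ v >= -1e-9                     # Cauchy-Schwarz guarantees this
```

The loop only demonstrates one direction of self-duality (cone points have nonnegative pairwise inner products); the converse inclusion follows from the same Cauchy-Schwarz argument.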

For the MKRVM, the focus of its operations is the iterative computation of the kernel parameters. The purpose of the SOCP is to change the original solution process and efficiently solve for these parameters. The Gauss kernel function is a typical local kernel, and the polynomial kernel function is a typical global kernel. The combination of the two kernel functions can have the advantages of both local and global kernels. Let $x$ and $x_i$ be sample points in the data space. The combination kernel function is

$$K(x, x_i) = \lambda_1 K_1(x, x_i) + \lambda_2 K_2(x, x_i), \quad \lambda_1, \lambda_2 \ge 0. \tag{19}$$

Here,

$$K_1(x, x_i) = \exp\left(-\frac{\|x - x_i\|^2}{2\sigma^2}\right), \quad K_2(x, x_i) = \left(x^{T}x_i + 1\right)^{d}. \tag{20}$$

Here, $\sigma$ and $d$ are the kernel parameters obtained by kernel correction, and $\lambda_1$ and $\lambda_2$ are the undetermined combination coefficients.

In the multikernel functions, the kernel correction (alignment) value of the $m$th kernel function is

$$\rho(K_m) = \frac{\left\langle K_m,\; t t^{T} \right\rangle_F}{\sqrt{\left\langle K_m, K_m \right\rangle_F \left\langle t t^{T},\; t t^{T} \right\rangle_F}} = \frac{t^{T} K_m t}{\|K_m\|_F\, \|t\|^2}, \tag{21}$$

in which $K_m$ is the Gram matrix of the $m$th kernel over the sample data and $t$ is the label vector of the sample data.

Then, the process of obtaining the best correction value of the kernel function is transformed into the following planning problem:

$$\max_{\sigma,\, d}\; \rho\left(K(\sigma, d)\right) \quad \text{subject to} \quad \sigma > 0, \; d \in \mathbb{N}. \tag{22}$$

After calculating $\rho$ for each kernel function, the validity of the kernel parameter is judged according to the value of $\rho$. If $\rho$ approaches 0, the corresponding kernel parameter is not suitable for use on this data set. If the value of $\rho$ is close to 1, then the corresponding kernel parameter is valid.
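The alignment criterion can be computed directly (an illustrative sketch under our reading of the kernel-correction value as the standard kernel-target alignment):

```python
import numpy as np

def kernel_alignment(K, t):
    """rho = <K, t t^T>_F / sqrt(<K, K>_F <t t^T, t t^T>_F)
           = t^T K t / (||K||_F ||t||^2)."""
    t = np.asarray(t, float).ravel()
    return (t @ K @ t) / (np.linalg.norm(K, "fro") * (t @ t))
```

A Gram matrix that perfectly matches the label structure, $K = t t^{T}$, attains alignment 1, while a kernel unrelated to the labels scores near 0, which is the selection rule stated above.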

##### 2.4. MKRVM Method Based on SOCP Algorithm

SOCP is used to optimize the MKRVM [10, 11], and $\sigma$, $d$, and $\lambda$ are the kernel working parameters that need to be optimized. The first part of (21) is the reciprocal of a Euclidean norm, and the latter exponential function is convex. In this case, the optimization problem can be solved by being transformed into a standard SOCP problem [12]. In the SOCP form, the parameters are solved efficiently through the MKRVM training and optimization process by using the SeDuMi and YALMIP toolboxes.

The SOCP-MKRVM algorithm mainly uses the MKRVM and calculates the multikernel parameters efficiently by the method of basis vector filling. The main algorithmic steps of the SOCP-MKRVM are as follows.

*Step 1.* Load the training samples and initialize the parameters of the MKRVM.

*Step 2.* Construct a multikernel function as in (19).

*Step 3.* Use formula (22) for the kernel correction and solve for the kernel parameters.

*Step 4.* Put the unknown kernel working parameters into the SOCP model and solve it.

*Step 5.* Obtain the optimal values of the kernel function parameters and put them into the MKRVM model for classification.
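Putting the five steps together, a minimal end-to-end sketch might look as follows. This is not the authors' implementation: the SOCP solve (SeDuMi and YALMIP are MATLAB toolboxes) is replaced by a coarse grid search over kernel parameters guided by the alignment value, and a kernel ridge classifier stands in for the trained MKRVM.

```python
import numpy as np

def rbf(X, Z, sigma):
    """Gaussian kernel Gram block between sample sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def poly(X, Z, d):
    """Polynomial kernel Gram block (x . z + 1)^d."""
    return (X @ Z.T + 1.0) ** d

def alignment(K, t):
    """Kernel-target alignment t^T K t / (||K||_F ||t||^2)."""
    return (t @ K @ t) / (np.linalg.norm(K, "fro") * (t @ t))

def socp_mkrvm_sketch(X, t, X_test):
    # Steps 1-2: candidate kernel parameters and the two-kernel combination.
    sigmas, degrees = [0.5, 1.0, 2.0], [2, 3]
    combo = lambda A, B, s, d: 0.5 * rbf(A, B, s) + 0.5 * poly(A, B, d)
    # Steps 3-4: choose parameters by maximizing the alignment value; this grid
    # search stands in for the SOCP solve described in the paper.
    s, d = max(((s, d) for s in sigmas for d in degrees),
               key=lambda p: alignment(combo(X, X, *p), t))
    # Step 5: classify with the chosen kernel; kernel ridge stands in for the
    # trained MKRVM classifier.
    K = combo(X, X, s, d)
    w = np.linalg.solve(K + 1e-3 * np.eye(len(t)), t)
    return np.sign(combo(X_test, X, s, d) @ w)
```

The sketch preserves the structure of the algorithm (parameter selection by alignment, then kernel-based classification) while staying in pure NumPy; a faithful reproduction would substitute an SOCP solver and an RVM training loop at the marked steps.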

##### 2.5. Classic Example Simulation

In the MKRVM classification verification, Ripley's synthetic data set (a commonly used benchmark for classification) is used for the test. The data set is a two-dimensional classification sample set with 250 training samples and 1000 test samples. There is considerable Gaussian mixture noise in Ripley's synthetic data set. We use the marginal likelihood estimation iteration and the SOCP optimization to classify the data by computer simulation, and the results are shown in Figures 1 and 2. The effect of the two methods is shown in Table 1.