Complexity

Volume 2019, Article ID 5296123, 21 pages

https://doi.org/10.1155/2019/5296123

## Eigen Solution of Neural Networks and Its Application in Prediction and Analysis of Controller Parameters of Grinding Robot in Complex Environments

^{1}Mechanical Information Research Center, Jiangsu University, Zhenjiang, Jiangsu 212013, China
^{2}School of Information Engineering, Yancheng Teachers University, Yancheng, Jiangsu 224002, China

Correspondence should be addressed to Jinan Gu; gujinan@tsinghua.org.cn

Received 11 May 2018; Revised 27 August 2018; Accepted 10 September 2018; Published 3 January 2019

Guest Editor: Andy Annamalai

Copyright © 2019 Shixi Tang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The robot dynamic model is often only partially known because of uncertainties, such as parametric uncertainties or modeling errors, that exist in complex environments. A key problem is to find the relationship between changes in neural network structure and changes in the input and output environments, together with their mutual influence. First, this paper defines the concepts of neural network solution, neural network eigen solution, neural network complete solution, and neural network partial solution, as well as the concepts of input environments, output environments, and the macrostructure of neural networks. Second, an eigen solution theory of general neural networks is proposed and proven, including the consistent approximation theorem, the eigen solution existence theorem, the consistency theorem of complete and partial solutions, and the no-solution theorem of neural networks. Finally, to verify the eigen solution theory, the proposed theory is applied to a novel deep-neural-network model for predicting and analyzing the controller parameters of a grinding robot in complex environments. A morphological subfeature graph with multiple moments is constructed to describe the block surface morphology using rugosity, standard deviation, skewness, and kurtosis. The results of theoretical analysis and experimental tests show that the output traits act jointly and selectively. When more input features participate in prediction, higher predicted accuracy is obtained; when more output traits participate in prediction, more output traits can be predicted. The proposed deep-neural-network prediction and analysis model can discover and predict the inherent laws of the data. Compared with the traditional prediction model, the proposed model can predict several output features simultaneously and is more stable.

#### 1. Introduction

Grinding is one of the manufacturing processes most affected by complex environments. Grinding processes are often scattered, inefficient, polluting, labor-intensive, and, above all, field-intensive; together they account for 12–15% of overall manufacturing costs and 20–30% of total manufacturing time [1]. Predicting the surface roughness of the casting block and the grinding force is an important aspect of optimizing and monitoring the grinding process, which is controlled by the grinding robot servo control system. Neural networks can help reduce data dimensionality, and optimizing neural network training can enhance the learning and adaptation performance of robots [2]. With their powerful approximation ability, neural networks have been used in many promising fields, such as the modeling and identification of complex nonlinear systems, optimization, and automatic control. The components of a complex system may interact with each other and complicate control [3]. For these reasons, many grinding robot servo control systems are constructed with neural networks.

Adaptive neural network controllers have been studied from many angles. Approximation-based controllers have been designed for induction motors with input saturation [4] and for a 3-DOF robotic manipulator subject to backlash-like hysteresis and friction [5]. Parameter-based controllers have been designed to identify unknown robot kinematic and dynamic parameters of robot manipulators with finite-time convergence [6] and to perform haptic identification for uncertain robot manipulators [7]. Predictive controllers have been designed for behaviour prediction of electronic circuits [8], trajectory tracking [9], trajectory tracking of underactuated surface vessels [10], attitude tracking control of a quad tilt rotor aircraft [11], and driving the tracking error to a neighborhood of the origin [12]. A federal Kalman filter based on neural networks has been used for velocity and attitude matching in transfer alignment [13]. An iterative neural dynamic programming scheme has been provided for affine and nonaffine nonlinear systems that uses system data rather than accurate system models [14]. Our work draws on the merits of these adaptive neural network controllers.

Different neural network controller frameworks have been constructed for different nonlinear systems. A general framework of the nonlinear recurrent neural network was proposed for solving the online generalized linear matrix equation with a global convergence property [15]. Another framework, combining the modified frequency slice wavelet transform with convolutional neural networks, was proposed for automatic atrial fibrillation beat identification [16]. A stability criterion for an impulsive stochastic reaction-diffusion cellular neural network framework was derived via fixed-point theory [17]. A class of coupled fractional-order neural networks consisting of identical subnetworks was shown to have locally Mittag-Leffler stable equilibria [18]. Our work builds on these frameworks.

The concept of convolutional neural networks made the practical implementation of multilayer neural networks possible [19]. A pretraining method for hidden layer node weights using an unsupervised restricted Boltzmann machine was proposed to solve the overfitting problem in gradient descent for multilayer neural networks [20]. The Dropout strategy was proposed as a high-efficiency method for training deep neural network weights with nonsaturating neurons; it can prevent overfitting and improve accuracy [21]. Reinforcement learning theory was introduced into deep learning neural networks to improve their reasoning ability [22]. Building on these results, the developments above were systematically described and formally named deep learning [23]. Improved variants of deep learning theory [24] have also appeared. Deep learning theory has been widely used because of its high efficiency and high accuracy [25]. The Go program AlphaGo was constructed using deep learning neural networks and successfully defeated human Go experts [26]. Deep learning theory has achieved satisfactory success in the adaptive output feedback control of unmanned aircraft [27], image quality assessment [28], image character recognition in natural environments [29], image change detection [30], image classification [31], and robotic guidance [32]. A semiactive nonsmooth control method with deep learning was proposed to suppress the harmful effects of ground motion on building structures [33]. Deep neural networks can therefore be used in controllers to process massive amounts of unsupervised data in complex scenarios. Neural networks have been widely studied and applied because of their ability to solve complex linearly inseparable problems; their deep learning ability in particular has significant advantages in cross-domain big data analysis with unstructured and unidentified patterns.
Further, the conditions for the existence and global exponential synchronization of almost periodic solutions of delayed quaternion-valued neural networks were investigated [34], as was the function projective synchronization problem of neural networks with mixed time-varying delays and uncertain asymmetric coupling [35]. Neural networks have thus made great achievements both in determining success or failure and in improving performance.

However, the robot dynamic model is rarely fully known because of the complex robot mechanism, let alone the various uncertainties, such as parametric uncertainties or modeling errors, present in the robot dynamics. A key problem is therefore to find the relationship between changes in neural network structure and changes in the input and output environments, and to determine their mutual influence. What are these relationships, and how do the changes influence one another? These are fundamental credit assignment problems [36], and they are a hotspot in current research on how neural networks interact with their environments, especially through input and output data [37]. Three problems remain to be solved: (1) the quantitative description of input environments, the macrostructure of neural networks, and output environments; (2) the quantitative description of the relationships among them; and (3) experimental verification of those relationships. Solving these problems supports the wider application of neural networks.

In order to solve these problems, our work focuses on the eigen solution of neural networks for robots in complex environments, including kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances, by directly mapping the morphological subfeatures to the controller parameters of the grinding robot. The concept and the eigen solution theorems of neural networks are studied in the second part of the paper. The third part applies the neural network eigen solution rules to the prediction and analysis of controller parameters of the grinding robot, verifying the correctness of the theory from a practical point of view. The last part summarizes our research work. The main contributions of this paper are as follows.

(1) To describe the relationship between changes of neural networks, changes of input environments, and changes of output environments, this paper constructs a neural network eigen solution theory, which defines the concepts of neural networks, neural network solution, neural network eigen solution, and the complete and partial solutions of neural networks. Further, this paper proposes the consistent approximation theorem, the eigen solution existence theorem, the consistency theorem of complete and partial solutions, and the no-solution theorem of neural networks.

(2) To verify the eigen solution theory, a morphological subfeature graph with multiple moments is constructed to describe the block surface morphology using rugosity, standard deviation, skewness, and kurtosis, and a prediction model for the controller parameters of the grinding robot is built with deep learning neural networks. The predicted accuracies of the proposed model exceed 95%. Compared with the traditional prediction model, the proposed model predicts output features simultaneously and is more stable.

(3) The experimental results verify the correctness of the neural network eigen solution theory. Theoretical analysis and experimental tests show that the output traits act jointly and selectively. When more input features participate in prediction, higher predicted accuracy is obtained; when more output traits participate in prediction, more output traits can be predicted. The deep learning model can find and predict the inherent laws of the data. Moreover, if the analyzed object data are random, with no nonrandom feature, the deep learning controller produces a random predicted result. The output accuracies of the random-output experiments lie between 65% and 70% rather than at 50%, which is attributed to the joint action of the output traits.
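As an illustration of how the four moment-style sub-features named above could be computed from raw surface height data, the following sketch uses an Ra-style mean absolute deviation for rugosity (the paper does not spell out its exact rugosity formula, so that choice, and the function name, are assumptions):

```python
import numpy as np

def surface_subfeatures(heights):
    """Compute the four moment-style sub-features used to describe block
    surface morphology. This is a sketch: the paper's exact rugosity
    definition is not given, so an Ra-style mean absolute deviation is
    assumed here."""
    h = np.asarray(heights, dtype=float).ravel()
    mu = h.mean()
    sigma = h.std()                            # standard deviation (2nd moment)
    rugosity = np.abs(h - mu).mean()           # Ra-style roughness (assumed form)
    skewness = ((h - mu) ** 3).mean() / sigma ** 3   # 3rd standardized moment
    kurtosis = ((h - mu) ** 4).mean() / sigma ** 4   # 4th standardized moment
    return rugosity, sigma, skewness, kurtosis
```

Each feature vector of this kind would then serve as one node of the morphological subfeature graph fed to the prediction network.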

#### 2. Solution of Neural Networks

In our study, we try to find the essential relationship between changes in neural network structure and changes in the input and output environments, together with their mutual influence, in the presence of kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances. We first define neural networks from the perspective of the input and output environments and propose the consistent approximation theorem based on this concept. We then define the solution of neural networks, along with the corresponding concepts of isomorphism solution, eigen solution, complete solution, and partial solution. With these concepts and the theorem, we propose the eigen solution existence theorem, the consistency theorem of complete and partial solutions, and the no-solution theorem. The relations among the definitions and theorems are described in detail (see Figure 1).

*Definition 1 Neural networks. *The neural networks are defined as (see Figure 2). is the input object set. **Net** = {, } are the network layers (, , ), and is the -th convolutional kernel [] of the -th layer. If the network does not use convolution, let , then is the -th element in the ()-th layer, is the -th element in the -th layer, and is the weight between the -th element and the -th element. is bias in the *-*th layer. **Output** = {} is the output target vector set. is the feature acquisition function. is the neural network function. is the target function corresponding to the target vector set {}.

*Definition 2 Solution of neural networks. *In the study, {**Net**_{tr}} is a trained neural network, {**Input**_{test}} is a given test set, and {**Output**_{test}} is the output target vector set corresponding to the test set for the trained neural networks. Let {**Output**_{given}} be the target vector, then **Error** is given as
where is the given number of test sets. Suppose that the minimum threshold is *T*_{threshold}; if **Error** is higher than the threshold, then the trained neural networks cannot meet the actual needs.

If **Error** < *T*_{threshold}, then {**Net**_{tr}} is called a solution of the neural networks {**Input**, **Net**, **Output**}.

If **Error** = 0, {**Net**_{tr}} is called an ideal solution or a theoretical solution of neural networks {**Input**, **Net**, **Output**}.

If 0 < **Error** < *T*_{threshold}, {**Net**_{tr}} is called an approximate solution of the neural networks {**Input**, **Net**, **Output**}.

If **Error** ≥ *T*_{threshold}, we say that the neural networks {**Input**, **Net**, **Output**} have no solution.
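The decision rule of Definition 2 can be sketched as follows; the mean-squared form of **Error** and all names here are assumptions, since the original formula did not survive extraction:

```python
import numpy as np

def classify_solution(net, test_inputs, given_targets, threshold):
    """Classify a trained network per Definition 2: compute the test
    error against the given target vectors and compare it with the
    threshold. A mean-squared error is assumed for the (garbled)
    Error formula."""
    outputs = np.array([net(x) for x in test_inputs])
    error = np.mean((outputs - np.asarray(given_targets, dtype=float)) ** 2)
    if error == 0:
        return "ideal solution"        # Error = 0
    if error < threshold:
        return "approximate solution"  # 0 < Error < T_threshold
    return "no solution"               # Error >= T_threshold
```

For example, a network that reproduces every target exactly is classified as an ideal solution, while one whose error exceeds the threshold yields no solution.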

*Definition 3 Isomorphism solution and eigen solution of neural networks. *The neural networks {, } are trained independently using different sets of input object vector sets to obtain different solutions . All these solutions are called isomorphism solutions of neural networks {, } for each other.

Let the input object vector set have the solution {**Net**_{tr}}_{1} corresponding to the neural networks {, }. is given as
where is the eigen acquisition function. The original input object vector set is transformed into the feature data object vector set . If has the solution , solution is called eigen solution of {**Net**_{tr}}_{1}. The eigen solutions and {**Net**_{tr}}_{1} are the isomorphism solutions for each other. In the study, we focus on the neural network eigen solution.
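A minimal sketch of the eigen acquisition step of Definition 3, with a stand-in feature function (both names are assumptions, since the paper's symbols did not survive extraction):

```python
import numpy as np

def eigen_transform(raw_inputs, feature_fn):
    """Apply the eigen (feature) acquisition function to a raw input
    object set, producing the feature object set on which the eigen
    solution is trained. feature_fn is a stand-in for the paper's
    unnamed acquisition function."""
    return [feature_fn(x) for x in raw_inputs]

# Example: reduce each raw patch to a (mean, std) feature pair.
features = eigen_transform(
    [np.array([1.0, 3.0]), np.array([2.0, 2.0])],
    lambda x: (x.mean(), x.std()),
)
```

Training the same architecture on `features` instead of the raw patches would, in the paper's terms, yield an eigen solution that is an isomorphism solution of the original one.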

*Definition 4 Complete solution and partial solution of neural networks. *With the output target vector set {}, , if the output target vector set includes all the target vectors , then {**Net**_{tr}}_{Max} is called the complete solution of neural networks {, }. If the output target vector set is part of the target vector , then {**Net**_{tr}}_{Part} is called the partial solution of neural networks {, }.

Theorem 1 Consistent approximation theorem. *First, let the neural networks contain layers; each layer has hidden layer nodes; each activation function of the hidden layer nodes is a bounded continuous segmentation function; then the function of the neural networks is given as
**Let the output objective vector set {} have the corresponding objective function . The residual function between the neural networks and the output target vector is defined as
**If , and are continuous; the limit of is given as
**That is, converges and monotonically declines.*
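The statement of Theorem 1 can be written out as follows. This is a hedged reconstruction, since the original equations were lost in extraction; the symbols F_L, f, and E_L are assumed names:

```latex
% Function of an L-layer network with bounded, piecewise-continuous
% activations (reconstruction; symbol names are assumed):
F_L(\mathbf{x}) = \sigma_L\bigl(W_L\,\sigma_{L-1}(\cdots\sigma_1(W_1\mathbf{x}+\mathbf{b}_1)\cdots)+\mathbf{b}_L\bigr)

% Residual between the network and the target function f:
E_L = \lVert F_L - f \rVert

% Consistent approximation: E_L converges and monotonically declines,
\lim_{L\to\infty} E_L = 0, \qquad E_{L+1} \le E_L .
```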

*Proof. *(1) If the neural networks are feedforward neural networks and the hidden layer number is , then Theorem 1 is proved by the consistent approximation theorem of feedforward neural networks proposed in [38].
(2) If the neural networks are feedforward neural networks and the hidden layer number is , then is given as
Then the converted problem is consistent with the consistent approximation theorem of feedforward neural networks proposed in [39]; hence, Theorem 1 holds.
(3) If the neural networks are recurrent neural networks **NET**, the feedback structure of the recurrent neural networks is expanded in the time dimension. Let the network **NET** run from time ; every time step of network **NET** is expanded into one of the layers of the feedforward neural networks , and that layer has exactly the same activity values as the corresponding time step of network **NET**. For any , the connection weight between the neuron or external input and neuron of network **NET** is copied into between the neuron or external input in the -th layer and neuron in the -th layer of network (see Figure 3). As time goes on, the depth of can become infinitely deep; that is, the recurrent neural networks **NET** are equivalent to the feedforward neural networks when . This again matches the consistent approximation theorem of feedforward neural networks in [38], which validates Theorem 1.
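Step (3) of the proof, unrolling a recurrent network in time so that each time step becomes one feedforward layer with copied weights, can be sketched as follows (a minimal sketch; the tanh activation and all names are assumptions):

```python
import numpy as np

def unroll_rnn(W_in, W_rec, x_seq, h0):
    """Unroll a simple recurrent network in time: each time step becomes
    one layer of an equivalent feedforward network whose layer weights
    are identical copies of the recurrent weights, so layer t reproduces
    the RNN's hidden state at time t."""
    h = h0
    layer_activities = []
    for x_t in x_seq:                        # time step t -> layer t
        h = np.tanh(W_rec @ h + W_in @ x_t)  # same weights copied per layer
        layer_activities.append(h)
    return layer_activities
```

Because every unrolled layer carries the same copied weights, the sequence of layer activities coincides exactly with the RNN's hidden-state trajectory, which is the equivalence the proof relies on.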

Theorem 2 Eigen solution existence theorem. *If the neural networks {, } have a solution {**Net**_{tr}}, then for the neural networks {, }, there exists the eigen solution {**Net**_{tr}}_{r}.*

*Proof. *Take the eigen acquisition function , then is given as
So is given as
Proof finished.

Theorem 3 Consistency theorem of complete solution and partial solution. *If the neural networks {, } have a solution {**Net**_{tr}}, then for the neural networks {, }, there exists the eigen solution . If the partial solution {**Net**_{tr}}_{Part} is gained by training the neural networks {, } with the input processing object vector set {}, then the complete solution {**Net**_{tr}}_{Max} can be gained by training the neural networks {, } with the input processing object vector set {}. The consistency theorem of complete solution and partial solution shows that the output traits act jointly and selectively: the more output traits are involved in prediction, the more output traits can be predicted.*

*Proof. *The target function corresponding to the output target vector set is , and the partial solution {**Net**_{tr}}_{Part} corresponds to the mapping function of the neural networks {, }. The solution exists, so the approximation of holds; the convergence and monotone decrease of also hold; and are continuous too.

When the output target vector set is , the target function is still , and the complete solution is {**Net**_{tr}}_{Max} corresponding to the mapping function of neural networks {, }, but the target vector set nodes increase (see Figure 4).

Therefore, the number of effective hidden nodes is increased. According to the consistent approximation theorem, is more strictly approximated, that is, the trained neural networks {, } with input object vector set {} can also get a complete solution {**Net**_{tr}}_{Max}.

Proof finished.

Theorem 4 No solution theorem. *If the given trained neural networks {, } with input processing object vector set {} have no complete solution, then the trained neural networks {, } with input processing object vector set {} have no partial solution. The no solution theorem shows that if the analyzed data objects are random, with no nonrandom trait, then the predicted result of the deep learning controller is also random; the controller cannot extract a nonrandom rule from random inputs. In other words, the prediction of the deep learning controller cannot come from nothing.*

*Proof. *When the output target vector set is , the target function is , and the complete solution corresponding to the mapping function of neural networks {, } has no solution, so does not hold.

The target function corresponding to target vector set is ; the input processing object vector set for training neural networks {, } is {}. Here, the number of target vector set nodes is reduced; the number of effective nodes in the hidden layer is reduced, so does not hold more strictly, i.e., the trained neural networks {, } with input processing object vector set {} have no partial solution.

Proof finished.
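The content of Theorem 4 can be illustrated with a toy experiment (not the paper's experiment; all names here are assumptions): with purely random labels, even the best label-only predictor stays near chance level, i.e. no nonrandom rule can be extracted from random data:

```python
import numpy as np

def chance_level_check(n=2000, seed=0):
    """Illustrate the no-solution theorem on random data: draw random
    binary targets with no nonrandom trait, fit the best constant
    (majority-class) predictor, and return its accuracy, which hovers
    near the 0.5 chance level for large n."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=n)   # random binary targets
    majority = int(labels.mean() >= 0.5)  # best label-only predictor
    return float(np.mean(labels == majority))
```

No choice of predictor built from these random labels can systematically beat the majority rate, mirroring the theorem's claim that a partial solution cannot exist where no complete solution does.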