Complexity
Volume 2019, Article ID 5296123, 21 pages
https://doi.org/10.1155/2019/5296123
Research Article

Eigen Solution of Neural Networks and Its Application in Prediction and Analysis of Controller Parameters of Grinding Robot in Complex Environments

1Mechanical Information Research Center, Jiangsu University, Zhenjiang, Jiangsu 212013, China
2School of Information Engineering, Yancheng Teachers University, Yancheng, Jiangsu 224002, China

Correspondence should be addressed to Jinan Gu; gujinan@tsinghua.org.cn

Received 11 May 2018; Revised 27 August 2018; Accepted 10 September 2018; Published 3 January 2019

Guest Editor: Andy Annamalai

Copyright © 2019 Shixi Tang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The robot dynamic model is often poorly known because of uncertainties, such as parametric uncertainties and modeling errors, that arise in complex environments. A key problem is to characterize the relationship between changes in neural network structure and changes in the input and output environments, together with their mutual influence. This paper first defines the concepts of neural network solution, neural network eigen solution, complete solution, and partial solution, as well as the concepts of input environment, output environment, and macrostructure of neural networks. Second, an eigen solution theory of general neural networks is proposed and proven, comprising a consistent approximation theorem, an eigen solution existence theorem, a consistency theorem of complete and partial solutions, and a no solution theorem. Finally, to verify the eigen solution theory, it is applied to a novel prediction and analysis model, based on deep neural networks, for the controller parameters of a grinding robot in complex environments. A morphological subfeature graph with multiple moments, using rugosity, standard deviation, skewness, and kurtosis, is constructed to describe the block surface morphology, and a deep-learning prediction model is built for the controller parameters of the grinding robot. Theoretical analysis and experimental tests show that the output traits act jointly and selectively: as more input features participate in prediction, higher predicted accuracy is obtained, and as more output traits participate in prediction, more output traits can be predicted. The proposed prediction and analysis model with deep neural networks can discover and predict the inherent laws of the data. Compared with the traditional prediction model, the proposed model predicts multiple output features simultaneously and is more stable.

1. Introduction

Grinding is one of the most environment-related manufacturing processes in complex environments. These processes are often scattered, inefficient, contaminating, labor-intensive, and, above all, field-intensive, and they account for 12–15% of overall manufacturing costs and 20–30% of total manufacturing time [1]. The surface roughness of the casting block and the prediction of the grinding force are important aspects of optimizing and monitoring the grinding process, which is controlled by the grinding robot servo control system. Neural networks can help reduce data dimensionality, and optimization of neural network training can enhance the learning and adaptation performance of robots [2]. With their powerful approximation ability, neural networks have been utilized in many promising fields, such as modeling and identification of complex nonlinear systems, optimization, and automatic control. The components integrated into a complex system may interact with each other and complicate control [3]. Many grinding robot servo control systems are therefore constructed with neural networks.

Adaptive neural network controllers have been studied from many angles. Approximation-based controllers have been designed for induction motors with input saturation [4] and for a 3-DOF robotic manipulator subject to backlash-like hysteresis and friction [5]. Parameter-based controllers have been designed to identify unknown robot kinematic and dynamic parameters of robot manipulators with finite-time convergence [6] and to perform haptic identification for uncertain robot manipulators [7]. Predictive controllers have been designed for predicting the behaviour of electronic circuits [8], trajectory tracking [9], trajectory tracking of underactuated surface vessels [10], attitude tracking control of a quad tilt-rotor aircraft [11], and driving the tracking error into a neighborhood of the origin [12]. A federated Kalman filter based on neural networks has been used in the velocity and attitude matching of transfer alignment [13]. An iterative neural dynamic programming method has been provided for affine and nonaffine nonlinear systems using system data rather than accurate system models [14]. Our work draws on the merits of these adaptive neural network controllers.

Different neural network controller frameworks have been constructed for different nonlinear systems. A general framework of nonlinear recurrent neural networks was proposed for solving the online generalized linear matrix equation with a global convergence property [15]. Another framework, combining a modified frequency-slice wavelet transform with convolutional neural networks, was proposed for automatic atrial fibrillation beat identification [16]. A stability criterion for an impulsive stochastic reaction-diffusion cellular neural network framework was derived via fixed-point theory [17]. The subnetworks of a class of coupled fractional-order neural networks consisting of identical subnetworks were shown to have locally Mittag-Leffler stable equilibria [18]. Our work was implemented on the basis of these frameworks.

The concept of convolutional neural networks made the framework implementation of multilayer neural networks possible [19]. A pretraining method using an unsupervised restricted Boltzmann machine was proposed for hidden-layer node weights to mitigate the overfitting problem in gradient descent for multilayer neural networks [20]. The Dropout strategy was proposed as a high-efficiency method for training deep neural network weights with nonsaturating neurons, preventing overfitting and improving accuracy [21]. Reinforcement learning theory was introduced into deep neural networks to improve their reasoning ability [22]. On this basis, these developments were systematically described and formally named deep learning [23], and improved variants of deep learning theory [24] followed. Deep learning theory has been widely adopted because of its high efficiency and high accuracy [25]. The Go program AlphaGo was constructed using deep neural networks and successfully defeated human Go experts [26]. Deep learning theory has achieved notable success in adaptive output feedback control of unmanned aircraft [27], image quality assessment [28], image character recognition in natural environments [29], image change detection [30], image classification [31], and robotic guidance [32]. A semiactive nonsmooth control method with deep learning was proposed to suppress the harmful effects of ground motion on building structures [33]. Deep neural networks can therefore be used in controllers to process massive amounts of unsupervised data in complex scenarios. Neural networks have been widely studied and applied because of their ability to solve complex linearly inseparable problems; their deep learning ability in particular offers significant advantages in cross-domain big-data analysis with unstructured and unidentified patterns.
Further, the conditions for the existence and global exponential synchronization of almost periodic solutions of delayed quaternion-valued neural networks were investigated [34], as was the function projective synchronization problem of neural networks with mixed time-varying delays and uncertain asymmetric coupling [35]. Neural networks have thus made great achievements both in determining success or failure and in improving performance.

But the robot dynamic model is often poorly known owing to the complexity of the robot mechanism, let alone the various uncertainties, such as parametric uncertainties or modeling errors, present in the robot dynamics. It is therefore a key problem to find the relationship between changes in neural network structure and changes in the input and output environments, and to determine their mutual influence; this is the fundamental credit assignment problem [36]. How neural networks interact with their environments, especially through input and output data, is a hotspot of current research [37]. Three problems remain to be solved: (1) the quantitative description of the input environments, the macrostructure of the neural networks, and the output environments; (2) the quantitative description of the relationships among the input environments, the macrostructure, and the output environments; and (3) experimental verification of those relationships. Solving these problems provides support for the wider application of neural networks.

In order to solve these problems, our work focuses on the eigen solution of neural networks for robots in complex environments, covering kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances, by directly mapping the morphological subfeatures to the controller parameters of the grinding robot. The second part of the paper studies the concept and the eigen solution theorems of neural networks. The third part applies the neural network eigen solution rule to the prediction and analysis of the controller parameters of the grinding robot, verifying the correctness of the theory from a practical point of view. The last part summarizes our research work. The main contributions of this paper are as follows.
(1) To describe the relationships among changes in neural networks, changes in input environments, and changes in output environments, this paper constructs a neural network eigen solution theory, which defines the concepts of neural networks, neural network solution, eigen solution, complete solution, and partial solution. It further proposes the consistent approximation theorem, the eigen solution existence theorem, the consistency theorem of complete and partial solutions, and the no solution theorem of neural networks.
(2) To verify the eigen solution theory of neural networks, a morphological subfeature graph with multiple moments, using rugosity, standard deviation, skewness, and kurtosis, is constructed to describe the block surface morphology, and a deep-learning prediction model is built for the controller parameters of the grinding robot. The predicted accuracies of the proposed model exceed 95%. Compared with the traditional prediction model, the proposed model can predict multiple output features simultaneously and is more stable.
(3) The experimental results verify the correctness of the neural network eigen solution theory. Theoretical analysis and experimental tests show that the output traits act jointly and selectively: as more input features participate in prediction, higher predicted accuracy is obtained, and as more output traits participate in prediction, more output traits can be predicted. The deep learning model can discover and predict the inherent laws of the data. Moreover, if the analyzed object data are gathered at random and contain no nonrandom feature, the deep learning controller produces a random predicted result; the output accuracies of the random-output experiments lie between 65% and 70% rather than at 50%, owing to a superposition effect.

2. Solution of Neural Networks

In our study, we seek the essential relationship between changes in neural network structure and changes in the input and output environments, together with their mutual influence, in the presence of kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances. We first define neural networks from the perspective of the input and output environments, and the consistent approximation theorem is proposed on this basis. The concept of a solution of neural networks is then defined, together with the corresponding concepts of isomorphism solution, eigen solution, complete solution, and partial solution. With these concepts and the theorem, the eigen solution existence theorem, the consistency theorem of complete and partial solutions, and the no solution theorem are proposed. The relations among the definitions and theorems are described in detail (see Figure 1).

Definition 1 (neural networks). The neural networks are defined as the triple {Input, Net, Output} (see Figure 2). Input is the input object set. Net is the set of network layers, each of which may carry convolutional kernels; if a layer does not use convolution, its kernel reduces to an ordinary weight, so that each element of one layer is connected to each element of the next layer by a weight, and each layer has a bias. Output is the output target vector set. The feature acquisition function maps input objects to feature data, the neural network function maps inputs to outputs, and the target function corresponds to the target vector set Output.

Definition 2 (solution of neural networks). In this study, {Nettr} is a trained neural network, {Inputtest} is a given test set, and {Outputtest} is the output target vector set produced on the test set by the trained neural networks. Let {Outputgiven} be the given target vector set; the error is then defined as Error = (1/n) Σ_{i=1}^{n} ||Outputtest,i − Outputgiven,i||, where n is the given number of test samples. Suppose the minimum threshold is Threshold; if Error is higher than the threshold, the trained neural networks cannot meet the actual needs.
If Error < Threshold, then {Nettr} is called a solution of neural networks {Input, Net, Output}.
If Error = 0, {Nettr} is called an ideal solution or a theoretical solution of neural networks {Input, Net, Output}.
If 0 < Error < Threshold, {Nettr} is called an approximate solution of neural networks {Input, Net, Output}.
If Error ≥ Threshold, we say that the neural networks {Input, Net, Output} have no solution.
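The classification in Definition 2 can be sketched as a short routine; the callable network interface and the Euclidean norm used for Error are assumptions, since the paper leaves both abstract:

```python
import numpy as np

def classify_solution(net, X_test, Y_given, threshold):
    """Classify a trained network {Nettr} per Definition 2.

    net: callable mapping one test input to an output vector
    (hypothetical interface; the norm choice is an assumption).
    """
    # Mean error over the n test samples, as in Definition 2.
    errors = [np.linalg.norm(net(x) - y) for x, y in zip(X_test, Y_given)]
    error = float(np.mean(errors))
    if error == 0:
        return "ideal (theoretical) solution"
    if error < threshold:
        return "approximate solution"
    return "no solution"
```

Any Error strictly below Threshold thus qualifies {Nettr} as a solution, with Error = 0 as the ideal special case.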

Definition 3 (isomorphism solution and eigen solution of neural networks). The neural networks {Net, Output} are trained independently on different input object vector sets to obtain different solutions. All these solutions are called isomorphism solutions of the neural networks {Net, Output} for each other.
Let the input object vector set {Input}1 have the solution {Nettr}1 corresponding to the neural networks {Net, Output}, and let the eigen acquisition function transform the original input object vector set {Input}1 into the feature data object vector set {Input}r. If {Input}r has the solution {Nettr}r, then {Nettr}r is called the eigen solution of {Nettr}1. The eigen solution {Nettr}r and {Nettr}1 are isomorphism solutions of each other. In this study, we focus on the neural network eigen solution.

Definition 4 (complete solution and partial solution of neural networks). Consider the output target vector set Output. If a trained solution's output target vector set includes all the target vectors of Output, then that solution {Nettr}Max is called the complete solution of the neural networks. If its output target vector set covers only part of the target vectors, then the solution {Nettr}Part is called the partial solution of the neural networks.

Theorem 1 (consistent approximation theorem). Let the neural networks contain L layers, let each layer have n hidden nodes, and let the activation function of each hidden node be a bounded, continuous, piecewise function; denote the resulting neural network function by g. Let the output objective vector set Output have the corresponding objective function h. Define the residual function between the neural networks and the output target as R = h − g. If h and g are continuous, then R → 0 as the number of hidden nodes n → ∞; that is, R converges and monotonically declines.

Proof.
(1) If the neural networks are feedforward neural networks with a single hidden layer, Theorem 1 follows from the consistent approximation theorem of feedforward neural networks proposed in [38].
(2) If the neural networks are feedforward neural networks with more than one hidden layer, the composition of the layers can be rewritten as a single mapping; the converted problem is then consistent with the consistent approximation theorem of feedforward neural networks proposed in [39], and hence Theorem 1 holds.
(3) If the neural networks are recurrent neural networks NET, the feedback structure is expanded in the time dimension. Let the network NET run from an initial time; every time step of NET is expanded into one layer of an equivalent feedforward network, and that layer has exactly the same activity values as the corresponding time step of NET. For every time step, the connection weight between a neuron (or external input) and a neuron of NET is copied to the connection between the corresponding neuron (or external input) in one layer and the corresponding neuron in the next layer of the feedforward network (see Figure 3). As time goes on, the depth of the equivalent feedforward network can become infinitely deep; that is, the recurrent neural networks NET are equivalent to feedforward neural networks in the limit, which is exactly the setting of the consistent approximation theorem of feedforward neural networks in [38] and again validates Theorem 1.
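Step (3) of the proof can be illustrated with a minimal sketch that unrolls a simple recurrent update in time, one feedforward layer per step; the single-recurrence form and the names W_rec and W_in are illustrative assumptions, not the paper's network:

```python
import numpy as np

def unroll_rnn(W_rec, W_in, h0, inputs, act=np.tanh):
    """Unroll a simple recurrent network in time: each time step
    becomes one layer of an equivalent feedforward network, with
    the recurrent weight W_rec copied as the layer-to-layer weight
    (proof step (3)). The depth grows with the number of steps."""
    layers = [h0]
    h = h0
    for x in inputs:                   # one feedforward layer per time step
        h = act(W_rec @ h + W_in @ x)  # same activity value as the RNN step
        layers.append(h)
    return layers
```

Running the unrolled network for T steps yields a feedforward network of depth T with identical layer activities, which is the equivalence the proof relies on.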

Theorem 2 (eigen solution existence theorem). If the neural networks {Input, Net, Output} have a solution {Nettr}, then for the neural networks with the feature input set {Input}r, there exists the eigen solution {Nettr}r.

Proof. Take the eigen acquisition function and transform the input object vector set {Input} into the feature set {Input}r. Since the neural networks have the solution {Nettr} on {Input}, training the same networks on {Input}r yields a solution {Nettr}r, which by Definition 3 is the eigen solution. Proof finished.

Theorem 3 (consistency theorem of complete solution and partial solution). Suppose the neural networks {Input, Net, Output} have a solution {Nettr}, so that the eigen solution {Nettr}r exists. If the partial solution {Nettr}Part is obtained by training the neural networks with the input processing object vector set {Input}r and a partial output target vector set, then the complete solution {Nettr}Max can be obtained by training the neural networks with the same input processing object vector set {Input}r and the complete output target vector set. The consistency theorem shows that the output traits act jointly and selectively: the more output traits are involved in prediction, the more output traits can be predicted.

Proof. The output target vector set has a corresponding target function, and the partial solution {Nettr}Part corresponds to the mapping function of the neural networks. The solution exists, so the mapping function approximates the target function; the residual converges and monotonically decreases; and both functions are continuous.
When the output target vector set is enlarged to the complete set, the target function is unchanged, and the complete solution {Nettr}Max corresponds to the mapping function of the neural networks, but the number of target vector set nodes increases (see Figure 4).
Therefore, the number of effective hidden nodes increases. According to the consistent approximation theorem, the target function is approximated more strictly; that is, the neural networks trained with the same input object vector set can also obtain the complete solution {Nettr}Max.
Proof finished.

Theorem 4 (no solution theorem). If the neural networks trained with a given input processing object vector set have no complete solution, then the neural networks trained with the same input processing object vector set have no partial solution. The no solution theorem shows that if the analyzed data objects are random and contain no nonrandom trait, then the predicted result of the deep learning controller is also random; the controller cannot find a nonrandom rule from random inputs. In other words, the prediction of the deep learning controller cannot come out of nothing.

Proof. For the complete output target vector set, the complete solution corresponding to the mapping function of the neural networks does not exist, so the mapping function does not approximate the target function.
For a partial target vector set, trained with the same input processing object vector set, the number of target vector set nodes is reduced, so the number of effective nodes in the hidden layer is reduced, and the approximation fails even more strictly; i.e., the neural networks trained with that input processing object vector set have no partial solution.
Proof finished.

Figure 1: The relation of definitions and theorems.
Figure 2: Basic structure of neural networks.
Figure 3: Recurrent neural networks are expanded into feedforward neural networks.
Figure 4: Changing of neural network target vector set.

3. Prediction and Analysis of Controller Parameters of Grinding Robot

3.1. Grinding Robot Controller Model

In grinding robot servo control system, dynamic real-time adaptive positioning mechanism was added to the traditional robot servo control system driven by the machine vision system. Our grinding robot controller model describes the closed-loop control process for robot grinding (see Figure 5).

Figure 5: Robot grinding controller model.

The dynamic real-time adaptive control method of the grinding robot works as follows: the image of the surface of the casting block being processed is obtained with the vision measurement and positioning system at each sampling time, together with the length and width of the corresponding defective area awaiting handling. The image is then processed to obtain an appropriate knowledge feature value.

In our research, the knowledge feature value is set to the roughness of the casting block surface.

The speed and position of the robotic-arm end, the feed speed of the grinding wheel's translational motion, the peripheral speed of the grinding wheel's rotational motion, the axial displacement of the block's motion, and the radial displacement of the wheel's motion are obtained at each time step from the feedback of the real-time knowledge feature value.

The speed of the robotic-arm end coincides with the speed of the grinding wheel; the remaining coefficients are determined experimentally based on the casting block material.

The robot controller achieves adaptive control through dynamic real-time feedback information, including the reference force, the joint angles, the robot joint angular velocities, the joint driving moment, the force applied to the casting block by the robot end, the feedback joint angle, and the feedback joint angular velocity.

The working process algorithm of the grinding robot controller model describes the closed-loop control process (see Figure 6).

Figure 6: The working process algorithm of grinding robot controller model.

First, the surface image of the block is acquired with the vision measurement and positioning system, and the geometrical features of the image are then extracted by the same system. By judging the quality of the casting block, the grinding decision is made. When the surface of the block does not meet the quality requirement, the robot grinding traits are obtained from the geometrical features of the image by the robot grinding controller in the grinding robot servo control system. Finally, the grinding robot executes the grinding operation with the robot grinding traits, and the surface image of the block is fed back to the vision measurement and positioning system to form a closed loop.

The working process algorithm describes the specific controlling method of grinding robot controller (see Figure 7).

Figure 7: The working process algorithm of controlling method.

After the geometrical features of the image are obtained, the robot grinding traits are generated by the grinding robot servo control system in the controller, which comprises the prediction model of trained deep neural networks. Then, the joint angles are computed from the robot grinding traits by the kinematics module, and the robot joint angular velocities are computed from the joint angles by the force control module. The grinding robot executes the grinding operation with the robot grinding traits, joint driving moments, and joint angular velocities.
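The closed-loop process just described can be sketched as follows; every component interface here (vision, predictor, kinematics, force_control, robot, quality_ok) is an illustrative stand-in rather than the paper's implementation:

```python
def grinding_control_loop(vision, predictor, kinematics,
                          force_control, robot, quality_ok):
    """One run of the closed loop of Figures 6-7: image the block,
    decide whether to grind, predict grinding traits, compute joint
    angles and angular velocities, grind, and re-image until the
    surface meets the quality requirement."""
    while True:
        image = vision.capture()                   # surface image of block
        features = vision.extract_features(image)  # geometrical features
        if quality_ok(features):                   # grinding decision
            break                                  # surface meets quality
        traits = predictor(features)               # robot grinding traits
        angles = kinematics(traits)                # joint angles
        velocities = force_control(angles)         # joint angular velocities
        robot.grind(traits, angles, velocities)    # execute, then re-image
```

The loop body mirrors the data flow of the figures: each grinding pass feeds a fresh surface image back into the vision system, closing the loop.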

3.2. Prediction Model of Controller Parameters of Grinding Robot

The research object of our study is the robot grinding process for engine casting blocks. The block is one of the most important parts of the engine, and the quality of the casting block directly affects automobile performance. With the development of automobile engine technology, high dimensional accuracy and mechanical performance of the engine block are required for a high-quality casting. The engine studied is a 1.5 L engine; the maximum outline dimension of the block is 400 mm × 320 mm × 253 mm, and it weighs 38 kg in HT300 material.

The controller parameters of the grinding robot that affect grinding efficiency and quality include the feed speed of the grinding wheel's translational motion, the peripheral speed of the grinding wheel's rotational motion, the axial displacement of the workpiece motion of the block, and the radial displacement of the wheel motion (see Figure 8). These specific controller parameters are the output data, and their type and number depend on the machining method. The block is located on the bracket of the track, while the grinding wheel is located on the actuator side of the robot. During the process, the workpiece is fixed, whereas the grinding wheel moves.

Figure 8: Parameters of grinding robot.

According to the DIN EN ISO 4288 and ASME B46.1 standards, the input parameters are the features Fin = (Ra, Rz, Ry) of the workpiece surface morphology: Ra is the arithmetical mean deviation, Rz is the ten-point height of irregularities, and Ry is the maximum height of the profile. The value of Ra of the workpiece surface after grinding ranges from 0.01 μm to 0.8 μm [40]. However, (Ra, Rz, Ry) are statistical extracts and dimensionality reductions of the surface morphology and cannot characterize it fully: they are only empirical descriptions, they miss the essential features of the surface morphology, and as statistical averages they lose a great deal of information. The DIN EN ISO 4287 and ASME B46.1 standards give a more explicit and more informative description of surface morphology using four features, namely surface rugosity, standard deviation, skewness, and kurtosis [39, 41, 42], presented in (10)–(13). But even these features do not act on all the pixels of the surface morphology and cannot give a succinct and systematic statistical description.

In order to solve the above problems, we improve the definitions of surface rugosity, standard deviation, skewness, and kurtosis given in (10)–(13). On the one hand, we give a unified definition of the surface morphological features with definite mathematical meanings; the features correspond to the first, second, third, and fourth moments, respectively. On the other hand, the definitions of the features are extended to all pixels of the surface morphological image, which reduces the information loss to an acceptable extent while still extracting the essential features. The features are defined in (14)–(17) by improving the method of calculating depth from gray level [43].

Here, ρ is the reflectivity of the block surface to the incident light, I is the intensity at the block surface, and I_v is the vertical component of that intensity. f is the focal length of the camera, and d is the object distance from the camera to the block surface. g(x, y) is the gray value of the pixel of the block surface image at position (x, y), and z(x, y) is the corresponding depth value. X and Y are the maximum values of the coordinates x and y; z̄ is the average depth of all pixels of the block image, and σ is the mean square deviation of the depth of all pixels. M1(x, y) is the first moment of the depth of the pixel at position (x, y), denoting the rugosity of the surface image at (x, y); M2(x, y) is the second moment, representing the standard deviation; M3(x, y) is the third moment, signifying the skewness; and M4(x, y) is the fourth moment, indicating the kurtosis.

The four moment subgraphs of rugosity, standard deviation, skewness, and kurtosis of the generalized surface morphology form the basis of this study.
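Under the assumption that the four subgraphs are local moments of the depth image, they can be sketched as below; the window size and the moment normalizations are illustrative choices, not the paper's exact definitions in (14)–(17):

```python
import numpy as np

def moment_subgraphs(depth, k=3):
    """Compute four moment subgraphs over a local k x k window around
    each pixel of a depth image: first moment (rugosity), second
    (standard deviation), third (skewness), fourth (kurtosis).
    Illustrative sketch; window and normalization are assumptions."""
    h, w = depth.shape
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")   # replicate edges
    out = np.zeros((4, h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]     # local window at (i, j)
            mu = win.mean()
            sigma = win.std() + 1e-12          # avoid division by zero
            d = win - mu
            out[0, i, j] = np.abs(d).mean()               # rugosity
            out[1, i, j] = sigma                          # std deviation
            out[2, i, j] = (d ** 3).mean() / sigma ** 3   # skewness
            out[3, i, j] = (d ** 4).mean() / sigma ** 4   # kurtosis
    return out
```

Each pixel thus receives four values, so the depth image yields four full-resolution subgraphs rather than a handful of surface-wide averages, which is the information-preserving point of the extended definitions.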

3.3. Prediction Algorithm of Controller Parameters of Grinding Robot

Grinding parameter prediction is completed by the trained deep neural networks (see Figure 9), which are part of the grinding robot controller. The surface image of the part of the block is analyzed by the algorithm, and the geometrical features of the obtained surface image are extracted, including the rugosity, standard deviation, skewness, and kurtosis of the surface image at each position (x, y). The feature values are fed into the well-trained deep neural networks, and the values of the robot grinding traits are obtained when the network computation finishes. The grinding robot controller then sends control instructions to the other mechanisms of the robot.

Figure 9: Prediction algorithm framework of controller parameters of grinding robot.

In the constructed model, the surface roughness of the block is obtained by analyzing the surface of the workpiece, including the corresponding moment features. All the moment features of the image, which describe the roughness of the workpiece surface, are input into the grinding controller, which determines the feed speed, peripheral speed, axial displacement, and radial displacement of the grinding robot actuator.

The relationship in (18) between the rugosity, standard deviation, skewness, and kurtosis on one side and the feed speed, peripheral speed, axial displacement, and radial displacement on the other is a complex nonlinear one, and it is described using deep neural networks.

Further, we give the process of generating the deep neural networks (see Figure 10). The feature subgraphs of the surface image of the ground part of the block are generated after the surface image is processed. The initial values of the weights and biases of the deep neural networks are initialized. The deep neural networks are then trained using a training set of (image, Fouts) pairs labeled with empirical knowledge, where the images are block images and Fouts are the corresponding output grinding features. The trained deep neural networks are fine-tuned to obtain one stage of the result, and this iterative process continues until all the data in the training set have been used, yielding the well-trained deep neural networks.
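As a minimal stand-in for the training stage of Figure 10, the following fits a linear map from the four input moment features to the grinding traits by iterating over the labeled set; the paper's model is a deep network, so this sketch only illustrates the sample-by-sample fine-tuning loop, and the gradient-descent update rule is an assumption:

```python
import numpy as np

def train_linear_predictor(F_in, F_out, epochs=500, lr=0.1):
    """Fit a linear map F_out ~= F_in @ W + b by stochastic gradient
    descent on squared error, one update per labeled sample, looping
    over the training set for several epochs (a simplified analogue
    of the iterative fine-tuning in Figure 10)."""
    n_in, n_out = F_in.shape[1], F_out.shape[1]
    W = np.zeros((n_in, n_out))
    b = np.zeros(n_out)
    for _ in range(epochs):
        for x, y in zip(F_in, F_out):       # one update per labeled sample
            err = x @ W + b - y
            W -= lr * np.outer(x, err)      # gradient of 0.5 * ||err||^2
            b -= lr * err
    return W, b
```

In the paper's setting, F_in would hold the four moment features per sample and F_out the four grinding traits; here the loop structure, not the model capacity, is the point.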

Figure 10: The process of generating the deep neural networks.

Our proposed model has multiple features and multiple targets, which makes it complicated. It must consider the correlations among rugosity, standard deviation, skewness, and kurtosis and their influence on the deep neural networks; the correlations among feed speed, peripheral speed, axial displacement, and radial displacement and their influence on the deep neural networks; and, further, the correlations between the input features and the output traits. The assumptions of the deep neural networks should also be respected, namely that the network nodes are independent of one another, and overfitting phenomena that violate these assumptions must be avoided.

The inputs of our model are several feature subgraphs rather than a single graph, and our model predicts the grinding traits rather than just the classification of a given object. Thus, the model processes several feature subgraphs to predict several grinding traits in a multi-input, multioutput relationship, which differs from the one-to-one relationship of single predetermined object classification.

3.4. Prediction Algorithm of Controller Parameters of Grinding Robot

The images of the block surface after grinding are taken using a vision system, and all the images are labeled with the corresponding grinding trait parameter values, as shown in (17). There are 400 labeled training samples, measured experimentally and labeled according to the years of working experience of skilled technicians.

In (14), the first quantity is the gray value of the block surface image; the reflectance of the block surface is 0.65; the focal length is 50 mm; the object distance to the block surface is 0.5 m; the next parameter takes 0 because a uniform light source is used, and the illuminance of the block surface is 1000 lx. The extended 3D shape of the block surface and its features are obtained by (17). The features are gray level, roughness, rugosity, standard deviation, skewness, and kurtosis; all of them are the inputs of the deep neural network controller.

We obtain the correlation between the input feature parameters of the neural network controller (see Figure 11). It can be seen that the correlations among the parameters are very complex, with both positive and negative correlations; the degrees of correlation also differ and cannot be expressed with an analytical formula.

Figure 11: The correlation between the input feature parameters.

The correlation between the output parameters of the neural network controller is also obtained (see Figure 12). The output parameters act as the robot grinding traits. It can be seen that the correlations among these parameters are likewise very complicated, with both positive and negative correlations; the degrees of correlation also differ and cannot be expressed with an analytical formula.

Figure 12: The correlation between the output parameters.
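Correlation structure like that shown in Figures 11 and 12 can be examined numerically with a Pearson correlation matrix. The sketch below uses synthetic columns as hypothetical stand-ins for the surface features, since the paper's data are not reproduced here; it only demonstrates how mixed positive and negative correlations appear in such a matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=500)

# Hypothetical feature columns with mixed positive and negative coupling.
features = np.column_stack([
    base,
    0.7 * base + rng.normal(size=500),    # positively correlated with base
    -0.5 * base + rng.normal(size=500),   # negatively correlated with base
    rng.normal(size=500),                 # roughly uncorrelated
])

corr = np.corrcoef(features, rowvar=False)  # 4 x 4 Pearson matrix
```

Off-diagonal entries of `corr` carry both signs and varying magnitudes, which is exactly the "complex, positive and negative, different degrees" situation that cannot be captured by a single analytical formula.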
3.5. Predicted Result Analysis of Controller Parameters of Grinding Robot

In the experiment, the training of the deep learning neural networks for block grinding took 78 hours and 32 minutes in a computing environment of Windows Server 2010, an Intel Core i5-4308U CPU at 2.80 GHz, and 8.00 GB of RAM. Prediction with the trained deep learning neural networks for block grinding takes less than 2 seconds.

In our controller, the model is implemented on the deep learning open-source framework Tiny-dnn [44]. In the deep learning neural network prediction training model, the minibatch_size is 4, num_epochs is 3000, the activation function is tanh, and its value ranges from −0.8 to +0.8. In the input layer, there are 400 labeled input samples, normalized by default to [32 × 32]. In the feature extraction layer, convolution and sampling are performed for each feature subgraph; the convolution kernel is [5 × 5], and the sampling kernel is [2 × 2]. In the fuzzy analysis layer, the number of jump positions is 10 and the initial value of the quantum interval is 0.1; the output layer is fully connected. The inputs include the subgraphs of rugosity, standard deviation, skewness, and kurtosis of the block surface morphology, and each feature subgraph works independently. The full connection between the fuzzy quantum analysis layer and the output layer of the deep neural network processes all the feature subgraphs. The outputs are the feed speed, the peripheral speed, the axial displacement, and the radial displacement.
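The layer sizes implied by these settings can be checked with simple shape arithmetic. Assuming "valid" convolution with stride 1 and nonoverlapping sampling, and a LeNet-style alternation of two conv/pool stages (plausible readings, not confirmed by the paper), a [32 × 32] input evolves as follows:

```python
def conv_out(size, kernel):
    """'Valid' convolution output width (assumption: stride 1, no padding)."""
    return size - kernel + 1

def pool_out(size, kernel):
    """Nonoverlapping subsampling output width."""
    return size // kernel

size = 32                      # default normalized input width [32 x 32]
size = conv_out(size, 5)       # first [5 x 5] convolution: 32 -> 28
size = pool_out(size, 2)       # first [2 x 2] sampling:    28 -> 14
size = conv_out(size, 5)       # second [5 x 5] convolution: 14 -> 10
size = pool_out(size, 2)       # second [2 x 2] sampling:    10 -> 5
```

Under these assumptions each feature subgraph is reduced to a 5 × 5 map before the fuzzy analysis layer and the fully connected output layer.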

3.5.1. Deep Learning Neural Network Training Results

The training results of the deep learning neural network for block surface controller parameters of the grinding robot are obtained as the number of samples increases (see Figure 13).

Figure 13: Trends in the accuracy of the deep learning neural network for casting block grinding as the number of samples increases.

The accuracy of all the output features is more than 80% with the prediction of the deep learning neural network controller under the given empirically labeled data set. The accuracy of the grinding traits increases fastest when the number of samples is less than 600. The accuracy of all the grinding traits exceeds 95% when the number of samples reaches 1000, and the accuracy still increases slowly with further training. In the final training result, the prediction accuracies of feed speed, peripheral speed, axial displacement, and radial displacement are above 98% compared with the labeled training data set.

Similarly, the training results of the deep learning neural network for block surface controller parameters of the grinding robot are obtained as the number of training iterations increases (see Figure 14).

Figure 14: Trends in the accuracy of the deep learning neural network for casting block grinding as the number of training iterations increases.

The accuracy of all the output features is more than 80% with the prediction of the deep learning neural network controller under the given empirically labeled data set. The accuracy of the grinding traits increases fastest when the number of training iterations is less than 600. The accuracy of all the grinding traits exceeds 95% at 1500 training iterations, and the accuracy still increases slowly with further training. In the final training result, the prediction accuracies of feed speed, peripheral speed, axial displacement, and radial displacement are above 97% compared with the labeled training data set.

From the training results of the neural networks in Figures 13 and 14, it can be seen that the prediction accuracy of the deep learning neural networks increases monotonically with both the sample size and the number of training iterations, and that the sample size has a bigger influence on the accuracy of the result than the number of training iterations.

The prediction errors of feed speed, peripheral speed, axial displacement, and radial displacement with the deep learning neural networks are defined in (19)–(22).

From (19)–(22), we can see that the four prediction errors converge and decrease monotonically. The experimental results verify the correctness of Theorem 1 (consistent approximation theorem).
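Since (19)–(22) are not reproduced in this excerpt, the sketch below uses an assumed mean-relative-error form for the per-trait prediction error, together with a simple check of the monotone decrease claimed above; the error definition is a hypothetical stand-in, not the paper's formula.

```python
import numpy as np

def relative_error(pred, labeled):
    """One plausible per-trait prediction error (an assumed form; the
    paper's (19)-(22) are not reproduced here): mean absolute deviation
    normalized by the mean magnitude of the labeled data."""
    pred = np.asarray(pred, dtype=float)
    labeled = np.asarray(labeled, dtype=float)
    return float(np.abs(pred - labeled).mean() / np.abs(labeled).mean())

def monotonically_decreasing(errors):
    """Check the convergence property asserted for the error sequences."""
    errors = np.asarray(errors, dtype=float)
    return bool(np.all(np.diff(errors) < 0))
```

Recording `relative_error` after each training stage and checking `monotonically_decreasing` on the resulting sequence is one way to test the consistent-approximation behavior numerically.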

3.5.2. Predicted Experimental Results with Multioutput

The trained deep neural networks are used to predict the traits of casting block grinding. The sample number of the data is 100, and these samples are used as the input of the trained deep neural networks. The predicted results for feed speed, peripheral speed, axial displacement, and radial displacement of the grinding wheel are obtained accordingly (see Figures 15–18). The black curve represents the labeled grinding trait data of the block surface. The other colored curves are the predicted results using the trained deep neural networks, and the average accuracies of the predicted results are given (see Table 1).

Figure 15: Predicted results of feed speed of grinding wheel.
Figure 16: Predicted results of peripheral speed of grinding wheel.
Figure 17: Predicted results of axial displacement of grinding wheel.
Figure 18: Predicted results of radial displacement of grinding wheel.
Table 1: Average accuracies of predicted results with multioutput.

From Figures 15–18, we can see that [32 × 32], [48 × 48], and [52 × 52] are input data sets of different sizes; the four [32 × 32] feature data sets are transformed from the [32 × 32] input, and each input data set corresponds to its respective output data sets {}. In the given set of input feature data, at least one data set can be found that yields the eigen solution of the solution {Nettr} of the neural networks {, } corresponding to the [32 × 32] data set, and the prediction accuracy of the eigen solution is equivalent to that of the neural network solution {Nettr} corresponding to the [32 × 32] data set, so the correctness of Theorem 2 (eigen solution existence theorem) is verified. The following conclusions are drawn from Figures 15–18 and Table 1 with the average accuracies of the predicted results.

(1) The trained deep learning neural networks of our model can predict the feed speed, peripheral speed, axial displacement, and radial displacement simultaneously, and their average minimum prediction accuracies are more than 95%. The more pixels the input data sample from the image, the higher the prediction accuracy: the green, blue, and red curves are the predicted results of our model with pixels [32 × 32], [48 × 48], and [52 × 52], where the accuracy improves by 1% in turn.

(2) Our proposed deep neural network reflects well the influence between the features of rugosity, standard deviation, skewness, and kurtosis and the robot grinding traits, as follows. (a) At least one of the features of rugosity, standard deviation, skewness, and kurtosis acts on all the grinding traits of feed speed, peripheral speed, axial displacement, and radial displacement. (b) The joint prediction accuracy of the robot grinding traits with all block surface features is within the same precision level as the optimal prediction accuracy of each robot grinding trait predicted independently from rugosity, standard deviation, skewness, or kurtosis. Furthermore, the joint prediction accuracy is sometimes even superior to the single-feature optimal prediction precision: as seen in Figure 17, the joint prediction accuracy of the axial speed is superior to the optimal prediction precision from rugosity. All the joint predicted results of the robot grinding traits attain their optimal prediction accuracy at the same time, and the joint predicted results do not interfere with each other. (c) Different surface features have different influences on the controller parameters of the grinding robot. Here, the influences of rugosity, standard deviation, skewness, and kurtosis are in descending order. As shown in Figures 15, 16, and 18, the average prediction accuracy from rugosity is higher than the joint prediction accuracy when the dotted curve is the image with pixels [32 × 32]. The predicted results from standard deviation and skewness have larger amplitude fluctuations, and their accuracies are also lower than the joint accuracy. The predicted result from kurtosis is a flat line; the corresponding controller has almost no prediction capability, and its prediction accuracy is only a random probability.

(3) There are severe deviations at times 78–83 and 87–95, as shown in Figure 15. Similar deviations occur at time 4 and times 85–95, as shown in Figure 16. At times 68–72 and 86–92 in Figure 17, as well as at times 66–68 and 84–94 in Figure 18, severe abnormalities were recorded, which are caused by the emergence of new data samples. The solution to this problem is to label the new data samples and add them to the training set; the model can then produce reasonable predicted results when a similar situation occurs in the future.

Therefore, given the standard image of the ground block surface, we can accurately obtain the optimal configuration values of the feed speed, peripheral speed, axial displacement, and radial displacement using the well-trained deep neural network controller.

3.5.3. Predicted Experimental Results with Single Output

In the independent experiment with a single output, the many-to-one deep learning-oriented prediction model is adopted, and the inputs are the moment feature graphs of the ground casting block surface corresponding to the first, second, third, and fourth moments. Each of the moment feature graphs works in a relatively independent network. The output between the fuzzy analysis layer and the output layer of the network is a single parameter: peripheral speed, feed speed, axial displacement, or radial displacement. The predicted results are obtained (see Figures 19–22), and the average accuracies of the predicted results are given (see Table 2).

Figure 19: Predicted results of peripheral speed of grinding wheel.
Figure 20: Predicted results of feed speed of grinding wheel.
Figure 21: Predicted results of radial displacement of grinding wheel.
Figure 22: Predicted results of axial displacement of grinding wheel.
Table 2: Average accuracies of predicted results with single output.

In the independent experiment, [32 × 32], [48 × 48], and [52 × 52] are input data sets of different sizes; the four [32 × 32] feature data sets are transformed from the [32 × 32] input, and each input data set corresponds to its respective single output data set. The solution of the independent single-output experiment is a partial solution of the complete solution of the multioutput experiment. The partial solution predicted results are shown in Figures 19–22. The data sets of [32 × 32], [48 × 48], and [52 × 52] have partial solutions for each of the output data sets. In the multioutput experiment, the feature data sets of the input data sets [32 × 32], [48 × 48], and [52 × 52] have independent complete solutions of the multioutput experiment corresponding to the output sets, as shown in Figures 15–18. All of the feature data sets have complete solutions except for one. From the above, we can conclude that a partial solution must correspond to a complete solution; thus, our experiment verifies the correctness of Theorem 3 (consistency theorem of complete solution and partial solution). The independent experiment shows that the remaining feature data set has no complete solution, thus verifying the correctness of Theorem 4 (no solution theorem).
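The relation between a complete solution and its partial solutions can be illustrated with a toy linear predictor: restricting a multioutput map to a single output column reproduces exactly the prediction of the corresponding single-output map. The shapes and the linear form below are hypothetical and only illustrate the consistency statement, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(10, 6))   # 10 samples, 6 input features (assumed)
W = rng.normal(size=(6, 4))    # "complete solution": maps inputs to 4 traits

full = X @ W                   # multioutput ("complete") prediction
partial = X @ W[:, 2:3]        # single-output ("partial") prediction, trait 2

# Column 2 of the complete solution coincides with the partial solution,
# mirroring the consistency of complete and partial solutions.
consistent = np.allclose(full[:, 2:3], partial)
```

In this linear toy the consistency holds exactly by construction; for trained nonlinear networks it holds up to the training accuracy, which is the experimental observation above.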

As shown in Figure 19, the black curve is the original empirical data of the peripheral speed of the grinding wheel. The other curves are the predicted results of the deep learning-oriented controllers. From the figures and the average accuracies in Table 2, we draw the following conclusions. (1) When the output parameter is only the feed speed, the single-output prediction accuracies of each moment are within the same percentage-point range as the multioutput prediction accuracies of each moment, as the blue and red curves show; the multioutput prediction accuracies do not interfere with each other despite the interaction of the multioutput parameters. (2) In this experiment, only the first moment is functional, and the other moments are not, as shown by the dotted straight lines in the figures. The predicted results of the second, third, and fourth moments are only random results. The multioutput prediction accuracies are consistent with the first-moment predicted result, which means that the first moment has a dominant effect in the multioutput prediction. (3) There are 3 nonfunctional moments in the single-output experiment, which is 2 more than in the multioutput experiment. This phenomenon shows that there exists a dominant-effect moment among the moments functioning in the multioutput experiment.

The predicted result of the feed speed of the ground casting block surface is similar to that of the peripheral speed, as shown in Figure 20. The differences are as follows. (1) Compared with Figure 19, the second and fourth moments do not function in this experiment; the prediction accuracy of the fourth moment with a single output is reduced by 10%, which again shows that there exists a dominant-effect moment among the moments functioning in the multioutput experiment. (2) The first and third moments function in this experiment, which means that the effective input feature parameters are determined by the grinding surface morphology itself and not by the order of the moments.

From the above analysis, it can be seen that (1) high prediction accuracy is obtained consistently across multi-input multioutput prediction, multi-input single-output prediction, and single-input single-output prediction. The optimal prediction accuracy is obtained at the same time for all the predicted results, which do not interfere with each other because of the dominant effect. (2) This consistency of optimal prediction accuracy across multi-input multioutput, multi-input single-output, and single-input single-output prediction fully demonstrates that the grinding data do have an inherent law that can be captured and predicted by the designed deep learning neural network controller. To verify this further, we conducted an experiment with random output as follows.

3.5.4. Predicted Experimental Results with Random Output

In the random-output experiment, we adopt the same proposed deep learning-oriented prediction model, and the inputs are the moment graphs of the grinding surface morphology data. Each moment graph works in a relatively independent network. All the output data values of the feed speed, peripheral speed, axial displacement, and radial displacement are randomly generated by computer simulation. The predicted results of the random-output experiment are obtained (see Figures 23–26). The black curves are generated from the random data, and the blue curves are the predicted results of our controller. The average accuracies of the predicted results with random output are given (see Table 3); the prediction results are in accordance with the random data, that is, we obtain random results.

Figure 23: Predicted results of peripheral speed of grinding wheel.
Figure 24: Predicted results of feed speed of grinding wheel.
Figure 25: Predicted results of radial displacement of grinding wheel.
Figure 26: Predicted results of axial displacement of grinding wheel.
Table 3: Average accuracies of predicted results with random output.

From the analysis of the random-output experiment, we obtain a very interesting result. If the analyzed data are generated at random, they contain no nonrandom features; the deep learning controller then produces random predicted results, and no nonrandom laws can be found. In other words, the deep learning prediction controller cannot create something out of nothing, which demonstrates the validity of the proposed model from another angle. The stronger the law and the more input features that function, the higher the prediction accuracy. The random-output experiment further verifies the correctness of Theorem 4.
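The "no law in random data" observation can be checked numerically: an input feature and independently generated random outputs show negligible sample correlation, so there is nothing for a predictor to capture. The sizes and seed below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
feature = rng.normal(size=10_000)        # stand-in input feature
random_trait = rng.normal(size=10_000)   # computer-generated random outputs

# Sample Pearson correlation; for independent data it is near zero
# (standard error ~ 1/sqrt(n) = 0.01 here).
r = np.corrcoef(feature, random_trait)[0, 1]
```

With no correlation to exploit, any controller trained on such pairs can only reproduce chance-level predictions, which is what Table 3 reports.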

4. Conclusions

In our study, we give the eigen solution theory of general neural networks and quantitatively describe the input environments, the macroscopic structure, and the output environments of neural networks, together with their relationships. The eigen solution theory is applied and validated in the prediction of controller parameters of a grinding robot in complex environments with the proposed deep learning neural networks, which will provide theoretical support for wider applications of the proposed deep learning neural networks.

Data Availability

The data used to support the findings of this study are available from the supplementary materials.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work was supported by the National Science Foundation of China [51875266, 61379064, 61603326, 61772448], the Industry-Academia Prospect Research Foundation of Jiangsu [BY2016066-06, BY2015058-01], the National Students’ Innovation and Entrepreneurship Training Program of Jiangsu [201310324006, 2103242013006], the Tendered Special Fund Project on Promoting the Transformation of Scientific and Technological Achievements of Jiangsu [BA2015026], the Special Fund Project on Promoting the Transformation of Scientific and Technological Achievements of Jiangsu [BA2015078], and the Research Innovation Program for College Graduates of Jiangsu [KYLX16_0880].

Supplementary Materials

The data sets provided are the images of the block surface after grinding, taken using a vision system. There are 400 training images in the imagetrain set and 100 test images in the imagetest set of the input data sets, and there are 400 items in the train set and 100 items in the test set of the output data sets. Each item contains the values of feed speed, peripheral speed, axial displacement, and radial displacement. (Supplementary Materials)

References

  1. J. A. Dieste, A. Fernández, D. Roba, B. Gonzalvo, and P. Lucas, “Automatic grinding and polishing using spherical robot,” Procedia Engineering, vol. 63, pp. 938–946, 2013.
  2. Y. Jiang, C. Yang, J. Na, G. Li, Y. Li, and J. Zhong, “A brief review of neural networks based learning and control and their applications for robots,” Complexity, vol. 2017, Article ID 1895897, 14 pages, 2017.
  3. C. Yang, J. Na, G. Li, Y. Li, and J. Zhong, “Neural network for complex systems: theory and applications,” Complexity, vol. 2018, Article ID 3141805, 2 pages, 2018.
  4. C. Fu, L. Zhao, J. Yu, H. Yu, and C. Lin, “Neural network-based command filtered control for induction motors with input saturation,” IET Control Theory & Applications, vol. 11, no. 15, pp. 2636–2642, 2017.
  5. W. He, D. Ofosu Amoateng, C. Yang, and D. Gong, “Adaptive neural network control of a robotic manipulator with unknown backlash-like hysteresis,” IET Control Theory & Applications, vol. 11, no. 4, pp. 567–575, 2017.
  6. C. Yang, Y. Jiang, W. He, J. Na, Z. Li, and B. Xu, “Adaptive parameter estimation and control design for robot manipulators with finite-time convergence,” IEEE Transactions on Industrial Electronics, vol. 65, no. 10, pp. 8112–8123, 2018.
  7. C. Yang, K. Huang, H. Cheng, Y. Li, and C. Y. Su, “Haptic identification by ELM-controlled uncertain manipulator,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 8, pp. 2398–2409, 2017.
  8. M. I. Dieste-Velasco, M. Diez-Mediavilla, and C. Alonso-Tristán, “Regression and ANN models for electronic circuit design,” Complexity, vol. 2018, Article ID 7379512, 9 pages, 2018.
  9. C. Wang, X. Liu, X. Yang, F. Hu, A. Jiang, and C. Yang, “Trajectory tracking of an omni-directional wheeled mobile robot using a model predictive control strategy,” Applied Sciences, vol. 8, no. 2, 2018.
  10. B. S. Park, J. W. Kwon, and H. Kim, “Neural network-based output feedback control for reference tracking of underactuated surface vessels,” Automatica, vol. 77, pp. 353–359, 2017.
  11. Y. Yin, H. Niu, and X. Liu, “Adaptive neural network sliding mode control for quad tilt rotor aircraft,” Complexity, vol. 2017, Article ID 7104708, 13 pages, 2017.
  12. B. Jiang, Q. Shen, and P. Shi, “Neural-networked adaptive tracking control for switched nonlinear pure-feedback systems under arbitrary switching,” Automatica, vol. 61, pp. 119–125, 2015.
  13. L. Song, Z. Duan, B. He, and Z. Li, “Application of federal Kalman filter with neural networks in the velocity and attitude matching of transfer alignment,” Complexity, vol. 2018, Article ID 3039061, 7 pages, 2018.
  14. C. Mu, D. Wang, and H. He, “Novel iterative neural dynamic programming for data-based approximate optimal control design,” Automatica, vol. 81, pp. 240–252, 2017.
  15. Z. Li, H. Cheng, and H. Guo, “General recurrent neural network for solving generalized linear matrix equation,” Complexity, vol. 2017, Article ID 9063762, 7 pages, 2017.
  16. X. Xu, S. Wei, C. Ma, K. Luo, L. Zhang, and C. Liu, “Atrial fibrillation beat identification using the combination of modified frequency slice wavelet transform and convolutional neural networks,” Journal of Healthcare Engineering, vol. 2018, Article ID 2102918, 8 pages, 2018.
  17. R. Rao and S. Zhong, “Stability analysis of impulsive stochastic reaction-diffusion cellular neural network with distributed delay via fixed point theory,” Complexity, vol. 2017, Article ID 6292597, 9 pages, 2017.
  18. J.-E. Zhang, “Multisynchronization for coupled multistable fractional-order neural networks via impulsive control,” Complexity, vol. 2017, Article ID 9323172, 10 pages, 2017.
  19. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  20. G. E. Hinton, S. Osindero, and Y. W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
  21. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017.
  22. V. Mnih, K. Kavukcuoglu, D. Silver et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
  23. Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  24. J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: object detection via region-based fully convolutional networks,” Tech. Rep., Computer Vision and Pattern Recognition, 2016, https://arxiv.org/abs/1605.06409.
  25. P. P. Brahma, D. Wu, and Y. She, “Why deep learning works: a manifold disentanglement perspective,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 10, pp. 1997–2008, 2016.
  26. D. Silver, A. Huang, C. J. Maddison et al., “Mastering the game of go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
  27. D. Nodland, H. Zargarzadeh, and S. Jagannathan, “Neural network-based optimal adaptive output feedback control of a helicopter UAV,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 7, pp. 1061–1073, 2013.
  28. W. Hou, X. Gao, D. Tao, and X. Li, “Blind image quality assessment via deep learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 6, pp. 1275–1286, 2015.
  29. M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman, “Reading text in the wild with convolutional neural networks,” International Journal of Computer Vision, vol. 116, no. 1, pp. 1–20, 2016.
  30. M. Gong, J. Zhao, J. Liu, Q. Miao, and L. Jiao, “Change detection in synthetic aperture radar images based on deep neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 1, pp. 125–138, 2016.
  31. J. Xu, X. Luo, G. Wang, H. Gilmore, and A. Madabhushi, “A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images,” Neurocomputing, vol. 191, pp. 214–223, 2016.
  32. D. P. Moeys, F. Corradi, E. Kerr et al., “Steering a predator robot using a mixed frame/event-driven convolutional neural networks,” in 2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP), pp. 1–8, Krakow, Poland, June 2016.
  33. Q. Wang, J. Wang, X. Huang, and L. Zhang, “Semiactive nonsmooth control for building structure with deep learning,” Complexity, vol. 2017, Article ID 6406179, 8 pages, 2017.
  34. Y. Li, X. Meng, and Y. Ye, “Almost periodic synchronization for quaternion-valued neural networks with time-varying delays,” Complexity, vol. 2018, Article ID 6504590, 13 pages, 2018.
  35. T. Botmart, N. Yotha, P. Niamsup, and W. Weera, “Hybrid adaptive pinning control for function projective synchronization of delayed neural networks with mixed uncertain couplings,” Complexity, vol. 2017, Article ID 4654020, 18 pages, 2017.
  36. M. Minsky, “Steps toward artificial intelligence,” in Computers & Thought, pp. 8–30, MIT Press, 1995.
  37. J. Schmidhuber, “Deep learning in neural networks: an overview,” Neural Networks, vol. 61, pp. 85–117, 2015.
  38. G. B. Huang, L. Chen, and C. K. Siew, “Universal approximation using incremental constructive feedforward networks with random hidden nodes,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
  39. J. Su and B. Xu, “Fabric wrinkle evaluation using laser triangulation and neural network classifier,” Optical Engineering, vol. 38, no. 10, pp. 1688–1693, 1999.
  40. American National Standards Institute B46.1-1978, Surface Texture, American Society of Mechanical Engineers, New York, NY, USA, 1978.
  41. J. Amirbayat and M. J. Alagha, “Objective assessment of wrinkle recovery by means of laser triangulation,” Journal of the Textile Institute, vol. 87, no. 2, pp. 349–355, 1996.
  42. J. Hu, B. Xin, and H. J. Yan, “Measuring and modeling 3D wrinkles in fabrics,” Textile Research Journal, vol. 72, no. 10, pp. 863–869, 2002.
  43. N. Paragios, Y. Chen, and O. Faugeras, “Shape from shading,” in Handbook of Mathematical Models in Computer Vision, pp. 375–388, Springer Science+Business Media, Inc., 2006.
  44. Tiny-dnn, “Deep learning framework,” https://github.com/nyanp/tiny-cnn.