Journal of Robotics
Volume 2010 (2010), Article ID 870923, 9 pages
http://dx.doi.org/10.1155/2010/870923
Research Article

## Solution for Ill-Posed Inverse Kinematics of Robot Arm by Network Inversion

Department of Electronics and Computer Systems, Takushoku University, 815-1 Tatemachi, Hachioji, Tokyo 193-0985, Japan

Received 1 December 2009; Revised 9 April 2010; Accepted 7 July 2010

#### Abstract

In the context of controlling a robot arm with multiple joints, the method of estimating the joint angles from the given end-effector coordinates is called inverse kinematics, which is a type of inverse problem. Network inversion has been proposed as a method for solving inverse problems by using a multilayer neural network. In this paper, network inversion is introduced as a method to solve the inverse kinematics problem of a robot arm with multiple joints, where the joint angles are estimated from the given end-effector coordinates. In general, inverse problems are affected by ill-posedness, which implies that the existence, uniqueness, and stability of their solutions are not guaranteed. In this paper, we show the effectiveness of applying network inversion with regularization, by which ill-posedness can be reduced, to the ill-posed inverse kinematics of an actual robot arm with multiple joints.

#### 1. Introduction

In the context of controlling a robot arm with multiple joints, the problem of estimating the joint angles from the given end-effector coordinates is called the inverse kinematics problem, which is a type of inverse problem [1]. Inverse problems, which estimate the cause from given results, are studied in various engineering fields [2]. There are a number of methods for solving inverse kinematics problems, such as analytical methods and iterative algorithms [1]. In addition, neural-network-based inverse modeling has been proposed [3] and can be used to solve inverse kinematics [4, 5]. The network inversion method has been proposed for solving inverse problems by using a multilayer neural network [6]. In this method, inverse problems are solved by using a trained multilayer neural network inversely to estimate the corresponding input from the given output [7, 8]. The advantages of this method are the ease of direct modeling by learning and the adaptive estimation of the inverse solution. It has been applied to actual problems [9, 10]. In addition, it has been introduced as a method to solve the inverse kinematics problem of estimating the multiple joint angles of a robot arm from the given end-effector coordinates [11–13].

In general, inverse problems are affected by ill-posedness, which implies that the existence, uniqueness, and stability of their solutions are not guaranteed [2]. Ill-posedness also affects the solution when a problem is solved using network inversion. The regularization method, which decreases ill-posedness by limiting the solution space of the inverse problem, has been proposed for ill-posed inverse problems [14, 15]. It has also been examined for network inversion [16, 17]. The inverse-modeling approach with a multilayer neural network and the approach that uses the network inversion method have been examined as neural-network-based methods. The problem of ill-posedness is an important aspect of inverse problems, and it has often been studied in the field of mathematical engineering [2, 15]. However, only a few studies have examined the possibility of addressing ill-posedness by using a neural network [18]. In particular, few systematic investigations of ill-posedness have been carried out by applying a neural network to an actual problem, and those that exist consider only the uniqueness of the solution [19].

In this study, we examine the applicability of the network inversion method to the inverse kinematics of a robot arm with multiple joints. That is, we aim to demonstrate the applicability of this method to ill-posed inverse kinematics in terms of the existence, uniqueness, and stability of the solutions it provides. In addition, we aim to examine the applicability of the network inversion method to ill-posed problems in general. We consider a three-degree-of-freedom (DOF) robot arm that moves in two-dimensional space and set up ill-posed problems in which the existence, uniqueness, or stability of the solution is not satisfied. We then examine these problems and their solutions by using the network inversion method so as to demonstrate its effectiveness in addressing the inverse kinematics of a robot arm with multiple joints. We consider that the results suggest the effectiveness of the method when applied not only to inverse kinematics but also to general ill-posed inverse problems.

#### 2. Inverse Estimation of Joint Angles of Robot Arm

An inverse problem determines the inner mechanism or cause from observed phenomena. In a direct problem, the result is derived from a given cause by using a fixed mathematical model, whereas in an inverse problem, the cause is estimated from the fixed model and the given results. In a direct problem, the existence, uniqueness, and stability of the solution are guaranteed. Problems in which these three conditions are satisfied are called well-posed problems. In an inverse problem, it may be difficult to obtain a solution because the well-posedness of the problem is not guaranteed [2]. An example of ill-posedness is shown in Figure 1.

Figure 1: Concept of inverse problem and ill-posedness.

Direct kinematics is the problem of calculating the end-effector coordinates from the joint angles of a robot arm. Inverse kinematics, conversely, estimates the joint angles from the given end-effector coordinates. A number of analytical methods, iterative algorithms, and other techniques exist for solving inverse kinematics problems. Analytical methods are effective for a robot arm with a low degree of freedom. They include solving the kinematic equations as nonlinear simultaneous algebraic equations and solving them as a problem in planar geometry. Because analytical methods do not generalize, iterative approximation algorithms based on the Newton method are commonly used [1]. In addition, inverse kinematics problems can be solved by using an adaptively estimated inverse model of the kinematics [5]; the use of a neural network has been proposed for this inverse modeling [11]. Motion control of a robot arm is often achieved by the point-to-point or continuous-path methods, both of which require solving the inverse kinematics problem.
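As a concrete illustration of the iterative approach mentioned above, the following sketch solves the inverse kinematics of a planar 3-DOF arm with a damped Newton iteration based on the kinematic Jacobian. The 30 cm link lengths follow Section 5 of this paper, but the damping factor, iteration count, and target point are our illustrative choices, not values taken from the references.

```python
import numpy as np

# Planar 3-DOF arm; each link is 30 cm (per Section 5 of the paper).
L = np.array([30.0, 30.0, 30.0])

def fk(theta):
    """Direct kinematics: end-effector (x, y) from joint angles (radians)."""
    a = np.cumsum(theta)  # absolute link angles
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(theta):
    """2x3 Jacobian of fk; joint j moves links j..2."""
    a = np.cumsum(theta)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(a[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(a[j:]))
    return J

def solve_ik(target, theta0, iters=300, step=0.5):
    """Damped Newton iteration using the Jacobian pseudo-inverse."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        err = target - fk(theta)
        theta += step * np.linalg.pinv(jacobian(theta)) @ err
    return theta

theta = solve_ik(np.array([40.0, 40.0]), [0.3, 0.3, 0.3])
# fk(theta) is now close to the target (40, 40)
```

The pseudo-inverse handles the redundancy of the 3-DOF arm (the Jacobian is 2x3, so many joint velocities realize the same end-effector motion), which already hints at the nonuniqueness discussed below.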

In this paper, we consider a 3-DOF robot arm that moves in two-dimensional space, as shown in Figure 2(a). We examine the problem of estimating the joint angles for a specified end-effector coordinate of this 3-DOF arm. In an inverse kinematics problem, ill-posedness in terms of the existence, uniqueness, and stability of the solution also arises. For instance, a combination of joint angles for the specified end-effector coordinate may not exist, as shown in Figure 2(b); this is an ill-posed problem in which a solution does not exist. Figure 2(c) shows an example in which there are several combinations of joint angles for one end-effector coordinate; here, the problem is ill-posed in terms of uniqueness. In addition, a minor measurement error in the end-effector coordinate may cause a large estimation error in the joint angles, depending on the scaling of the joint angles and end-effector coordinate, as shown in Figure 2(d); the problem is ill-posed because the solution is unstable [2].
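The nonuniqueness illustrated in Figure 2(c) is easy to exhibit numerically: for a planar serial arm, the end-effector position depends only on the set of absolute link angles, so permuting those angles yields distinct joint-angle vectors with the same end point. The specific angles below are our choices for illustration; link lengths follow Section 5.

```python
import numpy as np

L = 30.0  # cm per link, as in Section 5

def fk(t1, t2, t3):
    """Direct kinematics of the planar 3-DOF arm (angles in radians)."""
    a1, a2, a3 = t1, t1 + t2, t1 + t2 + t3  # absolute link angles
    x = L * (np.cos(a1) + np.cos(a2) + np.cos(a3))
    y = L * (np.sin(a1) + np.sin(a2) + np.sin(a3))
    return np.array([x, y])

# Two different joint-angle vectors whose absolute link angles are the
# same set {0.2, 0.9, 0.5}, just in a different order:
p1 = fk(0.2, 0.7, -0.4)   # absolute angles (0.2, 0.9, 0.5)
p2 = fk(0.9, -0.7, 0.3)   # absolute angles (0.9, 0.2, 0.5)
# p1 and p2 are the same end-effector point, from distinct configurations
```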

Figure 2: Inverse kinematics problem of 3-DOF robot arm in 2D plane. (a) Three joints and an end-effector coordinate. Ill-posed solutions in terms of (b) existence, (c) uniqueness, and (d) stability.

In this paper, we introduce the network inversion method in order to solve the inverse estimation problem of joint angles for a 3-DOF robot arm. We examine the solutions to the above-mentioned ill-posed problems.

#### 3. Network Inversion

An ordinary multilayer neural network is used to solve a direct problem. In a usual multilayer network in which the training is complete, the input-output relation is given by y = f(w, x), where x, y, and f represent the input vector, output vector, and function defined by the interlayer weights w of the network, respectively. If the input vector x is provided, the network calculates the output vector y.

Linden and Kindermann proposed the network inversion method [6]. In this method, the forward relation f is first found by training; f is then fixed, and the observed output data y are presented to the network. The input x is then updated according to the calculated input correction signal, which is based on the duality of the weights and the input. The input is estimated from the output by iteratively updating the input based on the output error, as shown in Figure 3. In this manner, the inverse problem of estimating the input x from the output y is solved with the multilayer neural network by using the forward relation inversely.

Figure 3: Iterative update of input from provided output by network inversion.

We use the network in two phases so as to solve the inverse problem by network inversion: forward training and inverse estimation. The procedure is shown in Figure 4. In the training phase, we provide the training input x and training output y to calculate the output error E. Because the output error is attributed to maladjustment of the weights, the weight w is then updated by

$$w \leftarrow w - \varepsilon_t \frac{\partial E}{\partial w},$$

where εt denotes the training gain. By repeating this update procedure, the forward relation is obtained. This procedure is the usual backpropagation method.

Figure 4: Two-step procedure to solve inverse problem by using network inversion.

In the inverse estimation phase, we fix the relation obtained in the training, provide a random input x and the test output y, and calculate the output error E. Because the output error is now attributed to the error in the input, the input x is then updated by

$$x \leftarrow x - \varepsilon_e \frac{\partial E}{\partial x},$$

where εe denotes the input update gain. By repeating this update procedure, the input is estimated from the output.
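The two phases can be sketched end to end with plain numpy. Here a small MLP is trained on the toy forward map y = x² (a stand-in for the kinematics map); the weights are then frozen and the same backpropagated error signal corrects the input instead. The network size, gains, and iteration counts are our illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 1)), np.zeros((5, 1))
W2, b2 = rng.normal(size=(1, 5)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2), h

# --- Phase 1: forward training (usual backpropagation on the weights) ---
xs = np.linspace(0.0, 1.0, 21).reshape(1, -1)
ys = xs ** 2
eps_t = 0.5
for _ in range(20000):
    out, h = forward(xs)
    d_out = (out - ys) * out * (1.0 - out)      # output-layer delta
    d_h = (W2.T @ d_out) * h * (1.0 - h)        # hidden-layer delta
    W2 -= eps_t * d_out @ h.T / xs.size
    b2 -= eps_t * d_out.mean(axis=1, keepdims=True)
    W1 -= eps_t * d_h @ xs.T / xs.size
    b1 -= eps_t * d_h.mean(axis=1, keepdims=True)

# --- Phase 2: inverse estimation (weights fixed, input corrected) ---
y_test = np.array([[0.25]])                     # seek x with x^2 ~ 0.25
x_est = np.array([[0.9]])                       # initial input guess
eps_e = 0.05
for _ in range(30000):
    out, h = forward(x_est)
    d_out = (out - y_test) * out * (1.0 - out)
    d_h = (W2.T @ d_out) * h * (1.0 - h)
    x_est -= eps_e * (W1.T @ d_h)               # update the input, not w

# x_est should now be near 0.5, an input whose output matches y_test
```

Note that phase 2 reuses exactly the error backpropagation of phase 1; only the final correction target changes from w to x, which is the duality the method exploits.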

#### 4. Ill-Posedness and Regularization

In general, inverse problems are ill-posed in that the existence, uniqueness, and stability of their solutions are not guaranteed [2]. Regularization, which limits the solution space of an inverse problem, is a method that reduces ill-posedness. In this method, a functional intended to provide some constraint is added to the error functional of the iterative method. For example, when the transform from the element x in space X to the element y in space Y is expressed as Kx = y by the mapping K, we search for the element x that minimizes the functional

$$\|Kx - y\|^2 + \lambda F(x),$$

where F(x) and λ represent the regularization functional and the regularization coefficient, respectively.

We can also use regularization for network inversion [17]. In order to impose the constraint condition, we minimize the regularization functional together with the output error in the inverse estimation phase. The energy function E(x) with the regularization functional F(x) is defined as

$$E(x) = \|y - Kx\|^2 + \lambda F(x),$$

where Kx, K, and x indicate the network output, the transform realized by the trained network, and the input, respectively. In (4), the first and second terms represent the output error of the network and the regularization functional, respectively. The parameter λ is the regularization coefficient. The regularization term F(x) expresses the restraint condition that the input x should satisfy and is minimized simultaneously with the error term. By using the regularization term, we aim to obtain a feasible solution that minimizes the error while satisfying the restraint condition. Since F(x) is a restraint condition on the input, we can use the minimum norm to select a specific solution, the minimum difference to smooth the solution, and so forth. For the robot arm considered in this paper, we can likewise express the minimum or maximum norm of the joint-angle vector, the minimum or maximum inclination of the arm, and so forth as the regularization term F(x).

The coefficient λ is usually fixed and determined empirically or by trial and error. However, the balance between output error minimization and regularization is often uncertain. The regularization coefficient may instead be reduced as the output error decreases; we refer to this as dynamic regularization. For example, the regularization coefficient λ(t) is reduced for each input correction as

$$\lambda(t) = \lambda(0)\left(1 - \frac{t}{m}\right),$$

where λ(0), t, and m denote the initial value of λ, the current epoch number, and the decay coefficient, respectively. The parameter λ(t) decays from λ(0) to zero with the epoch number t.
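The selection effect of the regularization term in the energy function can be shown on a toy many-to-one map: with λ = 0 every exact preimage has the same (zero) energy, while a minimum-norm F(x) breaks the tie. The forward map, inputs, and the choice F(x) = ||x||² below are our illustrative assumptions.

```python
import numpy as np

# Energy of the regularized inverse estimation:
# E(x) = ||f(x) - y||^2 + lam * F(x), with F(x) = ||x||^2 (minimum norm).
def energy(f, x, y, lam):
    err = f(x) - y
    return float(err @ err) + lam * float(x @ x)

f = lambda x: np.array([x.sum()])   # toy many-to-one "forward" map
y = np.array([1.0])
x_a = np.array([1.0, 0.0])          # both inputs reproduce y exactly...
x_b = np.array([0.5, 0.5])          # ...but x_b has the smaller norm

assert energy(f, x_a, y, 0.0) == energy(f, x_b, y, 0.0) == 0.0
assert energy(f, x_b, y, 0.1) < energy(f, x_a, y, 0.1)   # regularizer picks x_b
```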

In this paper, we introduce the regularization method to reduce the ill-posedness of the problems in terms of the uniqueness and stability of their solutions. For the uniqueness problem, we examine the addition of a regularization term for conditioning and attempt to selectively estimate a specific solution. We select a solution that meets the requirement by specifying conditions for the joint angles. This method is shown to be effective when the solution for a specific shape of the arm is required to avoid some obstacle. Moreover, we attempt to stabilize the solutions by decreasing the difference between multiple solutions. We consider a regularization term to minimize the difference in solutions to successive data. This method is shown to be effective for estimating joint angles when the robotic arm is successively operated.

The principle of network inversion is the iterative correction of the input so as to minimize the error for a given output. That is, the output error converges to almost zero when an appropriate input for the given output is reached. Therefore, the output error remains when no input corresponding to the given output exists. Thus, we can detect that a solution may not exist by observing how the output error decreases in response to the corrections made to the input by network inversion. As a result, we consider it possible to deal with ill-posedness in terms of the existence of a solution.

#### 5. Simulation

In order to demonstrate the application of the network inversion method to an inverse kinematics problem and its effect on ill-posedness, we carry out four simulations: a well-posed problem and three ill-posed problems, namely, one with no solution, one with a nonunique solution, and one with an unstable solution. These four simulations aim to cover all situations concerning ill-posedness that arise when the network inversion method is applied to actual inverse kinematics problems.

We examine the inverse estimation of joint angles from the end-effector coordinates of the 3-DOF robot arm shown in Figure 2(a) by using network inversion. First, we train the network by providing the joint angles and the corresponding end-effector coordinate as the tutorial input and output, respectively. The training of the network is conducted by the usual error back-propagation method. The inputs are then iteratively updated according to the error between the network outputs and the given end-effector coordinate value. Initially, the inputs are set to random values, and the weights are fixed to the values obtained during training. In this manner, the joint angles are estimated from the provided end-effector coordinates.

The network architecture used in the simulation is shown in Figure 5, and the network parameters are listed in Table 1. Because there are three joint angles (θ1, θ2, and θ3) and two end-effector coordinates (x and y) for the arm, we use a network with three inputs and two outputs. The number of hidden neurons is set to 10 by trial and error. The input and output values of the network are normalized to the range [0.0, 1.0] by the maximum and minimum values of the joint angles and coordinates.

Table 1: Network parameters.
Figure 5: Network architecture.

First, we prepare the training data by calculating the end-effector coordinates for the three joint angles. By varying the joint angles θ1, θ2, and θ3 over 7, 19, and 19 different values, respectively, and selecting only the combinations with positive coordinates (x, y), we obtain 1136 data points in the tutorial training dataset. The length of each arm segment is 30 cm. The distribution of the end-effector coordinates in the training data is shown in Figure 6(a). The training is continued until the mean square error decreases to less than 0.001. The training rate εt is set to 0.01. The actual training is completed after approximately 8000 epochs.
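The grid construction above can be sketched as follows. The angle ranges are our assumptions (the paper states only the number of steps per joint), so the filtered count need not reproduce the paper's 1136 points exactly; only the 7 × 19 × 19 raw grid and the positive-coordinate filter are taken from the text.

```python
import numpy as np
from itertools import product

L = 30.0  # cm per link, per Section 5

# Assumed ranges: theta1 in [0, pi/2] over 7 steps, theta2 and theta3
# in [0, pi] over 19 steps each.
t1s = np.linspace(0.0, np.pi / 2, 7)
t2s = np.linspace(0.0, np.pi, 19)
t3s = np.linspace(0.0, np.pi, 19)

data = []
for t1, t2, t3 in product(t1s, t2s, t3s):
    a1, a2, a3 = t1, t1 + t2, t1 + t2 + t3   # absolute link angles
    x = L * (np.cos(a1) + np.cos(a2) + np.cos(a3))
    y = L * (np.sin(a1) + np.sin(a2) + np.sin(a3))
    if x > 0.0 and y > 0.0:                  # keep only positive coordinates
        data.append(((t1, t2, t3), (x, y)))

# 7 * 19 * 19 = 2527 raw combinations; the filter keeps a subset
# (the paper reports 1136 points for its ranges).
```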

Figure 6: Distribution of end-effector coordinates (a) for training data and for test data of (b) well-posed problem and a problem with (c) no solution, (d) nonunique solution, and (e) unstable solution.

In the inverse estimation, the joint angles are estimated from the given end-effector coordinates by means of network inversion. In order to confirm the effectiveness of network inversion for ill-posed inverse kinematics, we set end-effector coordinates corresponding to well-posed, nonexistent, nonunique, and unstable solutions; these are shown in Figures 6(b), 6(c), 6(d), and 6(e), respectively. The inverse estimation is continued until the mean square error decreases to less than 0.0001. The inverse estimation rate εe is set to 0.001. In order to confirm the effect of the regularization method, the initial input values are set to the constant value of 0.5 in this simulation, although they are generally set to random values for network inversion.

##### 5.1. Results for Well-Posed Problem

In order to demonstrate the fundamental ability of inverse estimation by network inversion, we simulate a well-posed problem. We use the above-mentioned training data. The eight end-effector coordinates that have unique solutions, shown in Figure 6(b), are used as the data for inverse estimation. Regularization is not used in this simulation. The estimated joint angles are shown in Figure 7 and Table 2. The end-effector coordinates calculated from the estimated joint angles closely correspond to the given end-effector coordinates, as shown in Figure 7. Table 2 shows that the error between the estimated and given coordinates is small. According to these results, the joint angles that realize the given end-effector coordinates are accurately estimated. We thus confirm that, for the well-posed problem, the joint angles are correctly estimated from the given end-effector coordinates by network inversion.

Table 2: Estimated joint angles, calculated end-effector coordinates, and mean square errors for well-posed problem.
Figure 7: Graph showing the coordinates of robot arm whose joint angles are estimated by network inversion for well-posed problem.
##### 5.2. Results for Ill-Posed Problem with No Solution

We now consider measures to reduce ill-posedness when no solution exists. When an impossible end-effector coordinate is given, the system is required to estimate joint angles that realize the end-effector coordinate as closely as possible, as an approximate or provisional solution. Further, it is necessary to indicate that the estimated solution is only approximate and that a rigorous solution does not exist.

Here, we present the inverse estimation of an approximate solution by means of network inversion when a rigorous solution does not exist. The above-mentioned training data are used for training. For inverse estimation, we use data that consist of four possible and four impossible end-effector coordinates, as shown in Figure 6(c). We do not use regularization in this simulation. We consider the transition and the converged value of the output error in the inverse estimation as criteria for identifying the nonexistence of a solution.

The estimated joint angles are shown in Figure 8 and Table 3. According to the results, the joint angles are approximately estimated for the impossible end-effector coordinates, while they are correctly estimated for the possible ones. In other words, Figure 8 shows that the estimated end-effector coordinates fall within the possible range even for impossible target coordinates. Figure 9 shows the transition of the output error in the inverse estimation. The transition and converged value of the output error are clearly different for the impossible data points (nos. 1 to 4) and the possible data points (nos. 5 to 8). We thus confirm that approximate joint angles for a given end-effector coordinate can be estimated by network inversion when a solution does not exist. Moreover, the nonexistence of a solution can be detected from the output error of the inverse estimation.

Table 3: Estimated joint angles, calculated end-effector coordinates, and errors for ill-posed problem with no solution.
Figure 8: Graph showing the coordinates of robot arm whose joint angles are estimated by network inversion for ill-posed problem with no solution.
Figure 9: Error curves of inverse estimation for each estimation data.
##### 5.3. Results for Ill-Posed Problem with Nonunique Solution

We consider the effect of using regularization as a selection method when a solution is not unique. Here, we examine the maximization and minimization of the joint angles; such constraints on the arm shape arise, for example, when an obstacle must be avoided. The training dataset is the same as described above. For inverse estimation, we use data consisting of eight end-effector coordinates, as shown in Figure 6(d).

We examine two types of regularization terms, setting the regularization term F(x) of (4) to either F1(x) or F2(x), where x represents the normalized joint angles. F1(x) forces the angles θ1 and θ2 toward 90° and 180°, respectively, while F2(x) forces the angles θ1 and θ2 toward 0° and 180°, respectively. Moreover, we use a dynamic regularization method that reduces the regularization coefficient with each input update as

$$\lambda(t) = \lambda(0)\left(1 - \frac{t}{T}\right),$$

where t, T = 10000, and λ(0) denote the number of input updates, the maximum inverse estimation epoch, and the initial regularization coefficient, respectively. The inverse estimation is carried out for the two regularization methods by using F1(x) and F2(x) of (6).
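The terms F1 and F2 can be sketched as quadratic penalties on the normalized joint angles. The exact functionals of the paper are not reproduced here; the forms below are assumptions consistent with the described behavior (F1 pushes θ1 toward its maximum, F2 toward its minimum, both push θ2 toward 180°), under the assumed normalization θ1 in [0°, 90°] and θ2 in [0°, 180°] mapped to [0, 1].

```python
def f1(x):
    # Penalize deviation from theta1 -> 90 deg (x1 -> 1) and
    # theta2 -> 180 deg (x2 -> 1); x = (x1, x2, x3), normalized.
    return (1.0 - x[0]) ** 2 + (1.0 - x[1]) ** 2

def f2(x):
    # Penalize deviation from theta1 -> 0 deg (x1 -> 0) and
    # theta2 -> 180 deg (x2 -> 1).
    return x[0] ** 2 + (1.0 - x[1]) ** 2

def dynamic_lambda(lam0, t, T=10000):
    """Linearly decaying regularization coefficient (assumed schedule)."""
    return lam0 * max(0.0, 1.0 - t / T)
```

Each penalty vanishes exactly at its target configuration, so the input-update rule trades it off against the output error while the decaying λ hands control back to the error term late in the estimation.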

The estimated joint angles are shown in Figure 10 and Table 4. According to the results shown in Figure 10(a), the estimated joint angle θ1 becomes maximum for all end-effector coordinates when the regularization term F1(x) is added. Because of the ratio between the regularization coefficient and the inverse estimation rate, the regularization term has only a slight effect on the estimated joint angle θ2. Similarly, the estimated joint angle θ1 is minimized for all end-effector coordinates when the regularization term F2(x) is added, as shown in Figure 10(b).

Table 4: Estimated joint angles, calculated end-effector coordinates, and errors with (a) regularization F1(x) and (b) regularization F2(x), for ill-posed problem of nonuniqueness.
Figure 10: Graphs showing the coordinates of robot arm whose joint angles are estimated by network inversion with (a) regularization F1(x) and (b) regularization F2(x), for ill-posed problem with nonunique solution.

From the above-mentioned results, we confirm that joint angles realizing a specific arm shape can be estimated from the given end-effector coordinates by using network inversion with regularization. In other words, when the joint angles of a 3-DOF robot arm are inversely estimated, a desired arm shape can be selected by imposing conditions on the joint angles through regularization.

##### 5.4. Results for Ill-Posed Problem of Instability

We consider the effect of using regularization as a stabilization method when a solution is unstable. Here, we attempt to stabilize the solution by assuming that the data for estimation are provided successively and by imposing a restraint condition between successive data. This method is effective for estimating the joint angles for a given orbit of end-effector coordinates under the point-to-point method. The above-mentioned training data are used in this simulation. We use the data shown in Figure 6(e) for the inverse estimation; these data are created by adding a disturbance with an amplitude of 10% to each of the eight points shown in Figure 6(d).

The regularization term F3(x) of (4) is set as

$$F_3(x) = \sum_{i} \left\| x^{(i)} - x^{(i-1)} \right\|^2,$$

where x^(i) denotes the input estimated for the ith data point. This regularization term minimizes the difference between successive estimation data. We also use dynamic regularization as expressed in (6) and use the same parameters as those mentioned in the previous subsection.
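The smoothness term can be sketched directly; the quadratic sum over successive differences below is our assumed form, chosen to match the stated goal of minimizing the difference between solutions for successive data points.

```python
import numpy as np

def f3(xs):
    """Penalize differences between inputs estimated for successive data.

    xs: sequence of input vectors, one per data point (assumed form).
    """
    xs = np.asarray(xs, dtype=float)
    return float(np.sum((xs[1:] - xs[:-1]) ** 2))

# A slowly varying sequence of normalized joint angles incurs a much
# smaller penalty than a jumpy one:
smooth = [[0.10, 0.20], [0.12, 0.21], [0.14, 0.22]]
jumpy  = [[0.10, 0.20], [0.50, 0.90], [0.00, 0.10]]
assert f3(smooth) < f3(jumpy)
```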

The estimated joint angles with and without the regularization term are shown in Figures 11(a) and 11(b), respectively. The estimated values are shown in Table 5. In Table 5, the mean square error of the end-effector coordinates represents the difference between the estimated values and the correct values without disturbance. According to the results obtained without regularization (Figure 11(b)), the difference between the estimated and correct joint angles is large and unstable. In contrast, with regularization (Figure 11(a)), the fluctuation of the joint angles becomes small, and they are estimated stably. The large variance in the mean square error is clearly seen in Table 5(b), while the stable mean square error is seen in Table 5(a). These results confirm that stable joint angles can be estimated from uneven end-effector coordinates through network inversion with regularization.

Table 5: Estimated joint angles, calculated end-effector coordinates, and errors (a) with regularization F3(x) and (b) without regularization for ill-posed problem of instability.
Figure 11: Graphs showing the coordinates of robot arm whose joint angles are estimated by network inversion (a) with regularization F3(x) and (b) without regularization for ill-posed problem of instability.

#### 6. Conclusion

In this paper, we applied network inversion to the inverse kinematics problem of estimating the joint angles of a robot arm from the given end-effector coordinates. We then examined different aspects of ill-posedness in the inverse kinematic problem for a robot arm. We showed that network inversion could detect the nonexistence of a solution and estimate an approximate or provisional solution. We also confirmed that network inversion with regularization provided a unique or stable solution to the ill-posed problems. Through our results, we demonstrated the effectiveness of applying network inversion to ill-posed inverse kinematic problems. We consider that the results suggest the applicability of network inversion to general ill-posed inverse problems.

In the future, we intend to search for a more effective regularization method for actual inverse kinematics problems by examining different types of regularization methods. In addition, we intend to examine the inverse estimation of joint angles for a multi-DOF robot arm.

#### References

1. J. J. Craig, Introduction to Robotics: Mechanics and Control, Addison-Wesley, Reading, Mass, USA, 1989.
2. C. W. Groetsch, Inverse Problems in the Mathematical Sciences, Friedrich Vieweg & Sohn, 1993.
3. M. Kawato, K. Furukawa, and R. Suzuki, “A hierarchical neural-network model for control and learning of voluntary movement,” Biological Cybernetics, vol. 57, no. 3, pp. 169–185, 1987.
4. W. T. Miller, R. S. Sutton, and P. J. Werbos, Neural Networks for Control, MIT Press, Cambridge, Mass, USA, 1990.
5. E. Oyama and S. Tachi, “A study of human hand position control learning–output feedback inverse model,” in Proceedings of the International Joint Conference on Neural Networks, pp. 1434–1443, 1991.
6. A. Linden and J. Kindermann, “Inversion of multilayer nets,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '89), pp. 425–430, June 1989.
7. B.-L. Lu, H. Kita, and Y. Nishikawa, “Inverting feedforward neural networks using linear and nonlinear programming,” IEEE Transactions on Neural Networks, vol. 10, no. 6, pp. 1271–1290, 1999.
8. C. A. Jensen, R. D. Reed, R. J. Marks et al., “Inversion of feedforward neural networks: algorithms and applications,” Proceedings of the IEEE, vol. 87, no. 9, pp. 1536–1549, 1999.
9. W. R. Murray, C. T. Heg, and C. M. Pohlhammer, “Iterative inversion of a neural network for estimating the location of a planar object,” in Proceedings of the World Congress on Neural Networks, vol. 3, pp. 188–193, 1993.
10. I. Valova, K. Kameyama, and Y. Kosugi, “Image decomposition by answer-in-weights neural network,” IEICE Transactions on Information and Systems, vol. E78-D, no. 9, pp. 1221–1224, 1995.
11. J. Wang, Q. Hu, and D. Jiang, “A Lagrangian network for kinematic control of redundant robot manipulators,” IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 1123–1132, 1999.
12. T. Ogawa, H. Matsuura, and H. Kanada, “A solution of inverse kinematics of robot arm using network inversion,” in Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, pp. 858–862, November 2005.
13. N. Sekiguchi, T. Ogawa, and H. Kanada, “Inverse estimation of joint angles of robot arm by network inversion,” in Proceedings of the Society of Instrument and Control Engineers Annual Conference (SICE '07), pp. 991–995, September 2007.
14. A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, Winston and Sons, 1977.
15. Y. P. Petrov and V. S. Sizikov, Well-Posed, Ill-Posed, and Intermediate Problems with Application, Koninklijke Brill NV, 2005.
16. Y. Kosugi and K. Kameyama, “Inverse use of BP net in answer-in-weights scheme for arithmetic calculations,” in Proceedings of the World Congress on Neural Networks, vol. 3, pp. 462–465, 1993.
17. T. Ogawa, Y. Kosugi, and H. Kanada, “Neural network based solution to inverse problems,” in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 2471–2476, May 1998.
18. T. Poggio and F. Girosi, “Regularization algorithms for learning that are equivalent to multilayer networks,” Science, vol. 247, no. 4945, pp. 978–982, 1990.
19. B.-L. Lu and K. Ito, “Regularization of inverse kinematics for redundant manipulators using neural network inversions,” in Proceedings of IEEE International Conference on Neural Networks, pp. 2726–2731, December 1995.