Abstract

This paper addresses hybrid force/position control in robotic manipulation and proposes an improved radial basis function (RBF) neural network controller with a robust term in the force control loop based on the Hamilton-Jacobi-Isaacs (HJI) principle. The method compensates for the uncertainties of the robot system by using the approximation property of the RBF neural network. The approximation error of the neural network is regarded as an external disturbance of the system and is eliminated by the robust control method. Since the conventional fixed structure of the RBF network is not optimal, a resource allocating network (RAN) is adopted in this paper to adjust the network structure online and avoid underfitting. Finally, the advantages in system stability and transient performance are demonstrated by numerical simulations.

1. Introduction

During robotic operation, the end-effector may come into contact with the environment, which gives rise to a force interaction between the end-effector and the environment. In addition to the robot's position control, force control is then necessary in order to fulfill such tasks better. Raibert and Craig [1] first introduced this idea in 1981. Since then, many other researchers have proposed and explored new hybrid control strategies, for example, by combining them with visual information [2–4].

Due to uncertainties in the robot model, the system's performance can be greatly weakened, or the system may even become unstable, so robust control methods for robots have attracted wide attention. By using a fixed controller structure, such a method has the advantage of eliminating the impact of the uncertainty and ensuring the stability of the system during its operation.

The main assumption of the method is that only the upper bound of the uncertainty is known. However, this upper bound is difficult to measure, which is the limitation of the robust control method. To overcome this limitation, radial basis function (RBF) neural networks (RBFNNs) are used to approximate the uncertain dynamics and compensate for what robust control lacks. The RBFNN has a compact topology and rapid convergence, and its structural parameters can be learned separately [5–8]. However, with a fixed or overly complex structure, the RBF network either takes too long to learn or wastes network resources. Therefore, we use a resource allocating network (RAN) in this paper. The RAN method replaces an existing sampling point with the sampling point of largest error; by doing so, the network can perform self-learning and its complexity is reduced [9, 10]. The radial basis function link network (RBFLN) adds direct weights from the input layer to the output layer; therefore, the RBFLN not only retains the advantages of the RBF network but also compensates for its slow response.
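As an illustration of how a RAN grows its hidden layer online, the following Python sketch implements Platt's two novelty criteria (a distance threshold and an error threshold); the class interface, threshold values, and LMS update are illustrative assumptions rather than the exact scheme used later in this paper.

```python
import numpy as np

class RAN:
    """Sketch of a Resource Allocating Network growth step (Platt-style)."""
    def __init__(self, n_in, delta=0.5, e_min=0.05, kappa=0.9, sigma0=1.0):
        self.centers = np.empty((0, n_in))   # hidden-unit centres
        self.widths = np.empty(0)            # Gaussian widths
        self.weights = np.empty(0)           # hidden-to-output weights
        self.delta, self.e_min = delta, e_min
        self.kappa, self.sigma0 = kappa, sigma0

    def _h(self, x):
        # Gaussian hidden-layer activations for input x
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / self.widths ** 2)

    def predict(self, x):
        return float(self.weights @ self._h(x)) if self.weights.size else 0.0

    def observe(self, x, y, lr=0.1):
        err = y - self.predict(x)
        dist = (np.min(np.linalg.norm(self.centers - x, axis=1))
                if self.weights.size else np.inf)
        if abs(err) > self.e_min and dist > self.delta:
            # both novelty criteria met: allocate a new hidden unit at x
            self.centers = np.vstack([self.centers, x])
            self.widths = np.append(
                self.widths,
                self.kappa * dist if np.isfinite(dist) else self.sigma0)
            self.weights = np.append(self.weights, err)
        elif self.weights.size:
            # otherwise adapt the existing output weights by LMS
            self.weights = self.weights + lr * err * self._h(x)
```

A new unit is allocated only when the current input is both far from all existing centres and poorly predicted, which keeps the hidden layer compact.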

2. Manipulator Dynamics

The dynamic equation of the n-link manipulator in joint-space coordinates is given by
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau_p + \tau_f - J^T(q)f + \tau_d, \tag{2.1}$$
where the vector $q \in \mathbb{R}^n$ is the joint angle, the vector $\dot{q}$ denotes the joint angular velocity, the vector $\ddot{q}$ is the joint angular acceleration, $M(q) \in \mathbb{R}^{n \times n}$ is the symmetric positive definite inertia matrix, $C(q,\dot{q})\dot{q}$ denotes the vector of Coriolis and centrifugal forces, $G(q)$ denotes the gravitational vector, $\tau_p$ is the vector of joint actuator torques in the position control loop, $\tau_f$ is the vector of joint actuator torques in the force control loop, $f$ is the force between the end-effector and the environment, $J(q)$ denotes the Jacobian matrix, and $\tau_d$ represents the vector of external disturbance joint torques and unmodeled dynamics.
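For concreteness, the terms of (2.1) for a standard planar two-link arm can be sketched in Python/NumPy as follows; the link parameters are illustrative assumptions, not the values used in Section 4.

```python
import numpy as np

# Terms of (2.1) for a standard planar two-link arm; the link
# parameters below are illustrative assumptions.
m1, m2 = 1.0, 1.0              # link masses
l1, l2 = 1.0, 1.0              # link lengths
lc1, lc2 = 0.5, 0.5            # centre-of-mass offsets
I1, I2, g = 0.1, 0.1, 9.8      # link inertias and gravity

def dynamics(q, dq):
    # inertia matrix M(q), symmetric positive definite
    c2 = np.cos(q[1])
    m11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    m12 = m2*(lc2**2 + l1*lc2*c2) + I2
    M = np.array([[m11, m12],
                  [m12, m2*lc2**2 + I2]])
    # Coriolis/centrifugal matrix C(q, dq)
    hh = -m2*l1*lc2*np.sin(q[1])
    C = np.array([[hh*dq[1], hh*(dq[0] + dq[1])],
                  [-hh*dq[0], 0.0]])
    # gravitational vector G(q)
    G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q[0])
                  + m2*lc2*g*np.cos(q[0] + q[1]),
                  m2*lc2*g*np.cos(q[0] + q[1])])
    return M, C, G
```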

In the position loop, the simplest PD controller can be expressed as
$$\tau_p = K_p(q_d - q) + K_d(\dot{q}_d - \dot{q}), \tag{2.2}$$
where $K_p$, $K_d$ are constant positive definite gain matrices and $q_d$ is the desired joint trajectory.
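A direct transcription of the PD law (2.2), continuing the sketch above and assuming the sign convention written there:

```python
def pd_torque(q, dq, qd, dqd, Kp, Kd):
    # tau_p = Kp (q_d - q) + Kd (dq_d - dq), per (2.2)
    return Kp @ (qd - q) + Kd @ (dqd - dq)
```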

In the force loop, the dynamic equation should be transferred from joint space to Cartesian space [1, 7]. Based on
$$\dot{x} = J(q)\dot{q}, \qquad \ddot{x} = J(q)\ddot{q} + \dot{J}(q)\dot{q}, \tag{2.3}$$
(2.1) can be derived as follows:
$$M_x\ddot{x} + C_x\dot{x} + G_x = f_p + f_f - f + f_d, \tag{2.4}$$
where
$$M_x = J^{-T}MJ^{-1}, \quad C_x = J^{-T}\left(C - MJ^{-1}\dot{J}\right)J^{-1}, \quad G_x = J^{-T}G, \quad f_p = J^{-T}\tau_p, \quad f_f = J^{-T}\tau_f, \quad f_d = J^{-T}\tau_d.$$

Equation (2.4), which is expressed in Cartesian coordinates, has the following important property: the matrix $\dot{M}_x - 2C_x$ is skew-symmetric, that is, $z^T(\dot{M}_x - 2C_x)z = 0$ for any vector $z$.
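In code, continuing the NumPy sketch above and assuming a square, nonsingular Jacobian, the transformation (2.3)-(2.4) reads:

```python
def cartesian_dynamics(M, C, G, J, dJ):
    # Transform joint-space terms into Cartesian space using
    # dx = J dq and ddx = J ddq + dJ dq (J square and nonsingular).
    Jinv = np.linalg.inv(J)
    Mx = Jinv.T @ M @ Jinv
    Cx = Jinv.T @ (C - M @ Jinv @ dJ) @ Jinv
    Gx = Jinv.T @ G
    return Mx, Cx, Gx
```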

Assume that $x_d$ is the desired trajectory and $f_d$ is the desired force. The force between the end-effector and the environment is given by the following expression [1, 8]:
$$f = K_e(x - x_e), \tag{2.5}$$
where $K_e$ is the environment stiffness and $x_e$ is the reference position of the environment, so
$$f_d = K_e(x_d - x_e). \tag{2.6}$$
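A one-line transcription of the environment model (2.5)-(2.6), continuing the same sketch:

```python
def contact_force(x, Ke, xe):
    # f = K_e (x - x_e), the linear-stiffness environment model (2.5);
    # the desired force follows as f_d = K_e (x_d - x_e) per (2.6)
    return Ke @ (x - xe)
```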

We can obtain the error equation as follows:
$$M_x\ddot{e} + C_x\dot{e} = f_f - \tilde{f} + \rho, \tag{2.7}$$
where $e = x - x_d$, $\tilde{f} = f - f_d = K_e e$, and $\rho$ collects the remaining model terms and disturbances. State variables can be defined as $x_1 = e$, $x_2 = \dot{e} + \lambda e$, where $\lambda$ is a given positive number; then (2.7) can be derived as
$$M_x\dot{x}_2 = -C_x x_2 + f_f + \omega, \tag{2.8}$$
where $\omega = \rho - \tilde{f} + \lambda C_x x_1 + \lambda M_x\dot{e}$ collects all the model-dependent terms.
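The change of variables behind (2.7)-(2.8), with $\lambda$ an illustrative placeholder value:

```python
lam = 5.0  # lambda > 0; the value here is an illustrative placeholder

def state_variables(x, dx, xd, dxd):
    # e = x - x_d, x1 = e, x2 = de + lam * e, as defined for (2.8)
    e, de = x - xd, dx - dxd
    return e, de + lam * e
```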

3. Design of Control Law

In order to obtain the control law, we introduce a theorem in this section. Assume that there is a system with disturbance as follows:
$$\dot{x} = f(x) + g(x)d, \qquad z = h(x), \tag{3.1}$$
where $d$ is the disturbance and $z$ is the signal of evaluation. According to the Hamilton-Jacobi-Isaacs (HJI) theorem, for a given $\gamma > 0$, if there exists a positive definite function $V(x) \ge 0$ such that
$$\dot{V} \le \frac{1}{2}\gamma^2\|d\|^2 - \frac{1}{2}\|z\|^2$$
holds for any disturbance $d$, then the $L_2$ gain of the system from $d$ to $z$ is less than or equal to $\gamma$.
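The $L_2$-gain bound that the HJI condition certifies can be checked numerically on recorded trajectories; the following sketch uses simple rectangular quadrature, which is our assumption, not part of the theorem.

```python
def l2_gain_holds(z, d, dt, gamma):
    # z, d: arrays of shape (steps, dim) sampled every dt seconds.
    # Checks the L2-gain bound  int ||z||^2 dt <= gamma^2 int ||d||^2 dt
    # by rectangular quadrature over the recorded horizon.
    energy_z = np.sum(z**2) * dt
    energy_d = np.sum(d**2) * dt
    return energy_z <= gamma**2 * energy_d
```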

For the force control loop, the operation space of the robot is transformed. Because $\dot{x} = J(q)\dot{q}$ and $\ddot{x} = J(q)\ddot{q} + \dot{J}(q)\dot{q}$, (2.1) is written as
$$M_x\ddot{x} + C_x\dot{x} + G_x = f_p + f_f - f + d_x, \tag{3.2}$$
where $M_x = J^{-T}MJ^{-1}$, $C_x = J^{-T}(C - MJ^{-1}\dot{J})J^{-1}$, $G_x = J^{-T}G$, and $d_x = J^{-T}\tau_d$.

Suppose $x_d$ is the desired position and $f_d$ is the desired force. Then $f = K_e(x - x_e)$ and $f_d = K_e(x_d - x_e)$, where $K_e$ is the rigidity matrix and $x_e$ is the reference position of the environment. Let $e = x - x_d$. Then the error equation is
$$M_x\ddot{e} + C_x\dot{e} = f_f - \tilde{f} + \rho, \tag{3.3}$$
where $\tilde{f} = f - f_d = K_e e$ and $\rho$ collects the remaining model terms and disturbances. Then it is transformed into the state space: define $x_1 = e$ and $x_2 = \dot{e} + \lambda e$, where $\lambda$ is a positive number. Equation (3.3) becomes
$$M_x\dot{x}_2 = -C_x x_2 + f_f + \omega, \tag{3.4}$$
where $\omega = \rho - \tilde{f} + \lambda C_x x_1 + \lambda M_x\dot{e}$.

The improved RAN network approximates $\omega$:
$$\omega = W^T h(\bar{x}) + V^T \bar{x} + \varepsilon,$$
where $\varepsilon$ is the approximation error of the network, $h(\bar{x})$ is the output vector of the hidden layer, $W$ is the weight matrix from the hidden layer to the output layer, $\bar{x}$ is the network input vector, and $V$ is the weight matrix from the input layer to the output layer. Here $W^T h(\bar{x})$ is the contribution from the hidden layer to the output layer and $V^T \bar{x}$ is the contribution from the input layer to the output layer. Equation (3.4) can be derived as
$$M_x\dot{x}_2 = -C_x x_2 + f_f + W^T h(\bar{x}) + V^T \bar{x} + \varepsilon. \tag{3.5}$$
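A sketch of this output map with Gaussian hidden units (the usual, here assumed, choice of basis), continuing the earlier NumPy code:

```python
def rbfln_forward(xbar, centers, widths, W_hat, V_hat):
    # W_hat^T h(xbar) + V_hat^T xbar: hidden-layer contribution plus
    # the direct input-to-output link that characterises the RBFLN
    d2 = np.sum((centers - xbar)**2, axis=1)
    h = np.exp(-d2 / widths**2)     # Gaussian hidden-layer output h(xbar)
    return W_hat.T @ h + V_hat.T @ xbar, h
```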

The approximation error $\varepsilon$ is regarded as the interference, and its evaluation signal is $z = x_2$; then the $L_2$ gain is
$$\gamma = \sup_{\|\varepsilon\| \ne 0} \frac{\|z\|_2}{\|\varepsilon\|_2}.$$

Theorem 3.1. For (3.5), if the learning law of the network is given by the following equations:
$$\dot{\hat{W}} = \eta_1 h x_2^T, \qquad \dot{\hat{V}} = \eta_2 \bar{x} x_2^T, \tag{3.6}$$
the following controller is adopted for the force loop:
$$f_f = -\hat{W}^T h - \hat{V}^T \bar{x} - v, \tag{3.7}$$
and the robust term $v$ in (3.7) satisfies
$$v = \frac{1}{2}\left(\frac{1}{\gamma^2} + 1\right)x_2, \tag{3.8}$$
where $\eta_1$ and $\eta_2$ are given positive numbers, then the $L_2$ gain of the closed-loop system (3.5) and (3.7) is less than or equal to $\gamma$.
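A direct transcription of (3.6)-(3.8) as a sketch; the variable names are ours, and the returned weight derivatives are meant to be integrated by the caller.

```python
def force_loop_control(x2, xbar, h, W_hat, V_hat, gamma, eta1, eta2):
    # Robust term (3.8): v = (1/2)(1/gamma^2 + 1) x2
    v = 0.5 * (1.0 / gamma**2 + 1.0) * x2
    # Controller (3.7): f_f = -W_hat^T h - V_hat^T xbar - v
    ff = -(W_hat.T @ h) - (V_hat.T @ xbar) - v
    # Learning law (3.6): dW_hat = eta1 h x2^T, dV_hat = eta2 xbar x2^T
    dW_hat = eta1 * np.outer(h, x2)
    dV_hat = eta2 * np.outer(xbar, x2)
    return ff, dW_hat, dV_hat
```

Integrating dW_hat and dV_hat alongside the robot state realizes the adaptive controller of Theorem 3.1.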

Proof. For (3.5), the Lyapunov function is defined as
$$V = \frac{1}{2}x_2^T M_x x_2 + \frac{1}{2\eta_1}\mathrm{tr}\left(\tilde{W}^T\tilde{W}\right) + \frac{1}{2\eta_2}\mathrm{tr}\left(\tilde{V}^T\tilde{V}\right),$$
where $\tilde{W} = W - \hat{W}$ and $\tilde{V} = V - \hat{V}$. Then
$$\dot{V} = x_2^T M_x\dot{x}_2 + \frac{1}{2}x_2^T\dot{M}_x x_2 - \frac{1}{\eta_1}\mathrm{tr}\left(\tilde{W}^T\dot{\hat{W}}\right) - \frac{1}{\eta_2}\mathrm{tr}\left(\tilde{V}^T\dot{\hat{V}}\right).$$
Substituting (3.6) into the above equality and using the skew-symmetry of $\dot{M}_x - 2C_x$, we have
$$\dot{V} = x_2^T\left(f_f + \hat{W}^T h + \hat{V}^T\bar{x} + \varepsilon\right).$$
According to HJI, the $L_2$ gain is not greater than $\gamma$ if
$$H = \dot{V} - \frac{1}{2}\gamma^2\|\varepsilon\|^2 + \frac{1}{2}\|z\|^2 \le 0.$$
Then
$$H = x_2^T\left(f_f + \hat{W}^T h + \hat{V}^T\bar{x}\right) + x_2^T\varepsilon - \frac{1}{2}\gamma^2\|\varepsilon\|^2 + \frac{1}{2}\|x_2\|^2.$$
Due to
$$x_2^T\varepsilon - \frac{1}{2}\gamma^2\|\varepsilon\|^2 = -\frac{1}{2}\left\|\gamma\varepsilon - \frac{1}{\gamma}x_2\right\|^2 + \frac{1}{2\gamma^2}\|x_2\|^2,$$
we get
$$H = x_2^T\left(f_f + \hat{W}^T h + \hat{V}^T\bar{x}\right) + \frac{1}{2\gamma^2}\|x_2\|^2 + \frac{1}{2}\|x_2\|^2 - \frac{1}{2}\left\|\gamma\varepsilon - \frac{1}{\gamma}x_2\right\|^2.$$
Substituting (3.7) and (3.8) into the above expression, we have
$$H = -\frac{1}{2}\left\|\gamma\varepsilon - \frac{1}{\gamma}x_2\right\|^2 \le 0.$$
So the system meets $\|z\|_2 \le \gamma\|\varepsilon\|_2$.

4. Experiments and Results

To verify the effectiveness of the proposed control strategy, we performed software simulations using the methods of [11–16]. The model is based on a two-link manipulator, which is shown in Figure 1.

In the simulation, we took a horizontal plane as the workspace and specified the constraint surface, the desired trajectory, the desired force, and the initial position and initial velocity of the manipulator end-effector accordingly. For comparative analysis, we used PD control and robust neural network control, respectively, in the force control loop. First the model is controlled by the PD controller, whose parameters $K_p$ and $K_d$ are tuned according to the output results.

We adopt MATLAB Simulink and S-functions to design the control system, with the controller parameters ($\gamma$, $\eta_1$, $\eta_2$, $\lambda$, $K_p$, $K_d$) held fixed throughout the simulation. The simulation results are shown in Figures 2–6, among which Figures 2–5 give the tracking results of position and position error and Figure 6 gives the force tracking results.
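For readers reproducing the setup outside Simulink, the following Python skeleton ties the earlier sketches together with explicit Euler integration. Every numerical value below (time step, gains, stiffness, trajectory, joint set-point, initial state, network size) is a placeholder assumption rather than the paper's setting, and the position loop is simplified to a fixed joint set-point.

```python
dt, steps = 1e-3, 5000
q, dq = np.array([0.3, 0.5]), np.zeros(2)     # placeholder initial state
nh = 9                                        # hidden-layer size (placeholder)
rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, (nh, 4))     # RBF centres over xbar = [e, x2]
widths = np.ones(nh)
W_hat, V_hat = np.zeros((nh, 2)), np.zeros((4, 2))
gamma, eta1, eta2 = 0.5, 20.0, 20.0           # placeholder gains
Ke = np.diag([1000.0, 1000.0])                # environment stiffness K_e
xe = np.array([0.8, 0.0])                     # environment reference x_e
Kp, Kd = np.diag([100.0, 100.0]), np.diag([20.0, 20.0])
qd = np.array([0.4, 0.6])                     # placeholder joint set-point

for k in range(steps):
    t = k * dt
    # forward kinematics and Jacobian of the planar two-link arm
    x = np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                  l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])
    J = np.array([[-l1*np.sin(q[0]) - l2*np.sin(q[0] + q[1]),
                   -l2*np.sin(q[0] + q[1])],
                  [ l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                    l2*np.cos(q[0] + q[1])]])
    dx = J @ dq
    xd = np.array([0.8, 0.5*np.sin(t)])       # placeholder desired path
    dxd = np.array([0.0, 0.5*np.cos(t)])
    e, x2 = state_variables(x, dx, xd, dxd)
    xbar = np.concatenate([e, x2])            # network input
    _, h = rbfln_forward(xbar, centers, widths, W_hat, V_hat)
    ff, dW, dV = force_loop_control(x2, xbar, h, W_hat, V_hat,
                                    gamma, eta1, eta2)
    W_hat += dt * dW                          # integrate learning law (3.6)
    V_hat += dt * dV
    f = contact_force(x, Ke, xe)              # environment reaction (2.5)
    tau_p = pd_torque(q, dq, qd, np.zeros(2), Kp, Kd)
    # joint torques per (2.1): tau_p + tau_f - J^T f, with tau_f = J^T f_f
    M, C, G = dynamics(q, dq)
    ddq = np.linalg.solve(M, tau_p + J.T @ ff - J.T @ f - C @ dq - G)
    q, dq = q + dt*dq, dq + dt*ddq            # explicit Euler step
```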

Figures 2 and 3 show that the control effect along the $x$-axis differs between the two methods: the robust NN control result is superior to the conventional PD control. Figures 4 and 5 show that there is no obvious difference along the $y$-axis.

Figure 6 shows that both the robust neural network control and the PD control can make the force converge to the desired value, but the effects of the two methods differ greatly. Under the PD control method, the oscillation is severe and the convergence speed is slow. Under the RAN NN control method, both the oscillation and the convergence speed are improved, and the stability and transient performance are greatly superior to those under PD control.

From the simulation results, we know that the improved RBF neural network robust control method can suppress the severe oscillation and improve the convergence speed. The stability and transient performance of the system are much better than under PD control; therefore, it is an effective control method.

5. Conclusion

An improved RAN NN controller has been designed in this paper for robot manipulators. When an external disturbance is difficult to measure, the upper bound of the uncertainty cannot be obtained; the proposed controller significantly reduces the effect of the system's uncertainty without requiring this upper bound. It is found that the system obtains good transient performance and strong adaptability, and it shows good robustness and tracking ability for both force and position control. A simulation platform has been constructed in this paper to intuitively demonstrate the control process; extending it to physical experiments is left for future study.