Abstract

Manipulator robot technology has recently developed very quickly and has had a positive impact on human life. Its implementation offers greater efficiency and high performance for many human tasks. Most published work in this context focuses on control algorithms with preprogrammed desired trajectories (passive robots) or on trajectory generation based on feedback sensors (active robots). Gesture-based robot control, however, is another control channel that is not widely discussed. This paper focuses on the implementation of a Kinect-based real-time interactive control system. A human-machine interface (HMI) developed in the LabVIEW integrated development environment (IDE) allows the user to control a Lynxmotion robotic arm in real time. The Kinect software development kit (SDK) provides tools to track the human body skeleton and abstract it into three-dimensional coordinates. Therefore, the Kinect sensor is integrated into our control system to detect the coordinates of the user's joints. The Lynxmotion dynamics have been incorporated into a real-time sliding mode control algorithm. Experiments were carried out to test the effectiveness of the system, and the results verify its tracking ability, stability, and robustness.

1. Introduction

The use of manipulator robots controlled through natural human-computer interaction offers new possibilities for executing complex tasks in dynamic workspaces. Human-robot interaction (HRI) is a key feature that differentiates the new generation of robots from conventional robots [1, 2].

Many manipulator robots behave similarly to the human arm. Thanks to gesture recognition using the Kinect sensor, motion planning is easier when the robotic arm has the same degrees of freedom as the human arm. Gesture recognition with the Kinect sensor is based on skeleton detection, and current development packages and toolkits provide a basis for identifying the joint positions of the detected skeleton.

The Kinect is a smart motion detection and recognition sensor that allows human-computer interaction without the need for any physical controller [3]. Indeed, the Kinect provides a natural user interface for applications in many fields, including game-based learning systems [4, 5], stroke rehabilitation [6, 7], assistance for visually impaired people [8, 9], navigation systems [10-12], and others [13, 14]. Based on its internal processor, the Kinect sensor can recognize human movement patterns and generate the corresponding skeleton coordinates, which can be provided to a computing environment such as Matlab [15, 16] or LabVIEW [17, 18]. Therefore, dynamic gesture recognition technology has gained increased attention.

Several approaches have been developed to improve recognition or to take advantage of existing algorithms. Reference [19] proposes an easy gesture recognition approach that reduces the effort involved in implementing gesture recognizers with the Kinect; the practical results with the developed packages are acceptable. Furthermore, an eigenspace-based method was developed in [20] for identifying and recognizing human gestures from Kinect 3D data, and a user adaptation scheme was added to enhance Eigen3Dgesture and the performance of the constructed eigenspace of Kinect 3D data. Based on Microsoft's "Kinect for Windows SDK," [21] uses the API to measure the movement of people with Parkinson's disease.

Gesture control is mainly used for telemanipulation in several modes. In [22], a client/server-structured robot teleoperation application is developed in networked robot mode. Reference [23] describes gesture-based telemanipulation of an industrial robotic arm in master-slave mode for unstructured and hazardous environments: a maximum velocity drift control approach allows an amateur user to control the robot with simple gestures, and force control is combined with the gesture-based control to perform safe manipulation. However, this solution is costly and depends on the reliability of the force-torque sensor. In addition, a method of human-robot interaction using markerless Kinect-based tracking of the human hand for teleoperation of a dual robot manipulator is presented in [24]. Based on hand motion, the user can control the robot manipulators to perform pick-and-place tasks; the work was applied in a virtual environment.

For robot teleoperation, much research has been done on controlling robotic systems dedicated to complex tasks [25]. Several techniques are used to control robot motion; a human-robot interface based on hand-gesture recognition is developed in [26-30]. Other works are based on hand-arm tracking; [31] presents a method of real-time robot manipulator teleoperation using markerless image-based hand-arm tracking.

Many problems occur during human-robot interaction when using the Kinect sensor. After the acquisition of the Kinect 3D data, these data are processed to control the robot, and several control approaches have been used. A Cartesian impedance control is used to control a dual robot arm [32]; this approach allows the robot to follow the human arm motion without solving inverse kinematics and avoids self-collision problems. Likewise, a proportional-derivative control is implemented with an inverse kinematic algorithm in [33]. In [34], a PID controller is used to control a 2-degree-of-freedom Lego Mindstorms NXT robotic arm.

Based on online HIL (Hardware-in-the-Loop) experimental data, the input data acquired from the Kinect sensor are processed in a closed-loop PID controller with feedback from the motor encoders. Although these results are satisfactory from the position control standpoint, they remain imperfect from the velocity control standpoint because of the simple controller used. Moreover, compared to the conventional computed torque control used in [35], the Lynxmotion dynamics have been implemented here in a real-time sliding mode control algorithm.

To combine the real and virtual objects and the implemented algorithms, we need to design a user interface that manages the overall system. Indeed, a human-machine interface (HMI) is highly recommended in robotics, and many human-machine interfaces have been developed to control robotic systems [36, 37]. In [38], a human-machine interface is designed to control a SCARA robot using the Kinect sensor. This interface is implemented in C# using the OpenNI Kinect library.

In this paper, we use the package provided by the JKI VI Package Manager for LabVIEW. The recognition algorithm calculates the angle of each human arm joint used for control. This algorithm is detailed in Section 2.

Compared to Shirwalkar et al. [23], firstly, the proposed control strategy takes the system dynamics into account. Our algorithm also detects all user joints, not only the hand, while on the safety side we propose a security algorithm that rejects any disturbance that may occur in the Kinect sensing field. Moreover, the proposed algorithm avoids singular positions.

Likewise, a proportional-derivative control is implemented with an inverse kinematic algorithm in [33]. However, on the one hand, this classic controller ignores the robot dynamics; on the other hand, it is limited to position control and does not consider motion control.

Regarding the work in [38], we develop a generic interface that encompasses the different components of the global system: the virtual 3D manipulator, the Kinect sensor feedback values, the skeleton image, the implemented dynamic model, the implemented sliding mode control, the implemented recognition algorithm for calculating the joint positions, and the implemented security algorithm. This LabVIEW-based user interface is described in Section 5.

The present paper is organized as follows: Section 2 presents the recognition method. The dynamic model of the Lynxmotion robotic arm is detailed in Section 3. In Section 4, the control design based on the sliding mode approach is described. The human-machine interface is presented along with the experimental results in Section 5. Finally, we conclude with a summary of the work.

2. Recognition Method

Several methods are used to detect human arm movements. The Kinect sensor is the most popular thanks to its skeleton detection and depth computation capabilities. The Kinect is a motion sensor that can measure the three-dimensional motion of a person. In fact, the Kinect sensor generates a depth image of the human body together with its $(x, y, z)$ coordinates. In addition, the Kinect sensor can provide the space coordinates of each joint.

To benefit from these capabilities, we need an Application Programmer's Interface (API) to the Kinect hardware and its skeletal tracking software. Microsoft's "Kinect for Windows SDK" was used to provide this API. The API provides an estimate of the 3D position of 20 anatomical landmarks at a frequency of 30 Hz and at the sensor's spatial and depth resolution, as shown in Figure 1. The default smoothing parameters are as follows:
(i) Correction factor = 0.5.
(ii) Smoothing factor = 0.5.
(iii) Jitter radius = 0.05 m.
(iv) Maximum deviation radius = 0.04 m.
(v) Future prediction = 0 frames.
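The SDK applies these parameters in its skeletal smoothing filter. The following minimal Python sketch (our own illustration, not the SDK implementation) conveys the general idea: frame-to-frame jumps are clamped within the jitter radius, and each joint coordinate stream is exponentially smoothed using the factors listed above.

```python
# Illustrative sketch only: jitter clamping followed by exponential smoothing
# of a single joint coordinate stream (values in metres).
def smooth_stream(samples, smoothing=0.5, jitter_radius=0.05):
    """samples: iterable of floats. Returns the smoothed sequence."""
    filtered = []
    prev = None
    for x in samples:
        if prev is None:
            prev = x
        # Jitter radius: limit frame-to-frame jumps that exceed the radius.
        if abs(x - prev) > jitter_radius:
            x = prev + jitter_radius * (1.0 if x > prev else -1.0)
        # Exponential smoothing: blend the new sample with the previous estimate.
        x = smoothing * prev + (1.0 - smoothing) * x
        filtered.append(x)
        prev = x
    return filtered
```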

Based on the Microsoft Kinect SDK, the Kinesthesia Toolkit for Microsoft Kinect was developed at the University of Leeds, initially for medical rehabilitation and surgical tools. The toolkit helps NI LabVIEW programmers access and use the popular functions of the Microsoft Kinect camera, such as RGB video, depth camera, and skeletal tracking. The toolkit comes fully packaged using JKI's VI Package Manager.

In addition, the toolkit allows the user to initialize and close the Kinect's different components through polymorphic VIs (RGB camera, depth camera, and skeletal tracking), depending on the functionalities required. The operational layout of the Kinect LabVIEW toolkit is described in Figure 2.

The first step is to initialize the Kinect; this opens the device and returns a reference. The second step is to configure the Kinect: we can choose the RGB stream format, enable skeleton smoothing, and select the appropriate event (video feedback from the RGB camera, 3D skeleton information, or depth information). In the next step, we can get the skeleton, depth, and video information from the Kinect in the main loop, such as a while loop. Additional implementations can be placed in the same loop. Finally, we stop the Kinect and close the communication.
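For clarity, the same initialize/configure/acquire/close sequence can be summarized in textual pseudocode. The Python sketch below is only illustrative: the KinectDevice wrapper is hypothetical, since in this work the equivalent steps are wired graphically in LabVIEW with the toolkit's VIs.

```python
class KinectDevice:
    """Hypothetical wrapper used only to illustrate the acquisition sequence."""
    def open(self):
        print("Kinect opened")            # 1. initialize: open the device, keep a reference
    def configure(self, rgb_format="640x480", smoothing=True):
        print("streams configured")       # 2. select RGB format, enable skeleton smoothing
    def read(self):
        return None, None, None           # placeholder (skeleton, depth, video) frame
    def close(self):
        print("Kinect closed")            # 4. stop the Kinect and close the communication

def acquisition_loop(process_frame, stop_requested):
    kinect = KinectDevice()
    kinect.open()
    kinect.configure()
    try:
        while not stop_requested():       # 3. main while loop
            skeleton, depth, video = kinect.read()
            process_frame(skeleton, depth, video)   # additional processing lives here
    finally:
        kinect.close()
```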

In this paper, a user interface containing a skeleton display screen has been developed, as shown in Figure 3. This screen is reserved for the skeleton display and the user video recording.

In this work, we focus on the human arm portion of the skeleton data. In fact, the human arm can be considered a 4-DOF manipulator arm similar to the Lynxmotion robotic arm. In our case, a robot manipulator model has been developed with 3 DOF. Therefore, we assume that the user controls the robot with his right arm gestures. The three considered movements are as follows:
(i) Flexion-extension of the right shoulder joint.
(ii) Abduction-adduction of the right shoulder joint.
(iii) Flexion-extension of the right elbow joint.
The similarity between the human arm and the robot manipulator model is shown in Figure 4.
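To illustrate how these three angles can be extracted from the skeleton stream, the following Python sketch computes them from the 3D positions of the shoulder, elbow, and wrist joints using simple vector geometry. The axis convention and the exact angle definitions are our own assumptions for illustration; the LabVIEW recognition algorithm may differ in detail.

```python
import numpy as np

def angle_between(u, v):
    """Angle (rad) between two 3-D vectors."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def arm_joint_angles(shoulder, elbow, wrist):
    """shoulder, elbow, wrist: (x, y, z) coordinates from the skeleton stream.
    Assumed axis convention: x lateral, y vertical, z toward the sensor."""
    shoulder, elbow, wrist = map(np.asarray, (shoulder, elbow, wrist))
    upper_arm = elbow - shoulder
    forearm = wrist - elbow
    # Elbow flexion-extension: angle between upper arm and forearm.
    elbow_flexion = angle_between(upper_arm, forearm)
    # Shoulder flexion-extension: upper-arm elevation in the sagittal (y-z) plane.
    shoulder_flexion = float(np.arctan2(-upper_arm[2], -upper_arm[1]))
    # Shoulder abduction-adduction: upper-arm elevation in the frontal (x-y) plane.
    shoulder_abduction = float(np.arctan2(upper_arm[0], -upper_arm[1]))
    return shoulder_flexion, shoulder_abduction, elbow_flexion
```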

After the acquisition of each elementary joint position, an additional calculation is performed to determine the desired velocity and acceleration. All these parameters are considered the desired trajectories for the manipulator robot control algorithm.
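Since the skeleton stream is sampled at 30 Hz, one simple way to obtain these derivatives is by finite differences of successive joint angles, as in the following illustrative sketch (the backward-difference scheme is an assumption; the actual LabVIEW implementation may use a different estimator).

```python
FRAME_PERIOD = 1.0 / 30.0   # the Kinect skeleton stream runs at 30 Hz

def differentiate(q_prev, q_curr, qd_prev, dt=FRAME_PERIOD):
    """Backward-difference estimates of joint velocity and acceleration.
    q_prev, q_curr: previous and current joint angles (rad);
    qd_prev: previous velocity estimate (rad/s)."""
    qd_curr = (q_curr - q_prev) / dt          # desired joint velocity
    qdd_curr = (qd_curr - qd_prev) / dt       # desired joint acceleration
    return qd_curr, qdd_curr
```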

3. Dynamic Model of Lynxmotion Robotic Arm

The dynamic model deals with the torques and forces causing the motion of the mechanical structure. In this section, the robotic arm dynamic model has been computed based on the Euler-Lagrange formulation. The potential and kinetic energies of each link have been computed by the two following equations:

$$P_i = m_i g^T p_i, \qquad K_i = \frac{1}{2} m_i v_i^T v_i + \frac{1}{2} \omega_i^T I_i \omega_i,$$

where $m_i$ is the $i$th link mass, $v_i$ is the $i$th link linear velocity, $\omega_i$ is the $i$th link angular velocity, $I_i$ is the $i$th link inertia tensor, and $p_i$ is the $i$th link position.

After further calculation, the dynamic model of a fully actuated robot manipulator is expressed as follows:

$$\tau = M(q)\ddot{q} + N(q, \dot{q}),$$

where $\tau$ is the vector of applied generalized torques, $M(q)$ is the symmetric and positive definite inertia matrix, $N(q, \dot{q})$ is the vector of nonlinear terms, and $q$, $\dot{q}$, and $\ddot{q}$ are the joint position, velocity, and acceleration vectors.

The vector of nonlinear terms arises from the centrifugal, Coriolis, and gravity effects described in the following equation:

$$N(q, \dot{q}) = B(q)\,[\dot{q}\dot{q}] + C(q)\,[\dot{q}^2] + G(q).$$

Hence, the robot manipulator dynamic equation can be written as

$$\tau = M(q)\ddot{q} + B(q)\,[\dot{q}\dot{q}] + C(q)\,[\dot{q}^2] + G(q),$$

where $B(q)$ is the matrix of Coriolis torques, $C(q)$ is the matrix of centrifugal torques, $[\dot{q}\dot{q}]$ is the vector of joint velocity products obtainable from $[\dot{q}_1\dot{q}_2,\ \dot{q}_1\dot{q}_3,\ \dots]^T$, and $[\dot{q}^2]$ is the vector obtainable from $[\dot{q}_1^2,\ \dot{q}_2^2,\ \dots]^T$.

In our work, a 3-DOF manipulator model has been developed using the Lagrange formulation. The goal was to determine the Coriolis, centrifugal, and gravity matrices for real-time implementation purposes.
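As an illustration of this derivation, the following Python/SymPy sketch applies the Euler-Lagrange procedure to a simplified two-link planar arm with point masses at the link ends and extracts the inertia matrix and the nonlinearity vector. This is only a reduced example under our own simplifying assumptions; the actual implementation uses the 3-DOF Lynxmotion geometry and inertia parameters.

```python
import sympy as sp

# Simplified 2-link planar arm used only to illustrate the Lagrange derivation.
t = sp.symbols('t')
m1, m2, l1, l2, g = sp.symbols('m1 m2 l1 l2 g', positive=True)
q1, q2 = sp.Function('q1')(t), sp.Function('q2')(t)

# Point-mass positions at the link ends.
x1, y1 = l1 * sp.cos(q1), l1 * sp.sin(q1)
x2, y2 = x1 + l2 * sp.cos(q1 + q2), y1 + l2 * sp.sin(q1 + q2)

# Kinetic and potential energies of the two links.
K = (m1 * (sp.diff(x1, t)**2 + sp.diff(y1, t)**2)
     + m2 * (sp.diff(x2, t)**2 + sp.diff(y2, t)**2)) / 2
P = m1 * g * y1 + m2 * g * y2
L = sp.simplify(K - P)

# Euler-Lagrange: tau_i = d/dt(dL/dq_i_dot) - dL/dq_i
tau = [sp.simplify(sp.diff(sp.diff(L, sp.diff(qi, t)), t) - sp.diff(L, qi))
       for qi in (q1, q2)]

# Inertia matrix M(q): coefficients of the joint accelerations.
qdd = [sp.diff(q1, t, 2), sp.diff(q2, t, 2)]
M = sp.Matrix(2, 2, lambda i, j: sp.simplify(sp.expand(tau[i]).coeff(qdd[j])))
# Remaining terms N(q, q_dot): Coriolis, centrifugal, and gravity effects.
N = sp.simplify(sp.Matrix(tau) - M * sp.Matrix(qdd))
print(M)
print(N)
```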

4. Control Design

Robot manipulators have highly nonlinear dynamics. For this reason, the design of a robust controller is required. In this section, we study the sliding mode motion controller.

The dynamic model of the robot manipulator is described as follows:

$$M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = \tau,$$

where $M(q)$ is the inertia matrix and $N(q, \dot{q}) = C(q, \dot{q})\dot{q} + G(q)$ groups the Coriolis, centrifugal, and gravity torques.

Let $q_d$ denote the desired trajectory. The tracking error is defined as

$$e = q_d - q.$$

Define the reference velocity $\dot{q}_r = \dot{q}_d + \Lambda e$, where $\Lambda$ is a positive definite matrix.

According to the linearity of the robot dynamics in its parameters, we obtain

$$\hat{M}(q)\ddot{q}_r + \hat{C}(q, \dot{q})\dot{q}_r + \hat{G}(q) = Y(q, \dot{q}, \dot{q}_r, \ddot{q}_r)\,\hat{\theta},$$

where $\hat{\theta}$ is the vector of estimated robot parameters and $Y$ is a regressor matrix.

Define $\dot{e} = \dot{q}_d - \dot{q}$. The sliding vector is selected as

$$s = \dot{e} + \Lambda e = \dot{q}_r - \dot{q}.$$

We propose the sliding mode control law as follows:

$$\tau = Y(q, \dot{q}, \dot{q}_r, \ddot{q}_r)\,\hat{\theta} + K_d s + K\,\mathrm{sgn}(s), \quad (10)$$

where $K_d$ is a positive definite gain matrix and $K = \mathrm{diag}(K_1, \dots, K_n)$ with $K_i > 0$.

Define the parameter estimation error $\tilde{\theta} = \theta - \hat{\theta}$, where $\theta$ is the vector of true robot parameters.

Theorem 1. The proposed controller guarantees the asymptotic stability of the system.

Proof. Select the Lyapunov function candidate

$$V = \frac{1}{2}\, s^T M(q)\, s.$$

Differentiating $V$ and replacing $\tau$ by its expression (10) yields

$$\dot{V} = s^T \big( Y\tilde{\theta} - K_d s - K\,\mathrm{sgn}(s) \big),$$

where the skew-symmetry property of $\dot{M} - 2C$ has been used, such that $s^T(\dot{M} - 2C)s = 0$.
And the gains $K_i$ are chosen such that $K_i \ge \big|[Y\tilde{\theta}]_i\big| + \eta_i$ with $\eta_i > 0$.
This yields

$$\dot{V} \le -s^T K_d s - \sum_i \eta_i |s_i| \le 0.$$

This proves the stability.
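For illustration, the control law can be evaluated numerically at each sampling instant as in the Python sketch below. The gain values, the model-based feedforward function, and the replacement of sgn(.) by a saturation with a boundary layer (to reduce chattering) are assumptions of this sketch rather than the exact LabVIEW implementation.

```python
import numpy as np

def smc_torque(q, qd, q_des, qd_des, qdd_des, model, Lambda, Kd, K, phi=0.05):
    """Sliding mode control law for a robot arm (illustrative sketch).
    model(q, qd, qr_dot, qr_ddot) must return the model-based feedforward torque
    (the Y*theta_hat term); Lambda, Kd, K are positive definite gain matrices."""
    e = q_des - q                        # tracking error
    ed = qd_des - qd                     # tracking error derivative
    qr_dot = qd_des + Lambda @ e         # reference velocity
    qr_ddot = qdd_des + Lambda @ ed      # reference acceleration
    s = ed + Lambda @ e                  # sliding vector
    # Saturation instead of sign to limit chattering (boundary layer phi, assumed).
    sat = np.clip(s / phi, -1.0, 1.0)
    return model(q, qd, qr_dot, qr_ddot) + Kd @ s + K @ sat
```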

5. Experiment Results

In the beginning, several packages had to be installed. Indeed, the JKI VI Package Manager add-ons, such as the Kinesthesia Toolkit for Microsoft Kinect and the LabVIEW Interface for Arduino (LIFA), together with the Control Design and Simulation Module, are strictly required for the implementation and for proper system functioning.

In this section, we present the LabVIEW-based human-machine interface. The virtual robot has been designed in the LabVIEW 3D virtual environment. The virtual robot has the same kinematic parameters as the real robot, such as joint axes and link distances. A user interface screen is reserved for displaying the virtual robot movement alongside the real robot.

The sliding mode control law has been successfully implemented with the parameters discussed in the previous section.

Starting from the gesture applied by the human arm, the Kinect sensor captures the human arm motion and abstracts it into 3D coordinates. Using the Kinect for Windows SDK, the skeleton data are provided and transmitted to the recognition algorithm. Based on the Kinesthesia Toolkit for Microsoft Kinect, the human arm joint positions, velocities, and accelerations are computed and considered the desired trajectories of the robotic arm. With the help of LabVIEW Control Design and Simulation, the developed dynamic model of the 3-DOF Lynxmotion robotic arm is implemented. A sliding mode control algorithm is also implemented, ensuring high performance in trajectory tracking. Furthermore, the computed torque is transmitted to the servomotors moving the mechanical structure of the robotic arm. Moreover, the desired joint positions are applied in the virtual environment to move the virtual robot in order to prove the compatibility of the desired motion in both the real and virtual environments. Hence, an augmented reality approach is proposed to validate the control system.

The system overview is presented in Figure 5.

In our experiment, only the first three joints are used; the remaining joints are not considered. Note that the user interface contains four screens.

The first screen is dedicated to the settings and hardware configuration. It displays the tracking process state, such as invalid positions, joint positions, 3D coordinates, and user position checking. A security algorithm has been developed to secure the tracking process. Note that when the security program indicates a skeleton detection error, for one reason or another, the last calculated angle (already validated) of each joint is sent to the robot until the operator solves the problem. Based on the feedback node, which returns the last valid result when the input is nonnumeric, the security algorithm is developed and abstracted into a sub-VI for import into the user interface. Figure 11 illustrates the imported sub-VI. In addition, the user can set the Arduino board communication parameters, such as the connection type, baud rate, and Arduino board type, and select the Arduino resource.
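The behaviour of this security mechanism can be summarized by the following illustrative Python sketch, which holds the last validated angle whenever the incoming value is nonnumeric or outside an assumed joint range (the range limits here are hypothetical).

```python
import math

class HoldLastValid:
    """Mimics the LabVIEW feedback-node behaviour described above (illustrative):
    when the skeleton tracker returns an invalid angle (NaN or out of range),
    the last validated value keeps being sent to the robot."""
    def __init__(self, initial=0.0, lower=-math.pi, upper=math.pi):
        self.last_valid = initial
        self.lower, self.upper = lower, upper

    def update(self, angle):
        if angle is not None and not math.isnan(angle) \
                and self.lower <= angle <= self.upper:
            self.last_valid = angle      # accept and memorize the new angle
        return self.last_valid           # otherwise resend the last valid one
```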

In the tracking process, the operator's arm can twitch, leading to unwanted random movements. In order to avoid this undesired movement, we propose a smoothing algorithm based on a Butterworth low-pass filter. In the same screen, the user can enable the smoothing algorithm by pressing the button control. The designed filter is abstracted into a sub-VI, as shown in Figure 10. The filtered trajectories, presented as the desired trajectories in Figures 15, 16, and 17, demonstrate the effectiveness of the filtering algorithm.
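A possible realization of such a smoothing stage is sketched below in Python using SciPy; the filter order and cut-off frequency are assumptions chosen for illustration, not the exact values used in the LabVIEW sub-VI.

```python
from scipy.signal import butter, lfilter, lfilter_zi

# Illustrative 2nd-order Butterworth low-pass filter for a joint-angle stream.
FS = 30.0          # skeleton sampling rate (Hz)
CUTOFF = 2.0       # assumed cut-off frequency (Hz)
b, a = butter(2, CUTOFF / (FS / 2.0), btype='low')

def make_streaming_filter():
    """Returns a function that filters one sample at a time (real-time use)."""
    zi = lfilter_zi(b, a) * 0.0          # zero initial filter state
    def step(x):
        nonlocal zi
        y, zi = lfilter(b, a, [x], zi=zi)
        return y[0]
    return step
```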

Finally, a stop button is placed in the developed interface to stop the tracking process and close all connected hardware. This screen is shown in Figure 6.

The second screen, presented in Figure 3, is designed for the user's video recording and the skeleton display. Figure 7 shows the third screen, which demonstrates the augmented reality. On this page, we show the virtual and real environments: the user and the virtual robot are displayed in the user video portion, whereas the virtual robot is presented in the virtual environment software.

The remaining screen shows the joint graphs and the operator's video recording during the experiment, as shown in Figure 8.

After running the application, the operator must select the first screen, called Kinect Feedback, to adjust his position. A user position check is triggered to secure the operator's position in the Kinect sensing field. A green signaling LED indicates that the recognition process is working properly; otherwise, the LED turns red to indicate a tracking problem. Figure 9 shows the executed screen during the user position checking process.

Based on the Kinect toolkit for LabVIEW, the operator's skeleton data are read by the LabVIEW interface. Then a gesture recognition method is executed to determine each joint angle. The calculated joint positions are considered the desired trajectories for the sliding mode control implemented in the developed software. Ultimately, the computed torque is sent to the Atmel-based Arduino board via the USB protocol. Therefore, the servomotors of the first three joints are actuated to move the mechanical structure of the Lynxmotion robot.
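The transmission step can be pictured with the following Python/pySerial sketch; the port name, baud rate, and comma-separated message format are assumptions, since the actual system relies on the LabVIEW Interface for Arduino rather than a custom protocol.

```python
import serial  # pySerial

def make_joint_sender(port="COM3", baudrate=115200):
    """Open the USB link to the Arduino and return a sender for three joint commands.
    The port name, baud rate, and message format are illustrative assumptions."""
    link = serial.Serial(port, baudrate, timeout=1)
    def send(q1, q2, q3):
        message = "{:.3f},{:.3f},{:.3f}\n".format(q1, q2, q3)
        link.write(message.encode("ascii"))   # one command line per control cycle
    return send
```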

A three-degree-of-freedom virtual robot is used to perform this experiment. Essentially, the augmented reality evaluates both the ability of the robot manipulator to copy human hand and arm motions and the ability of the user to operate the human-robot manipulator interface. The proposed controller is validated by the augmented reality: by building a virtual robot in LabVIEW alongside the real robot, as shown in Figure 12, we observe that the virtual robot movements are similar to those of the real robot. Therefore, the proposed control method is validated.

Figure 13 shows how the user can control the manipulator robot based on his detected skeleton.

A series of photographs depicting the execution of the gesture control process, taken every five seconds, together with real-time graphs, is presented in Figure 14.

In the teleoperation process, the convergence of the tracking error is very important, and the performance of our system relies on minimizing this error. Indeed, if an elementary joint cannot track its corresponding human arm joint, the robotic arm will not be able to perform the desired task because of the accumulation of the individual joint position errors. In this case, the end-effector location differs from the desired placement. Therefore, the user must adapt his gesture to the current end-effector position to reduce the placement error; such adaptive gestures require particular skill and expertise from the user. In another case, when the user applies the desired gesture with a variable velocity, the control algorithm must respect the applied velocities and accelerations to preserve the convergence of the tracking error. To overcome this constraint, a high-performance sliding mode controller is required.

Figures 15, 16, and 17 show the actual and desired trajectories of each joint as functions of time. The illustrated results show the effectiveness of the proposed control approach, with a response time of only about one second. Under experimental conditions, this time can be regarded as excellent. This control strategy should nevertheless be improved if we hope to exploit the system in telemedicine applications. Moreover, the developed teleoperation approach can be used for industrial applications, in particular for pick-and-place tasks in hazardous or unstructured environments. The real-time teleoperation system offers a beneficial platform for managing unexpected scenarios.

6. Conclusion

This paper applies gesture control to robot manipulators. In this work, we use the Kinect technology to establish a human-machine interaction (HMI). The developed control approach can be used in all applications that require real-time human-robot cooperation. Trajectory tracking and system stability are ensured by sliding mode control (SMC). Control and supervision are both provided by a user-friendly human-machine interface developed in LabVIEW. Experiments have shown the high efficiency of the proposed approach. Furthermore, the Kinect-based teleoperation system is validated by augmented reality. In conclusion, the proposed control scheme provides a feasible teleoperation system for many applications, such as manipulation in hazardous and unstructured environments. However, there are some constraints when applying the proposed method, such as the sensitivity of the desired trajectory generated by the human arm to random and unwanted movements. Indeed, the workspace and regional constraints must be respected in the control scheme by filtering out unacceptable trajectories. Moreover, for pick-and-place applications, the grasping task is not considered in the proposed telemanipulation system. In future work, this drawback can be overcome by using electromyography biosignals for grasping control.

Competing Interests

The authors declare that they have no competing interests.