Abstract

The development of assistive robots is gaining momentum in the robotic and biomedical fields. This research paper presents an assistive robotic system for object manipulation to aid people with physical disabilities. The robotic arm design is imported into a simulated environment and tested in a virtual world. This research includes the development of a versatile design and testing platform for robotic applications with joint torque requirements, workspace restrictions, and control tuning parameters. Live user inputs and camera feeds are used to test the movement of the robot in the virtual environment. The Robot Operating System (ROS) is used to create the environment and user interface. Live brain-computer interface (BCI) commands from a trained user are successfully harvested and used as an input signal to pick a goal point from 3D point cloud data and to calculate the goal position of the robot's mobile base, placing the goal point in the robot arm's workspace. The platform created allows for quick design iterations to meet different application criteria and for tuning of controllers to achieve the desired motion.

1. Introduction

Assistive robots for object manipulation can accept many different types of user-controlled inputs. One type of input uses brain signal data from a noninvasive electroencephalogram (EEG) cap. This has the benefit of being applicable to anyone regardless of physical disability, at the cost of intensive training requirements. To increase the accessibility of assistive robots with brain signal inputs, the robot must be designed to minimize the amount of training while keeping its complex object manipulation functionality.

An assistive robot is usually composed of an arm with multiple joints to manipulate objects, a mobile platform to travel in an environment, and a head with a stereovision system to recognize human faces and localize manipulable objects [1,2]. One type of manipulator control can be referred to as end-effector control, where a user specifies a goal position of the end-effector and path-solving algorithms automatically move the joints [3]. Control methods for robotic arms using BCI can achieve over 95% accuracy but require long-term training [3].

Once a robotic manipulator has a goal position, there are different analytical and numerical methods to solve for the joint angles [4]. To achieve the accuracy required to grasp an object, a semiautonomous control approach can be used, where the human-robot interface employs a vision system to control robot motion for part of the positioning task [5]. The vision system requires software or algorithms to interpret the incoming data. These can be object recognition software or other forms of image processing that may require calibration. Lowering the number of available user inputs adds complexity to the automated subsystems.

BCI involves the brain and a device sharing an interface that enables a communication channel between the brain and an externally controlled object [6]. To obtain EEG data, a cap with electrodes mapped to specific head locations is used, as in the example shown in Figure 1.

Due to backlash, eccentric joints, and the influence of driving-axle flexibility on the accuracy and repeatability of arm positioning, a compensation method should be used [7]. The use of joint torque sensing technologies on robotic manipulators adds complexity and may require the addition of compliant members to the robot [8]. Another compensation method can use the vision system, which requires the arm poses to be within the camera's field of view. Using the vision system for robot arm position feedback simplifies the hardware required for the design but adds an initial calibration step and physical markers on the robotic arm. For safe manipulation of a humanoid robot arm, self-loads and external loads are verified before an object is manipulated [9].

2. Robot Manipulator

The design of the robot arm is based on a person's hanging arm, with the intent of making the user input intuitive. To keep the first design iteration simple, the arm has a planar workspace with three joints: the shoulder, elbow, and wrist joints. The arm is rigidly attached to an omnidirectional three-wheel mobile robot, allowing the arm's planar workspace to translate and rotate. The end effector is attached at the end of the wrist joint through a triangular attachment 3D printed for this application, as shown in Figure 2.

The planned mounting of all components for the assistive robot system is shown in Figure 3. The robotic arm, battery pack, main computer, motor drivers, and D435 camera are mounted on top of the mobile base. Other sensors and electrical components used in the mobile base are placed inside the robot. This design allows for a cover to be made that hides all main components except for the robotic arm and the charging port. A slot for the camera is still required, but an infrared-transparent material can be used for protection.

3. Kinematics

The mathematical model of the robot arm is seen in Figure 5. With this schematic, constraints on the joint angles are added to avoid collisions and define the arm's workspace. Initial torque requirements and link lengths are calculated for a desired payload placed on the manipulator frame, located at the end of this open-loop kinematic chain.

To solve the forward and inverse kinematics, a model representing each relevant frame is created, as seen in Figure 6. The location of the camera, where the point cloud data are measured, is considered frame zero. Frame 1 represents the shoulder joint, Frame 2 represents the elbow joint, Frame 3 represents the wrist joint, and Frame T represents the tool, or hand, frame.

The distances and angles between each frame are measured with the values shown in Figure 7 and are summarized in Table 1. These measurements came from the existing SolidWorks model and any design iteration that changes these distances will affect the calculated transformation matrices. The transformation matrices translate distances from the camera frame to any other frame on the robot.

To derive the translation between each frame, an X-Y-Z fixed angles method is used [10]. Using values from Table 1, the transformation matrix and its inverse are given in (4) and (5).
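
As a concrete illustration, the sketch below builds a homogeneous transformation with the X-Y-Z fixed-angles convention and its closed-form inverse in Python with NumPy. The rotation angles, offsets, and the sample camera-to-hand transform are placeholder values for illustration, not the quantities listed in Table 1.

```python
import numpy as np

def xyz_fixed_transform(gamma, beta, alpha, translation):
    """Homogeneous transform from the X-Y-Z fixed-angles convention:
    rotations of gamma about X, beta about Y, alpha about Z (all about
    the fixed axes), followed by a translation."""
    cg, sg = np.cos(gamma), np.sin(gamma)
    cb, sb = np.cos(beta), np.sin(beta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    # Equivalent to R = Rz(alpha) @ Ry(beta) @ Rx(gamma)
    R = np.array([
        [ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg],
        [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg],
        [-sb,     cb * sg,                cb * cg],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation
    return T

def invert_transform(T):
    """Closed-form inverse of a homogeneous transform."""
    R, p = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ p
    return T_inv

# Hypothetical hand-frame pose expressed in the camera frame
# (placeholder numbers, not the Table 1 values).
T_cam_from_hand = xyz_fixed_transform(0.0, 0.0, np.pi / 2, [0.05, 0.10, 0.30])
goal_cam = np.array([0.4, 0.1, 0.6, 1.0])             # goal point in the camera frame
goal_hand = invert_transform(T_cam_from_hand) @ goal_cam  # same point in the hand frame
```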

The goal point is measured in the camera frame and is selected by the user through the point cloud camera feed. This point is translated to the hand frame. Based on the desired grabbing orientation, a grabbing point is calculated from the desired arm position. The grabbing point is then translated back to the camera frame, and the difference between the goal point and the grabbing point is used to move the robot to a new location. Once the robot moves to the desired location, the goal point is in the arm's workspace and the grabbing operation starts.

Since the arm's planar movement is defined in the z-y plane of the hand frame, the x component of the grabbing point will always be zero. The z component is fixed, since we assume the robot moves in a flat environment. This means that the x and z components of the grabbing point are known from the input goal point. To calculate the y component of the grabbing point, a grabbing pose is picked to determine the bounding equations. The schematic of a sample point and grabbing at the edge of the workspace is shown in Figure 8. The purple vector is the grabbing point measured from the hand frame before actuation. The orange vector is the z-offset between the hand frame and the axis of rotation of the shoulder joint. It is shown as a number here but is defined by the variables introduced earlier in Figure 7.

For the grabbing point to be at the edge of the workspace (Figure 8), the Pythagorean theorem is applied to the right triangle formed, as shown in (6). Equations (7) and (8) solve for the two remaining unknowns.

Since the mobile base can move in the x and y directions, the arm will only label a point unreachable if the z component is outside the workspace. This restriction is shown in (9).

The schematic for grabbing with a horizontal pose is shown in Figure 9. For this pose to be valid, equations (10), (11), and (12) must be true. These are used to solve for the joint angles and determine if the point is reachable.

With these equations, the arm joint angles and the trajectory vector for the mobile base can be solved for an input goal point. Since each grabbing pose has a different set of inverse kinematic equations, the desired grabbing pose must be selected before, or as a part of, the user input. The tests in this research use the grabbing at the edge of workspace pose.
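
The short Python sketch below illustrates the edge-of-workspace calculation implied by (6)-(9): given a goal point expressed in the hand frame, it checks the z restriction, computes the y component of the grabbing point from the Pythagorean relation, and returns the base displacement. The link lengths, z-offset, and sign conventions are assumptions made for illustration, not the Table 1 values or the exact form of the published equations.

```python
import math

# Hypothetical geometry (the real values come from Table 1 / Figure 7):
L1 = 0.30       # shoulder-to-elbow link length (m)
L2 = 0.25       # elbow-to-wrist link length (m)
L3 = 0.15       # wrist-to-tool-frame length (m)
Z_OFFSET = 0.10 # z-offset between the hand frame and the shoulder axis (m)

REACH = L1 + L2 + L3  # full extension used by the edge-of-workspace pose

def edge_of_workspace_plan(goal_hand):
    """Given the goal point (x, y, z) expressed in the hand frame, return the
    grabbing point in the hand frame and the base displacement that brings the
    goal to the edge of the planar (z-y) workspace. Returns None if the z
    component is outside the workspace (the restriction of (9))."""
    gx, gy, gz = goal_hand
    dz = gz - Z_OFFSET            # height of the goal relative to the shoulder axis
    if abs(dz) > REACH:           # unreachable in z: base motion cannot fix this
        return None
    # Pythagorean relation in the spirit of (6): REACH**2 = grab_y**2 + dz**2
    grab_y = math.sqrt(REACH**2 - dz**2)
    grab_point_hand = (0.0, grab_y, gz)      # x is always zero in the planar workspace
    # Base displacement = goal point minus grabbing point (zero z component)
    base_motion = (gx - 0.0, gy - grab_y, 0.0)
    return grab_point_hand, base_motion
```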

4. Simulation Environment

The software packages used are ROS and Gazebo. ROS is a framework for building robots, with useful tools to create control diagrams and graphical interfaces for a robot. Gazebo is a robot simulation tool that uses a physics engine for dynamic simulation and provides simulated sensor data for testing robots. With these open-source programs, we can import the existing design to test and tune different control algorithms. We can also use real sensors and live data to provide simulation commands and modify the design based on performance goals.

The visual model, shown in Figure 10, is imported from SolidWorks using an STL file format. This model was created before planning to simulate the robot and is used as a reference to design the collision and inertial models. The simulation environment uses the collision and inertial models to run the physics engine while the visual model is only a skin for the robot. Because of this, a robot design process should start with collision and inertial models and end with a visual model.

The collision model used in this project is shown in Figure 11. In this model, the locations of all joints and links are defined. The origin of each link and joint is referenced to the previous joint. This model uses simple geometry to define the collision surfaces of the robot. Once a robot design is finalized, the simple geometry can be replaced with the visual model to obtain more detailed collision points, but at that point, it would be more beneficial to finalize all testing in a real-world environment.

The inertial model defines the center of mass and the rotational inertias in all directions for a link. This is done by defining the link's 3D inertia tensor I, shown in (13). This equation is valid for a solid cuboid of width w, height h, and mass m.
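
For reference, the standard inertia tensor of a solid cuboid about its center of mass can be computed as in the short Python function below. The assumption here is that the box sides w, d, and h are aligned with the link's x, y, and z axes; the example mass and dimensions are placeholders rather than the values used for the robot.

```python
def cuboid_inertia(m, w, d, h):
    """Diagonal inertia tensor of a solid cuboid about its center of mass.
    w, d, h are the side lengths along x, y, z; m is the mass.
    These are the ixx/iyy/izz values entered in a link's inertial definition."""
    ixx = m * (d**2 + h**2) / 12.0
    iyy = m * (w**2 + h**2) / 12.0
    izz = m * (w**2 + d**2) / 12.0
    return [[ixx, 0.0, 0.0],
            [0.0, iyy, 0.0],
            [0.0, 0.0, izz]]

# Example: rough estimate for an arm link modeled as a 0.05 m x 0.05 m x 0.30 m box
# weighing 0.4 kg (placeholder numbers, not the values used in the paper).
print(cuboid_inertia(0.4, 0.05, 0.05, 0.30))
```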

By estimating the masses of links, the inertial model is defined for the entire robot and is shown in Figure 12. These values are used to calculate the torques on joints at any time during the motion of the arm.

5. Hybrid Control

ROS uses a control block diagram to communicate between different subsystems. Each node in the control block diagram can represent a robot model, a controller, an input or output, etc. These are usually stand-alone programs that are connected using topics. A topic is a named message channel that a node can publish or subscribe to. Figure 13 shows a part of the control block diagram used on the simulated model. The omnibot_arm node subscribes to the clicked_point topic, which publishes commands from the user interface. The user publishes a message to clicked_point using BCI commands through the user interface.
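
A minimal rospy sketch of this subscription pattern is shown below. The clicked_point topic is assumed here to carry geometry_msgs/PointStamped messages, as published by the RViz point-selection tool; the node name and callback contents are simplified assumptions about the omnibot_arm node, not its actual source.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PointStamped

def clicked_point_callback(msg):
    # The selected goal point, expressed in the frame given by msg.header.frame_id
    # (the camera frame in this setup).
    rospy.loginfo("Goal point: (%.3f, %.3f, %.3f)",
                  msg.point.x, msg.point.y, msg.point.z)
    # ...solve the inverse kinematics and publish joint goals here...

if __name__ == "__main__":
    rospy.init_node("omnibot_arm")
    rospy.Subscriber("clicked_point", PointStamped, clicked_point_callback)
    rospy.spin()
```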

The testing done focuses on a semiautonomous end-effector control mode, but the system allows for direct joint control through BCI for different applications. The omnibot_arm node solves the inverse kinematic equations to create a goal angle for each joint of the robot arm. The goal angle message is published to each motor’s controller. The controller uses tuned PID values to move each joint and the output is sent to Gazebo for visualization. The output of the omnibot_arm can also write to Jetson Xavier’s communication pins to send commands to external hardware. The trajectory_markers_node also solves inverse kinematic equations and adds visualization vectors to the user interface to show the location of the selected point, the grabbing point, and the goal position of the mobile base.

Figure 14 shows the user interface used to select a goal point from the 3D point cloud feed. This interface is made up of two displays: the visualization display on the left and the user-input screen on the right. The point cloud data feed is shown in both displays. The visualization display is a 3D environment used to verify selected points, visualize markers, and give the user a different perspective of the environment. The user-input screen is a 2D camera feed with the point cloud data as an overlay. A BCI-controlled cursor is bound to stay within the user-input display during operation.

Four of a user's BCI commands map to up, down, left, and right movements of the BCI-controlled cursor. The fifth BCI command maps to grab, while the last command maps to confirm the action. This mapping is summarized in Table 2. If the grabbing action is confirmed, a new point selected on the user-input screen will trigger a release action, where the robot moves to the desired goal point and places the object. A confirmation command is used here as well. The grabbing and releasing modes switch back and forth as each action is confirmed. The confirmation is time limited, meaning the user must confirm the action within a timeout period or the action will be invalidated.
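
A minimal Python sketch of this command mapping and the grab/release mode toggle with a confirmation timeout is given below. The command labels, cursor step size, and timeout value are illustrative assumptions rather than the settings used in the experiments.

```python
import time

# Table 2 mapping (command labels are illustrative; the actual BCI classifier
# outputs depend on the user's training).
CURSOR_MOVES = {"cmd_1": (0, 1), "cmd_2": (0, -1), "cmd_3": (-1, 0), "cmd_4": (1, 0)}
ACTION_CMD = "cmd_5"     # grab or release, depending on the current mode
CONFIRM_CMD = "cmd_6"
CONFIRM_TIMEOUT = 5.0    # seconds; placeholder value

class BCIInterface:
    def __init__(self):
        self.cursor = [0, 0]
        self.mode = "grab"          # alternates with "release" after each confirmation
        self.pending_action = None  # (mode, cursor position, timestamp)

    def handle_command(self, cmd):
        """Process one BCI command; return an (action, point) pair when an
        action is confirmed in time, otherwise None."""
        if cmd in CURSOR_MOVES:
            dx, dy = CURSOR_MOVES[cmd]
            self.cursor[0] += dx
            self.cursor[1] += dy
        elif cmd == ACTION_CMD:
            self.pending_action = (self.mode, tuple(self.cursor), time.time())
        elif cmd == CONFIRM_CMD and self.pending_action:
            mode, point, stamp = self.pending_action
            self.pending_action = None
            if time.time() - stamp > CONFIRM_TIMEOUT:
                return None                       # confirmation arrived too late
            self.mode = "release" if mode == "grab" else "grab"
            return mode, point                    # action to execute
        return None
```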

To tune the arm's joint controllers, a ROS tool called rqt is used. A PID position controller is used to generate the output to all arm joints. The values of the PID gains can be changed dynamically, making the tuning process quick. The interface for changing the PID values of different joints is shown in Figure 15.

A command is sent to a joint using a message publisher. The tuning process can be used to tune a single joint or to tune for more complex motions by moving multiple joints at once. The message can be a constant value or a time-dependent function. Both are shown in Figure 16, where the shoulder moves to a set angle while the elbow moves sinusoidally.
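
A short rospy sketch of such a tuning command publisher is shown below, sending a constant goal angle to the shoulder and a sinusoidal command to the elbow. The controller topic names and the std_msgs/Float64 message type follow the usual ros_control convention and are assumptions about the project's configuration.

```python
#!/usr/bin/env python
import math
import rospy
from std_msgs.msg import Float64

if __name__ == "__main__":
    rospy.init_node("joint_tuning_commands")
    # Topic names follow the usual ros_control pattern; the actual controller
    # names in the project may differ.
    shoulder_pub = rospy.Publisher("/shoulder_position_controller/command",
                                   Float64, queue_size=1)
    elbow_pub = rospy.Publisher("/elbow_position_controller/command",
                                Float64, queue_size=1)
    rate = rospy.Rate(50)  # Hz
    t0 = rospy.get_time()
    while not rospy.is_shutdown():
        t = rospy.get_time() - t0
        shoulder_pub.publish(Float64(0.8))                   # constant goal angle (rad)
        elbow_pub.publish(Float64(0.5 * math.sin(0.5 * t)))  # sinusoidal motion
        rate.sleep()
```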

Gazebo is used to simulate the robot's movement and verify the desired motion. The entire tuning setup is shown in Figure 17. The effects of gravity are seen in the actual movement of the shoulder joint, noted by the dark blue line. The joint lags on an upward motion but does not struggle to move downward with the given goal signal, noted by the light blue line. This is used to tune the motors, but it can also be used to select a larger motor, decrease the workspace reach, or limit the motion paths and joint speeds of the arm. Changing each parameter is an iterative process that ends when acceptable functionality is achieved.

6. Test Data and Results

Each arm motor is tuned, and due to the torque limits on each joint and the inertia of the arm, small changes in the PID parameters did not affect the output motion. This tool is useful in choosing the torque ratings of each motor if certain speeds or load capacities are the design constraints. The final PID gains used are stored in a file read by the robot definition source code.

Figure 18 shows the response of the shoulder joint to a goal angle command while the elbow joint undergoes sinusoidal motion. The dark blue line represents the motion of the shoulder joint, which rises to the goal angle with disturbances caused by the oscillating mass. These disturbances can be used to calculate the torque on the joint. Since stepper motors only move in discrete steps, the real joint would not experience any motion until the holding torque is exceeded. Other DC motors may exhibit the disturbance motion, depending on the motor characteristics. Figure 18 can also be used to determine rise time, overshoot, steady-state error, and other control-based design constraints used to tune the joints for a given application. The black lines mark the peaks of the disturbances, and the centroid of the disturbances is shown with the dashed orange line. This dashed orange line also shows that a steady-state error occurs due to the dynamic load present.

The kinematics of the robot were verified through the user interface created using Rviz. By selecting a point from the point cloud, visualization vectors were added to the environment to show the robot's actions. Figure 19 shows the markers created when a point is selected. The green vector points to the selected goal point from the camera frame. The red vector points to the same goal point from the hand frame, verifying the derived transformation matrix. The grabbing point is derived in the hand frame. This grabbing point is then translated to the camera frame, shown as the blue vector, using the inverse transformation matrix. The difference between the goal point and the grabbing point is calculated and shown as the white vector. The white vector is the goal position command sent to the mobile base. The movement of the base places the goal point at the grabbing point within the arm's workspace. The goal position command should always have a zero z-component, as we assume the robot cannot climb or descend in elevation.
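
A sketch of how one of these markers could be published from Python is shown below; the frame name, topic, colors, and endpoints are illustrative assumptions, not the actual trajectory_markers_node implementation.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Point
from visualization_msgs.msg import Marker

def make_arrow(start, end, rgb, marker_id, frame_id="camera_link"):
    """Arrow marker from start to end (both geometry_msgs/Point) in frame_id."""
    m = Marker()
    m.header.frame_id = frame_id
    m.header.stamp = rospy.Time.now()
    m.id = marker_id
    m.type = Marker.ARROW
    m.action = Marker.ADD
    m.points = [start, end]
    m.scale.x, m.scale.y = 0.01, 0.02   # shaft and head diameters (m)
    m.color.r, m.color.g, m.color.b, m.color.a = rgb[0], rgb[1], rgb[2], 1.0
    return m

if __name__ == "__main__":
    rospy.init_node("trajectory_markers_node")
    pub = rospy.Publisher("visualization_marker", Marker, queue_size=10)
    rospy.sleep(1.0)  # give RViz time to connect
    # White arrow: hypothetical base-displacement command from the origin to (0.3, 0.2, 0.0).
    pub.publish(make_arrow(Point(0, 0, 0), Point(0.3, 0.2, 0.0), (1.0, 1.0, 1.0), 0))
    rospy.spin()
```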

Figures 19 and 20 show sample points manually selected to verify distances and visualization markers. Figure 21 shows a stream of BCI data that moved the cursor and selected grabbing points. The BCI data stream was recorded by a user who underwent the BCI training process and was able to generate commands.

Verifying the inverse kinematics in the visualization environment ensures the simulations work correctly. To verify the motion of the real robot, the values selected through the camera feed must be calibrated. A simple test was devised due to the lack of access to a physical robot and is shown in Figures 22 and 23.

In this test, the object is selected from a known distance. The goal x and goal y distances that the mobile base must traverse to place the goal point at the grabbing point are read from the output of the visualization markers program. The physical camera is then manually moved by those goal distances. Figures 24 and 25 show the camera and visualization environment after this movement. The original goal point is read again to find the error in each direction. The values for this test are summarized in Table 3. With access to the physical robot, this test would be run multiple times to verify the calibration values and ensure the camera is level with the ground.

The Brunel hand has been previously tested with different gripping styles depending on the size and shape of an object, as shown in Figure 26. These grabbing modes are selected based on the object but have not been brought into the simulated environment. Bringing these grabbing motions into the simulated environment changes the inverse kinematics of the end-effector. In order to grab an object, the grabbing point must be offset from the goal point, since the goal point lies on the surface of the object. The offset is defined by the limits of the manipulator.

7. Conclusion

As pandemic waves continue around the world, remote diagnosis is becoming a new trend in modern medicine [12]. A new aspect of robotic applications is the human-machine interface for remote operation. A human operator controlling an assistive robotic device demands reliable and efficient operation in order to work in remote conditions. The main objective of our research in teleoperation was to improve the end-user experience by providing a higher level of information for remote operation [13]. In this study, BCI signals were processed and the neural signals were translated to operate a robotic system. Such systems are heavily studied to allow people with motor disabilities to control external devices through their brain waves [14].

The assistive robot design in this study was successfully imported into a working simulated environment, where the design was optimized by changing geometry, upgrading electrical components, and tuning motor controllers for specific applications. A live stream of BCI commands was used to select points on a user interface and generate goal position movements to perform a grabbing or releasing operation using end-effector control. The user interface provides an environment that allows a few input commands to generate arm, hand, and mobile base movements. These commands can also be used for direct joint control of the robotic arm, but such an application would require a wider range of inputs and more user training.

Physical tests on the robot were not possible due to pandemic restrictions; therefore, a simulation environment was developed using ROS and Gazebo. The errors in movement shown in Table 3 were attributed to camera calibration, measurement errors, and the leveling of the camera. The test will be refined and expanded once the camera is mounted on the mobile platform to check for these errors. The next step in this project is to add a complete assistive robot test. This includes a BCI-selected goal point, movement of the mobile base to place the goal point at the grabbing point, and confirmation of a successful grab or release.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.