Applied Bionics and Biomechanics
Volume 2018 (2018), Article ID 2063628, 14 pages
https://doi.org/10.1155/2018/2063628
Research Article

Development of a New Intelligent Joystick for People with Reduced Mobility

The National Higher School of Engineering of Tunis (ENSIT), Laboratory of Signal Image and Energy Mastery, LR13ES03 (SIME), University of Tunis, Tunis, Tunisia

Correspondence should be addressed to Yassine Rabhi; yassinerabhi@ymail.com

Received 5 August 2017; Revised 25 December 2017; Accepted 10 January 2018; Published 22 March 2018

Academic Editor: Jean Slawinski

Copyright © 2018 Makrem Mrabet et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Despite the diversity of electric wheelchairs, many people with physical limitations and many seniors have difficulty using the standard joystick; as a result, they cannot meet their mobility needs or travel safely. Recent assistive technologies can help give them autonomy and independence. This work deals with the real-time implementation of an artificial intelligence device to overcome these problems. Following a review of previous work, we present the methodology and process for implementing our intelligent control system on an electric wheelchair. The system is based on a neural algorithm that overcomes problems with standard joystick maneuvers, such as the inability to move correctly in one direction; this requires an appropriate methodology for mapping the position of the joystick handle. Experiments on a real wheelchair were carried out with real patients of the Mohamed Kassab National Institute Orthopedic, Physical and Functional Rehabilitation Hospital of Tunis. The proposed intelligent system gives good results compared with a standard joystick.

1. Introduction

Intelligent methodologies such as artificial neural networks, genetic algorithms, and fuzzy logic form a major basis for processing high-level inferences in order to control systems and to deal with nonlinear and complex problems. These techniques, based on human reasoning, play a pivotal role in many fields such as medicine, robotics, and engineering. Indeed, they are widely deployed in the development of smart healthcare technologies such as electric wheelchairs (EWC), where many intelligent prototypes have been designed to fit each user's requirements.

These users have difficulty driving their standard electric wheelchairs; they are either elderly, disabled, or severely impaired. In fact, a clinical survey [1] stated that
(i) 10% of patients surveyed cannot use the standard electric wheelchair in their daily activities;
(ii) 40% of regular users of electric wheelchairs find it difficult to manage tasks such as going through open doors, and almost 9% find this impossible without assistance;
(iii) 18% to 26% of patients cannot use a manual or motorized wheelchair because they lack the required motor skills, strength, or visual acuity.

In order to overcome these deficiencies, and hence to design new EWC that use advanced technologies to achieve high performance and safety, a significant number of researchers aim to develop new technologies and functionalities. In [2], the authors use voice recognition to move the EWC: based on a control database, the speech controller identifies the voice and applies the command. Another method used in the literature [3] consists of fusing an eye tracker with infrared (IR) technology; the user stares at the desired direction, and the IR sensors send the eye location to the controller. Head gesture is also an innovative method of controlling an EWC, using cameras, body sensors, or a facial recognition system to detect movement of the user's head [4]. In [5], the authors use a tube connected to the control system on which the user blows or sucks; the control depends on the amount of air and on the hardness or softness of the sips and puffs. The brain-computer interface (BCI) is a direct communication channel between the brain and a computer, where a set of electrodes attached to the scalp collects brain signals and transfers them to a computer for preprocessing. It is used in [4] to control the EWC.

Despite these highly developed technologies, the joystick remains the most widely used input device, and the others are little marketed: according to clinical research [1], more than 80% of EWC users operate their wheelchairs with a joystick, 9% use a head or chin control interface, and only 10% use all other types of control. All the interfaces mentioned above have predetermined commands to move the wheelchair; they allow only four different commands (forward, backward, left, and right) at a predefined speed. In addition, these control types are tiring for patients who drive for a long duration.

Furthermore, electric wheelchairs are now becoming smarter with the implementation of intelligent algorithms that assist in the driving process.

Therefore, in this work, a new method of controlling an EWC is proposed, based on the classic joystick to which we add an artificial intelligence algorithm that corrects the impaired movements of the patient's hand.

2. Problem Description

Difficulty operating the hand control is the main cause of the serious lack of safety when steering an electric wheelchair.

Firstly, in the case of abnormal muscle tone, poor endurance, or decreased strength, wheelchair users are in a position of evident weakness, and their ability to propel the wheelchair is severely deteriorated. Moreover, when the navigation space is crowded or the floor is not perfectly smooth and flat, crashes and falls become almost certain; the patient is more likely to have accidents and to be more severely injured, owing to the slow hand response when facing obstacles.

To remain safe, users must quickly sense and react to each situation; they must correct undesirable hand movements.

Poor control, falling from the wheelchair, or crashing into objects causes many physical and psychological problems for elderly and handicapped persons.

3. Related Work

Conventional wheelchairs rely on a joystick to perform numerous control tasks, and the user needs to be sufficiently dexterous to reach and operate it. Some patients are unable to manipulate the wheelchair joystick with their arms due to a lack of strength or to problems in the upper limbs caused by conditions such as Parkinson's disease or quadriplegia.

Many wheelchair-based works have been proposed to improve usability [6, 7]. The human interface for easy operation of the intelligent wheelchair is the most popular research issue, and joystick mapping is also an active area of investigation for wheelchair control. Rabhi et al. [8] describe an intuitive human interface that replaces the standard joystick with a visual interface, together with an earlier method of mapping the joystick. This method requires adding other equipment to the wheelchair, such as a camera, and additional processing to detect the patient's hand.

Many standard joysticks contain low-pass filters. In [9], the authors built in damping features suited to the rubber boot around the joystick. However, low-pass filters can only attenuate some unintended movements, such as tremor. Vishnu et al. present in [10] an algorithm that employs natural image features for indoor corridor navigation; it is then fused with the standard joystick input from the user for progressive assistance and trajectory correction. In addition, other wheelchair-based works have been proposed to improve ergonomics. In [11], the author proposes a new method of classifying human facial movement based on multichannel frontal biosignals. Another hands-free control system based on visual recognition of head movements is developed in [12]. The author in [13] developed a voice-operated electric wheelchair, with which the user controls the wheelchair by voice commands. Nevertheless, many types of interfaces still need to be developed to allow more variety, thereby reducing fatigue and stress, minimizing downtime due to adaptation, and maximizing the effective use of the control module. Unfortunately, most of these interfaces have predetermined controls to move the wheelchair, and they have a major defect: they require additional hardware to collect the signals. In addition, they allow only four different commands (front, rear, left, or right) at a preset speed.

4. Proposed Solution

In this work, persons with severe upper extremity impairments are considered. These people are unable to hold their wheelchair joystick conveniently or to specify their navigation path precisely so as to move toward the desired point with suitable acceleration. To address these limitations, we integrate into the electric wheelchair a behavior control system that shares control with the user and ensures safe navigation with real-time support. This control system is based on an artificial intelligence method: we use a recurrent neural network algorithm to design an intelligent controller that constantly corrects undesirable movements of the patient's hand and ensures smooth, safe navigation. Recurrent neural networks (RNNs) are very efficient at modeling signal sequences and are very useful for modeling hand movements [14, 15].

This study is aimed at people with severe disabilities of the upper limbs; we offer them an intelligent system, integrated into the standard wheelchair joystick, that allows them to move to the desired point with appropriate acceleration. The system uses an intelligent algorithm to control and assist variable-speed navigation (as with standard joystick navigation). The proposed control system requires no sensors or devices attached to the user's body and no special camera on the wheelchair.

For our target users, this modality seems very appropriate: the new intelligent joystick can be manipulated even with a tight hand posture (Figure 1). In addition, using the proposed smart interface requires less muscular effort than a standard joystick.

Figure 1: The posture of a dystonia patient’s hand.

In this section, a global overview of the system is given; it explains the strategy used and presents the proposed solution.

5. Materials and Methods

Figure 2 shows the overall architecture of the intelligent assistance system and the connection between its elements. In the next step, the use of each element of the system is explained in detail.

Figure 2: The proposed system structure.

The aim of the project is to develop an intelligent platform that is easily adaptable to any electric wheelchair and that helps the many people with physical limitations, and seniors, who have difficulty maneuvering their joysticks. A real prototype was created by adapting a conventional electric wheelchair that had already been used in other works in our laboratory, such as [8]. The intelligent control system consists of two devices: the standard joystick and a Raspberry Pi 2. In order to simulate algorithms and methodologies safely, a 3D simulator was also used. This simulator makes it possible to create a virtual world in which the user can drive an electric wheelchair whose behavior is similar to that of the real prototype, with the real hardware parameters presented in [8]. The virtual world was developed using the Unity 3D engine, which is used for 3D game modelling and provides a toolkit for creating 3D simulators and many video games. Wheelchair control mapping is very important for moving in the right direction; unfortunately, some people do not have the ability to use the standard controller, as presented previously. For this reason, different ways of mapping the joystick control were tested, and in this work we propose an algorithm based on a recurrent neural network to make the necessary corrections to this mapping.

The joystick controller is the tool used to manoeuvre an EWC with a high degree of flexibility; it is easily adjusted to suit the driving needs of the individual and allows the user to move independently, without the assistance of another person. It converts hand movement into mathematical quantities. Indeed, when the user begins propelling the wheelchair, the joystick receives Cartesian coordinates (x and y) and converts them into polar coordinates (distance and angle). The value received on the x-axis increases as the stick moves to the right, and the value on the y-axis increases as the stick moves away from the user. After the control process, the outputs are converted back into an analog voltage. Our EWC is equipped with a new VSI controller, which is a multimodule control system.

Joystick mapping adjusts the response behaviour of the electric wheelchair to suit the patient's control. The joystick handle movement is represented in a Cartesian coordinate system with two axes, x and y, where the x-axis and y-axis simulate the rotational and translational commands of the joystick, respectively.

In this work, a control command of 1 equates to maximal forward motion or left rotation, and a control command of −1 equates to maximal reverse motion or right rotation of the wheelchair. A command of 0 specifies no motion. Additionally, speed increases or decreases proportionally as the control command deviates from (0, 0).

The speeds of the left and right wheels (L, R) are represented by normalized values (from −1 to 1): positive if the wheels rotate forwards and negative if they rotate backwards. A joystick is a combination of two potentiometers [16], and any movement produces analog voltages. A voltage variation on the x-axis displaces the wheelchair to the right or left, and a variation on the y-axis varies the speed and produces forward or backward movement. For simplicity, in the rest of the paper the control command is represented in a polar coordinate system by two numbers, ρ and θ, where ρ is the distance of the handle from the central position of the joystick and θ is the angle with the reference axis.
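The conversions described above can be sketched as follows. The Cartesian-to-polar step follows the text directly; the differential-drive mixing rule that produces the wheel speeds (L, R) is our assumption, since the paper does not specify the VSI controller's internal mapping.

```python
import math

def joystick_to_polar(x, y):
    """Convert normalized joystick deflection (x, y) in [-1, 1]
    to polar coordinates (rho, theta).

    rho   -- distance of the handle from the centre (0 = rest)
    theta -- angle with the reference axis, in radians
    """
    rho = min(math.hypot(x, y), 1.0)   # clamp to the unit circle
    theta = math.atan2(y, x)
    return rho, theta

def polar_to_wheels(rho, theta):
    """Map a polar command to normalized left/right wheel speeds
    (L, R) in [-1, 1] using a simple differential-drive mixing rule
    (an assumption for illustration only).
    """
    forward = rho * math.sin(theta)    # y component: translation
    turn = rho * math.cos(theta)       # x component: rotation
    left = max(-1.0, min(1.0, forward + turn))
    right = max(-1.0, min(1.0, forward - turn))
    return left, right
```

With this mixing rule, pushing the stick straight forward (x = 0, y = 1) drives both wheels forward at full speed, while a pure rightward deflection spins the wheels in opposite directions.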

In the design of the proposed intelligent joystick, any change of position produces a proportional analog voltage. The voltage variation on the x-axis allows the wheelchair to move to the right or left, and a variation on the y-axis allows the wheelchair to vary its speed, that is, to move forward or backward. Table 1 shows the voltage ranges produced by the joystick.

Table 1: Intelligent wheelchair voltage ranges.

The relationship between the output of the controller and the voltage applied to the motors of the EWC is illustrated by

6. The Smart Joystick

For any control system, calibration is always the first task to be performed. It is necessary because it is difficult to align the coordinates of the hand perfectly with the steering system behind it. A calibration algorithm was therefore developed after identifying the sources of joystick movement errors. Several sources of error affect the ρ and θ coordinates produced by the proposed joystick; the pathological body state of the patient and noise are the most important. Any of these errors can produce incorrect data and must therefore be compensated.

Given the diversity of problems faced by disabled users when driving the wheelchair, we add to the standard joystick an artificial intelligence algorithm, turning it into a smart controller able to correct these problems and thereby make the wheelchair easier to drive.

Deep learning is a branch of machine learning loosely inspired by how the brain works.

After thorough study and research, we chose the open-source TensorFlow library for this application. TensorFlow is a machine-learning library used across Google for applying deep learning to many different areas.

In our case, we used this technique by first carrying out driving tests for each disability and deducing the possible causes of the driving problems. Then we fed these data (the ideal displacement and the displacement produced by the disabled user) into a recurrent neural network learning algorithm in order to obtain an optimal model that corrects the different errors appearing during the driving test.

Finally, this recurrent neural network must be embedded in the joystick system, which gives each disabled person his or her own smart joystick.

6.1. Data Collection

The patient is asked to move the joystick in different directions, following a small circle moving on a screen. We selected 34 points to scan all directions; the points shown in Figure 3 represent the desired movements of the map that the patient must follow. The objective of this test is to scan all areas of the joystick. This technique was thoroughly tested in [8] and gives good sample-collection results. Each movement lasts 10 seconds, and the test is repeated 10 times to collect all possible positions for each desired position. The results obtained are two data matrices of joystick angle and amplitude values versus time. These movements are saved and used as training sets for the recurrent neural network.
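A collection grid of this kind can be sketched as follows. The exact layout of the paper's 34 target points is not given, so the even distribution over two amplitude rings used here is purely illustrative.

```python
import math

def target_positions(n_points=34, amplitudes=(0.5, 1.0)):
    """Generate target joystick positions (rho, theta) that scan all
    directions, in the spirit of the data-collection test. The layout
    (directions spread evenly over a few amplitude rings) is an
    assumption; the paper does not specify how its 34 points are placed.
    """
    targets = []
    base, rem = divmod(n_points, len(amplitudes))
    for k, rho in enumerate(amplitudes):
        count = base + (1 if k < rem else 0)   # distribute leftovers
        for i in range(count):
            theta = 2 * math.pi * i / count    # evenly spaced angles
            targets.append((rho, theta))
    return targets
```

Each generated (ρ, θ) pair would then be displayed as the moving circle for 10 seconds while the patient's actual joystick trace is recorded alongside it.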

Figure 3: The desired positions of the joystick.
6.2. Neural Network Corrector

The training set of the recurrent neural network is composed of pairs of data: the desired positions, represented by the vector ρd, and the corresponding positions given by the patient, represented by the vector ρ, and likewise for the vectors θd and θ.

The neural network was designed using Python 2.7 with Levenberg-Marquardt optimization [17]. The architecture of the neural network used in this study includes an input layer of two nodes, the vector (ρ, θ). The number of hidden layers and the number of nodes are configurable according to the patient and their pathology, after several tests with hyperbolic-tangent sigmoid functions. The output layer represents the corrected coordinates. Figure 4 illustrates the implementation steps for the proposed joystick used in our experiments.

Figure 4: Synoptic of the smart joystick.

In the learning step, the neural network training set consists of data pairs: the desired positions, represented by the vector (ρd, θd), and the corresponding positions given by the user, represented by the vector (ρ, θ) (Figure 5).

Figure 5: Learning phase of the intelligent joystick.

The RNN training model is illustrated in Figure 5. It has many layers of information beginning with an input layer. In this layer, normalized characteristic data is transferred to the model.

The output layer consists of two nodes that provide the predicted, corrected data. Only one hidden layer was used. The weights are adjusted with a hyperbolic-tangent transfer function, and all layers have a bias. Training proceeds by minimizing the mean square error (MSE).
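A minimal sketch of such a corrector network is given below: two inputs (ρ, θ), one tanh hidden layer with bias, a linear two-node output, and training by minimizing the MSE. Plain gradient descent is used here only to keep the sketch short; the paper's actual training uses Levenberg-Marquardt optimization.

```python
import numpy as np

def train_corrector(X, Y, hidden=8, lr=0.05, epochs=3000, seed=0):
    """Train a one-hidden-layer corrector: X holds the (rho, theta)
    pairs recorded from the patient, Y the desired (rho_d, theta_d).
    Returns a function mapping raw commands to corrected commands.
    (Gradient descent stands in for the Levenberg-Marquardt optimizer
    actually used in the paper.)
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        P = H @ W2 + b2                     # predicted coordinates
        E = P - Y                           # error term of the MSE
        gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
        dH = (E @ W2.T) * (1 - H ** 2)      # back-propagate through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1      # gradient-descent updates
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2
```

In deployment, the returned function sits between the raw joystick reading and the motor controller, replacing the patient's distorted command with the learned corrected one.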

A supervised feed-forward algorithm was used. In addition, the number of hidden layers and the number of nodes in each layer are selected by cross-validation to obtain the optimal RNN structure.

The training set is divided into five data sets of equal size (five folds), and five training/validation iterations are performed, each using four folds for training and one fold for validation. Five cross-validation experiments are performed to select the optimal number of layers and of nodes per layer. Training stops when the error is less than or equal to a fixed error threshold (MSE).
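The fivefold splitting described above can be sketched as follows; the shuffling seed and generator interface are implementation choices of this sketch, not part of the paper.

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Split sample indices into five folds of (nearly) equal size and
    yield (train, validation) index pairs: each of the five iterations
    uses four folds for training and the remaining fold for validation.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)       # shuffle before splitting
    folds = np.array_split(idx, 5)
    for k in range(5):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, val
```

Each candidate network structure would be trained and scored on all five splits, and the structure with the lowest average validation MSE retained.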

Evaluation results are produced with a new data set (called the test set), in which the user is asked to follow the movement of a small circle that appears on the screen in positions other than those of the collection step.

6.3. Materials

In order to test our new smart joystick without endangering the patient, we used a 3D simulator. With this simulator, a virtual world can be created in which a user drives an electric wheelchair whose behavior is similar to the real prototype. The virtual world was developed using Unity 3D, which provides a toolkit for creating a 3D simulator with real parameters. Wheelchair control mapping is very important for moving in the right direction. The proposed hand controller will be integrated into a Raspberry Pi 2 on a real EWC. This wheelchair has already been used in various research projects in our laboratory, and we continue to integrate new control systems and functionalities into it. It is basically composed of four wheels, a battery, a joystick, a controller chip, and two motors, each motor controlling one of the rear wheels.

The proposed controller will be implemented in a real EWC with the parameters indicated in Table 2. This wheelchair has already been used in various research projects developed in our laboratory [18, 19].

Table 2: Characteristics of the proposed intelligent wheelchair.
6.4. Participants

The aim of this research is to develop a comprehensive wheelchair control method that ensures safety while using as few sensor resources and as little computing power as possible. We focus on the development of an intelligent human-machine interface, adaptable to the difficulties encountered by the user of the electric wheelchair. The therapist must proceed systematically, either by progressive steps or by regressive phases, taking certain parameters into account, among them the functional evolution of the gesture and the tremors of the hand.

In our case, we selected the following patient criteria for our evaluation:
(a) Inclusion criteria. Patients will be included if the following criteria are satisfied:
(i) men or women;
(ii) different patient ages (over 8 years).
(b) Exclusion criteria. Patients will not be considered if at least one of the following criteria applies:
(i) pregnant women;
(ii) persons deprived of their liberty;
(iii) inclusion in another research protocol;
(iv) voluntary withdrawal by the patient;
(v) discontinuation of the trial by explicit decision of the doctor.

In this project, we are interested in the functional evolution of the movements. After validation of a clinical protocol (reference 06.06.2015) by the University of Tunis and the Mohamed Kassab National Institute Orthopedic, Physical and Functional Rehabilitation Hospital of Tunis, various experiments were launched. Table 3 shows the characteristics of the participants in the field trial.

Table 3: Patient characteristics.

The GMFCS (Gross Motor Function Classification System) [20] assesses children and youth with cerebral palsy on five functional levels (from I, minor difficulties, to V, major difficulties). It is useful because it provides clinicians with a clear description of the patient's current motor function and gives an idea of what equipment or mobility aids a child may need in the future, such as crutches, walking frames, or wheelchairs.

Level V means that children have physical disabilities that limit voluntary control of movement and the ability to maintain head position. The patient cannot sit or stand alone, even with proper equipment, and cannot walk independently, even if assisted mobility can be used.

FIM [21] measures the patient’s dependence in terms of motor, cognitive, psychological, and behavioural abilities by assessing limitations and needs for assistance. It has an ordinal grid of 18 elements, including communication and social cognition.

Figure 6 shows the patients who passed the real-time test of the intelligent wheelchair. They have various types of disabilities.

Figure 6: Real participants in the trial phase.

7. Experimental Results

The experiments used the virtual simulator (Figure 7) to test the users' ability to drive the wheelchair with the standard joystick and with the proposed joystick. A circuit was defined in the simulator, with objects to be collected (by passing through them) placed along the way. These objects (18 coins) determine the trajectory that the wheelchair should follow. The path, shown in the figure below, confronts the user with 90° turns and narrow passages 100 cm to 150 cm wide. The manoeuvres imposed are therefore delicate and require special attention to avoid collisions. This course allows analysis of the behavior of the human-machine system in a very constrained, congested environment, a situation frequently encountered indoors (apartments, etc.).

Figure 7: The training trajectory is divided into three parts: driving in a straight line, right turn, and left turn.

During the experimental phase, several measures were taken to determine human capacity in relation to the techniques used. Some indicators have been proposed to analyze the performance of smart wheelchairs [22–24]. These indicators are as follows:
(i) the movement signals of the joystick;
(ii) travel time (T): the time required to complete the mission;
(iii) the trajectory of the geometric centre of gravity of the electric wheelchair;
(iv) the number of reference coins (NR) crossed (the total number of coins is 18);
(v) the number of collisions (NC) during the mission;
(vi) the average collision time (TC), corresponding to the time spent by the user in collision with an obstacle;
(vii) the average speed (V) during movement.
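Aggregating these indicators for one test run can be sketched as follows; the input arguments represent a hypothetical log format from the simulator, not the paper's actual data schema.

```python
def summarize_run(path_length_m, travel_time_s, coins_crossed,
                  collision_durations_s):
    """Aggregate the driving indicators for one test run.

    path_length_m         -- distance travelled (metres)
    travel_time_s         -- time to complete the mission (seconds)
    coins_crossed         -- reference coins crossed (max 18 on this course)
    collision_durations_s -- duration of each collision (seconds)
    """
    n_collisions = len(collision_durations_s)
    return {
        "T": travel_time_s,                               # travel time
        "NR": coins_crossed,                              # coins crossed
        "NC": n_collisions,                               # collision count
        "TC": (sum(collision_durations_s) / n_collisions  # mean collision time
               if n_collisions else 0.0),
        "V": path_length_m / travel_time_s,               # average speed (m/s)
    }
```

As the text notes, these indicators are not independent: a run with more collisions will also show a larger T and a lower V, since recovery maneuvers add time.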

These data are not independent: an increase in the number of collisions increases travel time, since a few extra maneuvers are needed to recover from each collision.

7.1. Joystick Calibration

The figure below shows the data given by the first patient during the data collection phase. It shows, respectively, the displacements in polar coordinates (blue) and the desired signal (red). The superposition of the signals reveals this patient's hand-movement limitations in every direction compared with the reference signal. After collecting the patient's control signals, we correct the gaps between these signals and the reference using the proposed RNN algorithm. To do this, we trained the RNN on the data recorded from the first patient and the desired data until the mean square error was minimized.

An overview of the parameters and their values for each patient is given in Table 4.

Table 4: Structure and training results for the neural network models.

The evaluation results of the RNN are obtained with a new set of data (referred to as the test set), in which the patient is asked to track the movement of a blue circle that appears on the screen in positions other than those of the collection phase. Figure 8 illustrates these results.

Figure 8: Data recorded during the data collection phase of the first patient and with recurrent neural network corrector in the polar base.
7.2. Trajectory Analysis

After creating the intelligent joystick, we test its performance against the standard one. To do this, we ask a professional and a patient to follow a reference trajectory that appears five times in the 3D prototype. The trajectories followed are shown in Figures 9–11.

Figure 9: Plots of trajectories as driven by patients with standard driving mode within the scope of experimental test runs.
Figure 10: Plots of trajectories as driven by first test patient with assisted driving mode within the scope of experimental test runs.
Figure 11: Plots of trajectories as driven by test patients with assisted driving mode within the scope of experimental test runs.

Figure 12 presents the patient's speed changes during the manoeuvre. It is clear that the correction algorithm for abnormal movements is very relevant for this patient. It is also noted that adding the intelligent corrector increases the command amplitudes. This leads us to conclude that the proposed neural corrector significantly reduces the time required to complete the navigation task.

Figure 12: Recorded data from the first patient during the manoeuvre in test number 5.

The Hausdorff distance (HD) calculation presented in Table 5 measures the Euclidean distance between the patient's trajectory and the desired path. This algorithm makes it possible to estimate the geometric similarity of trajectories [25]. It returns a nonnegative number (a distance): two trajectories are similar if their distance is zero, and a larger distance indicates dissimilarity.
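The symmetric Hausdorff distance between two sampled trajectories can be computed as below; the brute-force pairwise-distance approach is a straightforward sketch, not necessarily the implementation used in the paper.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two trajectories A and B,
    given as (n, 2) arrays of points. Zero means the trajectories
    coincide; larger values indicate dissimilarity.
    """
    # pairwise Euclidean distances between every point of A and of B
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    d_ab = D.min(axis=1).max()   # farthest point of A from B
    d_ba = D.min(axis=0).max()   # farthest point of B from A
    return max(d_ab, d_ba)
```

Applied to a driven trajectory and the reference path, this yields the per-run similarity figure reported in Table 5.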

Table 5: Performance indices from the users’ paths.

The behavioral indicators are given in Table 5, from which it can be concluded that all three patients experienced driving difficulties; these difficulties are fairly clear in the number of collisions measured. We also note that the patients become increasingly tired, which results in a loss of time whatever the distance travelled.

The table also shows an improvement after commissioning the intelligent joystick. This improvement is related to the number of patient transit points, travel time, distance travelled by patients, and the number of collisions.

These results highlight the effectiveness and the reliability of our proposed system, which guarantees a safe navigation for the disabled patient using this EWC.

In this simulation, a clear improvement in performance with our new assisted navigation mode, compared with the standard mode, is demonstrated. This comparison is based on the efficiency measures: the average speed and the number of collisions show that the new strategy generally has a positive effect on control, and as a result execution times are considerably reduced. In addition, the patient acts with the correct amplitude as well as correct steering control, and there is less variation in the control signal sent to the wheelchair. Another indicator, the Hausdorff distance between trajectories, shows large differences in control behavior when using the proposed technique. Note that the effectiveness of the intelligent joystick developed for each patient was confirmed by the doctor during the test phase.

In this context, we built a 3D simulator, manipulated by the intelligent joystick, to drive the electric wheelchair with real parameters. Building a real electric wheelchair with an intelligent joystick has become feasible and easy thanks to high-performance technology.

To justify our concept, it was tested on the same wheelchair, with the same neural structure, in a second, more difficult 3D scenario: whereas the first scenario presented navigation in an indoor environment, the second takes place outdoors, as shown in Figure 13. The results of these experiments confirm the robustness and efficiency of the proposed intelligent interface.

Figure 13: Comparison between the trajectories of the second patient with and without the proposed intelligent joystick: (a) Virtual environment. (b) Trajectories of the first user. (c) Trajectories of the second user. (d) Trajectories of the third user.

To validate the proposed intelligent joystick, we carried out a real driving test on the electric wheelchair. In practice, once the patient has shown in the virtual simulation that he or she can drive the wheelchair safely, we verify this result through actual driving. This approach was approved by a medical and medical-technical team of the Mohamed Kassab National Institute Orthopedic, Physical and Functional Rehabilitation Hospital of Tunis.

During the experimentation, each patient had to drive the electric wheelchair with his or her own smart joystick through the following steps. First, the patient performs real tests in open spaces at a reduced speed and is then allowed to test at higher speeds. Next, the driving tests take place in the presence of static and dynamic obstacles. Finally, the patient carries out his or her daily activities without modification of the environmental conditions, including the movement of people around the wheelchair and many situations with constrained space to operate in, such as doors or narrow passages around equipment or people (see Figure 14). Note that the patient moves from one driving-test step to the next only after validation by the medical and technical team on the basis of a statistical study.

Figure 14: Intelligent wheelchair controls in real environments.

The data of the navigation performance metrics are recorded in Table 6.

Table 6: Performance indices for assessing a simulated wheelchair driven by the proposed joystick.

After completing the trials with the new joystick, participants were asked to rate their level of satisfaction. To do this, we used the System Usability Scale (SUS). The SUS, described in [26], is a ten-question scale for validating usability tests and is commonly used in scientific research projects [27–30].
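The standard SUS scoring rule [26] converts the ten Likert responses (1 to 5) into a 0–100 score: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the raw sum is multiplied by 2.5. A minimal Python sketch (illustrative only, not the questionnaire software used in the study):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (Brooke, 1996).

    `responses` is a list of ten Likert ratings from 1 (strongly
    disagree) to 5 (strongly agree), in questionnaire order.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) contribute r - 1;
        # even-numbered items (index 1, 3, ...) contribute 5 - r.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```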

The three participants were very satisfied with the proposed intelligent joystick, reporting satisfaction rates of 92%, 95%, and 84%, respectively. In interviews, all patients agreed that they experienced less physical strain and needed less effort to navigate with our smart joystick.

8. Compared with Other Interfaces

Many wheelchair control methods have been developed to date; they can be classified as intrusive or nonintrusive, as presented in Table 7. Intrusive methods use electrodes, glasses, a headband, or a cap with infrared/ultrasound emitters to measure the user’s intention from changes in the ultrasound waves or infrared reflections [31–33].

Table 7: Wheelchair controls in literature.

On the other hand, nonintrusive methods do not require any additional devices attached to the user’s body. As shown in Table 7, voice-based and vision-based methods are the basic nonintrusive methods. Voice control is a natural and friendly access method; however, other noises in a real environment can cause command recognition failures and hence safety problems [34–36]. Consequently, much research has focused on vision-based interfaces, where control is obtained by recognizing the user’s gestures through processing of images or videos captured by a simple or special camera. With such interfaces, face or head movements are most widely used to convey the user’s intentions. When a user wishes to move in a certain direction, it is natural to look in that direction: forward movement is initiated by nodding the head, while turning is generated by the head direction. However, such systems have a major drawback: they are unable to discriminate between intentional and unintentional behaviour. For example, it is natural for a user to look at an obstacle as it gets close, yet the system will turn and head towards that obstacle [37]. Moreover, these controls are too tedious for patients who have to drive for a long time, since the patient must, for example, blink an eye, make facial gestures, or turn the head throughout the entire path.

Furthermore, most of the interfaces mentioned above rely on predetermined commands to move the wheelchair, allowing only four different commands (forward, backward, left, and right) at a predefined speed, which is sometimes an annoying limitation for the user.

In our case, the proposed control system is nonintrusive: it requires neither sensors nor devices attached to the user’s body, nor a special human-machine interface on the wheelchair. It also offers variable speed control in all directions and thereby provides the same benefits as the classical electric wheelchair, which remains the most widely used to date.
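To make "variable speed control in all directions" concrete, a conventional differential-drive mixing rule maps a proportional joystick deflection to left/right wheel speeds. The sketch below is a generic kinematic illustration, not the paper's neural mapping; the maximum speed `v_max` is a hypothetical parameter:

```python
import math

def joystick_to_wheel_speeds(x, y, v_max=1.2):
    """Map a joystick deflection (x, y) in [-1, 1]^2 to left/right
    wheel speeds (m/s) of a differential-drive wheelchair.

    Arcade-style mixing: y sets the forward speed, x sets the turn.
    """
    # Clamp the deflection magnitude to the unit disc so that
    # diagonal pushes do not exceed the maximum speed.
    mag = math.hypot(x, y)
    if mag > 1.0:
        x, y = x / mag, y / mag
    left = v_max * (y + x)
    right = v_max * (y - x)
    return left, right

print(joystick_to_wheel_speeds(0.0, 1.0))  # full push forward: both wheels at v_max
print(joystick_to_wheel_speeds(0.5, 0.0))  # sideways push: wheels counter-rotate (turn in place)
```

Because the output varies continuously with the deflection, any intermediate speed and heading is reachable, unlike the four-command interfaces discussed above.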

9. Conclusion and Perspectives

Nonlinear problems with uncertainties are quite complex. However, with artificial intelligence (AI) techniques, we can create intelligent systems that deal with these uncertainties. Research has shown that the RNN is one of the most efficient methods for imitating human behaviour and making decisions based on human strategies.

Based on AI methods, we created an intelligent control system and integrated it into an EWC to ensure the safety of its disabled users. This system not only corrects the hand movements but also gives confidence to patients. The designed system therefore achieved our goals, helping users to reach the desired movements.

As this field of research is extremely rich, our work can be widely extended in several directions: adding further safety features, such as an intelligent control able to pass through open doors; performing real-time object detection and adjusting the input speed values to avoid collisions while moving; or even detecting the user’s emotions and determining their influence on the navigation state.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors remain thankful to all the teams of the Mohamed Kassab National Institute of Orthopedics, Physical and Functional Rehabilitation Hospital of Tunis (MKNIOPFRT). Thanks are due to the clinical agreement signed between the MKNIOPFRT and our university laboratory (SIME) (Ref. 06.06.2015).

References

  1. L. Fehr, W. E. Langbein, and S. B. Skaar, “Adequacy of power wheelchair control interfaces for persons with severe disabilities: a clinical survey,” Journal of Rehabilitation Research and Development, vol. 37, no. 3, pp. 353–360, 2000.
  2. G. Pires and U. Nunes, “A wheelchair steered through voice commands and assisted by a reactive fuzzy-logic controller,” Journal of Intelligent and Robotic Systems, vol. 34, no. 3, pp. 301–314, 2002.
  3. G. C. Rascanu and R. Solea, “Electric wheelchair control for people with locomotor disabilities using eye movements,” in 15th International Conference on System Theory, Control and Computing, pp. 1–5, Sinaia, Romania, October 2011.
  4. Z. F. Hu, L. Li, Y. Luo, Y. Zhang, and X. Wei, “A novel intelligent wheelchair control approach based on head gesture recognition,” in 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), Taiyuan, China, October 2010.
  5. I. Mougharbel, R. El-Hajj, H. Ghamlouch, and E. Monacelli, “Comparative study on different adaptation approaches concerning a sip and puff controller for a powered wheelchair,” in 2013 Science and Information Conference, pp. 597–603, London, UK, October 2013.
  6. R. A. Braga, M. Petry, A. P. Moreira, and L. P. Reis, “Concept and design of the intellwheels platform for developing intelligent wheelchairs,” in Informatics in Control, Automation and Robotics, pp. 191–203, Springer, Berlin Heidelberg, 2009.
  7. R. C. Simpson, “Smart wheelchairs: a literature review,” Journal of Rehabilitation Research and Development, vol. 42, no. 4, pp. 423–436, 2005.
  8. Y. Rabhi, M. Mrabet, and F. Fnaiech, “Intelligent control wheelchair using a new visual joystick,” Journal of Healthcare Engineering, vol. 2018, Article ID 6083565, 20 pages, 2018.
  9. B. E. Dicianno, S. Sibenaller, C. Kimmich, R. A. Cooper, and J. Pyo, “Joystick use for virtual power wheelchair driving in individuals with tremor: pilot study,” The Journal of Rehabilitation Research and Development, vol. 46, no. 2, pp. 269–275, 2009.
  10. V. K. Narayanan, F. Pasteau, M. Marchal, A. Krupa, and M. Babel, “Vision-based adaptive assistance and haptic guidance for safe wheelchair corridor following,” Computer Vision and Image Understanding, vol. 149, pp. 171–185, 2016.
  11. L. Wei, H. Hu, and K. Yuan, “Use of forehead bio-signals for controlling an intelligent wheelchair,” in 2008 IEEE International Conference on Robotics and Biomimetics, pp. 108–113, Bangkok, Thailand, February 2009.
  12. F. A. Kondori, S. Yousefi, L. Liu, and H. Li, “Head operated electric wheelchair,” in 2014 Southwest Symposium on Image Analysis and Interpretation, pp. 53–56, San Diego, CA, USA, April 2014.
  13. G. Pacnik, K. Benkic, and B. Brecko, “Voice operated intelligent wheelchair-VOIC,” in Proceedings of the IEEE International Symposium on Industrial Electronics, 2005. ISIE 2005, vol. 3, pp. 1221–1226, Dubrovnik, Croatia, June 2005.
  14. J. C. Sanchez, S. P. Kim, D. Erdogmus et al., “Input–output mapping performance of linear and nonlinear models for estimating hand trajectories from cortical neuronal firing patterns,” in Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, pp. 139–148, Martigny, Switzerland, September 2002.
  15. V. Pham, T. Bluche, C. Kermorvant, and J. Louradour, “Dropout improves recurrent neural networks for handwriting recognition,” in 2014 14th International Conference on Frontiers in Handwriting Recognition, pp. 285–290, Heraklion, Greece, September 2014.
  16. D. M. Brienza and J. Angelo, “A force feedback joystick and control algorithm for wheelchair obstacle avoidance,” Disability and Rehabilitation, vol. 18, no. 3, pp. 123–129, 2009.
  17. B. M. Wilamowski, S. Iplikci, O. Kaynak, and M. O. Efe, “An algorithm for fast convergence in training neural networks,” in Neural Networks, 2001. Proceedings. IJCNN '01. International Joint Conference on, vol. 3, pp. 1778–1782, Washington, DC, USA, July 2001.
  18. Y. Rabhi, M. Mrabet, F. Fnaiech, and P. Gorce, “A feedforward neural network wheelchair driving joystick,” in 2013 International Conference on Electrical Engineering and Software Applications, pp. 1–6, Hammamet, Tunisia, March 2013.
  19. Y. Rabhi, M. Mrabet, F. Fnaiech, and P. Gorce, “Intelligent joystick for controlling power wheelchair navigation,” in 3rd International Conference on Systems and Control, pp. 1020–1025, Algiers, Algeria, October 2013.
  20. D. B. R. Silva, L. I. Pfeifer, and C. A. R. Funayama, “Gross motor function classification system expanded & revised (GMFCS E & R): reliability between therapists and parents in Brazil,” Brazilian Journal of Physical Therapy, vol. 17, no. 5, pp. 458–463, 2013.
  21. R. A. Keith, C. V. Granger, B. B. Hamilton, and F. S. Sherwin, “The functional independence measure: a new tool for rehabilitation,” Advances in Clinical Rehabilitation, vol. 1, pp. 6–18, 1987.
  22. R. Simpson, D. Poirot, and M. F. Baxter, “Evaluation of the Hephaestus smart wheelchair system,” in Proceedings of ICORR ’99: International Conference on Rehabilitation Robotics, vol. 99, pp. 99–105, Stanford, CA, USA, July 1999.
  23. B. Kuipers, “Building and Evaluating an Intelligent Wheelchair,” Tech. Rep., University of Texas at Austin, Austin, TX, USA, 2006.
  24. P. J. Holliday, A. Mihailidis, R. Rolfson, and G. Fernie, “Understanding and measuring powered wheelchair mobility and manoeuvrability. Part I. Reach in confined spaces,” Disability and Rehabilitation, vol. 27, no. 16, pp. 939–949, 2009.
  25. S. L. Seyler, A. Kumar, M. F. Thorpe, and O. Beckstein, “Path similarity analysis: a method for quantifying macromolecular pathways,” PLoS Computational Biology, vol. 11, no. 10, article e1004568, 2015.
  26. J. Brooke, “SUS: a “quick and dirty” usability scale,” in Usability Evaluation in Industry, P. W. Jordan, B. Thomas, B. A. Weerdmeester, and A. I. McClelland, Eds., Taylor and Francis, London, UK, 1996.
  27. L. Lopez-Samaniego, B. Garcia-Zapirain, and A. Mendez-Zorrilla, “Memory and accurate processing brain rehabilitation for the elderly: LEGO robot and iPad case study,” Bio-medical Materials and Engineering, vol. 24, no. 6, pp. 3549–3556, 2014.
  28. E. Ambrosini, S. Ferrante, M. Rossini et al., “Functional and usability assessment of a robotic exoskeleton arm to support activities of daily life,” Robotica, vol. 32, no. 8, pp. 1213–1224, 2014.
  29. F. Amirabdollahian, S. Ates, A. Basteris et al., “Design, development and deployment of a hand/wrist exoskeleton for home-based rehabilitation after stroke—SCRIPT project,” Robotica, vol. 32, no. 8, pp. 1331–1346, 2014.
  30. B. Mónica Faria, S. Vasconcelos, L. Paulo Reis, and N. Lau, “Evaluation of distinct input methods of an intelligent wheelchair in simulated and real environments: a performance and usability study,” Assistive Technology, vol. 25, no. 2, pp. 88–98, 2013.
  31. Y.-L. Chen, S.-C. Chen, W.-L. Chen, and J.-F. Lin, “A head orientated wheelchair for people with disabilities,” Disability and Rehabilitation, vol. 25, no. 6, pp. 249–253, 2009.
  32. M. Mazo, “An integral system for assisted mobility,” IEEE Robotics & Automation Magazine, vol. 8, no. 1, pp. 46–56, 2001.
  33. H. A. Yanco, Shared User-Computer Control of a Robotic Wheelchair System, [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, MA, USA, 2000.
  34. G. Pires, U. Nunes, and A. T. de Almeida, “RobChair - a semi-autonomous wheelchair for disabled people,” IFAC Proceedings Volumes, vol. 31, no. 3, pp. 509–513, 1998.
  35. S. P. Levine, D. A. Bell, L. A. Jaros, R. C. Simpson, Y. Koren, and J. Borenstein, “The NavChair assistive wheelchair navigation system,” IEEE Transactions on Rehabilitation Engineering, vol. 7, no. 4, pp. 443–451, 1999.
  36. T. Lu, K. Yuan, H. Zhu, and H. Hu, “An embedded control system for intelligent wheelchair,” in 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, January 2006.
  37. P. Jia, H. H. Hu, T. Lu, and K. Yuan, “Head gesture recognition for hands-free control of an intelligent wheelchair,” Industrial Robot: An International Journal, vol. 34, no. 1, pp. 60–68, 2007.
  38. Y. Kuno, N. Shimada, and Y. Shirai, “A robotic wheelchair based on the integration of human and environmental observations - look where you're going,” IEEE Robotics & Automation Magazine, vol. 10, no. 1, pp. 26–34, 2003.
  38. Y. Kuno, N. Shimada, and Y. Shirai, “A robotic wheelchair based on the integration of human and environmental observations - look where you're going,” IEEE Robotics & Automation Magazine, vol. 10, no. 1, pp. 26–34, 2003. View at Publisher · View at Google Scholar · View at Scopus