Abstract

The developments of medical practices and medical technologies have always progressed concurrently. The relatively recent developments in endoscopic technologies have allowed the realization of the “minimally invasive” form of surgery. Advances in robotics facilitate precise surgeries that are often integrated with medical image guidance capability. This in turn has driven the further development of technology to compensate for the unique complexities engendered by this new format and to improve the performance and broaden the scope of the procedures that can be performed. Medical robotics has been a central component of this development due to the highly suitable characteristics that a robotic system can offer, including a highly optimizable mechanical configuration and the ability to program assistive functions into medical robots so that surgeons can perform safe and accurate minimally invasive surgeries. In addition, combining robot-assisted interventions with touch-sensing and medical imaging technologies can greatly enrich the available information and thus help to ensure that minimally invasive surgeries continue to gain popularity and remain at the focus of modern medical technology development. This paper presents a state-of-the-art review of robotic systems for minimally invasive and noninvasive surgeries, precise surgeries, diagnoses, and their corresponding technologies.

1. Introduction

Based on the degree of invasiveness, surgical procedures fall roughly into three main categories: invasive procedures, also known as open surgery; minimally invasive procedures; and noninvasive procedures.

Minimally invasive surgery (MIS) is a form of surgery intended to provide great benefits to the patient over conventional open surgery by minimizing unnecessary trauma caused in the process of performing a medical procedure. Less trauma, pain, blood loss, and scarring, together with better cosmesis, lead to a shorter recovery time for the patient and a reduced risk of complications [1]. However, it is also well documented that this approach brings a number of corresponding difficulties for the clinical staff performing it. These difficulties stem from the highly limited workspace, specialized tools that require further staff training and adaptation, and greatly reduced visual and touch information.

Despite the aforementioned drawbacks, MIS has continued to gain popularity and to be widely used [2]. A prime factor in this continued adoption is the corresponding development of medical tools and devices intended for use in MIS. Medical robotics has been applied in MIS; robotic platforms are particularly suitable due to their favorable characteristics. Such characteristics include high accuracy, repeatability, and the possibility of designing specialized mechanisms for specific procedures and organs. Furthermore, medical robots can incorporate sensors to return touch and force information and can be combined with medical imaging technology to allow autonomous, semiautonomous, or teleoperated control, which can improve surgical performance and broaden the scope of MIS. In addition, by incorporating emerging imaging techniques and noninvasive modalities, more precise, cost-effective, and portable treatment tools can be made possible. Medical robotics has been in use for approximately 30 years; the first generation of medical robots were used as tool holders and positioners, before the development of active medical robots in the early 1990s [3]. Despite the fact that this technology has been around for three decades, medical robots have not been widely adopted, owing to limitations of control and the high risk of their applications. The associated regulations and standards, particularly for active systems that are intended to be powered during a procedure, are necessarily demanding, and as such the development time for standards in this field is very long. There has also been the issue of acceptance by the medical community and surgeons, who still view with distrust any technology that would create a separation of the surgeon from the surgery. A few systems have seen commercial success; these are operated remotely or semiautonomously, with control of the robot remaining strictly with the clinician at all times except, in some cases, during a highly restricted function that can be monitored.

A current trend for medical robotics and devices being developed for MIS and non-invasive surgery (NIS) is to further increase the scope of applications through the design of highly function-specific devices that can be combined with medical imaging technology. Another trend for MIS is the development of systems that return touch and force information to aid in surgery or to provide more information for diagnostic purposes. This paper describes the development of robotic systems for MIS, the move towards less autonomous but more form-specialized systems, the rise in research and development of haptics technology in medical robotics, and the trend from MIS towards image-guided NIS.

2. MIS Robots and Their Technologies

2.1. From Open Surgery to MIS Robotic Surgery

Most of the first generation of medical robots were designed for MIS tool positioning, in which high accuracy and repeatable motion gave them a significant advantage [19]. Some open surgeries, such as total hip replacement [20], have also benefited from these precise systems, where the main purpose is to augment the performance of human surgeons in precise bone machining procedures. Tool positioners are significant in MIS, as these procedures are inherently more difficult to perform; such robots essentially reduce the burden on the surgeon [21]. Orthopedic surgeries were the very first type of medical procedure in which medical robots were developed with the specific functionalities required to play an active role in an operation. The intention of many of the robots developed in the 80s and 90s for orthopedic surgeries was to automate part of the procedure. These “active” or automated robots would implement a preoperative plan based on preoperative imaging, such as magnetic resonance imaging (MRI) or computed tomography (CT), and then perform the operation without input from the surgeon. It was perceived that the use of robots in this fashion would improve the overall outcome of a surgery through increased accuracy of implant placement. Furthermore, robots were particularly suitable for this application due to the nondeformability of bone, the relative simplicity of the task, and the fact that the minimally invasive approach made knee surgery particularly challenging for surgeons. Commercial robots developed for this task include the ROBODOC (Integrated Surgical Systems) and Caspar (Orto Maquet). Development of ROBODOC started in the 1980s, and the first human trial in 1992 showed a significant improvement in surgical results over human-conducted surgery [22].

2.2. Human-Controlled Robot-Assisted Surgery

Application of automated robotics to soft tissues was found to pose a particular challenge due to tissue deformability [23]. This means that the image registration process used to match the robot to the patient from preoperative images would be insufficient to guarantee positional accuracy within safe limits throughout the procedure. As a solution, either intraoperative imaging or deformable tissue models can be used in conjunction with the preoperative planning to implement real-time control; however, this is an extremely challenging technical issue [24, 25]. Another issue with automated robotics is that no robotic system can be guaranteed to be 100 percent safe, and robotic artificial intelligence is not sufficiently advanced that the robot could be held responsible if a mistake is made. This issue of culpability concerns manufacturers and has led to a shift of MIS technology towards nonautomated telerobotic systems. Telerobotic systems are in fact master-slave platforms in which the surgeon has direct control of the action of the robotic manipulator. Visual feedback is provided by way of endoscopic sight, and these systems offer advantages over conventional minimally invasive procedures through the use of binocular vision, tremor filtering, and motion scaling.
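To make the registration step concrete, the following is a minimal sketch of rigid image-to-robot registration from paired fiducial points using the standard least-squares (SVD/Kabsch) solution. The fiducial coordinates, exact correspondences, and millimeter units are illustrative assumptions rather than the method of any particular system, and a rigid transform deliberately ignores exactly the soft-tissue deformation that makes the problem hard in practice.

```python
import numpy as np

def rigid_register(image_pts, robot_pts):
    """Least-squares rigid registration (Kabsch/SVD) between paired 3D fiducials.

    image_pts, robot_pts: (N, 3) arrays of corresponding points in the
    preoperative image frame and the robot frame.
    Returns rotation R and translation t such that R @ image + t ~= robot.
    """
    ci, cr = image_pts.mean(axis=0), robot_pts.mean(axis=0)     # centroids
    H = (image_pts - ci).T @ (robot_pts - cr)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cr - R @ ci
    return R, t

# Hypothetical fiducial markers (mm) in the image frame and their exact
# counterparts in the robot frame (a 90-degree rotation plus a translation).
img = np.array([[0, 0, 0], [50, 0, 0], [0, 80, 0], [0, 0, 30.0]])
rot = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
rob = img @ rot.T + np.array([100, 20, 5])
R, t = rigid_register(img, rob)
residual = np.linalg.norm((img @ R.T + t) - rob, axis=1).max()
print("max fiducial registration error (mm):", residual)
```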

The concept of telesurgery in fact developed separately from that of the automated surgical robot and was initially driven by the desire to treat critically wounded soldiers near the front lines. The first surgical telerobotic system was developed under the DARPA advanced combat casualty care program in the early 1990s [26]. More advanced and sophisticated examples of these types of systems are the da Vinci robot of Intuitive Surgical Inc. (http://www.intuitivesurgical.com/), Titan Medical Inc.’s Amadeus System (http://www.titanmedicalinc.com/), and the ZEUS of Computer Motion Inc. (acquired by Intuitive Surgical in 2003). When conventional MIS techniques are used with these systems, several robotic arms control endoscopic instruments and an additional arm guides a laparoscopic camera; the arms are rigid and cable driven. The surgeon operates the robot from a console via two hand masters for the tools and pedal controls or voice control (ZEUS) for the laparoscope arm. With the da Vinci system, the surgeon views the surgical site through a viewfinder, which generates pseudo-3D images. The da Vinci system shown in Figure 1 also incorporates “endo-wrists,” a special feature that provides two extra degrees of freedom (DOF) at the point of intervention and significantly increases the ease of use, especially for cutting and suturing operations [4]. Other examples of minimally invasive procedures performed with the da Vinci system include cardiac surgery, urological surgery, and prostatectomy [27].

Laparoscopic and endoscopic techniques are presently dominated by multiport access methods, which mimic the conventional hand-eye coordination with an instrument-stereovision system. These techniques require multiple ports in the patient’s body for the MIS instrument insertions. Towards even less invasive techniques, surgeons are progressively moving to single-port access (SPA) MIS, with the assistance of robotic arms and computer assistive devices in the operating room. SPA can potentially minimize complications associated with multiport incisions [28–30], although SPA poses significant challenges for the design of endoscopic instruments due to the maneuverability constraints of a single port.

2.3. Surgical Robots with Parallel Mechanisms

Most of the robotic manipulators used for minimally invasive or non-invasive medical interventions are serial structures. Serial-structure robots have the advantages of a large workspace, high dexterity, and high maneuverability; however, they suffer from low stiffness and poor positioning accuracy. To address these drawbacks, more attention has recently been paid to parallel-structure robots, due to their simplicity, large payload capacity, positional accuracy, and high stiffness. The first parallel platform was developed by Stewart in 1965 [31]; his platform was composed of a fixed base, a movable platform, and six variable-length actuators connecting the base and the platform.
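As an illustration of why such platforms lend themselves to precise positioning, the sketch below computes the six actuator (leg) lengths of a generic Stewart-Gough platform directly from a commanded platform pose, i.e., its closed-form inverse kinematics. The joint coordinates, pose values, and circular joint layout are hypothetical and not taken from any cited design.

```python
import numpy as np

def stewart_leg_lengths(base_pts, plat_pts, pos, rpy):
    """Inverse kinematics of a Stewart-Gough platform.

    base_pts, plat_pts: (6, 3) joint locations in the base and platform frames.
    pos: desired platform origin in the base frame (m); rpy: roll/pitch/yaw (rad).
    Returns the six actuator (leg) lengths.
    """
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # Each leg vector runs from its base joint to the transformed platform joint.
    legs = (plat_pts @ R.T + pos) - base_pts
    return np.linalg.norm(legs, axis=1)

# Illustrative geometry: joints evenly spaced on two circles of different radii.
ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
ang_p = ang_b + np.deg2rad(30)
base = np.c_[0.20 * np.cos(ang_b), 0.20 * np.sin(ang_b), np.zeros(6)]
plat = np.c_[0.12 * np.cos(ang_p), 0.12 * np.sin(ang_p), np.zeros(6)]
print(stewart_leg_lengths(base, plat, pos=np.array([0, 0, 0.25]), rpy=(0.05, 0.0, 0.1)))
```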

In recent years, several parallel robot structures have been developed for a variety of medical procedures. Brandt et al. developed a compact robotic system for image-guided orthopedic surgery (CRIGOS) that comprised a parallel robot and a computational core for planning the surgical intervention and controlling the parallel platform [32]. Tanikawa and Arai developed a dexterous micromanipulation system based on a parallel mechanism and used it for microsurgery, among other tasks [33]. Merlet developed a micro robot (named MIPS) with a parallel manipulator; the 3-DOF system allowed fine positioning of a surgical tool and was used as an active wrist at the tip of an endoscope [34]. In 2003, Shoham et al. developed a miniature robot for surgical procedures (MARS), a cylindrical 6-DOF parallel mechanism. MARS was shown to be usable in a variety of surgical procedures, including spine and trauma surgery, where accurate positioning and orientation of a handheld surgical robot is of interest [35]. Maurin et al. developed a 5-DOF parallel robotic platform for CT-guided percutaneous procedures with force feedback and automatic patient-to-image registration of the needle [36]. Tsai and Hsu developed a parallel surgical robot for precise skull drilling in stereotactic neurosurgical operations [37]. As mentioned earlier, a major drawback of parallel mechanisms is their restricted workspace compared to serial-link robots. In neurosurgical operations, the workspace lies on the surface of the skull, located at one side of the robot; Tsai and Hsu analyzed this asymmetric workspace and found the optimal relative positions of the skull and the parallel mechanism that maximize the workspace on the skull [37].

More recent developments in the use of parallel structures for medical interventions include the work by Fine et al. [5], where the authors designed a dual-arm ophthalmic surgical robot using a parallel structure mechanism (Figure 2(a)) with high precision (<5 μm). Their platform was used in both vascular cannulation and stent deployment in animal models. Fischer and his colleagues at WPI have been developing MRI-compatible parallel mechanism robots for prostate brachytherapy and neurosurgery applications [6, 38]. Figure 2(b) shows a prototype of their surgical manipulator, where the bottom figure illustrates a base platform for prostate interventions. This platform includes the 4-DOF parallel manipulator integrated with a 3-DOF needle driver to provide needle translation, rotation, and stylet retraction, thus in total providing 7-DOF needle motion [6].

Parallel robots are currently being designed and used for needle surgery by many other groups; examples include the dynamic model and control of the needle surgery parallel robot “PmarNeedle” reported by D’Angella et al. in 2011 [39]. A very recent work by Salimi et al. reports the development and testing of a 4-DOF parallel structure platform used to assist MRI-guided intracardiac interventions, in particular aortic valve implantation; it is the first reported use of parallel platforms for minimally invasive intracardiac interventions [40].

The compact and lightweight design of medical robots with parallel mechanisms saves space for operation and storage and simplifies relocation of the robot in the operating room. The compact structure allows easy sterilization by covering the robot with a closed drape, and its relatively small workspace can also be an important safety feature. Parallel robots, if designed correctly, can provide higher precision than comparable serial robots, and as such they are well suited to applications in ophthalmic [41] and orthopedic surgery [42], cell manipulation [43], and micropositioning [44, 45].

2.4. Human-Controlled Robots for Microsurgery

One of the first surgical robots developed for eye surgery was the microsurgical manipulator developed at Northwestern University [46]. This telerobotic system is designed to drive a micropipette through the lumen of a hypodermic needle. The system is controlled via a trackball master and can target retinal blood vessels with an internal diameter of 20 microns. Since the turn of the century, many more examples of telerobotic systems have appeared in the literature. The Johns Hopkins University steady-hand robot for eye surgery emphasizes motion scaling over tremor filtering and has a positioning resolution of 5 to 10 microns (see Figure 3(a)) [7]. The RAVEN, developed at the University of Washington, is a 7-DOF cable-actuated manipulator controlled by the PHANToM haptic interface (Sensable Technologies Inc.); it includes wrist joints like the da Vinci system and has the potential for haptic feedback [47]. At ETH Zurich, a wireless system has been developed to deliver drugs into the retinal blood vessels [48]; this “microbot” requires only one incision to be made in the sclera wall. A telerobotic system known as Heartlander has been developed at Carnegie Mellon University, which uses suction pads to attach itself to the epicardium and propels itself using small onboard piezoelectric motors [49].

2.5. Wireless MIS Robots

Wireless robotic systems that can reach far inside the human body have become a new branch of minimally invasive robotic systems. Many examples of these systems can be found in gastrointestinal (GI) applications. The intestine presents a long and convoluted environment for GI endoscopic procedures; during conventional GI procedures, patients complain of pain and discomfort, and clinicians face technical difficulties in navigating the instruments. Robotic capsule devices are therefore short and thin, can be inserted into the GI tract, and can be maneuvered under their own power. The PillCam developed by Given Imaging Inc. also includes a minicamera, LEDs, and an RF transmitter but is transported passively by the body’s own digestive process. The “inchworm” design, developed by Quirini et al. [50] and shown in Figure 3(b), is a quite popular solution, although legged microbots have also been employed. Other actuation mechanisms have also been explored, such as capsules driven by external magnetic fields, proposed by Wang and Meng [51], Ciuti et al. [52], and Lien et al. [53], or hybrids driven by internal/external actuators [54].
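For the magnetically driven capsules mentioned above, the underlying actuation principle is the torque τ = m × B and force F = ∇(m·B) exerted on the capsule’s internal magnet by an external field. The sketch below evaluates these for a simple point-dipole field model with purely illustrative magnet strengths and distances; real systems rely on calibrated field maps or electromagnet arrays rather than this idealization.

```python
import numpy as np

def dipole_field(p, m_src, p_src):
    """Magnetic flux density (T) at point p from a point dipole m_src at p_src."""
    mu0 = 4e-7 * np.pi
    r = p - p_src
    rn = np.linalg.norm(r)
    rhat = r / rn
    return mu0 / (4 * np.pi * rn**3) * (3 * rhat * (m_src @ rhat) - m_src)

def wrench_on_capsule(p, m_cap, m_src, p_src, h=1e-4):
    """Torque and force on the capsule magnet (moment m_cap, A*m^2) at position p."""
    B = dipole_field(p, m_src, p_src)
    torque = np.cross(m_cap, B)                 # tau = m x B
    # Force F = grad(m . B), approximated with central differences.
    force = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        force[i] = (m_cap @ dipole_field(p + dp, m_src, p_src)
                    - m_cap @ dipole_field(p - dp, m_src, p_src)) / (2 * h)
    return torque, force

# Illustrative numbers: external magnet (50 A*m^2) held 0.15 m above a capsule magnet.
tau, F = wrench_on_capsule(np.zeros(3), np.array([0, 0.02, 0.0]),
                           np.array([0, 0, 50.0]), np.array([0, 0, 0.15]))
print("torque (N*m):", tau, " force (N):", F)
```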

3. Use of Haptic and Tactile Sensing in Robotic MIS

Medical haptics is an underdeveloped field of research that is slowly gaining attention [55]. The motivation stems mainly from the rise of teleoperation and from its potential as a research tool [56]. Most research efforts currently focus on bringing haptic feedback to medical tools to aid in operations, especially those in MIS. Haptic systems have become more widely used in surgical training programs to simulate tool behavior for medical trainees [57]. Over the past 15 years or so, medical haptic systems have been used either for medical training via haptic simulation or for improving the function of medical tools in minimally invasive surgery through force feedback [58].

3.1. Surgical Simulation and Training

Development of haptic surgical simulators is driven by the limitations of traditional methods of surgical training. Generally, teaching proceeds in two stages: first, the surgeon studies the anatomy from textbooks and other visual aids; this is followed by “hands-on” training in the operating room, on cadaver dissections, or on simulation mannequins [59]. A problem with the traditional form of training is that the availability of cadavers or patients to work with is limited and unreliable [57]. Simulators have the ability to generate realistic human anatomical properties and varied morphologies, and since the early 90s virtual reality (VR) simulations have been available for this purpose [60]. It has also been proposed that surgical simulators could aid in diagnosis and treatment planning.

3.2. Haptic Feedback in Telerobotic Surgery

Haptic feedback has become a vital area of research with the rise of teleoperated minimally invasive surgery. Clinically, such feedback can improve a surgeon’s sense of telepresence and thereby, it is hoped, improve performance. Evidence strongly suggests that the ability to add haptic feedback to current surgical robotic systems such as the da Vinci would contribute significantly to safe cardiac surgical procedures using these complex systems [61]. The deficiency of haptic feedback in current robotic systems is a significant handicap in performing the more intricate and delicate surgical tasks. Such tasks are inherent in specialties like cardiac surgery [62], and the lack of haptic feedback can lead to unsafe levels of force being applied by the clinician [63]. An area of surgical haptics that provides an intermediate between autonomous medical robotics and master-slave telesurgery is the use of “virtual fixtures” [64] or “active constraints” [65]. Such robots do not actively drive the tools being used; rather, the surgeon’s own motive power is used, while the robot provides controlling forces when the boundary of a predetermined workspace is reached (sketched below). This synergistic approach adds safety to robot-assisted surgery and MIS techniques, whilst allowing overall control and judgment to remain with the surgeon [66]. The “Acrobot” developed at Imperial College is an example of this. Another research area that has seen a significant rise in popularity is the development of haptic capabilities for telerobotic systems such as the da Vinci.
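A minimal sketch of such an active constraint is given below for a spherical safe region: the robot is passive while the tool tip stays inside the region and generates a stiff restoring force once the boundary is penetrated. The geometry and stiffness value are illustrative assumptions, not the Acrobot’s control law.

```python
import numpy as np

def active_constraint_force(tip, center, radius, k=800.0):
    """Virtual-fixture force for a spherical safe region (illustrative).

    Inside the sphere the robot applies no force (the surgeon moves freely);
    once the tool tip penetrates the boundary, a stiff restoring force pushes
    it back toward the permitted workspace.
    """
    offset = tip - center
    dist = np.linalg.norm(offset)
    penetration = dist - radius
    if penetration <= 0.0:
        return np.zeros(3)                        # within the safe region: no force
    return -k * penetration * offset / dist       # push back along the inward normal

# Tool tip 2 mm outside a 30 mm safe sphere centered at the origin.
print(active_constraint_force(np.array([0.0, 0.0, 0.032]), np.zeros(3), 0.030))
```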

The Black Falcon was created at MIT in the late 90s [67]. Figure 4(a) shows this 8-DOF teleoperator slave for MIS, which includes some novel features to improve surgeon facility during an interventional procedure. The Black Falcon attempts to address some of the main limitations intrinsic to teleoperated MIS: the discrepancy between the tool motions observed via the endoscope and the motions of the surgeon’s hand, the poor dexterity due to the limited DOF, and the lack of force/tactile feedback. The slave system is a 4-DOF wrist with a 2-DOF gripper and a 3-DOF base positioner, and a modified PHANToM is used as the master. A dedicated macro-micro control scheme is used to implement force reflection (a generic force-reflecting teleoperation loop is sketched below). The mechanism’s weight is carefully distributed so that counterbalancing is easily achieved with motor torque. The motors are brushed DC servo motors with planetary gearheads. Transmission is by cable, but the kinematics of the system is decoupled between links due to the cabling scheme.
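The following is a generic sketch of the position-forward/force-back structure that force-reflecting teleoperators of this kind implement, with motion scaling on the forward path and reflection of the measured environment force to the master. The gains, scaling factor, and single-axis simplification are illustrative assumptions and do not represent the Black Falcon’s macro-micro controller.

```python
def teleop_step(x_m, x_s, f_env, scale=0.3, kp=400.0, kd=20.0, kf=1.0,
                x_s_prev=None, dt=1e-3):
    """One cycle of a simple position-forward / force-back teleoperation law.

    x_m: master handle position (m), x_s: slave tool position (m), f_env: force
    measured at the slave tip (N). The slave tracks the scaled master motion with
    a PD law, and the measured environment force is reflected back to the master.
    Returns (slave actuator force command, master actuator force command).
    """
    x_des = scale * x_m                                # motion scaling
    v_s = 0.0 if x_s_prev is None else (x_s - x_s_prev) / dt
    f_slave_cmd = kp * (x_des - x_s) - kd * v_s        # PD servo on the slave
    f_master_cmd = -kf * f_env                         # force reflection to the surgeon
    return f_slave_cmd, f_master_cmd

# Example: the surgeon has moved the handle 10 mm; the slave is at 2 mm and feels 0.5 N.
print(teleop_step(x_m=0.010, x_s=0.002, f_env=0.5, x_s_prev=0.0019))
```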

A robotic telesurgical workstation (RTW) was developed by Çavuşoğlu et al. at the University of California, Berkeley (see Figure 4(b)) [10]. The surgical tasks accomplished with the Berkeley/UCSF RTW are suturing and knot tying. The improved design provides high system bandwidth and haptic feedback of good fidelity. The slave system comprises two main sections: a gross positioner with 4 DOF, conventional for laparoscopic instruments, and an additional 2 DOF forming an “endo-wrist.” The whole mechanism is actuated by DC servo motors, with cable transmission for the wrist. The master workstation is composed of a pair of 6-DOF PHANToMs, one for each slave manipulator. Preliminary testing showed that the addition of force feedback made suturing more successful than without it; it also highlighted some weaknesses in the mechanical design. Since then, related work has been carried out by other research groups. Okamura has shown that the absence of haptics in telesurgical systems can lead to longer operations with an increased propensity for error [62]. By fitting strain gauges on the lower shaft of da Vinci needle tools and presenting the data as visual feedback on the display, they showed significantly reduced tool forces.
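As a hedged illustration of the strain-gauge approach, the sketch below converts a bridge voltage measured on an instrument shaft into an estimated tip force via a simple cantilever-bending model and maps it to a traffic-light color cue for on-screen display. All bridge, shaft, and threshold parameters are assumed values for illustration, not those of the instrumented da Vinci tools.

```python
def tip_force_from_bridge(v_out, v_exc=5.0, gauge_factor=2.1, bridge_gain=0.25,
                          E=193e9, I=2.0e-12, c=1.5e-3, lever_arm=0.05):
    """Estimate the lateral tip force on an instrument shaft from a strain-gauge
    bridge near its base, using a simple cantilever-bending model (illustrative).

    v_out: bridge output voltage (amplifier gain already divided out), v_exc:
    excitation voltage, bridge_gain: 0.25 for a quarter bridge. E, I, c: shaft
    Young's modulus, second moment of area, and surface distance from the neutral
    axis; lever_arm: distance from the gauges to the tool tip.
    """
    strain = v_out / (v_exc * gauge_factor * bridge_gain)   # bridge equation
    moment = strain * E * I / c                             # bending moment at the gauges
    return moment / lever_arm                               # force applied at the tip

def force_color(force, warn=2.0, limit=5.0):
    """Map the estimated force (N) to a traffic-light cue for on-screen display."""
    return "green" if force < warn else ("yellow" if force < limit else "red")

f = tip_force_from_bridge(v_out=2.6e-3)
print(f"estimated tip force: {f:.2f} N ->", force_color(f))
```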

At Iwate University, Shimachi et al. reported the addition of a force-sensing function to a da Vinci instrument using an “adapter frame,” through which the instrument is supported by force sensors [68]. Results show a maximum error of 0.5 N due to frame deformation. Tavakoli and Patel describe how current telerobotic systems still lack haptic feedback due to the difficulty of creating suitable force sensors and of adapting them to complex end effectors [69]; interpretation of force through visual feedback remains the only practical solution for the time being.

3.3. Tactile Devices for Palpation and Characterizing Tissue Stiffness

Enhancing the tactile sensing capability of MIS instruments has become a prime research area [70]. It has been shown that force feedback can significantly improve the performance of robotic surgical systems [1]. Okamura [62] argues that true telemanipulation will require tactile as well as force feedback. By the end of the 90s, tactile sensing technology was still considered new, particularly in the field of medicine [71]. Some of the main challenges in developing tactile sensing instruments are the limited size and weight permissible, sterility, safety [70], and the placement of sensors given the tool format [72]. Other applications for tactile sensing in medical devices have also been developed, including tools specifically for quantifying tissue stiffness in MIS. Developing tools to measure tissue stiffness is motivated by the clinicians’ inability to directly touch or “palpate” a surface in MIS [71, 73]. This assessment, although subjective, allows a quick evaluation of the health of a tissue.

The design of tactile sensors has received much attention, and several arrays are now available; tactile data processing and displays are much less well developed. Human tactile sensing is a tremendously complex system that is still not fully understood; as such, producing a comprehensive tactile display remains an insurmountable challenge. A review of tactile displays by Benali-Khoudja concludes that there are no tactile displays that fully incorporate all the physical parameters [74]. Tactile displays are still too large, imprecise, and expensive to be used in MIS [70] and, for similar reasons, are very rare in training devices [57]. Force feedback is much more established than tactile feedback, and several commercial devices have been developed. One of the most commonly used in both haptic medical systems and training systems is the PHANToM from Sensable Technologies Inc., which evolved from haptics research at MIT [75].

One of the first palpation simulations was developed by Langrana et al. to feed back force information using a virtual knee model and a Rutgers Master [76]. The knee model comprised nearly 13,500 polygons and included information for the bones, cartilages, and muscles of the locality. Contact forces fed back to the master were calculated in real time based on Hooke’s law (a minimal version of this computation is sketched below). The Rutgers Master was a glove incorporating actuators placed proximal to the palm. The actuators were air pistons, with a maximum force of 4 N, which could apply force to the fingers in flexion and extension; spherical joints allowed normal adduction and abduction. The main drawbacks were that the actuators, although small, still limited the mobility of the hand and that, due to the absence of wrist feedback, object weight could not be simulated.
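A minimal version of such a spring-law contact force computation is sketched below; the planar surface patch, stiffness value, and fingertip position are illustrative stand-ins for the polygonal knee model.

```python
import numpy as np

def contact_force(finger_pos, surface_point, surface_normal, stiffness=600.0):
    """Penalty (Hooke's-law) contact force for a haptic palpation simulation.

    If the fingertip has penetrated the virtual surface, return a force along
    the surface normal proportional to the penetration depth; otherwise zero.
    """
    penetration = np.dot(surface_point - finger_pos, surface_normal)
    if penetration <= 0.0:
        return np.zeros(3)                        # no contact
    return stiffness * penetration * surface_normal

# Fingertip 1.5 mm below a horizontal tissue patch whose normal points along +z.
f = contact_force(np.array([0.0, 0.0, -0.0015]), np.zeros(3), np.array([0.0, 0.0, 1.0]))
print("feedback force (N):", f)   # ~0.9 N upward, well under a 4 N actuator limit
```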

One of the first institutes to dedicate significant effort to the development of haptics for medical devices was the BioRobotics Lab at Harvard. In fact, the first haptic medical system reported was a minimally invasive tool devised for laparoscopy by Peine et al. at Harvard [77]. The motivation for the system was to allow the surgeon in MIS to palpate a region to locate arteries and detect blood flow in the same way as would be done in open surgery. The device shown in Figure 5(a) consists of a long endoscope-like probe with a tactile sensor array located at the end. The last portion of the probe is flexible, and a trigger mechanism allows the surgeon to orient the tip toward a region of interest. The sensing mechanism is capacitive, using an array of 64 force-sensitive elements constructed from copper strips and rubber spacers; forces are measured by determining the change of capacitance between the top and bottom layers (a sketch of this conversion follows below). Initially, there was no force feedback to the user; tactile information was presented on a visual display. This was later combined with a master device (finger) developed to study human factors in earlier research [60]. The haptic display was tested on a phantom consisting of a rubber “tumor” buried 5 mm deep inside a softer foam rubber block. During the tests, subjects were asked to locate the tumor, which was randomly moved within a range of ±2 cm. The tests revealed that with force feedback alone, errors were in the range of 13 mm, while with tactile feedback included this error dropped to an average of 3 mm. Ottermo et al. developed another sensorized laparoscopic grasper using a commercially available tactile sensing array known as “TactArray” by Pressure Profile Systems [78]; this device had a size of 3.5 cm² and comprised a sensing array.
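The two steps implied by such a capacitive array, converting each element’s capacitance change into a normal force (a parallel-plate model with an elastic spacer) and then locating the stiff inclusion as the peak of the resulting force map, can be sketched as follows. The array size, element area, spacer stiffness, and synthetic frame are assumptions for illustration only, not the parameters of the Harvard probe.

```python
import numpy as np

def force_map_from_capacitance(C, C0, area=4e-6, eps_r=3.0, k_spacer=2.0e4):
    """Convert a capacitance image into a per-element normal force map.

    Parallel-plate model: C = eps * A / d, so each measured capacitance gives the
    gap d; the rubber spacer is modeled as a linear spring of stiffness k_spacer.
    """
    eps = 8.854e-12 * eps_r
    d0 = eps * area / C0            # undeformed gap
    d = eps * area / C              # gap under load
    return k_spacer * (d0 - d)      # Hooke's law on the compressed spacer

# Synthetic 8x8 capacitance frame with a stiffer region pressed near one corner.
C0 = 1.2e-12
frame = np.full((8, 8), C0)
frame[4:7, 1:4] *= 1.25             # capacitance rises where the spacer compresses
forces = force_map_from_capacitance(frame, C0)
peak = np.unravel_index(np.argmax(forces), forces.shape)
print("peak contact force (N): %.3f at element %s" % (forces[peak], str(peak)))
```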

A device for “tactile imaging” was developed at Harvard by Wellman et al. [79]. This device was designed to generate stiffness contour maps of the surface of an anatomical region, in this case specifically the breast, to detect the presence of (tumorous) lumps. Although manual palpation is possible and common for breast examination, and ultrasound and MRI elastography can additionally quantify stiffness, the former is entirely nonquantitative whereas the latter are highly time- and resource-intensive. The device uses a piezoresistive sensor array with a resolution of 1.5 mm. In estimating target size, the device was found to be twice as accurate as either manual or ultrasound breast examination.

Dargahi et al. [12] have developed a sensorized endoscopic grasper with a visual representation of force feedback, aimed at allowing stiffness measurement of tissues in MIS. Figure 5(b) shows this device with manually actuated grasper jaws, which can be closed around a tissue so that the sensor-equipped jaw surfaces are in contact with the tissue. The sensing material is polyvinylidene fluoride (PVDF) piezoelectric polymer film. The unique construction of the device allows the sensor to measure nonlinear properties such as “softness”; however, it cannot measure static forces.
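The reason a piezoelectric film cannot report static loads can be illustrated with a small simulation: the charge generated by a force change leaks away through the readout circuit, so the output decays even though the load is still applied. The film coefficient, leakage time constant, and feedback capacitance below are assumed values, not those of the PVDF grasper.

```python
import numpy as np

def pvdf_response(force, dt=1e-3, d33=23e-12, tau=0.2, c_f=1e-9):
    """Discrete-time model of a piezoelectric film read through a charge amplifier.

    d33: piezoelectric coefficient (C/N), tau: leakage time constant (s),
    c_f: feedback capacitance (F). The output voltage follows force *changes*
    and decays toward zero under a constant load, so static force is lost.
    """
    v = np.zeros_like(force)
    q = 0.0
    for i in range(1, len(force)):
        dq = d33 * (force[i] - force[i - 1])   # charge generated by the force change
        q = q * np.exp(-dt / tau) + dq         # leakage through the readout circuit
        v[i] = q / c_f
    return v

t = np.arange(0, 2.0, 1e-3)
force = np.where(t > 0.5, 1.0, 0.0)            # a 1 N load applied at t = 0.5 s and held
v = pvdf_response(force)
print("output at load step: %.3f V, 1 s later: %.4f V" % (v[501], v[1500]))
```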

A haptic flexible endoscope was reported by Petra et al. at the University of Birmingham [80]. The design has a flexible digit, resembling the end of an endoscope, created using flexible PVC tubes with stiffness varying along the longitudinal cross-section. The digit is part of a master-slave system and actuated accordingly; the user wears an instrumented glove as the master device. The actuation scheme is quite unique: the flexible tube is divided longitudinally into two chambers, which are pressurized separately with fluid, and the pressure difference causes the tube to bend. Sensing is achieved using a cantilever structure attached along the length of the tubing; as this is bent from its natural position, strain gauges mounted on it output a signal.

Tavakoli et al. [81] describe a sensorized endoscopic grasper. Strain gauges are mounted on the end effector, and a linear motor and load cell combination is used to actuate the tip and measure forces imparted on it. These are mounted on the exterior (proximal) end of the endoscopic device. The grasper has an optional “wrist” at the distal end to allow angulations inside the work area. The system was evaluated using two PHANToM master devices.

A sensorized laparoscopic grasper has been developed at the Institute of Healthcare Industries in Germany [82]. This device is based on a standard manually actuated 10 mm laparoscopic grasper, in which both jaws are equipped with a custom-made hexagonal array of 32 conductive polymer sensors with a spatial resolution of 1.4 mm. The output is displayed graphically as a two-dimensional color map. A limitation of the system’s mechanical design is that not all tissues are graspable.

At the Canadian Surgical Technologies and Advanced Robotics Lab, a tactile sensing instrument for MIS has been developed, which mounts a flat sensor array on the end of a probe [83]. The sensor is a commercially available capacitive array known as “TactArray” by Pressure Profile Systems, comprising 60 elements. The device was used to assess the difference between robot-conducted and manually conducted palpation, with robotic palpation leading to a 55% decrease in maximum applied forces, a 50% decrease in task completion time, and a 40% increase in detection accuracy.

Yao et al. [84] have developed a tactile-enhanced probe called “MicroTactus” for MIS at McGill University. It is based on an arthroscopic “hook”-type probe with a combined accelerometer and actuator incorporated in the handle to amplify vibrational forces picked up at the tip. The instrument improved the performance of tear detection in a phantom, especially when combined with auditory feedback.

3.4. Instruments for Tactile Display

There has been some fairly recent work on the development of tactile displays for the representation of shape and pressure distribution derived from the same sensing system. Some of the first tactile displays were adapted from Braille machines [85]. Most Braille machines are driven by piezoelectric actuators and have high bandwidth but lack the range to render curved surfaces for a display. At Harvard, the first display developed [86] used a frame holding small pins of around 2 mm diameter, arranged in an array, which were actuated up to 3 mm into the fingertip using shape memory alloy (SMA) wires. Problems associated with SMA include hysteresis, directional asymmetry, and delays caused by slow thermal response. The display was mounted on a force-reflecting master.

Some of the earliest work on haptic medical devices was carried out at Kernforschungszentrum Karlsruhe GmbH in Germany, where a tactile display (see Figure 6(a)) consisting of a pin array based on SMA spring actuation was developed [13]. The device was fan cooled and capable of a maximum force of 2.5 N and a positional accuracy of 0.1 mm but provided a bandwidth of only 0.1 Hz. Other developments around the same time included a master device for general endoscopic surgery simulators developed at Stanford [87]. This consisted of a mechanical device using a novel system of linear and rotary actuators and transmission, resulting in a design with low friction and inertia and high stiffness. The design had 4 DOF, a bandwidth of 100 Hz, and could deliver force to a load of up to 1.8 kg at the master end.

Bicchi et al. [88] reported the development of a haptic device at the ARTS Lab of the University of Pisa, where they modified an ordinary rigid laparoscope by adding a sensing unit located near the handle. The unit included a force sensor, made of an aluminum ring fitted with two strain gauges, and a position sensor formed by the placement of an LED above a semiconductor. Information was returned to the surgeon via a monitor display. The advantage of this system was that it could provide a small and cheap method of haptic (force) feedback that could be incorporated into any commercial MIS tool. The disadvantage lay in the fact that the sensor placement means that the tool properties, for example, backlash and friction, affect the feedback. Also, this system would be of very limited or no use with nonrigid devices.

Researchers at Salford University developed a tactile shape display (see [14]) using a pin array actuated by pneumatic cylinders (Figure 6(b)). The advantage of this design is that it is generally smaller and more lightweight. The display is small enough to be mounted on a fingertip and is incorporated into a larger hand master, called a “tactile glove,” that also provides vibrotactile, thermal, and shear feedback. A drawback is that the secondary components are bulky and heavy.

4. Noninvasive Robots in Imaged-Guided Therapy

4.1. Imaged-Guided Radiosurgery

A large number of modern non-invasive medical procedures were difficult to perform before the advent of positron emission tomography (PET), CT, MRI, and ultrasound; delicate or critical areas such as the brain and heart were simply impossible to image and therefore to operate on, and working non-invasively further exacerbated the loss of visual information. Surgeons are now able to view the operating site in real time and track the progress of their instruments, and specially adapted robotic systems can be designed to perform intricate and accurate functions using the data returned from the imaging system for control. The CyberKnife, manufactured by Accuray Inc., was first developed at Stanford University for neurosurgery with CT/X-ray guidance in the early 1990s (see Figure 7) [89]. It consists of a linear accelerator on a 6-DOF robotic arm and can fire precise radiosurgical beams with an accuracy of better than 2 mm without the use of a stereotactic frame. As no rigidly fixed frame of reference is used, it has been possible to extend the range of application of the system to other areas such as the chest, abdomen, and pelvis. In 1999, the CyberKnife became the first FDA-approved autonomous robotic system for radiosurgery [15]. Concurrent with the rise in sophistication of imaging and medical robotics, a much greater incidence of non-invasive image-guided robotics applications has become apparent in the literature since the turn of the century.

4.2. Imaged-Guided High-Intensity Focused Ultrasound

Non-invasive thermotherapy using high-intensity focused ultrasound (HIFU) has received increasing interest in the past couple of decades [90–93]. Treatments using HIFU include liver tumor ablation [90], arterial occlusion [91], coagulation of benign breast fibroadenomas [92], and brain surgery [93]. HIFU surgery is considered an alternative to open surgery for ablating certain cancerous tissues, for example, hepatocellular carcinoma (HCC) [90], that are not responsive to other therapeutic strategies such as radiotherapy and chemotherapy. Compared to open surgeries, which require cutting through the overlying tissues, HIFU surgeries do not need general anesthesia, offer shorter recovery times and hospital stays and a reduced risk of infection, and therefore significantly reduce the overall cost.

In HIFU surgeries, ultrasound beams penetrate through soft tissues and are focused at the target to destroy deep-seated tumor tissue by regional heating without overheating or damaging the overlying or surrounding tissues. The ultrasound beams can be focused using self-focusing spherical transducers, lenses, or reflectors [94]. Focusing can also be achieved with a phased array transducer, in which the elements are individually driven by excitation signals with appropriate phase differences (a minimal version of this phase computation is sketched below). The focus can be very precise and small (on the order of 1 mm). Multiple sonications can be used to cover the entire desired target volume, either by steering the focus through control of the electrical signal phases applied to each element of the transducer array, by properly positioning the transducer with a robotic manipulator, or both.
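A minimal sketch of the phase (time-delay) computation behind electronic focusing is given below: each element is delayed so that all wavefronts arrive at the focal point simultaneously. The linear array geometry, element pitch, drive frequency, and speed of sound are illustrative assumptions and not the parameters of any cited clinical system.

```python
import numpy as np

def focusing_delays(element_xyz, focus_xyz, c=1540.0):
    """Per-element time delays (s) that focus a phased array at focus_xyz.

    Elements farther from the focus fire earlier, so delays are measured
    relative to the farthest element. c is the speed of sound in tissue (m/s).
    """
    dist = np.linalg.norm(element_xyz - focus_xyz, axis=1)
    return (dist.max() - dist) / c

# Illustrative 16-element linear array (0.5 mm pitch) focused 40 mm deep, 5 mm off axis.
x = (np.arange(16) - 7.5) * 0.5e-3
elements = np.c_[x, np.zeros(16), np.zeros(16)]
delays = focusing_delays(elements, np.array([5e-3, 0.0, 40e-3]))
phases_deg = np.degrees(2 * np.pi * 1.0e6 * delays) % 360   # at a 1 MHz drive frequency
print(np.round(phases_deg, 1))
```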

The use of HIFU for the treatment of hemorrhage has been proposed to save lives in emergency situations, such as in the prehospital phase, early in the hospital phase, or on the battlefield [23, 24]. In these applications, the applicators, which comprised ultrasound imaging probes and phased array transducers, were used both for detecting the hemorrhage location and for cauterizing the blood vessels from the skin surface. The imaging system was Doppler-based, allowing real-time visualization of the internal bleeding during HIFU sonications. The specification of the HIFU transducer depends on the target site, size, and temperature required. In [24], it was demonstrated that an acoustic intensity on the order of 1500 W/cm² is required to cauterize the blood vessels and stop the bleeding; in this case, a HIFU transducer with a 74 mm crystal aperture and 23 annular elements operating at 2.2 MHz was used.

4.3. Robots for HIFU Therapies

Sophisticated computer-controlled multichannel HIFU systems have a large number of degrees of freedom to steer the focal point over a wide region and often require a large number of individual power amplifiers and transducer elements, making the systems nonportable [95]. This adds to the challenge of using these systems on the battlefield or in other emergency situations.

Portable manipulator systems incorporating HIFU for hemorrhage detection and treatment were proposed by Alvarado et al. and Seip et al. [17, 18]. As shown in Figure 8, both the HIFU and ultrasound diagnostic probes were attached to the end effectors of the manipulators, which were mounted on a life support for trauma and transport (LSTAT). The focus of the ultrasound beams emitted by the HIFU transducer elements could be steered along the propagation direction by controlling the individual signal phases from the HIFU amplifiers.

In [18], the transducer employed has 20 concentric ring elements operating at 1 MHz. The manipulator system has 6 DOF, enough to provide the minimum set of controllable motions while keeping the weight and complexity at a minimum. Operation of this system verified that targeted spots shown on the ultrasound image of biological tissue could be thermally ablated after 60 s of sonication at an acoustic power of 300 W. In the system developed in [17], the end effector is detachable from the manipulator; the manipulator was designed to place, release, and later retrieve the end effector on the patient. The applicator was scanned, with fine mechanical motion, over the surface of the patient’s skin during treatment. This design significantly improved positioning accuracy and repeatability, as applicator registration could be better maintained with the low-profile, low-weight end effector when the injured soldier was moved over rough terrain. All of the non-invasive diagnoses and treatments were performed by the end effector under local or remote control during transport of the patient. Another advantage of the detachable design is that the robot manipulator can be used for other applications while treatment is in progress; for example, it can provide video feedback of the procedure or manipulate another medical device if necessary.

5. Conclusion

Modern robotics has been applied to facilitate complex medical interventions, including surgeries. Robot-assisted surgical platforms have evolved tremendously from the pre-electronics era to make medical procedures safer, faster, more reliable, and more comprehensive. The use of robotic manipulators in operating rooms is becoming ever more justified by recent advancements in mechanical design, control, and computer programming. Roboticists have also worked to improve robots’ capabilities through adaptation, using sensory information to respond to changing conditions, and autonomy. Another advantage is that robots can be designed with countless morphologies to suit any specific operational topology; surgeons, by contrast, are comparable to general-purpose machines, applicable to many procedures but ideal for none. In addition to the historical and technological advancements in robotics, several surgical trends have also affected the focus of research; primary factors include the increasing emphasis on minimally invasive and non-invasive techniques and the availability of 3D imaging data. Despite the aforementioned positive aspects, there are still several challenges facing medical robotics. One of the primary obstacles to the application of any robotic system is its inability to actively process diverse sources of information, perform qualitative reasoning, and exercise meaningful judgment. This is a problem of artificial intelligence and is one of the main arguments against autonomous surgery at this time.

Within medicine, the application of haptics is fairly limited; however, with the move towards less autonomous systems, the development of advanced haptic feedback for teleoperated medical robotics and for haptic endoscopes and devices has become more prevalent. Most of the current research in this area focuses on bringing haptic feedback to medical tools to aid in operations, especially in MIS. Haptic systems have also become more widely used in surgical training programs to simulate tool behavior for medical trainees. Finally, advances in multiple imaging modalities allow noninvasive radiosurgery and HIFU ablation treatments to be delivered with real-time image guidance.

Acknowledgment

This work was supported in part by the Japan Society for the Promotion of Science.