Clinical Study | Open Access
F. López-Mir, V. Naranjo, J. J. Fuertes, M. Alcañiz, J. Bueno, E. Pareja, "Design and Validation of an Augmented Reality System for Laparoscopic Surgery in a Real Environment", BioMed Research International, vol. 2013, Article ID 758491, 12 pages, 2013. https://doi.org/10.1155/2013/758491
Design and Validation of an Augmented Reality System for Laparoscopic Surgery in a Real Environment
Purpose. This work presents the protocol followed in the development and validation of an augmented reality system installed in an operating theatre to help surgeons with trocar placement during laparoscopic surgery. The purpose of this validation is to demonstrate the improvements that such a system can provide to the field of medicine, particularly surgery. Method. Two experiments, noninvasive for both the patient and the surgeon, were designed. In one of these experiments the augmented reality system was used; the other served as the control experiment, in which the system was not used. The operation selected for all cases was cholecystectomy, owing to its low degree of complexity and of complications before, during, and after surgery. The French technique was used for trocar placement, but the results can be extrapolated to any other technique and operation. Results and Conclusion. Four clinicians took part, and ninety-six measurements were obtained from twenty-four patients (randomly assigned to each experiment). The final results show improvements in accuracy and variability of 33% and 63%, respectively, compared to traditional methods, demonstrating that the use of an augmented reality system offers advantages for trocar placement in laparoscopic surgery.
Laparoscopic surgery has proven to be an alternative to traditional open surgery, since smaller incisions are made in the abdomen of the patient. The laparoscopic camera and the different endoscopic instruments are introduced through trocars, the hollow cylindrical instruments that are placed into these incisions. Thanks to these smaller incisions, this surgery offers many advantages to the patient, such as a lower chance of infection and fewer subsequent operations to repair the abdominal muscle. Consequently, the recovery of the patient is faster both physically and psychologically, which means a lower postoperative cost for the hospital.
The main drawbacks of laparoscopic surgery in contrast to open surgery are the lack of direct vision, the need for hand-eye coordination, and the lack of tactile feedback to the surgeon. Another problem is related to trocar placement, since improper placement may result in more invasive surgery. Currently, the incisions are made by palpation, based on the experience and skill of the surgeon. The improper placement of trocars in an operation, such as in lymph node dissection in the hepatoduodenal ligament, can complicate the operation. In these cases, a relocation of the trocar might be necessary (and more incisions than strictly necessary will be made), thereby limiting the advantages of laparoscopic surgery mentioned above.
Augmented reality (AR) is a 3D computer vision technique characterized by the real-time fusion of virtual elements onto a real space. Currently, augmented reality offers enormous potential in many fields, such as education, simulation, architecture, advertising, navigation devices, medicine, and rehabilitation. Surgery is the branch of medicine where augmented reality has the most potential for application, because it can provide surgeons with preoperative information (magnetic resonance imaging or MRI, radiography, 3D reconstructions, etc.) in the same place and at the same time that they are operating. Thus, some of the drawbacks previously cited are alleviated [5, 6].
A taxonomy of augmented reality systems in image-guided surgery has been proposed in the literature. That work compares and analyzes several systems which use augmented reality technology in surgical applications. The analysis is based on the type of input data, the visualization format, and the way in which data is displayed in the operating theatre. The objectives of this comparison are to establish a syntax for defining a system of these characteristics and to show the principal components of an AR system for image-guided applications. In our case, following the suggested analysis, three components have been chosen.
(i) Specific data of the patient: our system uses MRI of the patient and generates a 3D model of internal structures.
(ii) A visualization format based on color coding for anatomical structures: transparency has been used in the models to give more realistic depth to the virtual model.
(iii) A full HD monitor for displaying data.
The main limitation of AR systems is the registration technique employed. In our case, the registration and fusion are performed between a 3D volume obtained from the segmented magnetic resonance images of the patient and the real-time image recorded by a webcam placed in the operating theater, specifically above the patient’s abdomen (Figure 1).
Some authors have developed techniques to improve and automate the preoperative placement of trocars based on 3D information extracted from computed tomography (CT) images or MRI; the surgeons must then remember this information once they are in the operating theatre. One work proposes an optimal access system with virtual endoscopic views, simulated with a phantom. In [9, 10], the problem is addressed in image-guided surgery, and trocar placement is optimized from a robotic point of view; the validation is performed on animals. Another system requires the use of fiducials that have to be in the same position as when the CT was acquired; in addition, the position and orientation of the patient have to be the same in the operating theater.
Other authors deal with the problem in the operating theater similarly to the method presented in this work, but they focus on navigation during the intervention using the image that the endoscopic camera provides. In one approach, registration with fiducials is carried out to track the camera; these fiducials must be placed over the patient in the same positions in the operating theater as when the CT is acquired. In another, 3D information is merged into the laparoscopic video. In a further work, the validation was done on animals, and the registration and fusion processes were done manually thanks to the surgeon’s anatomical knowledge. Finally, one system uses a head-mounted display, and its validation was carried out on a commercial phantom.
The experiments carried out in this work were performed in laparoscopic cholecystectomies. This type of intervention is a common solution for diseases such as symptomatic gallstones. It is a common operation with a low probability of pre- and postoperative complications. The placement of trocars is usually performed using either the French or the American technique. The choice of one technique or the other does not determine the outcome of the experiments, whose results can be extrapolated from one to the other. For our experiments, the surgeons chose the French technique because they are accustomed to using it. Both techniques are based on placing four abdominal trocars. Three of them are placed in the same positions in both cases; the fourth trocar is placed in the area below the sternum in the American technique and on the side opposite the liver in the French technique. In both cases, the surgeon draws four marks with a biocompatible pen, taking into account external anatomical references. These marks serve as the initial references for where to make the final incisions. The first trocar (which is inserted with a Veress needle or Hasson cannula and is different from the others) is always located at the position marked with the pen one centimeter above the navel, after an incision of about 10 mm is made with the scalpel. Through this trocar, the pneumoperitoneum technique is performed, and the abdominal cavity is deformed. Subsequently, the endoscopic camera is inserted through this same trocar to visualize the abdominal cavity (keeping the insufflator hose connected to maintain the pneumoperitoneum throughout the entire surgery). The other three incisions are made with the internal vision of the camera and palpation, correcting the position of the marks made with the pen.
Through two of the incisions, the primary surgeon inserts the surgical instruments (scalpel, forceps, scissors, etc.); the other incision is used by the secondary surgeon according to the primary surgeon’s instructions.
The goodness of our system using augmented reality in the operating theatre was determined by measuring the precision offered by the system compared to not using it. Four distances relating to the four incisions made in the patient were obtained. In this work, the position of the incision is equal to the position of the pen mark plus an offset (1). The measured offset is due to different factors: the displacements and deformities produced by the pneumoperitoneum technique, the distinctive features of the internal anatomy of the patient that the surgeon does not notice at the time of making the marks with the pen, and the experience and skill of the surgeon (2).
A further term accounts for any error not captured by the other variables, for example, operating theatre characteristics (lighting, position of the patient on the stretcher). When an augmented reality system is used, the offset may be decomposed as in (3) (similar to (2), but with three new corrections). The augmented reality system introduces an error due to the applied registration method, which is related to the precision offered by the AR system (Section 2.2.1). There is also another error related to the accuracy of the segmentation procedure used to build the 3D model that is projected onto the patient. In our experiments, this segmentation was done beforehand by an expert and then reviewed by a second expert; this leads us to conclude that the segmentation error can be considered zero, or equal to the pixel resolution. The remaining term is analogous to its counterpart in (2) (other errors, such as the manual alignment of the marker, are also included in this distance). In any case, the hypothesis of this work is that the errors introduced by the augmented reality system will be compensated by the global improvement in the offset.
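The decomposition described above can be sketched in the following notation (the symbols below are our own labels for the offsets named in the text, since the original equations are not reproduced here):

```latex
\begin{aligned}
x_{\text{incision}} &= x_{\text{mark}} + d, &&(1)\\
d &= d_{\text{pneu}} + d_{\text{anat}} + d_{\text{skill}} + d_{\text{err}}, &&(2)\\
d &= d_{\text{pneu}} + d_{\text{anat}} + d_{\text{skill}}
    + d_{\text{reg}} + d_{\text{seg}} + d'_{\text{err}}, &&(3)
\end{aligned}
```

where $d_{\text{pneu}}$, $d_{\text{anat}}$, and $d_{\text{skill}}$ are the pneumoperitoneum, anatomy, and surgeon-skill contributions, $d_{\text{reg}}$ and $d_{\text{seg}}$ are the registration and segmentation errors introduced by the AR system, and $d_{\text{err}}$, $d'_{\text{err}}$ collect the remaining errors.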
The purpose of this work is to measure this offset for the four trocars, for which both the incisions where the trocars are inserted and the marks made with the pen are known. These distances are measured in some patients when the system is used and in other patients when it is not. The goal is to verify whether the use of the augmented reality system minimizes the offset.
Several authors have attempted to measure the error caused by an augmented reality system in an operating theater. Most validate it on phantoms and in the maxillofacial and neurosurgery fields. The high resolution of these images and the rigidity of these structures mean that this error can be explained mainly (but not exclusively) by the registration error associated with augmented reality algorithms [19–26]. This error is measured qualitatively or quantitatively as being on the order of several millimeters. Other authors validate their algorithms in abdominal operations. In one work, an AR system is applied to a liver phantom, limiting the measured error. Another AR system is validated for liver surgery on pigs, where registration with 4 fiducials is used to measure its accuracy.
The rest of this paper is organized as follows. Section 2 is divided into two parts: the first explains the AR system, and the second describes the protocol of the experiments that were carried out. Section 3 presents the results, and Section 4 presents conclusions and discussion. The primary contributions of this paper are the design, implementation, and validation of an augmented reality system, the ergonomic study of the visualization devices, and the definition of a protocol for its validation on real patients in an operating theatre.
2.1. Augmented Reality System
2.1.1. Virtual 3D Model
When the MR images are acquired, the patient must lie on a stretcher with his/her back straight and centered on both sides in order to calculate the position and orientation relative to an initial coordinate system. A virtual model of the patient’s organs is extracted from these images using digital image processing techniques, especially our own image segmentation algorithms and others developed elsewhere. With this model, the clinician selects the patient’s navel in the MRI images to establish the origin of the 3D space at that point in order to perform the registration with the real-time image (4). The new coordinate system is obtained by translating the initial coordinate system so that the origin lies at the center of the patient’s navel (Figure 2).
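The change of origin is a simple translation of the model coordinates to the navel point. A minimal sketch is given below; the coordinate values are illustrative only, not taken from the study.

```python
import numpy as np

def recenter_to_navel(points, navel):
    """Translate 3D model coordinates so that the navel becomes the origin.

    points: (N, 3) array of model vertices in the MRI coordinate system.
    navel:  (3,) coordinates of the navel centre in the same system.
    """
    return np.asarray(points, dtype=float) - np.asarray(navel, dtype=float)

# Illustrative values only
vertices = np.array([[120.0, 80.0, 40.0], [121.0, 81.0, 40.0]])
navel = np.array([118.0, 79.0, 38.0])
recentred = recenter_to_navel(vertices, navel)  # first vertex -> [2., 1., 2.]
```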
2.1.2. Camera Calibration and Real-Time Image
The real-time images are recorded with a camera that shows the area of interest throughout the entire surgery. Initially, the intrinsic parameters of the camera are obtained to calibrate it. To do this, several captures of a planar checkerboard pattern are needed (see Figure 3), with a different pose in each calibration image. Zhang’s method is used for the calibration step, taking the correspondences between 2D image points and 3D scene points over a number of images.
The 3 × 3 intrinsic matrix K and the distortion vector of the camera have the following form:

K = \begin{pmatrix} f & s & c_x \\ 0 & af & c_y \\ 0 & 0 & 1 \end{pmatrix}, \qquad d = (k_1, k_2, p_1, p_2),

where f is the focal length, (c_x, c_y) is the optical center of the camera, a is the aspect ratio, s is the camera skew between the x- and y-axes, k_1 and k_2 are the radial distortion parameters, and p_1 and p_2 are the tangential distortion parameters. The values of these parameters obtained in our camera calibration are expressed in millimeters.
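The structure of the intrinsic matrix and the radial/tangential distortion model can be sketched as follows; the parameter values are illustrative, not those of the study’s camera.

```python
import numpy as np

def intrinsic_matrix(f, aspect, skew, cx, cy):
    """Pinhole intrinsic matrix as parameterised in the text:
    focal length f, aspect ratio a, skew s, optical centre (cx, cy)."""
    return np.array([[f, skew, cx],
                     [0.0, aspect * f, cy],
                     [0.0, 0.0, 1.0]])

def distort(x, y, k1, k2, p1, p2):
    """Apply the radial (k1, k2) and tangential (p1, p2) distortion model
    to normalised image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# Illustrative parameter values (not the study's calibrated values)
K = intrinsic_matrix(f=800.0, aspect=1.0, skew=0.0, cx=320.0, cy=240.0)
xd, yd = distort(0.1, 0.2, k1=-0.2, k2=0.05, p1=0.001, p2=0.001)
```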
Then, a hexadecimal marker is placed on the navel, centered and oriented as shown in Figure 4. It is advisable to keep the camera parallel to the patient’s trunk in order to improve the accuracy of the system, but it is not mandatory (as explained in Section 2.1.3), because the system takes into account the inclination between the patient and the camera position. The next steps are the detection of the hexadecimal marker and the registration and fusion of the real image with the virtual model of the patient.
2.1.3. Registration, Fusion, and Hexadecimal Mark Detection
A binary hexadecimal-code marker of 8.45 × 8.45 centimeters is used in this step. First, the captured RGB image is converted to a binary image, and the edge of the marker is detected using an adaptive threshold algorithm based on the technique of Pintaric. Basically, “this technique evaluates the mean pixel luminance over a thresholding region of interest, which is defined as a bounding rectangle around the marker axis-aligned corner vertices in the screen-space.”
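The mean-luminance thresholding described in that quote can be sketched in a few lines; the region-of-interest coordinates and the toy image are our own illustration, not the system’s actual implementation.

```python
import numpy as np

def adaptive_marker_threshold(gray, roi):
    """Binarise a marker region using the mean pixel luminance over its
    axis-aligned bounding rectangle (the strategy described by Pintaric).

    gray: 2D uint8 luminance image.
    roi:  (x0, y0, x1, y1) bounding box around the marker in screen space.
    """
    x0, y0, x1, y1 = roi
    patch = gray[y0:y1, x0:x1].astype(float)
    thresh = patch.mean()                     # mean luminance over the ROI
    return (patch > thresh).astype(np.uint8)  # 1 = white, 0 = black

# Toy example: a half-dark, half-bright patch separates cleanly at the mean
img = np.zeros((10, 10), np.uint8)
img[:, 5:] = 200
binary = adaptive_marker_threshold(img, (0, 0, 10, 10))
```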
Afterwards, the relative marker position and orientation with respect to the camera (viewpoint) can be estimated from a planar structure when the internal parameters are known, in order to apply them to the virtual model. First, a 3D/2D homography matrix must be calculated to later obtain the projection matrix, as detailed in the literature.
A 3D/2D correspondence includes a 3D point M and a 2D pixel point m, represented in homogeneous coordinates as M = (X, Y, Z, 1)^T and m = (u, v, 1)^T, respectively. They are related by the projection matrix as follows:

s m = K [R | t] M,

where R is a 3 × 3 rotation matrix, t is the translation vector of the camera, and s is the homogeneous scale factor, which depends on M. Specifically, considering the plane Z = 0, the expression of the homography that maps a point M' = (X, Y, 1)^T on this plane onto its corresponding 2D point m under the perspective projection can be recovered by writing

s m = K [r_1  r_2  t] M',

where r_1, r_2, and r_3 are the columns of the rotation matrix R. Thus, M' is related to m by a 3 × 3 matrix H = K [r_1  r_2  t], which is called the homography matrix.
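The relation between the plane-induced homography and the camera pose can be sketched numerically: build H = K [r1 r2 t] from a known pose, then recover the pose back from H (up to scale). The pose values below are illustrative only.

```python
import numpy as np

def homography_from_pose(K, R, t):
    """H = K [r1 r2 t]: maps points (X, Y, 1) on the marker plane Z = 0
    to homogeneous image coordinates."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def pose_from_homography(K, H):
    """Recover r1, r2, t from H = K [r1 r2 t] up to scale; r3 = r1 x r2."""
    M = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(M[:, 0])      # scale so that r1 is a unit vector
    r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
    R = np.column_stack((r1, r2, np.cross(r1, r2)))
    return R, t

# Round trip with an illustrative pose (rotation about the optical axis)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.1, -0.2, 2.0])
H = homography_from_pose(K, R, t)
R2, t2 = pose_from_homography(K, H)
```

In practice the recovered r1 and r2 are not exactly orthonormal because of noise, which is why the pose is subsequently refined by nonlinear minimization, as the text explains next.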
Generally, the patient’s pose can be refined by nonlinear minimization, since the previous processes are sensitive to noise, which produces a lack of precision and the “jitter” phenomenon.
In this case, the sum of the reprojection errors is minimized, which is the squared distance between the projections of the 3D points and their measured 2D coordinates. We can therefore write

min_{R,t} Σ_i ‖ m_i − m̂(K, R, t; M_i) ‖²,

where m̂(K, R, t; M_i) is the projection of the 3D point M_i. This equation is solved using the Levenberg-Marquardt (LM) algorithm, which provides a solution to the nonlinear least-squares minimization problem.
In this way, the 3D virtual model and the patient’s image can be registered and fused. From that moment on, it is important for the patient to maintain his/her position to avoid possible registration errors.
2.2.1. Error Introduced by the AR System
Before the system was validated by the surgeons in the hospital, the following experiment was performed to test how the AR module works and to determine its accuracy. Initially, 512 × 512 CT images with a spacing resolution of 0.488 × 0.488 × 0.625 mm per voxel were acquired of a jar using a GE LightSpeed VCT-5124069 machine. The jar used was a 500 mL DURAN GLS 80 with a diameter of 101 mm. The 3D virtual model was obtained by applying a region growing algorithm, taking the voxels between thresholds of 150 and 2200 Hounsfield units (HU).
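A region growing segmentation of this kind can be sketched as a breadth-first flood fill that accepts connected voxels within the HU thresholds; the seed point and toy volume below are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, lo=150, hi=2200):
    """Grow a region from `seed`, accepting 6-connected voxels whose
    intensity (Hounsfield units) lies between the thresholds lo and hi."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (lo <= volume[seed] <= hi):
        return mask
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and lo <= volume[n] <= hi:
                mask[n] = True
                queue.append(n)
    return mask

# Toy volume: a bright 3x3x3 cube (1000 HU) inside air (0 HU)
vol = np.zeros((5, 5, 5))
vol[1:4, 1:4, 1:4] = 1000.0
mask = region_grow(vol, seed=(2, 2, 2))  # selects the 27 cube voxels
```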
The camera was placed at a 90° angle relative to the real jar. Then, the middle point of the jar was selected in the CT images as the new origin, and the marker was centered on the jar. The registration and fusion were performed at that moment, taking an image of the real jar and the virtual jar to validate the system’s accuracy. A full graphic example of the experiment is shown in Figure 5.
Different camera positions were tested and measurements taken. Finally, it was found that if the camera was placed at a 90° angle relative to the real jar, the system introduced an error of 3 pixels (the minimum of all cases). The real width of the jar and the image width provided by the camera were known, so a direct correspondence was made, and a measurement in millimeters was obtained. As mentioned in the introduction, augmented reality systems introduce an error in the virtual pose calculation with respect to the real space. This error is irrelevant in most of the domains where an augmented reality system is used; however, it is of great importance in medicine. The main causes of this error (but not the only ones) are the camera (its internal configuration and the different lighting conditions that produce different behaviors), the accuracy of the registration algorithms, and the accuracy of the segmentation methods. Since it is difficult to report each of these errors separately, this measure is usually given as a whole, and in our case it is defined as the error introduced by the AR system.
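The pixel-to-millimeter conversion used here is a direct proportionality through the known jar width; the jar’s width in pixels below (404) is an assumed illustrative value, not reported in the text.

```python
def pixels_to_mm(error_px, object_width_mm, object_width_px):
    """Convert an error measured in pixels to millimetres using a known
    object width (here, the jar) as the scale reference."""
    return error_px * object_width_mm / object_width_px

# Illustrative: a 3-pixel error when a 101 mm jar spans 404 pixels
err_mm = pixels_to_mm(3, 101.0, 404.0)  # 0.75 mm
```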
2.2.2. Real Patient Experiments
We carried out two experiments on real patients. All the documents required for approval of the experiments were presented to the research ethics committee of the hospital. This documentation included the following.
(i) A certificate of commitment to the ethical principles of clinical trials: it covers the fundamental human rights and the ethical principles related to biomedical research on humans of the Helsinki and Tokyo declarations.
(ii) A certificate of commitment from the researchers taking part in these experiments: in this certificate, the researchers agree to follow the rules and the protocol approved by the research ethics committee of the hospital.
(iii) Informed consent, which is delivered to the patient: this document explains the purpose of the study, the procedure, confidentiality, the cost, and the right to leave the study at any time without the final treatment being affected.
(iv) A manual of the developed research: this document specifies the sample selection, the protocol used for the randomization of the sample into the two experiments, the protocol of the whole experiment, and the collection and analysis of the data.
(v) A validation and data collection protocol: this document contains the templates and protocols necessary for the data collection of these studies.
(vi) A request to the hospital committee for approval of the protocol: this document summarizes all the information explained above and is mandatory in order to apply for approval of the experiments.
Initially, the experiments were to be performed through segmentation of gadolinium-contrast MR images. The use of this agent improves the image contrast and facilitates the segmentation of the different organs needed to extract the patient’s 3D model. Even though it is safe, there is always the possibility of a small allergic reaction in the patient. For this reason, and since this contrast agent is not commonly used for this type of pathology, the committee rejected its use in the MRI acquisition. This change made the segmentation of the abdominal organs more difficult, but it did not affect the results or conclusions of the experiments. After this change was made, the clinical research committee approved the study, and the experiments were carried out.
In the first experiment, the augmented reality system was not used. The selected sample consisted of 12 patients chosen randomly (eight women and four men). The following protocol was used.
(i) Before the operation (the first time the surgeon visits the patient), the informed consent approved by the research ethics committee of the hospital and general information related to the MRI exam are given to the patient.
(ii) On the day of the surgery, the patient goes to the presurgery room and then passes to the operating theater.
(iii) The surgeon performs the usual protocol until the operation ends. This protocol can be summarized as follows.
(1) First, with a biocompatible pen, the surgeon marks the points where he/she will make the four incisions through which the trocars will be inserted (Figure 6, left).
(2) Second, the surgeon makes the four incisions based on his/her skill, experience, and traditional palpation techniques, as explained in Section 1 (Figure 6, right).
(3) When the four trocars are placed, the surgeon begins the operation according to the specific protocol for this type of surgery.
(4) Once the gallbladder has been extracted and the four incisions are sutured, the surgeon measures the four distances (Figure 6, right). These distances measure the difference between the initial pen marks and the real incisions or, in other words, the correction that has to be made for the pneumoperitoneum technique, the anatomical differences between patients, and the skill of the surgeon.
(5) Finally, the four incisions are bandaged.
(iv) The surgery ends, and the patient leaves the operating theatre and goes to the postoperation room, where he/she wakes up and continues with the recovery protocol.
In the second experiment, the augmented reality system was used. The system hardware, as shown in Figure 7 (left), is composed of a display device and a camera. The camera captures the image in real time in order to register and merge this sequence with the 3D virtual model of the patient. The display device is responsible for showing the fusion of the video and the virtual object. In this experiment, different display devices were evaluated. Table 1 summarizes the advantages and disadvantages that the different displays offer; this information was obtained through a usability study taking into account operating theatre restrictions.
We chose a 23-inch full HD monitor as the display device based on the criteria of minimal interaction with the patient, minimal discomfort to the surgeon, and low cost. A dual-core i3 computer with an Nvidia GT 240 graphics card was used. The screen and the camera were mounted on a stand, as shown in Figure 7 (left). The stretcher with the patient was positioned between the stand and the surgeon. The actual image of the patient’s abdomen was captured by the camera, which was positioned perpendicular to the patient, as shown in Figure 7.
The sample selected for this experiment also consisted of 12 patients chosen randomly (seven men and five women). The protocol used was similar to the one used in Experiment 1.
(i) Before the operation, the same informed consent as in the first experiment is given to the patient. Then, the MRI is acquired.
(ii) Using different segmentation algorithms, a 3D model of the patient’s organs is obtained from the MR images. Specifically, the liver and kidneys were segmented in all cases; in some cases the gallbladder and aorta were also extracted (at the surgeon’s request). The segmentation tool was made ad hoc [28, 29].
(iii) On the day of the surgery, all the steps were similar to the first experiment, with only one difference: when the surgeon marks with the pen (Figure 8), he/she uses the AR system, which registers and merges the 3D model with the real-time image (Figure 9). The result of this process is shown on the screen directly in front of the surgeon.
(iv) Once the 4 marks are drawn, the system is removed, and the surgeon continues the usual protocol until the surgery ends.
(v) Finally, the same four distances as in the first experiment are measured, and the patient goes to the postoperation room to wake up and continue the recovery protocol.
Ninety-six distances/measures were obtained (four per patient), half of them using the system and the other half without it. The protocol described in both experiments was followed without major problems. The usual procedure for cholecystectomy surgery was modified only when the four distances were measured, after the operation had been completed and before the incisions were bandaged. If any unexpected complication had appeared, these distances would not have been measured in that patient. During the twenty-four surgeries, no complications occurred, so the measures were taken in all cases. Table 2 shows the mean and standard deviation of the four distances measured in Experiment 1 on twelve patients. In this case, the traditional procedures (palpation and the skill of the surgeon) were used in the placement of trocars.
Table 3 shows the mean and standard deviation of the twelve cases in Experiment 2, that is, when the augmented reality system was used (a new procedure was added to the traditional protocol).
The Mann-Whitney test was used to test the null hypothesis of equal medians at the default 5% significance level for the distance measures of both experiments. A p value higher than 0.05 indicates a nonsignificant difference, and therefore the corresponding measures can be pooled. In Experiment 1, six pairwise p values were computed between the four distances. When these values are analyzed, one distance shows significant differences with respect to the other three. However, the medians of the other three measurements show nonsignificant changes, and it can be assumed that these three distances have the same distribution. The same conclusions hold for the Experiment 2 distance measures.
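This pooling test can be sketched with SciPy’s Mann-Whitney U implementation; the distance samples below are invented for illustration and are not the study’s data.

```python
from scipy.stats import mannwhitneyu

def can_pool(sample_a, sample_b, alpha=0.05):
    """Mann-Whitney U test of equal medians; a p value above alpha means the
    difference is nonsignificant, so the two distance samples may be pooled."""
    p = mannwhitneyu(sample_a, sample_b, alternative="two-sided").pvalue
    return p, p > alpha

# Illustrative distances (mm), not the study's data
d2 = [4.0, 5.5, 3.8, 6.1, 5.0, 4.4]
d3 = [4.2, 5.1, 3.9, 6.0, 5.2, 4.6]
p, poolable = can_pool(d2, d3)  # similar samples: nonsignificant difference
```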
Table 4 shows the average of the three distances as a global measure for both experiments. It represents the required correction when an augmented reality system is used and when it is not used.
This paper shows the protocol followed for the validation of an augmented reality system to help surgeons in the placement of trocars on patients in a real environment. First, the documentation normally required for patient involvement was presented to the hospital. The difference from other experiments, where new drugs or therapies are tested, lies in the lack of impact that this process has on the patient or on the clinical staff, because it does not introduce any additional risk to the surgery and no protocols are changed.
The hypothesis of this paper is that an augmented reality system can improve the placement of trocars in laparoscopic surgery. The results confirm this hypothesis since the average accuracy improved and its variability decreased when the AR system was used.
Experiment 2 of our work was validated with a 3D model extracted from MR images. The reason for this choice is that MR images are acquired in the normal protocol for laparoscopic cholecystectomies at the hospital where we performed the experiments. However, the 3D model can also be segmented from other types of images, such as CT images. We used the navel as the anatomical reference structure in the registration procedure of our AR system, but other external references may be used for other surgeries, provided they are visible in the CT/MR images.
When Tables 2 and 3 are analyzed, the first trocar introduced in the patient shows a null improvement. This result is consistent, since the surgeon has no additional information (i.e., no laparoscopic camera view) with which to change the position between the moment the first mark is drawn with the pen and the moment the first incision is made, as explained in the introduction; this trocar is usually located one centimeter above the navel, so a minimal correction is made with or without the system.
As shown in Table 4, the system improved the trocar placement accuracy by 33%, while variability was reduced by 65%. The use of an augmented reality system can be helpful in complex situations by providing additional information, where even the internal camera view is not enough for the required accuracy (and more incisions than necessary may be made, as mentioned in Section 1). Another advantage of the augmented reality system is its low cost and its applicability. Once the surgeon has the internal information provided by the laparoscopic camera, which has already been introduced into the patient, he/she has a finite and limited time to make the rest of the incisions. The longer the decision time, the higher the costs and the greater the risks for the patient. The augmented reality system is useful even in the hours before the operation (when the patient is awake and not at risk), making it possible to plan and reduce the time spent on trocar placement in the operating theater. The augmented reality system also has direct application in automating and optimizing trocar placement for guided surgery.
When (2) and (3) are analyzed, the errors introduced by the augmented reality system (registration, segmentation, and residual errors) are much lower than the correction offered by the system. The correction (in absolute value) achieved by the system in this work was on the order of millimeters and larger than the error introduced by our AR algorithms. Since the correction is the consequence of the displacement produced by the pneumoperitoneum technique, the subjectivity of the surgeon, and/or the patient’s anatomical particularities, the system helps to correct the subjectivity and the particularities. The deformity and displacement that the pneumoperitoneum technique produces could be addressed if the 3D model were deformed in the same way as the real deformations, using a biomechanical or predictive model. Since the augmented reality system offers an internal view of the patient’s organs, it is hoped that the system can help to accurately determine the displacement that is currently corrected with the laparoscopic camera, by introducing references that are not visible when the initial marks are made.
It is difficult to make a direct comparison with results in the literature because of the particularities of each scenario, the methods, the surgeries, and the type of “patient” involved in each validation (phantom, animal, or human), as introduced in Section 1. Most authors validate their systems using numerical methods and/or phantoms, but few systems are evaluated in clinical settings with real patients, showing that the integration of augmented reality technology into the clinical environment and workflow is not yet common [9–15, 19–26]. It is often not feasible to evaluate a system based on surgical outcome or its impact on the patient, but it is possible to evaluate these systems indirectly on phantoms and/or in controlled environments. The contribution of our work is that the system is validated and evaluated in a real environment with patients, so the benefits of using an AR system are demonstrated in a more realistic manner.
This work has been supported by the Centro para el Desarrollo Tecnológico Industrial (CDTI) under the project Oncotic (IDI-20101153) and the Hospital Clínica Benidorm (HCB), and partially supported by the Ministry of Education and Science of Spain (TIN2010-20999-C04-01), the project Consolider-C (SEJ2006-14301/PSIC), the CIBER of Physiopathology of Obesity and Nutrition (an initiative of ISCIII), and the Prometheus Excellence Research Program (Generalitat Valenciana, Department of Education, 2008-157). The authors would like to express their gratitude to the Hospital Clínica Benidorm and to the Hospital Universitari i Politècnic La Fe (especially the surgical team) for their participation and involvement in this work.
- A. G. Gordon, P. J. Taylor, and C. Royston, Practical Laparoscopy, Blackwell Scientific, 1993.
- C. K. Rowe, M. W. Pierce, K. C. Tecci et al., “A comparative direct cost analysis of pediatric urologic robot-assisted laparoscopic surgery versus open surgery: could robot-assisted surgery be less expensive?” Journal of Endourology, vol. 26, no. 2, pp. 871–877, 2012.
- M. Feuerstein, Augmented reality in laparoscopic surgery: new concepts for intraoperative multimodal imaging [Ph.D. thesis], Fakultät für Informatik, Technische Universität München, 2007.
- R. T. Azuma, “A survey of augmented reality,” Presence, vol. 6, no. 4, pp. 355–385, 1997.
- E. Samset, D. Schmalstieg, J. V. Sloten et al., “Augmented reality in surgical procedures,” in Human Vision and Electronic Imaging, Proceedings of SPIE, January 2008.
- J. H. Shuhaiber, “Augmented reality in surgery,” Archives of Surgery, vol. 139, no. 2, pp. 170–174, 2004.
- M. Kersten-Oertel, P. Jannin, and D. L. Collins, “DVV: A taxonomy for mixed reality visualization in image guided surgery,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 2, pp. 332–352, 2012.
- A. M. Chiu, D. Dey, M. Drangova, W. D. Boyd, and T. M. Peters, “3-D image guidance for minimally invasive robotic coronary artery bypass,” The Heart Surgery Forum, vol. 3, no. 3, pp. 224–231, 2000.
- J. W. Cannon, J. A. Stoll, S. D. Selha, P. E. Dupont, R. D. Howe, and D. F. Torchiana, “Port placement planning in robot-assisted coronary artery bypass,” IEEE Transactions on Robotics and Automation, vol. 19, no. 5, pp. 912–917, 2003.
- L. Adhami and È. Coste-Manière, “Optimal planning for minimally invasive surgical robots,” IEEE Transactions on Robotics and Automation, vol. 19, no. 5, pp. 854–863, 2003.
- M. Scheuering, A. Schenk, A. Schneider, B. Preim, and G. Greiner, “Intraoperative augmented reality for minimally invasive liver interventions,” in Medical Imaging: Visualization, Image-Guided Procedures and Display, Proceedings of SPIE, pp. 407–417, February 2003.
- C. Bichlmeier, S. M. Heining, M. Feuerstein, and N. Navab, “The virtual mirror: a new interaction paradigm for augmented reality environments,” IEEE Transactions on Medical Imaging, vol. 28, no. 9, pp. 1498–1510, 2009.
- M. Feuerstein, T. Mussack, S. M. Heining, and N. Navab, “Intraoperative laparoscope augmentation for port placement and resection planning in minimally invasive liver resection,” IEEE Transactions on Medical Imaging, vol. 27, no. 3, pp. 355–369, 2008.
- F. Volonte, P. Bucher, F. Pugin et al., “Mixed reality for laparoscopic distal pancreatic resection,” International Journal of Computer Assisted Radiology and Surgery, vol. 5, no. 1, pp. 122–130, 2010.
- V. Ferrari, G. Megali, E. Troia, A. Pietrabissa, and F. Mosca, “A 3-D mixed-reality system for stereoscopic visualization of medical dataset,” IEEE Transactions on Biomedical Engineering, vol. 56, no. 11, pp. 2627–2633, 2009.
- C. K. McSherry, “Cholecystectomy: the gold standard,” The American Journal of Surgery, vol. 158, no. 3, pp. 174–178, 1989.
- C.-K. Kum, E. Eypasch, A. Aljaziri, and H. Troidl, “Randomized comparison of pulmonary function after the 'French' and 'American' techniques of laparoscopic cholecystectomy,” British Journal of Surgery, vol. 83, no. 7, pp. 938–941, 1996.
- F. Martínez-Martínez, M. J. Rupérez, M. A. Lago, F. López-Mir, C. Monserrat, and M. Alcañiz, “Pneumoperitoneum technique simulation in laparoscopic surgery on lamb liver samples and 3D reconstruction,” Studies in Health Technology and Informatics, vol. 18, pp. 348–350, 2010.
- C. Schönfelder, T. Stark, L. Kahrs et al., “Port visualization for laparoscopic surgery: setup and first intraoperative evaluation,” International Journal of Computer Assisted Radiology and Surgery, vol. 3, no. 1, pp. 141–142, 2008.
- D. Simitopoulos and A. Kosaka, “An augmented reality system for surgical navigation,” in Proceedings of the International Conference on Augmented, Virtual Environments and Three-dimensional Imaging, pp. 152–156, May 2001.
- N. Glossop, Z. Wang, C. Wedlake, J. Moore, and T. Peters, “Augmented reality laser projection device for surgery,” Studies in Health Technology and Informatics, vol. 98, pp. 104–110, 2004.
- R. A. Mischkowski, M. J. Zinser, A. C. Kübler, B. Krug, U. Seifert, and J. E. Zöller, “Application of an augmented reality tool for maxillary positioning in orthognathic surgery: a feasibility study,” Journal of Cranio-Maxillofacial Surgery, vol. 34, no. 8, pp. 478–483, 2006.
- B. W. King, L. A. Reisner, M. D. Klein, G. W. Auner, and A. K. Pandya, “Registered, sensor-integrated virtual reality for surgical applications,” in Proceedings of the IEEE Virtual Reality Conference (VR '07), pp. 277–278, Charlotte, NC, USA, March 2007.
- T. Kawamata, H. Iseki, T. Shibasaki et al., “Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note,” Neurosurgery, vol. 50, no. 6, pp. 1393–1397, 2002.
- S. Vogt, A. Khamene, and F. Sauer, “Reality augmentation for medical procedures: system architecture, single camera marker tracking, and system evaluation,” International Journal of Computer Vision, vol. 70, no. 2, pp. 179–190, 2006.
- J. J. Fuertes, F. López-Mir, V. Naranjo, M. Ortega, E. Villanueva, and M. Alcañiz, “Augmented reality system for keyhole surgery: performance and accuracy validation,” in Proceedings of the International Conference on Computer Graphics Theory and Applications (GRAPP '11), pp. 273–279, Algarve, Portugal, March 2011.
- S. Nicolau, L. Soler, D. Mutter, and J. Marescaux, “Augmented reality in laparoscopic surgical oncology,” Surgical Oncology, vol. 20, no. 3, pp. 189–201, 2011.
- F. López-Mir, V. Naranjo, J. Angulo, E. Villanueva, M. Alcañiz, and S. López-Celada, “Aorta segmentation using the watershed algorithm for an augmented reality system in laparoscopic surgery,” in Proceedings of the IEEE International Conference on Image Processing, pp. 2705–2708, Brussels, Belgium, September 2011.
- L. Ibañez, W. Schroeder, L. Ng, and J. Cates, The ITK Software Guide, Kitware, 2005.
- Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
- T. Pintaric, “An adaptive thresholding algorithm for augmented reality toolkit,” in Proceedings of the 2nd IEEE International Augmented Reality Toolkit Workshop (ART '03), 2003.
- J. Martin-Gutierrez, J. L. Saorin, M. Contero, M. Alcañiz, D. Pérez-López, and M. Ortega, “Education: design and validation of an augmented book for spatial abilities development in engineering students,” Computers and Graphics, vol. 34, no. 1, pp. 77–91, 2010.
- R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2nd edition, 2003.
- G. Simon, A. W. Fitzgibbon, and A. Zisserman, “Markerless tracking using planar structures in the scene,” in Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR '00), pp. 120–128, 2000.
- D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” Journal of the Society for Industrial and Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
- F. Martínez-Martínez, M. A. Lago, M. J. Rupérez, and C. Monserrat, “Analysis of several biomechanical models for the simulation of lamb liver behavior using similarity coefficients from medical image,” Computer Methods in Biomechanics and Biomedical Engineering, pp. 1–11, 2012.
Copyright © 2013 F. López-Mir et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.