Journal of Electrical and Computer Engineering
Volume 2016 (2016), Article ID 9358369, 6 pages
Research Article

Augmented Reality for Assistance of Total Knee Replacement

Universidad Militar Nueva Granada, Carrera 11 No. 101-80, Bogotá, Colombia

Received 13 January 2016; Revised 25 May 2016; Accepted 12 June 2016

Academic Editor: Jar Ferr Yang

Copyright © 2016 Castillo Daniel and Olga Ramos. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The aim of this work was to develop a surgical assistance system based on augmented reality to support knee joint replacement and prosthesis implantation. Images of the scene were captured to detect visual markers located on the lateral surface of the patient's leg, allowing the 3D models of the prosthesis and the joint, as well as the tool used by the medical specialist, to be overlaid on the scene. From the marker identification it was possible to compute position and orientation for locating the virtual models, yielding a monitoring system that gives accurate information about the procedure. The system can also be used as a training platform for surgeons, who can practice in a virtual environment instead of performing real surgeries on volunteers or patients. The results show a system that is efficient in terms of the cost-benefit relation, taking into account the materials used for its development; nevertheless, the accuracy of the algorithm decreases as the distance between the markers increases.

1. Introduction

Computer-assisted surgery has improved the accuracy of many types of medical procedures, such as laparoscopy, where the surgeon manipulates robotic arms to perform minimally invasive surgery, using machine vision algorithms to obtain relevant data of the operation area during surgery [1, 2].

It is important to mention that machine vision techniques make it possible to perform surgical procedures based on augmented reality, as can be seen in [3, 4], where image processing is used to locate virtual tools and 3D models of organs, as well as the incisions, fastenings, and sutures necessary for performing the surgery, opening new fields in medical training without real patients [5].

One of the works that applies the above concepts is presented in [6], focused on pancreatic surgery: 3D models represent the pancreas and an adjacent tumor, which the surgeon monitors during the intervention, with some limitations in real-time processing; nevertheless, the impact and importance of this technology for future surgeries and medical training are highlighted.

Similarly, in [7] a work related to oral surgery is presented; it uses a stereoscopic mirror to visualize, on a monitor, the objects used for performing the surgery, with some delays in the visualization due to the position of the mirrors.

The orthopedic field has also benefited from the development of augmented reality technologies, which are an important support in procedures like knee replacement, necessary when the patient suffers excessive wear of cartilage [8]. The cut angles for inserting the corresponding prosthesis must be identified accurately, since a wrong orientation could negatively affect the patient's life through mechanical wear [9, 10].

On the other hand, some studies have questioned the effectiveness of computer-assisted surgical procedures, stating that their cost is very high and that they do not account for deformities or anatomical imperfections [11–13]; it is relevant to mention that the latest developments incorporate magnetic resonance imaging and anatomical studies for each individual patient, increasing the accuracy of virtual reality tools [14].

Other works report results on components of augmented reality systems, seeking methods and applications to improve the performance of these kinds of systems. The work described in [15] presents a methodology to calibrate image acquisition devices in real scenes to identify multiple markers, achieving better recognition accuracy and therefore a better location of the virtual models. In [16] a work is presented that improves the 3D pose calculation of the different objects that compose the real scene through a vertical direction in the image, with the aim of enhancing the detection of direction and orientation of those objects.

Another aspect to consider in augmented reality systems is force feedback through haptic devices, which improves the user experience; the authors of [17] show a design capable of amplifying the forces generated in surgical tools, placing sensors between the user and the tool to increase or adjust the strength perceived through the tool and the force felt by the user. Some developments use reflective markers along with infrared cameras in augmented reality applications to know the position and inclination of the cutting tool, as well as the angles among the hip, knee, and ankle [18, 19].

Given all of the above, this paper presents the design of a surgical assistance system that uses magnetic resonance imaging to build 3D models of the tibia and femur, and augmented reality to display the cut produced, by comparing the inclination angles of the joint with those of the cutting tools; the cost-benefit ratio of the implemented system is highlighted in comparison with others already in operation.

2. Methods and Materials

Due to the high standards required in any operating room, where the physical integrity of the patient is compromised while the surgeons perform a procedure, it is necessary to provide the surgeon with quick and accurate information about the zone being treated. Therefore, the system for telesurgery assistance must be able to deliver the data in real time, so that modifications can be made on the patient at the same time that the data are updated and displayed on the monitor.

Real-time communication was established using a TCP/IP connection to transmit the images captured by a camera at the surgery site. When the client received a picture, it was processed to recognize an augmented reality marker; the virtual objects were then added by determining the relevant angles for display on the graphical interface, with the purpose of providing information for the correct development of the surgery and supporting the location and alignment of the prosthesis. The diagram describing the general operation of the developed system is shown in Figure 1.
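The paper does not specify the wire format of the transmitted frames; a minimal Python sketch of length-prefixed framing for a raw RGB payload (the field layout and sizes are assumptions) could look like this:

```python
import struct

def pack_frame(width, height, rgb_bytes):
    """Prefix a raw RGB payload with its dimensions and byte length so the
    receiver knows exactly how many bytes to read from the TCP stream."""
    header = struct.pack("!HHI", width, height, len(rgb_bytes))
    return header + rgb_bytes

def unpack_frame(data):
    """Recover the image dimensions and payload from a received frame."""
    width, height, length = struct.unpack("!HHI", data[:8])
    payload = data[8:8 + length]
    return width, height, payload
```

Network byte order (`!`) keeps the header unambiguous between sender and receiver regardless of platform.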

Figure 1: General diagram of the work.
2.1. Acquisition and Transmission

To capture the images associated with the augmented reality markers, a webcam with a resolution of 800 × 600 pixels and the RGB color space was used, in a room with controlled illumination.

The transmission of data via the TCP/IP protocol was performed using a binary serialization technique, which consisted in transforming each value of the R (red), G (green), and B (blue) channels into a byte, whose value varied from 0 to 255 depending on the color content of each pixel. Once the server sent the data frame and it was properly received, the host reconstructed the image using the binary values of the three color layers in order to perform the required processing.
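The per-channel serialization described above can be sketched as follows; the row-major ordering and the concatenation of the three color planes are assumptions, since the paper only states that each R, G, and B value becomes one byte:

```python
def serialize_rgb(pixels):
    """Flatten an image (list of rows of (R, G, B) tuples) into three
    concatenated byte planes, one per color channel, values 0..255."""
    r = bytes(p[0] for row in pixels for p in row)
    g = bytes(p[1] for row in pixels for p in row)
    b = bytes(p[2] for row in pixels for p in row)
    return r + g + b

def deserialize_rgb(data, width, height):
    """Rebuild the image from the three concatenated color planes."""
    n = width * height
    r, g, b = data[:n], data[n:2 * n], data[2 * n:3 * n]
    return [[(r[y * width + x], g[y * width + x], b[y * width + x])
             for x in range(width)] for y in range(height)]
```

A round trip through both functions returns the original pixel grid, which is the property the host relies on when reconstructing the frame.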

Figure 2 presents an example of the serialization for one of the markers.

Figure 2: Binary serialization of “T” marker.
2.2. Image Processing

Once the image of the scene has been captured, it is processed to determine the location (position and rotation) of each of the markers used. The first stage is the conversion of the image from the RGB color space to gray scale. Then binarization is performed to detect the black and white colors of the marker.

The segmentation of the colors that compose the image yields, from the binarization process, the logical data that define the details of the marker. Using this information, it is possible to estimate both the sides and the corners of the marker in order to link the three-dimensional model of the articulation of interest and its corresponding prosthesis for the surgery.
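The grayscale conversion and binarization steps can be sketched with NumPy; the BT.601 luminance weights and the fixed threshold of 128 are assumptions, since the paper does not give the exact coefficients:

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted RGB -> gray conversion (ITU-R BT.601 weights);
    rgb is an (H, W, 3) float array, the result is (H, W)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, threshold=128):
    """Threshold the gray image: marker cells become True (white)
    or False (black), giving the logical data used for segmentation."""
    return gray >= threshold
```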

The image processing steps for detecting the markers are listed in Figure 3.

Figure 3: Processes for identifying the marker.

From the subprocesses listed in Figure 3, the identification of the markers is obtained as shown in Figure 4, where the borders of one of the markers are highlighted as a result of applying the processing steps for identification.

Figure 4: Marker detection.

After the markers in a scene are identified, each one is compared with preloaded image templates, and its inclination and location are estimated from the information of the corners and borders. This process is known as 3D pose estimation [20] and computes 6 degrees of freedom from the edges and corners of each marker in the scene [21].
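A full 6-DOF pose requires the camera intrinsics and an iterative solver such as the one in [20]; as an illustration, the in-plane part of the estimate (marker center and rotation) can be recovered directly from the four detected corners. The clockwise-from-top-left corner ordering is an assumption:

```python
import math

def marker_center_and_angle(corners):
    """Estimate the marker center and its in-plane rotation from the four
    detected corner points, ordered clockwise starting at the top-left.
    The rotation is the angle of the top edge relative to the image x-axis."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    (x0, y0), (x1, y1) = corners[0], corners[1]  # top edge of the square
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return (cx, cy), angle
```

These two quantities are what anchor a virtual model to the marker in the image plane; depth and out-of-plane tilt come from the pose solver proper.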

2.3. Virtual Models Visualization

The three-dimensional models of the articulation and the prosthesis were designed from real magnetic resonance images of the corresponding parts. Contour segmentation was first applied to the MRI samples of the tibia and femur bones, the contours were then smoothed, and finally the physical characteristics were added to the models.

In the same way the cutting angle between the cutting instrument and the knee was considered, as shown in Figure 5.

Figure 5: Example of arthroplasty surgery environment with augmented reality.

Figure 5 shows the two virtual models developed for the implementation of surgical assistance using augmented reality. Marker “A” contains the data of the articulation and marker “B” is used to locate and orient the tibial prosthesis, using the 3D pose estimation technique.

3. Results

Taking into account all 6 degrees of freedom calculated for each marker, the graphical user interface displays the values of interest for an arthroplasty surgery, among them the knee flexion and the angular deviations. In the interface, the axes are used to show the relative location of the articulation, the cutting angle, and the position, superimposing these data on the corresponding magnetic resonance image (MRI). The GUI also shows the values of the 6 degrees of freedom of the markers, with positions in centimeters and angles in degrees. Figure 6 presents the display panel of the interface.

Figure 6: User interface.

The red line in the image on the right indicates the cut made; it is calculated by comparing the inclination angle of the tibia with that of the tool, using the expression in (1):

θ = γ_h − γ_t,  (1)

where γ_h is the rotation angle of the tool about the axis parallel to the distance that separates the marker from the camera (here called "roll"), γ_t is the tibia roll, and θ is the inclination angle formed between the tool and the tibia.

Equally important, the vertical distance d_v that separates the joint from the tool was taken into account to calculate the points that define the cut line, as shown in (2):

V_i = (x_0, y_0 + d_v),
V_f = (x_0 + L cos θ, y_0 + d_v + L sin θ),  (2)

where L is the line length, V_i is the initial vertex, V_f is the final vertex, and x_0 and y_0 are the initial points of the line according to the magnetic resonance image and the marker positions.
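The cut-angle comparison and the line-endpoint calculation described above can be combined into a short sketch; the symbol names (tool and tibia roll, vertical offset, line length) are assumptions about the paper's expressions:

```python
import math

def cut_line(tool_roll_deg, tibia_roll_deg, x0, y0, dv, length):
    """Endpoints of the displayed cut line: the inclination is the
    difference between the tool roll and the tibia roll, and the line
    starts dv below the joint reference point (x0, y0)."""
    theta = math.radians(tool_roll_deg - tibia_roll_deg)
    v_i = (x0, y0 + dv)
    v_f = (x0 + length * math.cos(theta), y0 + dv + length * math.sin(theta))
    return v_i, v_f
```

When the tool and tibia rolls coincide, the inclination is zero and the line is drawn horizontally from the offset reference point.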

The points tagged as D1 and D2 in the image of Figure 6 are the cutting separations, referenced to the top of the tibia. The blue line indicates the natural angle of the joint, known as the valgus angle, which is used to modify the orientation of the virtual model and update its position with respect to the leg surface. In this way it is possible to adjust the augmented reality algorithm to the real scene; Table 1 summarizes the values related to the detection, position, and orientation of the markers.

Table 1: Localization of the markers.

To calculate the error of the augmented reality system, the real distances of the objects in the scene were measured and compared with the distances calculated by the interface from the data obtained by processing the markers. Table 2 shows the measurements and the relative error of each one, for marker separations ranging from 20 to 70 cm.
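The relative error reported in Table 2 can be computed as follows, assuming the usual percentage form |measured − real| / real × 100 (the paper does not state the formula explicitly):

```python
def relative_error(real_cm, measured_cm):
    """Relative error (%) between the true separation of the markers and
    the distance reported by the interface."""
    return abs(measured_cm - real_cm) / real_cm * 100.0
```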

Table 2: Length from the test object.

The measurement error increased exponentially as the distance between the markers increased, as shown in Figure 7.

Figure 7: Absolute error of each measure.

4. Discussion

The system developed shows that it is possible to design platforms based on machine vision and virtual reality for training and support in surgical procedures that perform correctly; moreover, its production cost was lower than that of the system presented in [22].

The error shown in Table 2, following the conventions of Figure 5, is acceptable as long as the distance between the tools is less than 30 cm, which yields a relative error below 1%, translating into an absolute error of less than 4 mm in the virtual model position. The best performance occurred when the markers were placed within 20 cm of one another, in which case the absolute error was less than 1 mm (specifically 0.8 mm), achieving a proper location of the virtual model.

For this type of medical tool the measurement error should ideally be zero, but this is not possible due to two main factors: the capture device, whose 800 × 600 resolution is inadequate at larger distances because the markers are not identified properly, and the luminosity of the room, which adds noise to the image.

To improve the performance of the system, some changes to the capture devices are necessary: the use of reflective markers instead of common ones, and a camera able to detect the infrared band of the light spectrum, improving the accuracy of marker detection regardless of lighting conditions. In addition, the use of more markers is proposed, in order to have different points of view and references for locating the virtual models.

5. Conclusion

The development realized in this work is an approach to professional computer-based surgical assistance systems, in tasks such as replacement of joints like the knee, hip, or ankle. Although the work shown is based on a knee replacement, the methodology can be applied to any surgical procedure, since the markers used can be printed on surgical adhesives.

Changes in the lighting of the scene generate noise in the identification of the marker, which may compromise the performance of the system during a surgery. Therefore, for future research the use of reflective markers and infrared cameras is proposed, as well as a larger set of markers, in order to increase the accuracy of the system and avoid depending on a single reference point.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.


Acknowledgments

Special thanks go to the Research Vice-Rectory of the Universidad Militar Nueva Granada, for financing the Project IMP/ING 1573 entitled "Prototipo de Robótica Colaborativa Para Asistencia Quirúrgica" in 2015.


References

  1. A. Bartoli, T. Collins, N. Bourdel, and M. Canis, “Computer assisted minimally invasive surgery: is medical computer vision the answer to improving laparosurgery?” Medical Hypotheses, vol. 79, no. 6, pp. 858–863, 2012.
  2. F. P. Wieringa, H. Bouma, P. T. Eendebak et al., “Improved depth perception with three-dimensional auxiliary display and computer generated three-dimensional panoramic overviews in robot-assisted laparoscopy,” Journal of Medical Imaging, vol. 1, no. 1, Article ID 015001, 2014.
  3. W. I. M. Willaert, R. Aggarwal, I. V. Herzeele, N. J. Cheshire, and F. E. Vermassen, “Recent advancements in medical simulation: patient-specific virtual reality simulation,” World Journal of Surgery, vol. 36, no. 7, pp. 1703–1712, 2012.
  4. C. Kamphuis, E. Barsom, M. Schijven, and N. Christoph, “Augmented reality in medical education?” Perspectives on Medical Education, vol. 3, no. 4, pp. 300–311, 2014.
  5. O. Ukimura and I. S. Gill, “Image-fusion, augmented reality, and predictive surgical navigation,” Urologic Clinics of North America, vol. 36, no. 2, pp. 115–123, 2009.
  6. T. Okamoto, S. Onda, K. Yanaga, N. Suzuki, and A. Hattori, “Clinical application of navigation surgery using augmented reality in the abdominal field,” Surgery Today, vol. 45, no. 4, pp. 397–406, 2015.
  7. J. Wang, H. Suenaga, K. Hoshi et al., “Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 4, pp. 1295–1304, 2014.
  8. S. Affatato, Surgical Techniques in Total Knee Arthroplasty and Alternative Procedures, Elsevier, 2014.
  9. Y.-W. Moon, C.-W. Ha, K.-H. Do et al., “Comparison of robot-assisted and conventional total knee arthroplasty: a controlled cadaver study using multiparameter quantitative three-dimensional CT assessment of alignment,” Computer Aided Surgery, vol. 17, no. 2, pp. 86–95, 2012.
  10. M. Highsmith, Comparative outcomes assessment of the C-leg and X2 knee prosthesis [Graduate Theses and Dissertations], University of South Florida, Tampa, Fla, USA, 2012.
  11. C. L. Allen, G. J. Hooper, B. J. Oram, and J. E. Wells, “Does computer-assisted total knee arthroplasty improve the overall component position and patient function?” International Orthopaedics, vol. 38, no. 2, pp. 251–257, 2014.
  12. Ø. Gøthesen, J. Slover, L. Havelin, J. E. Askildsen, H. Malchau, and O. Furnes, “An economic model to evaluate cost-effectiveness of computer assisted knee replacement surgery in Norway,” BMC Musculoskeletal Disorders, vol. 14, article 202, 2013.
  13. A. F. Mavrogenis, O. D. Savvidou, G. Mimidis et al., “Computer-assisted navigation in orthopedic surgery,” Orthopedics, vol. 36, no. 8, pp. 631–642, 2013.
  14. A. Q. Dutton, S.-J. Yeo, K.-Y. Yang, N.-N. Lo, K.-U. Chia, and H.-C. Chong, “Computer-assisted minimally invasive total knee arthroplasty compared with standard total knee arthroplasty: A Prospective, Randomized Study,” Journal of Bone and Joint Surgery—Series A, vol. 90, no. 1, pp. 2–9, 2008.
  15. C. Resch, H. Naik, P. Keitler, S. Benkhardt, and G. Klinker, “On-site semi-automatic calibration and registration of a projector-camera system using arbitrary objects with known geometry,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 11, pp. 1211–1220, 2015.
  16. C. Sweeney, J. Flynn, B. Nuernberger, M. Turk, and T. Höllerer, “Efficient computation of absolute pose for gravity-aware augmented reality,” in Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR ’15), pp. 19–24, Fukuoka, Japan, September 2015.
  17. G. Stetten, B. Wu, R. Klatzky et al., “Hand-held force magnifier for surgical instruments,” in Information Processing in Computer-Assisted Interventions: Proceedings of the 2nd International Conference (IPCAI ’11), Berlin, Germany, June 2011, R. H. Taylor and G.-Z. Yang, Eds., pp. 90–100, Springer, Berlin, Germany, 2011.
  18. F. Catani and S. Zaffagnini, Knee Surgery Using Computer Assisted Surgery and Robotics, Springer Science & Business Media, 2013.
  19. J. Y. Jenny and D. Saragaglia, “Computer-assisted unicompartmental knee replacement: technique and results,” in Small Implants in Knee Reconstruction, N. Confalonieri and S. Romagnoli, Eds., pp. 71–79, Springer, Milan, Italy, 2013.
  20. D. Oberkampf, D. F. DeMenthon, and L. S. Davis, “Iterative pose estimation using coplanar feature points,” Computer Vision and Image Understanding, vol. 63, no. 3, pp. 495–511, 1996.
  21. A. Kirillov, “From glyph recognition to augmented reality,” CodeProject, 2011.
  22. T. M. Rankin, M. J. Slepian, and D. G. Armstrong, “Augmented reality in surgery,” in Technological Advances in Surgery, Trauma and Critical Care, R. Latifi, P. Rhee, and W. G. R. Gruessner, Eds., pp. 59–71, Springer, New York, NY, USA, 2015.