Abstract

Objective. To develop software for mixed reality (MR) anatomical model creation and to study its intraoperative clinical utility in facilitating laparoscopic partial nephrectomy (LPN). Materials and Methods. After institutional review board approval, 47 patients scheduled for LPN were prospectively randomized into two groups: the control group (24 patients) underwent the operation with intraoperative ultrasound (US) guidance, and the experimental group (23 patients) with HoloLens 2 smart glasses (Microsoft, Seattle, WA, USA). Our team developed an open-source software package called “HLOIA,” which allowed the operating surgeon to create and use during surgery an MR anatomical model of the kidney with its vascular pedicle and tumor. The study period extended from June 2020 to February 2021, during which demographic, perioperative, and pathological data were collected for all qualifying patients. The utility of the MR model during LPN was also assessed with a 5-point Likert scale questionnaire completed by the surgeon immediately after each LPN. Patient characteristics were compared using the chi-square test for categorical variables and Student’s t-test or the Mann–Whitney test for continuous variables. Results. Comparison of the variables between the groups revealed statistically significant differences only in the time for renal pedicle exposure and the time from renal pedicle exposure to the detection of tumor localization, both in favor of the experimental group. The surgeon’s impression of the utility of the MR model, assessed with the proposed questionnaire, showed high scores for all statements. Conclusions. The developed open-source software “HLOIA” allowed the operating urologist to create a mixed reality anatomical model which, when used with smart glasses, reduced the time for renal pedicle exposure and the time for renal tumor identification without compromising safety.

1. Introduction

Laparoscopic partial nephrectomy (LPN) is the standard of care for clinical T1 renal tumors [1–3] and can be a technically challenging procedure with a steep learning curve [4–8]. The critical steps during LPN include renal pedicle exposure [9], identification of the renal tumor, complete excision, and renorrhaphy [10]. As most minimally invasive procedures lack direct visual inspection of the surgical area, we rely on imaging such as computed tomography (CT) or magnetic resonance imaging (MRI). Poor preoperative planning and negligence during exposure of the renal pedicle increase the risk of vascular injury and may lead to significant bleeding that is often hard to control and requires conversion to open surgery [11]. In turn, thorough preoperative planning facilitates precise intraoperative identification of the tumor and its relationship with surrounding structures, allowing for precision-based resection and thereby minimizing complications and positive surgical margins while maximizing clinical outcomes, especially in complex renal tumors [12].

Over the decades, various technological innovations have been developed to improve oncological outcomes in laparoscopic and robotic surgery, particularly focused on enhanced intraoperative tumor visualization, such as 3D laparoscopic surgery, artificial intelligence (AI) based software, deep learning 3D model reconstructions, augmented reality (AR), virtual reality (VR), and mixed reality (MR). The latter is an emerging technology distinct from virtual and augmented reality in that it overlays virtual objects that can be manipulated while remaining anchored to the physical environment [13]. Usually, the creation of AR and MR anatomical models requires dedicated staff such as bioengineers and graphic designers, which may be a real obstacle to the widespread use of these technologies.

The primary goal of our study was to indigenously develop software for creating and using MR anatomical models and to study its clinical utility in improving LPN outcomes. The secondary goal was to evaluate the subjective utility of the MR model as an intraoperative reference tool.

2. Methods

2.1. Study Design and Randomization

After institutional review board approval, we performed a prospective randomized clinical trial enrolling patients with T1a renal tumors scheduled for LPN between June 2020 and February 2021. Patients were randomized into two groups. Group 1, the control group, consisted of patients who underwent LPN with intraoperative laparoscopic ultrasonography (US) guidance alone. Group 2 consisted of patients who underwent LPN facilitated by the MR model (Figure 1). Randomization was performed using a computerized randomization program (Jamovi 1.8.1, Randomizer module, Sydney, Australia) and sealed envelopes.
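For illustration only, the sketch below shows how a 1:1 allocation of 47 patients into the two study arms could be generated programmatically; the study itself used the Jamovi Randomizer module together with sealed envelopes, and the group labels and random seed in the sketch are hypothetical.

    # Minimal sketch of 1:1 random allocation (illustrative only; the study
    # used the Jamovi Randomizer module and sealed envelopes).
    import random

    def allocate(n_patients: int, seed: int = 2020) -> list:
        """Shuffle a balanced list of group labels so each arm receives about half of the patients."""
        rng = random.Random(seed)
        n_control = (n_patients + 1) // 2
        labels = ["US control"] * n_control + ["MR model"] * (n_patients - n_control)
        rng.shuffle(labels)
        return labels

    if __name__ == "__main__":
        for patient_id, group in enumerate(allocate(47), start=1):
            print(f"Patient {patient_id:02d} -> {group}")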

Inclusion criteria were as follows: adults 18 years of age or older, able to sign informed consent in full mental capacity, preoperatively diagnosed on conventional imaging (CT and/or US) with a T1 small renal mass, and amenable to LPN. For each patient, we prospectively collected demographic data, including age, body mass index, clinical tumor size, tumor side, location, and complexity score according to the PADUA scoring system; perioperative data, including the time for renal pedicle exposure (from exposure of the lower pole to exposure of the renal pedicle) and the time for renal tumor detection (from incision of Gerota’s fascia to detection of the renal tumor); pathological data; and data on postoperative functional outcomes and complications, classified according to the Clavien–Dindo system. Exclusion criteria were patients who were unwilling to participate or who did not meet the inclusion criteria.

2.2. Preparation of the Mixed Reality Model

All patients included in the study underwent contrast-enhanced multidetector computed tomography (CT) preoperatively with 0.5 mm thin-slice images in the lateral decubitus position. This position during scanning is close to that of the patient undergoing LPN and minimizes displacement of the inner organs. DICOM images were processed with the “Inobitec DICOM Viewer Pro” software (Voronezh, Russia). To create a 3D model, the images were segmented into the renal vasculature (both arterial and venous), tumor, kidney surface, and collecting system using either the dynamic region growing or the watershed method. The 3D model was exported as a stereolithography (STL) file. To obviate the need for additional staff such as bioengineers and graphic designers, we developed an open-source software package called “HLOIA,” which stands for “Healthy Life: Operations with Innovative Assistance.” HLOIA consists of three distinct parts: a web application, a cloud server, and a client application for smart glasses. The prepared STL files were uploaded by the surgeon to the HLOIA web application, which is based on the Three.js library for working with 3D objects and is located at https://hloia.org/. In the Editor section of the HLOIA web application, the color and transparency settings of the MR model were adjusted (Figure 2) and saved to the HLOIA cloud server, which is based on .NET Core 3.1.
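As an illustration of the segmentation-to-STL step described above, a minimal sketch using open-source Python tools (SimpleITK for region growing, scikit-image for surface extraction, and numpy-stl for export) is given below. The study itself performed segmentation in the Inobitec DICOM Viewer Pro; the directory path, seed point, and Hounsfield unit thresholds in the sketch are hypothetical.

    # Illustrative segmentation-to-STL sketch with open-source tools; the study
    # used the Inobitec DICOM Viewer Pro. The path, seed point, and HU
    # thresholds below are placeholders.
    import numpy as np
    import SimpleITK as sitk
    from skimage import measure
    from stl import mesh

    # Load the contrast-enhanced CT series (folder path is a placeholder).
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_series/"))
    image = reader.Execute()

    # Region growing from a seed placed inside the structure of interest
    # (e.g., the renal artery on the arterial phase); thresholds are in HU.
    seed = (256, 256, 120)  # hypothetical (x, y, z) voxel index
    label = sitk.ConnectedThreshold(image, seedList=[seed], lower=150, upper=600)

    # Convert the binary mask to a triangle mesh with marching cubes,
    # using the voxel spacing so the surface is scaled in millimetres.
    mask = sitk.GetArrayFromImage(label)   # array axes are (z, y, x)
    spacing = image.GetSpacing()[::-1]     # reorder spacing to match the array
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5, spacing=spacing)

    # Write the surface as an STL file ready for upload to the web application.
    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    surface.vectors[:] = verts[faces]
    surface.save("renal_artery.stl")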

For the next step, the surgeon must have high-speed wireless internet access and HoloLens 2 smart glasses (Microsoft, Seattle, WA, USA). The surgeon must access the HLOIA website to install the HLOIA client on the glasses. The HLOIA client was created using the Unity platform and Mixed Reality Toolkit for Unity.

After authorization in the application, the MR model previously saved on the cloud server is downloaded and becomes available for offline use. The MR model can be anchored to any point in the physical world, and the wearer can view it from any position (Figure 3). Resizing and rotation of the MR model are possible using hand gesture commands.

The entire process of MR model creation, from CT segmentation to downloading the model to the smart glasses from the cloud server, takes approximately 20 minutes and is performed solely by the operating urologist, without prior experience in bioengineering or graphic design.

2.3. Intervention

All patients underwent LPN via the transperitoneal approach in the standard lateral decubitus position by a single surgeon with experience of more than 300 laparoscopic surgeries. In the control group, for study purposes, the first step was exposure of the lower pole of the kidney. The next step was renal pedicle exposure. Gerota’s (anterior renal) fascia was then opened to locate the tumor, and the fat over the tumor was preserved where possible. After the tumor was detected, the decision to clamp the renal pedicle or not was left to the surgeon’s discretion. The tumor was removed using resection or enucleation techniques. After tumor removal, the tumor bed was closed in a watertight manner with a 3-0 barbed suture.

In the experimental group, the steps were as follows. Step 1: exposure of the lower pole of the kidney. Step 2: the lower pole of the kidney in the mixed reality model was manually aligned with that of the patient’s kidney; this step is critical for anatomical matching, demonstrating the exact locations of the renal pedicle and the tumor under the guidance of the superimposed MR kidney model. Step 3: the location of the tumor was verified using an ultrasound probe in every case. After visual and spatial mapping of the renal tumor, the MR model was removed from the videoendoscopic picture and used as a reference tool during the rest of the surgery.

2.4. Questionnaire

After each LPN in the experimental group, the surgeon completed a 5-point Likert scale questionnaire for the subjective assessment of the MR model utility during LPN. The questionnaire included three items rated on a 5-point Likert scale: 1, strongly disagree; 2, disagree; 3, not clear; 4, agree; and 5, strongly agree. The statements were as follows: (1) “The 3D holographic kidney model was useful for renal pedicle exposure.” (2) “The 3D holographic kidney model was useful for locating renal tumor.” (3) “The 3D holographic kidney model was useful as a reference tool during LPN.”

2.5. Statistics

Patient characteristics were compared using the chi-square test for categorical variables and Student’s t-test or the Mann–Whitney test for continuous variables. Results for continuous variables are expressed as mean (SD) or median (IQR), and frequencies and proportions are reported as percentages. Intraoperative and postoperative variables were evaluated, and differences in quantitative and categorical variables were tested using the nonparametric Mann–Whitney and chi-square tests, respectively. Quantitative variables are illustrated with box-and-whisker plots and frequency histograms with densities, and categorical and ordinal variables are presented as bar plots. The questionnaire results are illustrated using a radar chart. For every comparison, exact p values were reported, and results were considered statistically significant when the p value fell below the prespecified significance level. Data collection was carried out using MS Excel 2016, and statistical processing was performed with the Jamovi v.1.8.1 software package.
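As a purely illustrative sketch of the between-group testing described above, the snippet below applies Student’s t-test, the Mann–Whitney U test, and the chi-square test with SciPy; the study itself used Jamovi v.1.8.1, and all numbers in the snippet are hypothetical placeholders rather than study data.

    # Sketch of the between-group tests; all values below are hypothetical.
    import numpy as np
    from scipy import stats

    # A continuous variable (e.g., time for renal pedicle exposure, in minutes).
    control = np.array([14.0, 16.5, 12.0, 18.0, 15.5])
    experimental = np.array([9.0, 11.5, 8.0, 10.5, 9.5])

    # Student's t-test for normally distributed data, Mann-Whitney U otherwise.
    t_stat, p_t = stats.ttest_ind(control, experimental)
    u_stat, p_u = stats.mannwhitneyu(control, experimental, alternative="two-sided")

    # A categorical variable (e.g., complications yes/no) as a 2x2 contingency table.
    table = np.array([[3, 21],   # control group: events / no events
                      [2, 21]])  # experimental group
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

    print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}, chi-square p = {p_chi2:.3f}")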

3. Results

A total of 47 patients were randomized into the study. There were no statistically significant differences between the groups in preoperative variables such as age, BMI, hemoglobin, creatinine level, and GFR, or in tumor characteristics such as location, size, and complexity score (Table 1).

Comparison of intra- and postoperative variables between the groups revealed statistically significant differences in the following parameters: the time for renal pedicle exposure and the time from renal pedicle exposure to the detection of tumor location, both in favor of the experimental group (Figure 4). Large effect sizes were also observed for these variables (Cohen’s d = 1.36 and 1.23, respectively). Ultrasound control confirmed the presence of a tumor in 100% of the cases in the experimental group. The functional outcome evaluation based on creatinine and calculated eGFR values revealed no significant difference between the groups (101 ± 33.8 vs. 88 ± 35).
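For reference, effect sizes of this kind can be reproduced from the raw per-group times with the pooled-standard-deviation formulation of Cohen’s d (assumed here, since the exact variant is not specified in the source); a minimal sketch follows.

    # Pooled-SD Cohen's d (assumed conventional formulation).
    import numpy as np

    def cohens_d(group_a, group_b):
        """d = (mean_a - mean_b) / pooled standard deviation."""
        a, b = np.asarray(group_a, float), np.asarray(group_b, float)
        n_a, n_b = len(a), len(b)
        pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)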

The rate of postoperative complications did not differ between the groups. No statistically significant differences were found for the remaining variables either, including pathological data and the rate of positive surgical margins (Table 2).

The mean score of the first statement in the questionnaire “3D holographic kidney model was useful for renal pedicle exposure” was 4.78 ± 0.42. The mean scores for the second and third statements were 4.13 ± 0.55 and 4.35 ± 0.49, respectively.

The surgeon’s impression of the utility of the MR model, assessed with the proposed questionnaire, demonstrated high scores for all statements (Figure 5).
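For illustration, a radar chart such as the one in Figure 5 can be drawn from the three mean questionnaire scores reported above using matplotlib’s polar axes; the axis labels below are shortened paraphrases of the questionnaire statements.

    # Illustrative radar chart of the three mean questionnaire scores.
    import numpy as np
    import matplotlib.pyplot as plt

    labels = ["Pedicle exposure", "Tumor localization", "Reference tool"]
    scores = [4.78, 4.13, 4.35]   # mean scores from the Results section

    # Close the polygon by repeating the first point.
    angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
    angles += angles[:1]
    values = scores + scores[:1]

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    ax.plot(angles, values, marker="o")
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(labels)
    ax.set_ylim(0, 5)
    plt.savefig("questionnaire_radar.png", dpi=150)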

4. Discussion

With improvements in optics, endoscopic TV monitoring systems, and surgical instruments, laparoscopic partial nephrectomy has become a viable alternative to open surgical treatment for kidney tumors up to 4 cm [14]. Preoperative contrast-enhanced computed tomography or magnetic resonance imaging usually provides insight into the number of vessels in the renal pedicle, their course, and the exact tumor location [15]. Preoperative images can be brought into the operating theater and placed on an additional screen for reference purposes. Unfortunately, this setup forces the surgeon to alternate between the surgical field and the preoperative image data on a two-dimensional screen, leading to the so-called “switching focus problem” [16, 17]. Different imaging modalities such as ultrasonography, fluorescence imaging, optical coherence tomography, and ex vivo magnetic resonance imaging can be used to improve intraoperative visualization, for example for renal tumor detection, but none of them can be applied uniformly for intraoperative enhancement of surgical dissection in every patient [18, 19].

A three-dimensional (3D) printed model was studied and found to be a reliable tool for preoperative planning and intraoperative navigation because it can convey the real size, depth, and location of both the kidney mass and the arteriovenous systems and may thus prevent damage to the surrounding structures [20]. Instead of printed 3D anatomical models, whose production is costly and time-consuming, immersive technology (IT) that blurs the boundary between the physical and virtual worlds can be used. For clarity, the IT definition includes augmented reality (AR), virtual reality (VR), and mixed reality (Figure 6) [21]. Although AR utilization during partial nephrectomy has been thoroughly studied and provides clinical benefit, it requires additional staff such as software engineers for setting up and using the software and additional hardware such as a tracking system, video parsers, and powerful computers [22–24]. MR is an emerging technology that overlays virtual objects that can be manipulated and that remain anchored to the physical environment. It has already been used with smart glasses as a new tool for the visualization of preoperative images that facilitates anatomical understanding by the patient [25, 26].

To make acquisition of the MR model easier and less demanding, we developed dedicated software called “HLOIA.” With its assistance, the MR model can be prepared by the operating surgeon in approximately 20 minutes. Our study revealed statistically significant differences in the time for renal pedicle exposure and the time from renal pedicle exposure to the detection of tumor location in favor of the experimental group, with large effect sizes (Cohen’s d = 1.36 and 1.23, respectively). Ultrasound control confirmed the presence of a tumor in 100% of cases in the experimental group, indicating the accuracy of kidney tumor detection with the superimposed MR model. When the surgeon assessed the utility of the MR model during LPN, high scores were given for all statements in the proposed questionnaire. To our knowledge, this is the first study on the intraoperative utilization of mixed reality technology for LPN.

Our study has some limitations. First, we rely on precise segmentation of the preoperative CT images of the kidney and on matching of the MR model with the real-time endoscopic picture in the operating room; both are done manually and are subject to human error. The use of automated segmentation software to obtain more precise segmentation and intraoperative automated calibration of the MR model could be a solution. Second, the sample size of our study was limited owing to the newness of this technology, yet we were able to demonstrate significant differences in the prospectively enrolled sample. Third, even though the “HLOIA” software is open-source, smart glasses are required to use the MR model, and they are relatively expensive; the retail cost of the HoloLens 2 glasses (Microsoft, Seattle, WA, USA) used in our study is approximately four thousand US dollars.

Despite the abovementioned limitations, our findings demonstrate time savings during renal pedicle exposure and renal tumor identification in favor of the experimental group. This has given us confidence to proceed with our future projects: identification of totally endophytic masses with the MR model in comparison with ICG imaging [27], and automated precise matching of the MR model with real-time endoscopic images throughout the whole LPN in an electromagnetic field, which should make the utilization of MR technology more accurate and useful.

5. Conclusion

The utilization of mixed reality technology during laparoscopic partial nephrectomy, enabled by the indigenously developed software “HLOIA” and smart glasses, has shown improvements in the time for renal pedicle exposure and the time for renal tumor identification without compromising safety or effectiveness. Our findings indicate that this technology has the potential to enhance real-time precision-based surgery.

Abbreviations

MR: Mixed reality
LPN: Laparoscopic partial nephrectomy
US: Ultrasound
3D: Three-dimensional
CT: Computed tomography.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Gadzhiev Nariman conceived of the presented idea and wrote the manuscript. Semenyakin Igor conceived of the presented idea, collected data, and contributed to the final version of the manuscript. Morshnev Aleksandr developed the software and performed the computations. Antonio Alcaraz and Okhunov Zhamshid supervised the project and contributed to the final version of the manuscript. Vineet Gauhar computed data and contributed to the final version of the manuscript.