Feasibility and accuracy of a real-time depth-based markerless navigation method for hologram-guided surgery

Abstract

Background

Two-dimensional (2D) medical visualization techniques are often insufficient for displaying complex, three-dimensional (3D) anatomical structures. Moreover, the visualization of medical data on a 2D screen during surgery is undesirable, because it requires a surgeon to continuously switch focus. This switching focus problem also results in extensive use of intraoperative radiation to gain additional insight into the 3D configuration. The use of augmented reality (AR) has the potential to overcome these problems, for instance by using markers on target points that are aligned with the AR solution. However, placing markers for a precise virtual overlay is time-consuming, the markers always have to be visible within the field of view, and they disrupt the surgical workflow. In this study, we developed ARCUS, a depth-based, markerless AR navigation system, which overlays 3D virtual elements onto target body parts to overcome the limitations of 2D medical visualization techniques.

Methods and results

In a phantom study, our markerless ARCUS system was evaluated for accuracy and precision by comparing it to a Quick Response (QR) code-based AR registration method. The evaluation involved measuring the Euclidean distance between target points on a 3D-printed face and their corresponding points on the virtual overlay, using a robotic arm for precise measurements. Correlations between the measuring points provided by our markerless system and the actual measuring points on the 3D print were high, with promisingly consistent Euclidean distances between the 3D points and the virtual points generated by both our markerless system and the Vuforia QR code system. We also present two ex vivo case studies on cadaveric human specimens to which our markerless ARCUS system could be applied.

Conclusion

The markerless AR navigation system holds strong potential as a 3D visualization method in clinical settings. While both ARCUS and the Vuforia QR code-based method fell short of meeting the surgical threshold of a 2 mm offset, our markerless system demonstrated promising features such as instant registration, markerless operation, and potential compatibility with non-rigid structures. Its automated virtual overlay onto target body parts offers significant advantages, paving the way for investigations into future clinical use.

Introduction

Surgeons are confronted with complex anatomical cases on a daily basis. Insight into the specific anatomy of the patient, both before and during surgery, is essential to the success of the surgical outcome. Medical visualization techniques help achieve these insights into patient-specific anatomy. Three-dimensional (3D) imaging in particular has been used in many surgical disciplines [1,2,3,4]. Typically, operating rooms are equipped with 2D monitors. However, 3D medical imaging data are difficult to display on a two-dimensional (2D) screen, since the full perception of depth cannot be conveyed. As a result, there can be a gap between the displayed 2D medical images and the actual 3D situation. This problem occurs not only preoperatively, but also intraoperatively when images are used as a reference during surgery.

During Image-Guided Surgery (IGS), surgeons have to switch their attention between the operating field and the screen that displays the two-dimensional medical patient data, causing a switching focus problem [5, 6]. Furthermore, in many cases, a large amount of radiation is used during complex surgeries to visualize the patient-specific problems, which is harmful to both the patient and the surgical treatment team [7,8,9,10].

To overcome the abovementioned problems, a medical visualization method that offers full 3D insight into the real-time anatomical situation is desirable. Recent studies have shown that virtual information and augmented reality (AR) on a head-mounted device can be used as a visualization technique for clinical applications [11,12,13,14]. These head-mounted AR devices can show virtual overlays, often referred to as “holograms”, as if they were part of the real-life surroundings. Case studies using AR solutions that display anatomical virtual overlays preoperatively or intraoperatively have been published in vascular, cardiothoracic, neurological, maxillofacial, trauma and orthopedic surgery [5, 14,15,16,17,18,19,20,21,22,23,24]. Unfortunately, a widely applicable, head-mounted hologram-guided surgery system that is robust to real-time movements does not yet exist. Currently, all medical AR registration methods incorporate markers to ensure an accurate virtual overlay. The maximum acceptable offset for image-guided navigation systems in different surgical specializations has been set to a range between 1 and 2 mm [25,26,27,28,29]. QR (Quick Response) codes or optical tracking markers can be used to create a virtual overlay onto the operating field [30, 31]. Although these marker-based AR registration methods are promising, they suffer from a number of drawbacks: 1) they often need time-consuming calibrations during surgery; 2) the markers must be specifically manufactured for sterile use and have to be attached to a fixed point on the patient that remains visible to the augmented reality device at all times; 3) markers always have to be visible within the field of view, disrupting the surgical workflow; 4) the markers can add a systematic error to the total registration error; 5) marker-based AR registration methods are only suitable for surgical procedures involving anatomical structures without movement: because the position and orientation of the overlaid virtual element are registered onto the markers and not onto the anatomical structure itself, the body part cannot be deformed during surgery; and 6) marker-based registration methods cannot compensate for soft-tissue movements, and the 3D virtual overlay can only be adapted to joint movements if every rigid structure around the joints contains a marker as well. These disadvantages of marker-based registration limit usability in surgical situations [5, 24, 30,31,32].

A solution to the existing problems of marker-based AR registration methods may be found in a method where the registration is performed on the anatomical structure itself. Markerless registration methods used in the maintenance and aviation industries demonstrate the potential and ease of use of this visualization technique [33,34,35]. However, these markerless AR projection systems use edge-based registration onto the target object, which specifically detects sharp edges and large changes in color contrast within a search environment. This type of AR registration is therefore not suitable for medical purposes, because the human body is characterized by smooth transitions and organic shapes. An alternative to edge-based AR registration was proposed by Gsaxner et al. [36, 37]. They introduced a depth-based markerless registration method and used artificial intelligence (AI) for facial recognition. Because the software registered facial features only, their technique was limited to applications involving the face. Moreover, the use of AI for anatomical recognition is time-consuming, computationally expensive, requires large amounts of training data, and is often not patient-specific [38, 39]. This stands in sharp contrast with situations involving pathological shapes of anatomical surfaces, for example due to fractures or tumor presence, which are patient-specific and for which data are sparse.

The currently available hologram-guided navigation methods all have limitations in their registration methods that make them unsuitable for patient-related applications. A depth-based registration method might overcome these patient-related AR registration problems. However, research on its possible use in a widely applicable clinical navigation system has not been reported. In this paper, we introduce a depth-based registration method, the Augmented Reality for Clinical Understanding and Surgery (ARCUS) system. The primary aim of this study is to introduce the opportunities that such a depth-based, markerless registration method offers, to evaluate its feasibility and initial performance against a gold standard, and to determine possible future clinical applications.

Materials and methods

Markerless hologram-guided registration method

The ARCUS system is the first markerless hologram-guided registration method that can provide an immediate 3D virtual overlay onto a wide range of anatomical targets without any use of markers. This markerless system can adapt to real-time movements and does not need extensive calibration.

The method described here was initially built for the Microsoft HoloLens 2, a head-mounted device for augmented and mixed reality [40]. It uses live depth data for registration onto a preoperative, patient-specific surface model. However, the proposed real-time depth-based markerless AR navigation method is not exclusively applicable to the commercial hardware used in this research.

By using the HoloLens 2 Research Mode, the raw sensor data of the HoloLens 2 are accessed. The Time-of-Flight (ToF) depth sensor and the Inertial Measurement Unit (IMU) sensors of the HoloLens 2 are used to access real-time positional data and depth data [41, 42]. According to the official whitepaper, the ToF depth sensor has an error margin of 0 ± 0.5 mm at an object distance of 1 m, increasing to 2 ± 1 mm at an object distance of 2 m, at an ambient light level of 3 klux, which is equal to office light [43, 44]. For near real-time registration of the preoperative models to the live depth data, fast point feature histogram (FPFH) elements are used for feature extraction from both point clouds, namely the reference model and the target scene. Nearest-neighbor feature pairs are created between the point clouds by using k-dimensional trees. These are used in combination with the TEASER++ robust outlier filtering algorithm to achieve global registration [45,46,47]. TEASER++ is a rigid body registration algorithm whose robustness stems from the use of Truncated Least Squares (TLS), which makes it less susceptible to outliers in correspondence pairs. Lastly, a point-to-plane iterative closest point algorithm is applied for refined local registration [48]. The resulting virtual elements are visualized on the HoloLens 2.

To examine both the initial performance and the main areas of improvement of this navigation system, an experiment on a 3D-printed phantom was conducted. The precision and accuracy of the depth-based, markerless registration system were measured in a controlled environment, and these scores were compared with those of a commercially available AR overlaying system. An overview of the basic functions, inputs and outputs of the system is shown in Fig. 1.

Fig. 1
A schematic overview of the ARCUS system. Note: HL2 refers to HoloLens 2
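
To make the registration pipeline concrete, the sketch below reproduces its main steps in Python with the open-source Open3D library. It is illustrative rather than the ARCUS implementation: Open3D's RANSAC-based feature matching stands in for TEASER++, the live HoloLens 2 depth frame is replaced by a point cloud loaded from disk, and the file names and parameter values are assumptions.

```python
# Illustrative sketch of an FPFH + global registration + point-to-plane ICP
# pipeline using Open3D. Open3D's feature-matching RANSAC is used here as a
# stand-in for TEASER++; file names and voxel size are assumptions.
import open3d as o3d

def preprocess(pcd, voxel_size):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel_size)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=100))
    return down, fpfh

def register(model_path, scene_path, voxel_size=0.005):
    model = o3d.io.read_point_cloud(model_path)  # preoperative surface model
    scene = o3d.io.read_point_cloud(scene_path)  # depth-sensor point cloud
    model_down, model_fpfh = preprocess(model, voxel_size)
    scene_down, scene_fpfh = preprocess(scene, voxel_size)

    # Global registration from mutual nearest-neighbor FPFH correspondences.
    # ARCUS feeds such correspondences to TEASER++, which tolerates outlier
    # pairs via truncated least squares; RANSAC plays that role in this sketch.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        model_down, scene_down, model_fpfh, scene_fpfh,
        mutual_filter=True,
        max_correspondence_distance=1.5 * voxel_size,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel_size)],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Local refinement with point-to-plane ICP, as in the paper.
    refined = o3d.pipelines.registration.registration_icp(
        model_down, scene_down, 0.5 * voxel_size, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return refined.transformation  # 4x4 model-to-scene pose for the overlay
```

The returned 4 × 4 transformation is what anchors the virtual elements to the target anatomy; in a live system it would be re-estimated whenever a new depth frame arrives.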

Materials

3D-printed face

We created a 3D-printed phantom in the form of the face of a male volunteer, shown in Fig. 2. This anatomical body part has a distinctive silhouette that varies between viewing angles and allows for comparison of our depth-based, markerless registration method with commercially available edge-based registration systems. Measuring points were designed to be printed at 13 different places on the face. These points were conical holes with a depth of 3.00 mm and a base radius of 2.00 mm. Due to the irregular shape of the test object, the height of the cone-shaped holes differed between places; however, the overall ratio was the same for all conical holes. The 3D model was printed on an Ultimaker S5 (Ultimaker BV, Geldermalsen, the Netherlands) with Ultimaker Pearl-White polylactic acid (PLA) filament [49, 50], using an Ultimaker AA Core 0.4 mm nozzle with a layer height of 0.2 mm.

Fig. 2
The 3D-printed face that was used for measurements in this study. Left: Top view of the 3D print. Middle: Locations of all measuring points on the 3D-printed face as seen from a top view. Right: Side view of the 3D-printed face

Robotic arm

To accurately assess the visualization error during the actual measurement of the real and virtual points as perceived by the observer, a remote-controlled robotic arm was used to measure the exact location of each measuring point on the 3D-printed face, as well as on the virtual overlay. The Adept Viper s850 (Adept Technology Inc., Livermore, California) robotic arm calculates its position in its own coordinate system with a repeatability of 0.05 mm and can move in six degrees of freedom (df) [51]. Three df were used; the pitch, yaw and roll axes were kept constant throughout, so that each 3D position could only be reached from one orientation. The research setup is displayed in Fig. 3.

Fig. 3
The robotic arm used in this research is shown with the 3D-printed pointer attached to it. The defined x-, y- and z-axes of the robotic arm coordinate system are illustrated

3D-printed measuring pointer

A measuring pointer was 3D-printed using PLA filament with the same printer settings as for the 3D-printed face. The tip of this pointer was a cone with a height of 1.00 mm and a base radius of 4.00 mm, designed to mate with the conical measuring points on the model. The pointer was mounted on the robotic arm.

Procedure

The initial performance of the ARCUS markerless system was assessed by conducting a phantom study. The printed 3D face with measuring points (Fig. 2) was used to compute the precision and accuracy of the virtual overlay. In addition, we compared the performance of the markerless system with a marker-based control technique: an augmented reality QR-code registration system built with the Vuforia Augmented Reality Platform (PTC, Boston, United States). Both techniques were implemented in an augmented reality app using the Unity3D engine, version 2019.4 [52].

The location of the QR code was digitally planned in Unity3D, version 2019.4, to be exactly at the bottom left of the 3D-printed face.

Before measurements started, the 3D-printed face was secured onto the table with clamps to prevent it from moving. Next, the QR code was positioned at the bottom left corner of the print to match its position in the digital planning with respect to the 3D print as precisely as possible. The QR code was attached to the table, so that the virtual overlay would stay in place even when the original 3D-printed model was removed.

Subsequently, both methods were used to create a virtual overlay onto the 3D-printed face. This virtual overlay consisted of 13 virtual spheres with a diameter of 1.00 mm.

Each method thus yielded 13 virtual spheres overlaid onto the 3D print of the face. In an ideal situation, these spheres would be positioned exactly at the conical tip of each real measuring point (Fig. 4). The differences between the real measuring points and the overlaid virtual points were measured with the robotic arm for each technique, determining its precision and accuracy.

Fig. 4
Left: The virtual 3D model overlaid onto the 3D-printed face as provided by the ARCUS system. Right: The planned measuring points used for the overlaid virtual 3D model, shown relative to the 3D model of the 3D-printed face. For the experiment itself, only the green spheres were displayed onto the 3D-printed face

The positions of the original measuring points on the 3D print were measured twice by visually placing the conical tip of the robotic arm on the measuring point. The average of these two measurements for each point was considered the true coordinate of the measuring point. Next, a virtual overlay with the measuring points was placed onto the 3D print using the markerless system. Because the navigation system locks the virtual content in place, the 3D-printed face could be removed after registration. The virtually overlaid measuring points from the overlaid 3D model could then be measured by placing the conical tip of the robotic arm in the middle of a measuring point through visual inspection. The second measurement used the Vuforia control system with the QR code for registration. Because the 3D print was attached to the table, the QR code and virtual overlay were kept in place.

The positions of the 13 virtual measuring points were then measured twice by the robotic arm for each of the two methods, after which the distance between each original point and virtual point was calculated. In Fig. 4, a virtual 3D model of the face superimposed on the 3D-printed face is shown to illustrate the workflow. Note that the virtual model of the face, as illustrated in this figure, was not overlaid on the 3D-printed face during the experiment. Instead, the overlay contained only the 13 green spheres shown in Fig. 4, without the skin-colored surface model. The absolute distances between the 13 original 3D print measuring points and the virtually overlaid measuring points were analyzed using 1) the overlay as provided by our markerless ARCUS system when registered onto the visible part of the target 3D model and 2) the marker-based control overlay as provided by the QR system. Furthermore, the correlation between the measuring points of the markerless system and the actual 13 measuring points on the 3D print was assessed by conducting a Pearson correlation test on the x, y and z dimensions, as a measure of relative accuracy.
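
For illustration, the offset and correlation analysis described above amounts to only a few lines of Python; the file names and array layout below are assumptions, not part of the published setup.

```python
# Sketch of the offset and correlation analysis (file names hypothetical).
# Each file holds the 13 measured (x, y, z) coordinates in mm, expressed in
# the robotic arm coordinate system.
import numpy as np
from scipy.stats import pearsonr

real_pts = np.loadtxt("real_points.csv", delimiter=",")        # shape (13, 3)
virtual_pts = np.loadtxt("virtual_points.csv", delimiter=",")  # shape (13, 3)

# Absolute accuracy: per-point Euclidean offset between real and virtual points.
offsets = np.linalg.norm(virtual_pts - real_pts, axis=1)
print(f"mean Euclidean offset: {offsets.mean():.3f} mm")

# Relative accuracy: Pearson correlation per axis.
for i, axis in enumerate("xyz"):
    r, p = pearsonr(real_pts[:, i], virtual_pts[:, i])
    print(f"{axis}-axis: r = {r:.4f}, p = {p:.3g}")
```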

Results

The correlation between the coordinates of the actual 3D-printed points and the coordinates of the ARCUS virtual points was higher than 0.998, explaining practically all variance (x-axis r = 0.9997, p < 0.001, y-axis r = 0.9998, p < 0.001, z-axis r = 0.9982, p < 0.001) (Fig. 5).

Fig. 5
The coordinates of the actual 3D-printed points plotted against the measured coordinates of the virtual points as overlaid by the ARCUS system, for each axis in the robotic arm coordinate system. The diagonal line in each plot shows where an exact match would lie

The methods were compared on the offset in millimeters between the real 3D-printed points and the virtually overlaid points, expressed as a Euclidean distance (Fig. 6). Our markerless system showed a mean Euclidean distance of 12.443 mm (95% CI 11.273 to 13.614) between the 3D points and the virtual points, while the QR code system showed a mean distance of 5.018 mm (95% CI 4.186 to 5.849) (Table 1). The mean offset of the overlay in each direction as provided by our markerless system was 2.324 mm (1.936 to 2.713) in the x-direction, -6.927 mm (-8.330 to -5.523) in the y-direction and 9.909 mm (9.174 to 10.644) in the z-direction, while the QR code system provided an offset of -0.756 mm (-2.849 to 1.336) in the x-direction, -3.334 mm (-4.266 to -2.401) in the y-direction and -0.913 mm (-1.584 to -0.242) in the z-direction.

Fig. 6
Euclidean offset per overlaying technique. The figure shows the distribution of the Euclidean offsets between the 13 3D-printed measuring points and the 13 virtually overlaid measuring points as provided by the ARCUS system (red circles) and the Vuforia method (blue triangles)

Table 1 The mean accuracy and precision for both methods

Table 1 shows the mean accuracy and precision, with 95% confidence intervals for the mean, for both methods.
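
As a sketch of how these intervals can be computed, the 95% confidence interval for a mean offset follows from the t distribution with n - 1 degrees of freedom; the offsets array is carried over from the hypothetical analysis sketch above.

```python
# 95% confidence interval for a mean offset (t distribution, n = 13 points).
import numpy as np
from scipy import stats

def mean_ci(values, confidence=0.95):
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    sem = stats.sem(values)  # standard error of the mean
    half_width = sem * stats.t.ppf((1 + confidence) / 2, len(values) - 1)
    return mean, mean - half_width, mean + half_width

mean, low, high = mean_ci(offsets)  # offsets from the previous sketch
print(f"mean offset = {mean:.3f} mm (95% CI {low:.3f} to {high:.3f})")
```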

Ex vivo case studies

The current study showed that a markerless navigation system is able to provide a virtual overlay onto a 3D-printed phantom. However, the clinical applicability of the system should also be highlighted. We therefore present two examples of simulated surgical procedures on a cadaveric foot (Fig. 7a and b): one simulating a procedure from trauma surgery and one simulating a vascular intervention.

Fig. 7
Two simulated surgical cases that demonstrate the clinical applicability of the proposed technology. a and c present the trauma surgical case without (a) and with (c) overlay; b and d present the vascular surgical case without (b) and with (d) overlay

In trauma surgery, precise placement of k-wires is crucial and is often facilitated by intraoperative X-rays to achieve 3D insight. In an augmented reality setup, radiation can be either avoided or substantially reduced. This shortens the surgical procedure and makes it safer for the patient and the treatment team. Traditional marker-based AR methods, such as those using QR codes, prove inadequate in these cases due to 1) the movement of the patient’s body during surgery and 2) the varied fracture approaches taken by the surgeon, during which visibility of the QR marker cannot be guaranteed. The AR software solution proposed in the current paper overcomes these challenges, providing a markerless overlay with real-time re-registration that adjusts to rigid movements. Figure 7c shows a virtual overlay containing the internal bone structures, a simulated vessel and three k-wire trajectories, indicated by arrows. A trauma surgeon wearing the AR headset was then asked to insert the k-wires into the cadaveric specimen as precisely as possible, following the virtually planned trajectory of the cylinders. The surgeon was able to conduct this procedure with the proposed technology and reported that it holds great promise. Moreover, results, albeit underpowered, showed that the differences between the planned entry and exit points on the bone and the actual placement of the k-wires were on par with the accuracy reported in the current study.

In vascular surgery, additional ultrasound imaging is often used to locate and puncture a vessel. The proposed technology could make such additional imaging unnecessary. To simulate human vasculature, we subcutaneously introduced a covered endovascular stent (6 mm in diameter) into the foot. This artificial vessel was then filled with contrast agent to make it visible on a CT scanner (Fig. 7d). A skilled (endo)vascular surgeon was asked to puncture the vessel while referring to the virtual overlay only. The vascular surgeon was able to conduct this simulated procedure, inserted six needles into the cadaveric foot while wearing the AR headset, and also reported great promise for this technology. With the current accuracy of the proposed method as reported in this paper, the vascular surgeon punctured the vessel once, grazed the stent wall twice and missed it three times with a maximum offset of 3.1 mm.

Discussion

The goal of the current study was to evaluate the feasibility of a markerless system as a prototype for possible clinical applications. The correlation between the true 3D-printed points and those estimated by the markerless system was assessed for the x-, y- and z-axes in the robotic arm coordinate system, and the actual offset in millimeters was assessed by comparing the markerless system with a QR code system.

The correlation coefficients show that there is an almost perfect correlation between the true points on the 3D-printed face and the virtual overlay as provided by the markerless system. This indicates that the observed offsets are mostly translational rather than angular and appear to be systematic; indeed, the Euclidean norm of the mean per-axis offsets (approximately 12.3 mm) nearly equals the mean Euclidean distance of 12.443 mm, as would be expected for a predominantly systematic translation. With the relative correlations of the x-, y- and z-axes being this high, these results are promising. However, the absolute distances, meaning the offset in millimeters between the actual coordinate and the estimated coordinate, need improvement before actual clinical use of the technology.

When the absolute offset in millimeters of the markerless system is compared with that of the QR marker system, the results in Fig. 6 show that neither the QR marker technique nor the markerless technique yet reaches the accepted surgical threshold of 2 mm offset. The QR marker technique currently operates closer to the threshold (3-8 mm) than the markerless technique (9-16 mm). Both techniques thus require further improvements for clinical use. However, for further development it is worth emphasizing the advantages of a markerless technique over a QR marker technique. These are summarized in Table 2.

Table 2 A comparison of capabilities for the ARCUS system and the control method

Both markerless and QR marker-based systems solve the switching focus problem [5] and reduce radiation exposure, which is harmful to both the patient and the surgical treatment team [9]. The advantage of markerless over marker-based AR registration methods is that future developments could allow for use on moving anatomical structures, compensation for soft-tissue movements, and use in an entirely sterile environment. Current markerless AR projection systems use edge-based registration, despite the fact that the human body is characterized by smooth transitions and organic shapes, and alternative systems using depth-based markerless registration are limited to facial features only. The proposed markerless technique offers an alternative that overcomes these restrictions [36, 37].

A markerless, 3D navigation system for hologram-guided surgery could improve surgical workflow and insight into the individual anatomy by providing a surgeon with patient-specific information in accurate dimensions and locations, without the need for calibration. Once the desired precision and accuracy have been achieved, the system could provide a precise 3D virtual overlay onto a patient in a matter of seconds. We demonstrated in our first ex vivo setup, using a human cadaveric foot, that the tested method appeals to skilled trauma and vascular surgeons, who appreciated the full potential of this novel technology.

Any planned surgical navigation trajectories could also be implemented, for example a neurosurgical biopsy trajectory or a sacroiliac screw placement trajectory in pelvic surgery. Further research is needed to assess whether improved surgical workflow and insight into patient-specific pathology decrease the expected surgery time, the use of anesthetics, complication risk, hospital costs and radiation dose.

The current version of the markerless technique we have developed has limitations. During the measurements, the visualization error of the virtual overlay was minimal for the x- and y-axes, but larger for the z-axis of the HoloLens 2. A systematic shift placed the virtual overlay too far away along this optical axis. A possible explanation for this phenomenon could be a lack of depth information, since currently only one depth sensor frame is used for registration. This lack of data could prevent the HoloLens 2 from properly positioning the virtual overlay in the room based on the depth information. Furthermore, the Inertial Measurement Unit of the HoloLens 2, which is used to calculate its position and orientation in world coordinates, could have limited accuracy.

The system is currently only applicable to rigid structures and does not yet compensate for non-rigid movements of anatomical body parts. Consequently, with the currently used registration methods, an anatomical structure must be in the exact same position during registration on the HoloLens 2 as it was when the medical scan was acquired. To overcome these problems, one or more types of non-rigid registration must be implemented for use with deformable body parts. The ARCUS system works with live depth data, which allows for extensive future development with respect to non-rigid adaptation to the real-time situation. Moreover, techniques such as human pose estimation could be combined with automated rigging of patient-specific 3D models to adapt the template 3D model to the live situation.

The robotic arm accuracy tests were conducted by positioning the tooltip onto green spheres. We noticed constant jitter of the overlaid virtual content when registered with the Vuforia QR code, which could have lowered its measured accuracy compared to a situation with more stable virtual content. However, no changes could be made to the Vuforia system to prevent this issue, since it concerns commercial software that is not open source. Furthermore, for both methods, the precise estimation of the movement in the y- and z-directions of the robotic arm was ambiguous. As soon as the tooltip reached a green sphere, the bright light of the virtual overlay obstructed the depth perception of the observer. Movements in the y- and z-directions were therefore hard to distinguish, which could have introduced a measurement error of up to 1 mm, the size of the green sphere, lowering the measured accuracy. Future research should therefore include the option to dim the brightness of the overlay during use.

Conclusion

The current paper shows that the clinical usability of the depth-based, markerless augmented reality navigation system is promising. Correlations on the x-, y- and z-axes between actual coordinates and the system’s estimates are almost perfect. Compared to QR marker techniques, the system underperforms; however, neither the markerless system nor the marker techniques currently meet surgical standards. Yet markerless, depth-based hologram-guided surgery techniques have important benefits over marker techniques, both theoretical and practical. Future research is needed to explore the clinical importance of markerless systems such as the one proposed in this study.

Availability of data and materials

The data and results analyzed in the current study are available from the corresponding author on reasonable request.

Abbreviations

2D: Two-dimensional

3D: Three-dimensional

AI: Artificial Intelligence

AR: Augmented Reality

ARCUS: Augmented Reality for Clinical Understanding and Surgery

df: Degrees of freedom

IGS: Image-guided surgery

mm: Millimeter

PLA: Polylactic acid

QR: Quick Response

VR: Virtual Reality

References

  1. Matityahu A, Kahler D, Krettek C, et al. Three-dimensional navigation is more accurate than two-dimensional navigation or conventional fluoroscopy for percutaneous sacroiliac screw fixation in the dysmorphic sacrum: A randomized multicenter study. J Orthop Trauma. 2014;28:707–10.

  2. Moon SW, Kim JW. Usefulness of intraoperative three-dimensional imaging in fracture surgery: a prospective study. J Orthop Sci. 2014;19:125–31.

  3. Mankovich NJ, Samson D, Pratt W, et al. Surgical Planning Using Three-Dimensional Imaging And Computer Modeling. Otolaryngol Clin North Am. 1994;27:875–89.

  4. Vannier MW, Marsh JL. Three-dimensional imaging, surgical planning, and image-guided therapy. Radiol Clin North Am. 1996;34:545–63.

  5. Meulstee JW, Nijsink J, Schreurs R, et al. Toward Holographic-Guided Surgery. Surg Innov. 2019;26:86–94.

  6. Feuerstein M, Navab N, Sielhorst T. Advanced Medical Displays: A Literature Review of Augmented Reality. Journal of Display Technology. 2008;4(4):451–67.

  7. Greffier J, Etard C, Mares O, et al. Patient dose reference levels in surgery: a multicenter study. Eur Radiol. 2019;29:674–81.

  8. Kirkwood ML, Guild JB, Arbique GM, et al. Surgeon radiation dose during complex endovascular procedures. Presented at the Thirty-ninth Annual Meeting of the Southern Association for Vascular Surgery, Scottsdale, Ariz, January 14–17, 2015. J Vasc Surg. 2015;62:457–63.

  9. Kuttner H, Benninger E, Fretz V, et al. The impact of the fluoroscopic view on radiation exposure in pelvic surgery: organ involvement, effective dose and the misleading concept of only measuring fluoroscopy time or the dose area product. Eur J Orthop Surg Traumatol. Published Online First: 2021. https://doi.org/10.1007/S00590-021-03111-Z.

  10. Schuetze K, Eickhoff A, Dehner C, et al. Radiation exposure for the surgical team in a hybrid-operating room. J Robot Surg. 2019;13:91–8.

  11. Microsoft Customer Story-Precision operations with Microsoft HoloLens 2 and 3D visualization. https://customers.microsoft.com/en-us/story/770897-asklepios-apoqlar-azure-hololens-cognitive-services-health-en (accessed 26 March 2021).

  12. Incekara F, Smits M, Dirven C, et al. Clinical Feasibility of a Wearable Mixed-Reality Device in Neurosurgery. World Neurosurg. 2018;118:e422–7.

  13. Andrews C, Southworth MK, Silva JNA, et al. Extended Reality in Medical Practice. Curr Treat Options Cardiovasc Med. 2019;21. https://doi.org/10.1007/s11936-019-0722-7.

  14. Glas HH, Kraeima J, van Ooijen PMA, et al. Augmented Reality Visualization for Image-Guided Surgery: A Validation Study Using a Three-Dimensional Printed Phantom. J Oral Maxillofac Surg. 2021;79:1943.e1-1943.e10.

  15. Bussink TW. Augmented reality in craniomaxillofacial surgery. 2020. https://purl.utwente.nl/essays/80423 (accessed 26 March 2021).

  16. Wang L, Sun Z, Zhang X, et al. A hololens based augmented reality navigation system for minimally invasive total knee arthroplasty. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer Verlag 2019:519–30. https://doi.org/10.1007/978-3-030-27529-7_44.

  17. Kuhlemann I, Kleemann M, Jauer P, et al. Towards X-ray free endovascular interventions - Using HoloLens for on-line holographic visualisation. Healthcare Technology Letters. Institution of Engineering and Technology 2017:184–7. https://doi.org/10.1049/htl.2017.0061.

  18. Garciá-Vázquez V, von Haxthausen F, Jäckle S, et al. Navigation and visualisation with HoloLens in endovascular aortic repair. Innov Surg Sci. 2020;3:167–77.

  19. Pratt P, Ives M, Lawton G, et al. Through the HoloLens™ looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels. Eur Radiol Exp. 2018;2:2.

  20. Liebmann F, Roner S, von Atzigen M, et al. Pedicle Screw Navigation using Surface Digitization on the Microsoft HoloLens. Int J Comput Assist Radiol Surg. 2019;14:1157–65.

  21. Kunz C, Maurer P, Kees F, et al. Infrared marker tracking with the HoloLens for neurosurgical interventions. Current Directions in Biomedical Engineering. 2020;6. https://doi.org/10.1515/cdbme-2020-0027.

  22. Sadeghi AH, el Mathari S, Abjigitova D, et al. Current and Future Applications of Virtual, Augmented, and Mixed Reality in Cardiothoracic Surgery. Ann Thorac Surg. 2022;113:681–91.

  23. Jud L, Fotouhi J, Andronic O, et al. Applicability of augmented reality in orthopedic surgery - A systematic review. BMC Musculoskelet Disord. 2020;21:1–13.

  24. Wesselius TS, Meulstee JW, Luijten G, et al. Holographic augmented reality for DIEP Flap harvest. Plast Reconst Surg. 2021. https://doi.org/10.1097/PRS.0000000000007457.

  25. Wei B, Sun G, Hu Q, et al. The Safety and Accuracy of Surgical Navigation Technology in the Treatment of Lesions Involving the Skull Base. Journal of Craniofacial Surgery. 2017;28:1431–4.

  26. Azarmehr I, Stokbro K, Bell RB, et al. Surgical Navigation: A Systematic Review of Indications, Treatments, and Outcomes in Oral and Maxillofacial Surgery. J Oral Maxillofac Surg. 2017;75:1987–2005. https://doi.org/10.1016/j.joms.2017.01.004.

  27. Labadie RF, Davis BM, Fitzpatrick JM. Image-guided surgery: What is the accuracy? Curr Opin Otolaryngol Head Neck Surg. 2005;13:27–31. https://doi.org/10.1097/00020840-200502000-00008.

  28. Ieguchi M, Hoshi M, Takada J, et al. Navigation-assisted surgery for bone and soft tissue tumors with bony extension. Clin Orthop Relat Res. 2012;470:275–83.

  29. Zhang S, Gui H, Lin Y, et al. Navigation-guided correction of midfacial post-traumatic deformities (Shanghai experience with 40 cases). J Oral Maxillofac Surg. 2012;70:1426–33.

  30. Andrews CM, Henry AB, Soriano IM, et al. Registration Techniques for Clinical Applications of Three-Dimensional Augmented Reality Devices. IEEE J Transl Eng Health Med. 2021;9. https://doi.org/10.1109/JTEHM.2020.3045642.

  31. Makhataeva Z, Varol HA. Augmented Reality for Robotics: A Review. Robotics. 2020;9:21.

  32. Schott D, Heinrich F, Stallmeister L, et al. Exploring object and multi-target instrument tracking for AR-guided interventions. Current Directions in Biomedical Engineering. 2022;8:74–7.

  33. Model Targets | VuforiaLibrary. https://library.vuforia.com/objects/model-targets (accessed 21 February 2023).

  34. The Microsoft HoloLens As Your Maintenance Assistant - Mediaan. https://mediaan.com/mediaan-blog/hololens-maintenance (accessed 21 February 2023).

  35. KLM redefines its cargo training experience. - Accenture. https://www.accenture.com/us-en/case-studies/interactive/klm-cargo-training-experience. Accessed 21 Feb 2023.

  36. Gsaxner C, Pepe A, Wallner J, et al. Markerless Image-to-Face Registration for Untethered Augmented Reality in Head and Neck Surgery. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. 2019;11768:236–44. https://doi.org/10.1007/978-3-030-32254-0_27.

  37. Labini MS, Gsaxner C, Pepe A, et al. Depth-Awareness in a System for Mixed-Reality Aided Surgical Procedures. Published Online First. 2019. https://doi.org/10.1007/978-3-030-26766-7_65.

  38. Reza Boveiri H, Khayami R, Javidan R, et al. Medical Image Registration Using Deep Neural Networks: A Comprehensive Review. Computers & Electrical Engineering. 2020;87. https://doi.org/10.1016/j.compeleceng.2020.106767.

  39. Fu Y, Lei Y, Wang T, et al. Deep learning in medical image registration: a review. Phys Med Biol. 2020;65:20TR01.

  40. HoloLens 2—Overview, Features, and Specs | Microsoft HoloLens. https://www.microsoft.com/en-us/hololens/hardware (accessed 25 March 2022).

  41. Ungureanu D, Bogo F, Galliani S, et al. HoloLens 2 research mode as a tool for computer vision research. 2020. https://doi.org/10.48550/arXiv.2008.11239.

  42. HoloLens 2 hardware | Microsoft Docs. https://docs.microsoft.com/en-us/hololens/hololens2-hardware (accessed 11 March 2021).

  43. Kinnen T, Blut C, Effkemann C, et al. Thermal reality capturing with the Microsoft HoloLens 2 for energy system analysis. Energy Build. 2023;288. https://doi.org/10.1016/j.enbuild.2023.113020.

  44. Bamji CS, Mehta S, Thompson B, et al. 5.8 1Mpixel 65nm BSI 320MHz Demodulated TOF Image Sensor with 3.5μm Global Shutter Pixels and Analog Binning. 2018. https://ieeexplore.ieee.org/document/8310200/.

  45. Yang H, Shi J, Carlone L. TEASER: Fast and certifiable point cloud registration. IEEE transactions on robotics. 2020. https://doi.org/10.48550/arXiv.2001.07715.

  46. Yang H, Carlone L. A polynomial-time solution for robust registration with extreme outlier rates. Robotics: science and systems. 2019. https://doi.org/10.48550/arXiv.1903.08588.

  47. Rusu RB, Blodow N, Beetz M. Fast Point Feature Histograms (FPFH) for 3D Registration. 2009. https://doi.org/10.1109/ROBOT.2009.5152473.

  48. Rusinkiewicz S, Levoy M. Efficient Variants of the ICP Algorithm. Proceedings Third International Conference on 3-D Digital Imaging and Modeling. Published Online First: 2001. https://doi.org/10.1109/IM.2001.924423.

  49. Ultimate 3D Printing Material Properties Table. https://www.simplify3d.com/support/materials-guide/properties-table/ (accessed 28 May 2020).

  50. Ultimaker S5: Reliability at scale. https://ultimaker.com/3d-printers/ultimaker-s5-1 (accessed 6 April 2021).

  51. Adept Viper S850 | Eurobots. https://www.eurobots.net/other-robots-robots-adept-viper-s850-p232-en.html (accessed 5 April 2021).

  52. Unity Real-Time Development Platform | 3D, 2D, VR & AR Engine. https://unity.com/ (accessed 20 April 2023).

Acknowledgements

Special thanks to Dylan Duits, BSc, for his help with the software development that ultimately led to this publication. We thank our colleagues from the Department of Medical Imaging, Anatomy, Radboud University Medical Center, Nijmegen, the Netherlands, and the radiology department, surgery department and coroner’s division at the Elisabeth-Tweesteden hospital in Tilburg, the Netherlands, for their help and insights during this research.

Funding

This research was partly funded by an unrestricted research grant from W.L. Gore & Associates and a WeCare grant from Elisabeth-Tweesteden hospital and Tilburg University. The usual exculpations apply. W.L. Gore & Associates did not play any role in the design of the study, collection, analysis and interpretation of data, or in writing the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization: AG, LB, TM, MB, JH. Analysis: AG, ML. Development of the ARCUS system: AG. Funding acquisition: AG, JH, ML. Methodology: AG, LB. Supervision: LB, TM, MB, JH, ML. Literature review: AG. Writing, original draft: AG. Writing, review and editing: AG, LB, TM, MB, JH, ML. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Annabel Groenenberg.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Medical Ethics Review Committee region Arnhem-Nijmegen (IRB Commissie Mensgebonden Onderzoek (CMO) region Arnhem-Nijmegen (no. 2017–3941)). The cadaver specimens were obtained in accordance with the Dutch Body Donation Program for Science and Education (no. BWBR0005009). Only body donations of humans aged 18 years and older with a valid handwritten testament that contained their own informed consent for autopsy and the use of tissue for research purposes were included. Written informed consent from the volunteer was obtained for the use of a 3D scan of his face for the creation of a 3D-printed test model for this research.

Consent for publication

Written informed consent from the volunteer was obtained for the publication of images of the 3D model of his face in this research article.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Groenenberg, A., Brouwers, L., Bemelman, M. et al. Feasibility and accuracy of a real-time depth-based markerless navigation method for hologram-guided surgery. BMC Digit Health 2, 11 (2024). https://doi.org/10.1186/s44247-024-00067-y
