
Augmented Reality in Neurosurgery: A Review of Current Concepts and Emerging Applications

Published online by Cambridge University Press:  24 April 2017

Daipayan Guha
Affiliation:
Department of Surgery, University of Toronto, Toronto, Ontario, Canada; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
Naif M. Alotaibi
Affiliation:
Department of Surgery, University of Toronto, Toronto, Ontario, Canada; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
Nhu Nguyen
Affiliation:
Department of Electrical and Computer Engineering, Ryerson University, Toronto, Ontario, Canada
Shaurya Gupta
Affiliation:
Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada
Christopher McFaul
Affiliation:
Institute of Medical Science, University of Toronto. Toronto, Ontario, Canada
Victor X.D. Yang*
Affiliation:
Department of Surgery, University of Toronto, Toronto, Ontario, Canada; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada; Faculty of Applied Science and Engineering, University of Toronto, Toronto, Ontario, Canada; Division of Neurosurgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Brain Sciences Program, Sunnybrook Research Institute, Toronto, Ontario, Canada
*
Correspondence to: Victor X.D. Yang, Division of Neurosurgery, Sunnybrook Health Sciences Centre, Senior Scientist, Brain Sciences Program/Imaging Research, Sunnybrook Research Institute, 2075 Bayview Avenue, Room M6-156, Toronto, Ontario, M4N 3M5. Email: [email protected]

Abstract

Augmented reality (AR) superimposes computer-generated virtual objects onto the user’s view of the real world. Among medical disciplines, neurosurgery has long been at the forefront of image-guided surgery, and it continues to push the frontiers of AR technology in the operating room. In this systematic review, we explore the history of AR in neurosurgery and examine the literature on current neurosurgical applications of AR. Significant challenges to surgical AR exist, including compounded sources of registration error, impaired depth perception, visual and tactile temporal asynchrony, and operator inattentional blindness. Nevertheless, the ability to accurately display multiple three-dimensional datasets congruently over the area where they are most useful, coupled with future advances in imaging, registration, display technology, and robotic actuation, portend a promising role for AR in the neurosurgical operating room.

Résumé

Réalité augmentée en neurochirurgie : revue des concepts actuels et applications émergentes. La réalité augmentée (RA) superpose des objets virtuels générés par ordinateur à la vision du monde réel de l’utilisateur. Parmi les disciplines médicales, la neurochirurgie a longtemps été à l’avant-garde de la chirurgie guidée par imagerie et continue de repousser les frontières de la technologie de RA en salle d’opération. Nous avons procédé à une revue systématique afin d’explorer l’histoire de la technologie de RA en neurochirurgie et nous avons examiné la littérature portant sur les applications neurochirurgicales actuelles de la RA. Il existe des défis importants dans ce domaine dont l’erreur d’alignement, la perception altérée de la profondeur, l’asynchronie temporelle visuelle et tactile et l’aveuglement dû à l’inattention de l’opérateur. Néanmoins, la capacité de permettre une visualisation précise de multiples ensembles de données tridimensionnelles de façon congruente sur la zone où elles sont le plus utiles, couplée à des progrès qui seront réalisés en imagerie, en inscription, en technologie d’affichage et en actionnement robotique laisse entrevoir un rôle prometteur de la RA en salle d’opération neurochirurgicale.

Type
Review Articles
Copyright
Copyright © The Canadian Journal of Neurological Sciences Inc. 2017 

Introduction

Intraoperative image guidance has been used in multiple surgical disciplines over the past two decades for localizing subsurface targets that cannot be visualized directly. Although significant advances have been made, current navigation paradigms require surgeons to mentally transform two-dimensional (2D) patient-specific images (e.g. computed tomography [CT] or magnetic resonance imaging [MRI]) into three-dimensional (3D) anatomy, to register that 3D computer-rendered anatomy to the patient, and then to manipulate instruments in the surgical field while looking at a separate display. Augmented reality (AR) systems, in which computer-generated 2D or 3D images are superimposed onto a user’s vision of the real world, promise to make the virtual and physical realms congruent.Reference Tang, Kwoh, Teo, Sing and Ling 1 This contrasts with virtual reality (VR), in which the user is fully immersed in a computer-generated environment without real-world input,Reference Shuhaiber 2 an arrangement impractical in an operating room but useful in simulation exercises.Reference Alaraj, Charbel and Birk 3

Although not computer-generated, the first system overlaying a virtual image registered to a hidden object was described in 1938 in Austria, using a system of x-ray tubes and mirrors to reveal the position of a hidden bullet.Reference Sielhorst, Feuerstein and Navab 4 Head-up displays were developed in the 1940s to display radar information and artificial horizons in military aircraft; however, it was not until 1968 that a tracked head-mounted display (HMD) was developed by Sutherland.Reference Sutherland 5 With a mechanical ceiling-mounted head position tracking mechanism, this device allowed the overlay of analog line drawings onto the user’s vision of the real world (Figure 1). Medical applications of AR first began in the mid-1980s, with the augmentation of a neurosurgical monoscopic operating microscope with CT imagesReference Roberts, Strohbehn, Hatch, Murray and Kettenberger 6 and the development of a video see-through HMD in the early 1990s for the augmentation of ultrasound images.Reference Sielhorst, Feuerstein and Navab 4

Figure 1 The first described head-mounted display, with ceiling-mounted mechanical head position tracking mechanism.Reference Sutherland 5

Surgical applications of AR have been reviewed extensively over the past decade, with several recent systematic reviews.Reference Shuhaiber 2 , Reference Sielhorst, Feuerstein and Navab 4 , Reference Rankin, Slepian and Armstrong 7 , Reference Kersten-Oertel, Jannin and Collins 8 Reviews of AR applications specifically in neurosurgery, however, are limited, with recent articles relatively minimal in scope.Reference Tagaytayan, Kelemen and Sik-Lanyi 9 , Reference Bastien, Peuchot and Tanguy 10 Here, we review the history of AR particularly as it pertains to neurosurgery, detail the modern paradigms of AR setups as well as their current neurosurgical applications, and explore challenges and future directions in the field.

Components of AR

Surgical AR systems comprise three core components. First, a virtual image or environment must be modelled. In modern AR systems, using neurosurgery as an example, this typically involves a computer-generated 3D reconstruction of a subsurface target, often sourced from segmented cross-sectional imaging (CT or MRI), with color or texture-coded differentiation between anatomic structures. Virtual images are overlaid on the user’s vision of the real world classically by solid or wire-mesh overlays (Figure 2). Nonphotorealistic, or “inverse-realism,” augmentation techniques may improve visualization and depth perception.Reference Lerotic, Chung, Mylonas and Yang 11 In contrast with earlier systems, modern AR devices use on-demand augmentation whereby virtual image layers may be removed when desired.
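The solid and wire-mesh overlay styles above reduce to a compositing step. The sketch below is a minimal illustration (hypothetical, using NumPy arrays in place of a real render pipeline) that alpha-blends a rendered virtual layer onto a camera frame wherever the object mask is set; on-demand augmentation then amounts to simply skipping the blend when the layer is toggled off.

```python
import numpy as np

def solid_overlay(frame, layer, mask, alpha=0.4):
    """Alpha-blend a rendered virtual layer onto a camera frame.

    frame, layer: (H, W, 3) uint8 images; mask: (H, W) bool, True where
    the virtual object was rendered. A lower alpha keeps more of the real
    scene visible; a wire-mesh style would use a mask covering only the
    mesh edges rather than the filled silhouette.
    """
    out = frame.astype(np.float32)
    blended = (1.0 - alpha) * out + alpha * layer.astype(np.float32)
    out = np.where(mask[..., None], blended, out)  # blend only under the mask
    return out.astype(np.uint8)
```
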

Figure 2 Current methods of overlaying virtual content. The example shown is a minimally invasive lumbar hemilaminectomy captured with a head-mounted camera. (A) No augmentation; (B) solid overlay; and (C) wire-mesh overlay.

The second requirement for AR systems is the registration of virtual environments with real space. This is particularly critical in AR because our perceptual systems are more sensitive to visual misalignments than to the kinetic errors common in VR.Reference Tuceryan, Greer and Whitaker 12 Registration may be accomplished through a number of means and is the subject of significant ongoing research. Frame-based techniques create a rigid 3D Cartesian system in which the position and pose of imaging devices may be determined, allowing registration of a virtual environment as well as rapid updates as the real-world viewing position changes. More commonly, frameless registration methods are used, point-matching virtual and real spaces using known rigid anatomic landmarks, including bony landmarks for cranial and spinal surgery and (relatively) stationary “vessel signatures” for vascular procedures.Reference Shuhaiber 2 , Reference Cabrilo, Schaller and Bijlenga 13 This is often enhanced with surface mapping using infrared light-emitting diode (IR-LED)–tracked instruments or laser range scanners.Reference Grimson, Ettinger, White, Lozano-Perez, Wells and Kikinis 14
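In the rigid case, point-matched registration of this kind reduces to a least-squares fit of a rotation and translation between corresponding landmark sets. A minimal sketch using the standard SVD-based (Kabsch) solution, not the algorithm of any particular commercial system:

```python
import numpy as np

def rigid_register(real_pts, virt_pts):
    """Least-squares rigid transform (R, t) mapping virtual-space
    landmarks onto their real-space counterparts.

    real_pts, virt_pts: (N, 3) arrays of corresponding fiducials.
    Returns R (3x3 rotation) and t (3,) such that real ~= R @ virt + t.
    """
    cr = real_pts.mean(axis=0)                  # centroids
    cv = virt_pts.mean(axis=0)
    H = (virt_pts - cv).T @ (real_pts - cr)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cv
    return R, t
```

A registered virtual point is then displayed at `R @ p + t`; residual distances at held-out landmarks give an estimate of registration error.
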

The final requirement for functional AR is a display technology to combine the virtual and real environments. Display techniques may be categorized broadly as HMDs, augmented external monitors, augmented optics, augmented windows, and image projections (Figure 3).Reference Sielhorst, Feuerstein and Navab 4 Virtual environments may be projected onto an HMD, overlaid either onto the user’s vision of the real world (optical see-through) or onto a video feed of the real environment (video see-through). Augmented monitors are simply standalone screens displaying virtual content overlaid onto a video feed from the real world. Augmented optics involve direct augmentation of the oculars of an operating microscope or binoculars. Augmented windows are an emerging technology in which a semitransparent screen is placed directly over the surgical site, allowing the display of virtual objects (on the screen) directly over the real object underneath. Last, virtual environments may be projected directly onto the patient using a standard computer projector, without a separate display.Reference Besharati Tabrizi and Mahvash 15
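Whatever the display, the common underlying step is projecting registered 3D virtual content into the 2D coordinates of a calibrated view. A sketch of the standard pinhole camera projection (the intrinsic values in the test are hypothetical, not from any cited system):

```python
import numpy as np

def project_to_display(pts_world, R, t, K):
    """Project registered 3D virtual points (N, 3) into pixel
    coordinates of a calibrated camera or display.

    R (3x3), t (3,): extrinsics mapping world to camera frame.
    K (3x3): intrinsic matrix (focal lengths and principal point).
    """
    pts_cam = pts_world @ R.T + t        # world -> camera frame
    uvw = pts_cam @ K.T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> (u, v)
```
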

Figure 3 Examples of current AR display methods. (A) Video see-through HMD, with head-mounted video cameraReference Abe, Sato and Kato 35 ; (B) user’s view of output from video-pass through HMD, with augmentation calibration marker (gray) and overlaid vertebroplasty needle trajectories (red, yellow)Reference Abe, Sato and Kato 35 ; and (C) image projection of cortex and deep lesion (red) onto skin surface for incision planning.Reference Besharati Tabrizi and Mahvash 15

AR in Neurosurgery

Neurosurgery has long been at the forefront of image-guided surgery, with the first frameless stereotactic navigation systems being developed for intracranial tumor localization in the early 1990s. It is unsurprising that many surgical applications of AR were pioneered for neurosurgery (Figure 4). The first augmented operating microscope was developed in 1985 at Dartmouth for cranial surgery.Reference Roberts, Strohbehn, Hatch, Murray and Kettenberger 6 Segmented 2D preoperative CT slices were displayed monoscopically into the optics of a standard operating microscope, which was registered to the operating table using an acoustic localizer system. Real-time tool tracking was not possible, however, because repositioning of the microscope necessitated reregistration with the operating table, taking ~20 seconds. It was not until 1995 that the first augmented stereo microscope, offering accurate depth perception, was developed in the United Kingdom.Reference Edwards, Hawkes and Hill 16 This system allowed for the multicolor display of segmented 3D cross-sectional imaging data directly into the microscope oculars, as solid or wire-mesh overlays. Intraoperative registration accuracy of 2 to 3 mm was reported.

Figure 4 Timeline of neurosurgical applications of augmented reality.

The first video augmentation devices in neurosurgery were developed in 1994.Reference Gleason, Kikinis and Altobelli 17 , Reference Gildenberg, Ledoux, Cosman and Labuz 18 In both systems, a video camera was mounted on a stereotactic frame, allowing registration to the operating table, and trained on the patient from the surgeon’s presumed perspective. Multicolor 3D reconstructions of segmented CT or MRI data were overlaid onto the video feed on an external display. The surgery was subsequently performed either under direct vision or via the external screen.

AR for endovascular applications was demonstrated first in 1998, overlaying reconstructed preoperative vascular anatomy, from CT or MR angiography, onto a virtual screen displaying real-time x-ray fluoroscopy data.Reference Masutani, Dohi, Yamane, Iseki and Takakura 19 With registration accuracies of 2 to 3 mm, this system was intended to obviate the additional contrast load required to generate angiographic roadmaps.

Although endoscopes have been in use in general surgery since the 1980s, the first augmented neurosurgical endoscope was developed in 2002 for endonasal transsphenoidal approaches.Reference Kawamata, Iseki, Shibasaki and Hori 20 Volumetric 3D reconstructions of preoperative CT or MRI data were overlaid onto the endoscope video feed on an external display. IR-LEDs were used to track the endoscope relative to the patient, allowing display of the endoscope trajectory relative to delicate neurovascular structures.

Current Applications

A comprehensive review was performed on the recent literature pertaining to AR for human clinical neurosurgical applications. MEDLINE, Web of Science, and Scopus were searched for English-language literature from 2000 through 2015 using the search terms (augment* AND reality AND (neurosurgery OR spine OR endovascular)). The search was conducted in August 2016. Nonduplicated, peer-reviewed original investigations encompassing in vivo, human phantom, or human cadaveric specimens were included. Of 126 screened abstracts, 44 were either not relevant to AR or neurosurgery or were commentaries on other primary investigations; 14 were reports on VR devices. The full texts of the remaining 68 articles were reviewed independently by two authors (DG, NMA). From these were excluded 15 reviews of previously published literature and 20 technical/engineering papers without clinical translation, leaving 33 primary manuscripts (Table 1).Reference Cabrilo, Schaller and Bijlenga 13 , Reference Besharati Tabrizi and Mahvash 15 Given the significant heterogeneity in reported outcomes, pooled statistics were not computed.

Table 1 Summary of studies on neurosurgical applications of AR

* All values are presented as means or percentages.

AVM=arteriovenous malformation; CTA=computed tomography angiography; fps=frames per second; MCA=middle cerebral artery; mRS=modified Rankin score; NIR=near-infrared; OA=occipital artery; PICA=posterior inferior cerebellar artery; STA=superficial temporal artery; US=ultrasound.

Of the 33 articles on neurosurgical AR, the majority were for applications in tumor resection (16 articles, 48%), open neurovascular surgery (9 articles, 27%), or spinal procedures (7 articles, 21%). Four articles pertained to the stereotactic localization of ventriculostomy catheters or simply a tracked probe (12%) and one pertained to cortical resection in epilepsy. Notably, there were no recent publications on AR for endovascular procedures. Of the 33 total studies, four assessed the role of AR for trainee simulation (12%), with the remainder devoted to intraoperative applications. Nineteen studies were conducted with some in vivo human clinical testing (58%), whereas 14 were exclusively cadaveric or phantom studies. AR stereomicroscopes were assessed in five studies (15%), although three were from the same center. AR HMDs were investigated in three articles (9%), image projection techniques in four (12%; two from the same center), and AR windows in four (12%; two from the same center). All other studies used external AR displays, either standalone or tablets/smartphones.

Evaluation and reporting of outcomes from the use of AR devices were highly heterogeneous across studies. A summary of reported outcomes for each study is presented in Table 1. Subjective feedback on operator comfort, usability, and/or depth perception was reported by most studies, often dichotomized as “satisfactory/unsatisfactory.” Studies investigating AR simulators typically quantified accuracy for the specific simulated task, for instance, the translational deviation of a virtual ventriculostomy catheter from its ideal target or the percentage of pedicle screws placed in satisfactory position.Reference Luciano, Banerjee and Bellotte 29 , Reference Yudkowsky, Luciano and Banerjee 38 , Reference Hooten, Lister, Lombard, Lizdas, Lampotang and Rajon 39 For clinical studies, the most commonly quantified metrics were setup time and overall registration error. Overall registration errors were calculated differently between studies, which is unsurprising given the variety of augmentation techniques used and hence the types of errors introduced. For instance, camera calibration errors apply to any system using video imaging of the real world, but are obviated in optical see-through HMDs. Error in tracking and transforming eye movements, however, applies primarily to optical HMDs. Errors in virtual image overlay or reprojection, occurring to various extents with each type of augmented display, were typically not reported separately. Nonetheless, overall registration errors for cranial AR ranged from 0.3 to 4.2 mm, with most studies reporting 2 to 3 mm. This is well within the range of accuracy achieved by current neuronavigation systems.
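Overall registration error of the kind tabulated here is commonly summarized as the mean distance between registered virtual targets and their measured real-world positions. A minimal sketch of that computation (the fiducial data in the test are hypothetical; it does not reproduce any individual study's method):

```python
import numpy as np

def target_registration_error(real_targets, virt_targets, R, t):
    """Mean Euclidean distance (e.g. in mm) between virtual target
    points mapped through the registration (R, t) and their measured
    real-world positions. Both inputs are (N, 3) arrays.
    """
    mapped = virt_targets @ R.T + t
    return float(np.linalg.norm(mapped - real_targets, axis=1).mean())
```
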

AR in Vascular Neurosurgery

With the exception of one study in which AR was used by a remote surgeon to guide a carotid endarterectomyReference Shenai, Dillavou and Shum 31 and one in which volumetric intracranial CT angiography (CTA) data were overlaid onto a real-world video feed,Reference Kersten-Oertel, Gerard and Drouin 47 AR for vascular neurosurgery has focused on the augmentation of stereomicroscopes. Microscope overlays include either fluorescence images from intraoperative indocyanine green (ICG) angiography or segmented preoperative CTA/magnetic resonance angiography/digital subtraction angiography (DSA).Reference Cabrilo, Schaller and Bijlenga 13 , Reference Cabrilo, Bijlenga and Schaller 40 , Reference Cabrilo, Bijlenga and Schaller 41 , Reference Watson, Martirosyan, Skoch, Lemole, Anton and Romanowski 46 Registration of the virtual environment to the patient is accomplished by tracking each component with a standard IR-LED–based neuronavigation system, which is typically used as standard of care. Registration is verified at each step of skin incision, craniotomy (using standard anatomic landmarks), and arachnoid dissection (by registering the “vessel signature” of the exposed cortex to the preoperative CTA/DSA).Reference Cabrilo, Schaller and Bijlenga 13

Overlay of the target vasculature optimizes both skin incision and craniotomy; in one series of aneurysm clipping, a smaller craniotomy was fashioned in 63% of AR-guided cases than would have been without AR.Reference Cabrilo, Bijlenga and Schaller 41 In this series, AR guidance was felt to be most useful for aneurysms requiring an unusual trajectory or with limited exposure and hidden branches; cases done with AR showed no difference in intraoperative clip correction rates or 3-month patient functional outcomes relative to procedures without AR.

For extracranial-intracranial bypass procedures, AR overlays offered the additional advantages of readily identifying donor vessels on the skin surface, facilitating skin incision and vessel harvest. AR guidance proved superior to manual pulse palpation and comparable to Doppler ultrasound or intraoperative DSA-guided donor vessel identification.Reference Cabrilo, Schaller and Bijlenga 13 Craniotomy size was also minimized because of AR display of the preoperatively identified recipient vessel sites.

The role of AR in arteriovenous malformation (AVM) resection may be more limited; in the few series to date, although vessel augmentation was useful for skin incision, craniotomy, and resection planning, the complexity of arterial feeders in most AVMs was not resolvable with current systems, particularly in the context of surrounding hemorrhage from preoperative rupture.Reference Cabrilo, Bijlenga and Schaller 40 Identification of the depth of feeding arteries with AR views was also problematic, despite the use of manually identified markers on deep feeding arteries.Reference Kersten-Oertel, Gerard and Drouin 47

Intraoperative setup of the AR stereomicroscope requires approximately 20 additional minutes beyond the registration of a standard optical navigation system: 10 minutes for registration of the microscope and 10 minutes for verification of registration accuracy.Reference Cabrilo, Bijlenga and Schaller 41 Therefore, there is minimal disruption of the surgical workflow, particularly once the procedure is under way. Segmentation and merging of preoperative cross-sectional imaging as well as DSA, however, does entail additional time before surgery.

AR in Skull Base/Tumor Surgery

As with neurovascular applications, AR guidance is particularly useful in the initial stages of surgery for planning skin incisions and minimizing the extent of craniotomy. When tumor boundaries and planned resection margins are segmented preoperatively, along with adjacent neurovascular structures to be preserved, AR overlays of these targets facilitate maximal safe resection, particularly for gliomas. In one series of 74 patients, 64 with primary or recurrent gliomas, AR overlay of volumetric CT/MRI data was achieved with no additional surgical time or complications and reduced both intensive care unit and hospital length of stay by 40% to 50% relative to non-AR cases.Reference Gildenberg and Labuz 23 AR also offers direct visualization of superficial and deep venous structures, which is particularly useful in the resection of large convexity, parasagittal, and parafalcine meningiomas.Reference Low, Lee, Dip, Ng, Ang and Ng 26 However, as with any neuronavigation system guided by preoperative imaging, current AR devices are unable to account for brain shift during cranial surgery, which may represent a significant source of registration error once large volumes of tumor or cerebrospinal fluid have been removed.Reference Hill, Maurer, Maciunas, Barwise, Fitzpatrick and Wang 51 Recent advancements in surgical navigation include real-time registration updates from intraoperative 3D ultrasound, accounting for brain shift on a time scale on the order of minutes.Reference Reinertsen, Lindseth, Askeland, Iversen and Unsgård 52 Ultrasound-based registration updates are now beginning to be applied to AR views for tumor surgery.Reference Gerard, Kersten-Oertel and Drouin 53

For endoscopic endonasal transsphenoidal approaches to the skull base, although anatomic landmarks are typically sufficient to target midline and avoid injury to the carotid arteries and optic apparatus, these landmarks are absent in reoperations. In the one series of augmented neuroendoscopes to date, AR overlays of both endoscope trajectory and neurovascular anatomy were highly valuable in reaching the sellar floor safely in redo procedures, with no additional operative time or hardware setup required.Reference Kawamata, Iseki, Shibasaki and Hori 20

AR in Stereotactic Localization and Functional Neurosurgery

The advantages of AR overlays in providing “x-ray vision” to identify deep intracranial structures may be extended to ventriculostomy insertion. AR ventriculostomy simulators providing haptic feedback and 3D visualization of intracranial catheter trajectory have been instructional for junior residents in appreciating not only a proper target, the foramen of Monro, but also an appropriate trajectory and the adjacent nuclei to be avoided. AR simulators have also allowed for real-time quantification of trainee accuracy, revealing trends of improvement with multiple attempts as well as with seniority in training.Reference Yudkowsky, Luciano and Banerjee 38 , Reference Hooten, Lister, Lombard, Lizdas, Lampotang and Rajon 39

AR in Spinal Surgery

Although the literature on spinal applications of AR in open procedures is relatively sparse, there is promise in the ability of AR to provide real-time trajectory guidance for percutaneous instrumentation at or superficial to skin level.Reference Abe, Sato and Kato 35 Current spinal stereotactic navigation systems are able to guide hardware trajectory relative to bony anatomy, but leave the skin projection of these trajectories at the discretion of the surgeon. The literature on spinal AR has largely focused on percutaneous vertebroplasty/kyphoplasty, although these are readily adaptable to percutaneous pedicle screw placement through identical transpedicular approaches.

C-arm intraoperative fluoroscopy is classically used to guide percutaneous instrumentation. One augmentation technique has involved the placement of a video camera in-line with the x-ray axis, allowing overlay of x-ray and real-world images. In a small cadaver study, although AR decreased radiation exposure compared to a C-arm–only technique, breach rates of pedicle instrumentation were 40%, far greater than the 5% to 15% accepted by most practicing surgeons.Reference Navab, Heining and Traub 28 , Reference Shin, James, Njoku, Hartl and Härtl 54

Overlay of 3D-reconstructed MRI or CT data is an alternative technique.Reference Weiss, Marker, Fischer, Fichtinger, Machado and Carrino 32 In one cadaveric study overlaying intraoperative MRI for vertebroplasty guidance, needle-tip target errors averaged 6.1 mm; however, with a mean of six intraoperative MRI scans required per level, this is cumbersome for human clinical application.Reference Fritz, U-Thainual and Ungi 44 In a study projecting 3D-reconstructed spine CT imaging onto cadaveric torsos for transpedicular approaches, the AR projection facilitated appropriate positioning of the C-arm for initial targeting, with an entry-point error of 4.4 mm; however, lacking angular information, AR alone could not accurately target the final needle position, with a target error of 9.1 mm.Reference Wu, Wang, Liu, Hu and Lee 45

AR is thus potentially very useful in percutaneous applications, which is relevant given the emerging indications for minimally invasive procedures, but it remains insufficiently accurate for clinical application in percutaneous spinal surgery. The overlay of multiplanar cross-sectional imaging, rather than 3D reconstructions only, similar to the displays of current stereotactic navigation systems, may provide the angular information required for more accurate targeting of implants to their final transpedicular position.

Challenges in AR

Display of 3D virtual objects onto real-world images presents multiple challenges, some specific to certain display techniques. A basic requirement for AR is the accurate registration of real and virtual spaces, which requires knowledge of the pose and optical characteristics of both real and virtual cameras.Reference Tuceryan, Genc and Navab 55 Registration errors in video see-through systems, in which the real world is imaged through a video camera, are constituted by errors in camera calibration, image distortion, and object-to-patient registration.Reference Tuceryan, Greer and Whitaker 12 Optical see-through systems, although eliminating the need for camera calibration, require tracking of head and eye movements for synchronization of real and virtual content from varying perspectives, introducing additional error.Reference Tuceryan, Genc and Navab 55 Eye tracking is unnecessary for image projection techniques and AR windows. Unfortunately, projection of 2D light onto 3D surfaces becomes inaccurate with highly curved surfaces and is less useful once direct line of sight to the patient is unavailable, for instance, with the introduction of a microscope or other equipment adjacent to the operating table. AR windows, although able to display content from any perspective without eye tracking, must be placed over the area to be imaged and thus obstruct the surgical field. However, in endovascular and other procedures where the site of manipulation is distant from the target, AR windows may be appropriate.
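When the component errors named above (camera calibration, image distortion, object-to-patient registration) are approximately independent, an overall error budget is often estimated as their root-sum-square. A back-of-the-envelope sketch under that independence assumption (the example magnitudes are illustrative only):

```python
import math

def combined_error_mm(*components_mm):
    """Root-sum-square combination of independent error components,
    in mm. Valid only insofar as the sources are uncorrelated."""
    return math.sqrt(sum(e * e for e in components_mm))

# e.g. 1.5 mm calibration error + 2.0 mm patient-registration error
# combines to combined_error_mm(1.5, 2.0) == 2.5 mm
```
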

Even with geometrically correct positional registration, problems with impaired depth perception may arise. In viewing a native scene, the human eye converges on a particular 3D point and accommodates onto that plane to view the image clearly. These focus cues are combined with numerous monocular and binocular depth cues, including shading, texture, stereopsis, motion parallax, and occlusion, to generate 3D perception in the brain. Discrepancy between accommodation and convergence impairs depth perception, most evident in optical see-through AR in which the focal plane of the virtual image is at the level of the display panel, whereas the eye must accommodate at a longer distance onto the real-world target to see it clearly.Reference Watt, Akeley, Ernst and Banks 56 Accommodation-convergence discrepancy may also lead to visual fatigue, headaches, and diplopia, particularly after prolonged use.Reference Bando, Iijima and Yano 57 Injection of 3D images into the oculars of stereomicroscopes somewhat alleviates this, but the focal plane of the virtual image remains incongruent with that of the target. Video see-through displays, either HMDs or external displays, minimize perceptual discrepancies between real and virtual environments by having full control of both.Reference Kockro, Tsai and Ng 25 Unfortunately, they are hampered by limited resolution relative to the native eye. Recent work on multifocal plane stereoscopic displays, either via spatial or time-multiplexing, shows promise for the proper display of depth and focus cues in AR.Reference Hu and Hua 58 Occlusion, the partial blockage of an object’s view by another nearer object, is an important monocular depth cue for the perception of relative depth.Reference Nagata 59 A well-documented challenge with AR environments is the occlusion of the operator’s hands or instruments by superimposed virtual images, leading to misperceptions of relative proximity. Multiple techniques of occlusion handling have been described, for instance, the detection of edges and color-specific surfaces in the camera feed and retention and display of these features over the virtual object.Reference Lerotic, Chung, Mylonas and Yang 11 , Reference Kersten-Oertel, Chen, Drouin, Sinclair and Collins 60
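The edge- and color-based occlusion handling described above amounts to masking the compositor so that real-world pixels belonging to a detected instrument or hand are kept on top of the virtual layer. A minimal sketch, with a toy color key (a hypothetical green-marked instrument) standing in for the more sophisticated edge/color feature detectors cited:

```python
import numpy as np

def green_instrument_mask(frame):
    """Toy color-keyed foreground detector: flags strongly green pixels
    (hypothetical green-marked instrument). Real systems combine edge
    and color cues rather than a single color threshold."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (g > 120) & (g > r + 40) & (g > b + 40)

def composite_with_occlusion(frame, layer, layer_mask, fg_mask, alpha=0.5):
    """Blend the virtual layer into the camera frame, but keep real
    pixels wherever fg_mask marks detected foreground, so nearer real
    objects correctly occlude the overlay."""
    blend = layer_mask & ~fg_mask                 # overlay only where no foreground
    out = frame.astype(np.float32)
    blended = (1.0 - alpha) * out + alpha * layer.astype(np.float32)
    out = np.where(blend[..., None], blended, out)
    return out.astype(np.uint8)
```
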

Temporal synchronization of virtual and real environments is an additional challenge with all AR systems, particularly with rapid perspective changes. This has been most apparent with optical see-through systems, in which even slight delays in remapping of the virtual environment to the real world, following a change in position, are visually jarring to the surgeon.Reference Sielhorst, Feuerstein and Navab 4 By controlling both real and virtual “cameras,” video see-through systems eliminate relative visual lag; however, lag between visual and tactile feedback cannot be avoided by this technique and should ideally be less than 80 ms for the accurate manipulation of delicate structures.Reference Ware and Balakrishnan 61

Finally, though image fusion in AR offers the benefit of visualizing multiple 3D datasets congruently, extraneous information may distract surgeons from unpredictable findings in the operative field. Termed “inattentional blindness,” this phenomenon has been extensively studied in the aviation industry, but only recently so in surgery.Reference Hughes-Hallett and Mayer 62 The predominant driver of inattentional blindness appears to be greater cognitive load of the primary task,Reference Hughes-Hallett and Mayer 62 , Reference Simons and Chabris 63 although augmentation of the visual field has also been implicated in multiple studies.Reference Dixon, Daly, Chan, Vescan, Witterick and Irish 64 - Reference Marcus, Pratt and Hughes-Hallett 66 Use of wire-mesh and “inverse-realism” overlay techniques, rather than solid overlay, has been suggested to potentially reduce inattentional blindness.Reference Marcus, Pratt and Hughes-Hallett 66

Future Directions

Although AR has proven useful for overlaying multiple 3D imaging modalities onto the working area of interest, much work remains to improve utility, streamline workflow, and minimize surgeon distraction and visual fatigue. Improvements in registration techniques, currently an active area of research for nonaugmented navigation technologies, will allow automatic intraoperative reregistration to account for soft-tissue deformation or changes in patient positioning. Broadening the range of content that can be overlaid, for instance computational fluid dynamics quantifications of blood flow from intraoperative angiograms, will extend the scope of AR to AVM resections and other procedures where purely anatomic data are insufficient.

Advances in display technology, particularly in augmented optics for microscopes, will streamline AR integration into existing hardware and improve visualization and depth perception. The intent of AR is to display contextually relevant content over the area where it is required, theoretically minimizing the current ergonomic hindrance of surgeons looking back and forth between multiple displays for preoperative planning and intraoperative navigation. Part of the improvement in display ergonomics may come from compact or readily wearable devices, arenas in which the consumer electronics and gaming sectors are making significant progress. The use of smartphone or tablet cameras and displays, coupled with internal accelerometers for positional tracking, has already shown promise for inexpensive video see-through AR.50 Given their relative affordability, tablet-based AR windows are particularly useful in telesurgical applications for developing nations.67 Consumer AR HMDs, such as Microsoft's HoloLens, are optimized for consumption of video and gaming content, but may well be applicable to surgical environments with the addition of tracking capabilities for head pose and position.

Finally, the interaction between surgeons, augmented displays, and robotically actuated instruments shows tremendous promise for faster and safer surgery across multiple disciplines. Here again, the consumer and gaming sectors have made significant strides in how users interact with virtual content, particularly in gaming environments, where gesture controls must match or exceed the comfort and accuracy of classic handheld controllers. Novel techniques to improve interaction with virtual content, including the ability to freeze and manipulate virtual objects over a live real-world scene, are in development.68 Haptic devices, including styli and gloves, continue to evolve in the consumer arena in an effort to improve tactile feedback; although these may not be practical in a sterile intraoperative setting, they show significant promise for preoperative planning and surgical education.69

Conclusions

In an era when image guidance is used increasingly across multiple surgical disciplines, AR represents the next frontier, where guidance systems are integrated seamlessly into the surgical workflow. We review here the current state of AR using neurosurgery as an example, as one of the surgical disciplines most heavily reliant on advanced imaging. This work represents one of the most comprehensive recent overviews of neurosurgery-specific applications of AR. Challenges to the routine adoption of augmented displays remain, from technical aspects, such as depth misperception and temporal asynchrony, to human factors, such as visual fatigue and inattentional blindness. Nonetheless, rapid advances in display technology and interaction techniques, driven in part by the consumer gaming industry, promise a burgeoning role for AR in the modern neurosurgical operating room.

Acknowledgments and Funding

Salary support for DG is provided in part by a Canadian Institutes of Health Research Postdoctoral Fellowship (FRN 142931).

Disclosures

DG, NA, NN, SG, CM, and VXDY do not have anything to disclose.

References

1. Tang, SL, Kwoh, CK, Teo, MY, Sing, NW, Ling, KV. Augmented reality systems for medical applications. IEEE Eng Med Biol Mag. 1998;17:49-58.
2. Shuhaiber, JH. Augmented reality in surgery. Arch Surg. 2004;139:170-174.
3. Alaraj, A, Charbel, FT, Birk, D, et al. Role of cranial and spinal virtual and augmented reality simulation using immersive touch modules in neurosurgical training. Neurosurgery. 2013;72(Suppl 1):115-123.
4. Sielhorst, T, Feuerstein, M, Navab, N. Advanced medical displays: a literature review of augmented reality. J Disp Technol. 2008;4:451-467.
5. Sutherland, IE. A head-mounted three dimensional display. Proc AFIPS Fall Jt Comput Conf. 1968:757-764.
6. Roberts, DW, Strohbehn, JW, Hatch, JF, Murray, W, Kettenberger, H. A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. J Neurosurg. 1986;65:545-549.
7. Rankin, TM, Slepian, MJ, Armstrong, DG. Augmented reality in surgery. In: Technological Advances in Surgery, Trauma and Critical Care. New York: Springer New York; 2015, p. 59-71.
8. Kersten-Oertel, M, Jannin, P, Collins, DL. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph. 2013;37:98-112.
9. Tagaytayan, R, Kelemen, A, Sik-Lanyi, C. Augmented reality in neurosurgery. Arch Med Sci. 2016:1-7.
10. Bastien, S, Peuchot, B, Tanguy, A. Augmented reality in spine surgery: critical appraisal and status of development. Stud Health Technol Inform. 2002;88:153-156.
11. Lerotic, M, Chung, AJ, Mylonas, G, Yang, G-Z. Pq-space based non-photorealistic rendering for augmented reality. Med Image Comput Comput Assist Interv. 2007;10:102-109.
12. Tuceryan, M, Greer, DS, Whitaker, RT, et al. Calibration requirements and procedures for a monitor-based augmented reality system. IEEE Trans Vis Comput Graph. 1995;1:255-273.
13. Cabrilo, I, Schaller, K, Bijlenga, P. Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg. 2015;83:596-602.
14. Grimson, WL, Ettinger, GJ, White, SJ, Lozano-Perez, T, Wells, WM, Kikinis, R. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization. IEEE Trans Med Imaging. 1996;15:129-140.
15. Besharati Tabrizi, L, Mahvash, M. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J Neurosurg. 2015;123:206-211.
16. Edwards, PJ, Hawkes, DJ, Hill, DLG, et al. Augmentation of reality using an operating microscope for otolaryngology and neurosurgical guidance. J Image Guid Surg. 1995;1:172-178.
17. Gleason, PL, Kikinis, R, Altobelli, D, et al. Video registration virtual reality for nonlinkage stereotactic surgery. Stereotact Funct Neurosurg. 1994;63:139-143.
18. Gildenberg, PL, Ledoux, R, Cosman, E, Labuz, J. The exoscope—a frame-based video/graphics system for intraoperative guidance of surgical resection. Stereotact Funct Neurosurg. 1994;63:23-25.
19. Masutani, Y, Dohi, T, Yamane, F, Iseki, H, Takakura, K. Augmented reality visualization system for intravascular neurosurgery. Comput Aided Surg. 1998;3:239-247.
20. Kawamata, T, Iseki, H, Shibasaki, T, Hori, T. Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note. Neurosurgery. 2002;50:1393-1397.
21. Paul, P, Fleig, O, Jannin, P. Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging. 2005;24:1500-1511.
22. Pandya, A, Siadat, MR, Auner, G. Design, implementation and accuracy of a prototype for medical augmented reality. Comput Aided Surg. 2005;10:23-35.
23. Gildenberg, PL, Labuz, J. Use of a volumetric target for image-guided surgery. Neurosurgery. 2006;59:651-659.
24. Lovo, EE, Quintana, JC, Puebla, MC, et al. A novel, inexpensive method of image coregistration for applications in image-guided surgery using augmented reality. Neurosurgery. 2007;60:362-366.
25. Kockro, RA, Tsai, YT, Ng, I, et al. Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery. 2009;65:795-798.
26. Low, D, Lee, CK, Dip, LLT, Ng, WH, Ang, BT, Ng, I. Augmented reality neurosurgical planning and navigation for surgical excision of parasagittal, falcine and convexity meningiomas. Br J Neurosurg. 2010;24:69-74.
27. Bisson, M, Cheriet, F, Parent, S. 3D visualization tool for minimally invasive discectomy assistance. Stud Health Technol Inform. 2010;158:55-60.
28. Navab, N, Heining, SM, Traub, J. Camera augmented mobile C-arm (CAMC): calibration, accuracy study, and clinical applications. IEEE Trans Med Imaging. 2010;29:1412-1423.
29. Luciano, CJ, Banerjee, PP, Bellotte, B, et al. Learning retention of thoracic pedicle screw placement using a high-resolution augmented reality simulator with haptic feedback. Neurosurgery. 2011;69:ons14-ons19; discussion ons19.
30. Wang, A, Mirsattari, SM, Parrent, AG, Peters, TM. Fusion and visualization of intraoperative cortical images with preoperative models for epilepsy surgical planning and guidance. Comput Aided Surg. 2011;16:149-160.
31. Shenai, MB, Dillavou, M, Shum, C, et al. Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance. Neurosurgery. 2011;68:200-207.
32. Weiss, CR, Marker, DR, Fischer, GS, Fichtinger, G, Machado, AJ, Carrino, JA. Augmented reality visualization using image-overlay for MR-guided interventions: system description, feasibility, and initial evaluation in a spine phantom. AJR Am J Roentgenol. 2011;196:W305-W307.
33. Azimi, E, Doswell, J, Kazanzides, P. Augmented reality goggles with an integrated tracking system for navigation in neurosurgery. IEEE Virtual Real Conf 2012 Proc. 2012:123-124.
34. Chang, YZ, Hou, JF, Tsao, YH, Lee, ST. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery. In: Tescher AG, editor. Applied Digital Image Processing, Vol. 8499. Bellingham: SPIE-Int Soc Optical Engineering; 2012.
35. Abe, Y, Sato, S, Kato, K, et al. A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note. J Neurosurg Spine. 2013;19:492-501.
36. Mahvash, M, Besharati Tabrizi, L. A novel augmented reality system of image projection for image-guided neurosurgery. Acta Neurochir. 2013;155:943-947.
37. Inoue, D, Cho, B, Mori, M, et al. Preliminary study on the clinical application of augmented reality neuronavigation. J Neurol Surg. 2013;74:71-76.
38. Yudkowsky, R, Luciano, C, Banerjee, P, et al. Practice on an augmented reality/haptic simulator and library of virtual brains improves residents' ability to perform a ventriculostomy. Simul Healthc. 2013;8:25-31.
39. Hooten, KG, Lister, JR, Lombard, G, Lizdas, DE, Lampotang, S, Rajon, DA, et al. Mixed reality ventriculostomy simulation: experience in neurosurgical residency. Neurosurgery. 2014;10(Suppl 4):576-581.
40. Cabrilo, I, Bijlenga, P, Schaller, K. Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir. 2014;156:1769-1774.
41. Cabrilo, I, Bijlenga, P, Schaller, K. Augmented reality in the surgery of cerebral aneurysms: a technical report. Neurosurgery. 2014;10(Suppl 2):251-252.
42. Deng, W, Li, F, Wang, M, Song, Z. Easy-to-use augmented reality neuronavigation using a wireless tablet PC. Stereotact Funct Neurosurg. 2014;92:17-24.
43. Kersten-Oertel, M, Gerard, I, Drouin, S, et al. Augmented reality in neurovascular surgery: first experiences. In: Linte CA, Yaniv Z, Fallavollita P, Abolmaesumi P, Holmes DR, editors. Augmented Environments for Computer Assisted Interventions, Vol. 8678. Berlin: Springer-Verlag; 2014, p. 80-89.
44. Fritz, J, U-Thainual, P, Ungi, T, et al. MR-guided vertebroplasty with augmented reality image overlay navigation. Cardiovasc Intervent Radiol. 2014;37:1589-1596.
45. Wu, JR, Wang, ML, Liu, KC, Hu, MH, Lee, PY. Real-time advanced spinal surgery via visible patient model and augmented reality system. Comput Methods Programs Biomed. 2014;113:869-881.
46. Watson, JR, Martirosyan, N, Skoch, J, Lemole, GM, Anton, R, Romanowski, M. Augmented microscopy with near-infrared fluorescence detection. In: Pogue BW, Gioux S, editors. Proc. SPIE, Molecular-Guided Surgery: Molecules, Devices, and Applications, Vol. 9311. Bellingham: SPIE-Int Soc Optical Engineering; 2015.
47. Kersten-Oertel, M, Gerard, I, Drouin, S, et al. Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg. 2015;10:1823-1836.
48. Abhari, K, Baxter, JSH, Chen, ECS, et al. Training for planning tumour resection: augmented reality and human factors. IEEE Trans Biomed Eng. 2015;62:1466-1477.
49. Watanabe, E, Satoh, M, Konno, T, Hirai, M, Yamaguchi, T. The trans-visible navigator: a see-through neuronavigation system using augmented reality. World Neurosurg. 2016;87:399-405.
50. Eftekhar, B. A smartphone app to assist scalp localization of superficial supratentorial lesions—technical note. World Neurosurg. 2016;85:359-363.
51. Hill, DL, Maurer, CR, Maciunas, RJ, Barwise, JA, Fitzpatrick, JM, Wang, MY. Measurement of intraoperative brain surface deformation under a craniotomy. Neurosurgery. 1998;43:514-526; discussion 526-528.
52. Reinertsen, I, Lindseth, F, Askeland, C, Iversen, DH, Unsgård, G. Intra-operative correction of brain-shift. Acta Neurochir (Wien). 2014;156:1301-1310.
53. Gerard, IJ, Kersten-Oertel, M, Drouin, S, et al. Improving patient specific neurosurgical models with intraoperative ultrasound and augmented reality visualizations in a neuronavigation environment. In: Oyarzun Laura C, Shekhar R, Wesarg S, et al., editors. Clinical Image-Based Procedures. Translational Research in Medical Imaging, Vol. 9401. Lecture Notes in Computer Science; 2016, p. 28-35.
54. Shin, BJ, James, AR, Njoku, IU, Härtl, R. Pedicle screw navigation: a systematic review and meta-analysis of perforation risk for computer-navigated versus freehand insertion. J Neurosurg Spine. 2012;17:113-122.
55. Tuceryan, M, Genc, Y, Navab, N. Single-point active alignment method (SPAAM) for optical see-through HMD calibration for augmented reality. Presence Teleoperators Virtual Environ. 2002;11:259-276.
56. Watt, SJ, Akeley, K, Ernst, MO, Banks, MS. Focus cues affect perceived depth. J Vis. 2005;5:834-862.
57. Bando, T, Iijima, A, Yano, S. Visual fatigue caused by stereoscopic images and the search for the requirement to prevent them: a review. Displays. 2012;33:76-83.
58. Hu, X, Hua, H. An optical see-through multi-focal-plane stereoscopic display prototype enabling nearly correct focus cues. Proc SPIE. 2013;8648:86481A.
59. Nagata, S. How to reinforce perception of depth in single two-dimensional pictures. Proc SID. 1983;25:239-246.
60. Kersten-Oertel, M, Chen, SS, Drouin, S, Sinclair, DS, Collins, DL. Augmented reality visualization for guidance in neurovascular surgery. Stud Health Technol Inform. 2012;173:225-229.
61. Ware, C, Balakrishnan, R. Reaching for objects in VR displays: lag and frame rate. ACM Trans Comput Hum Interact. 1994;1:331-356.
62. Hughes-Hallett, A, Mayer, EK, et al. Inattention blindness in surgery. Surg Endosc. 2015;29:3184-3189.
63. Simons, DJ, Chabris, CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception. 1999;28:1059-1074.
64. Dixon, BJ, Daly, MJ, Chan, HH, Vescan, A, Witterick, IJ, Irish, JC. Inattentional blindness increased with augmented reality surgical navigation. Am J Rhinol Allergy. 2014;28:433-437.
65. Dixon, BJ, Daly, MJ, Chan, H, Vescan, AD, Witterick, IJ, Irish, JC. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg Endosc. 2013;27:454-461.
66. Marcus, HJ, Pratt, P, Hughes-Hallett, A, et al. Comparative effectiveness and safety of image guidance systems in surgery: a preclinical randomised study. Lancet. 2015;385:S64.
67. Davis, MC, Can, DD, Pindrik, J, Rocque, BG, Johnston, JM. Virtual interactive presence in global surgical education: international collaboration through augmented reality. World Neurosurg. 2016;86:103-111.
68. Arshad, H, Chowdhury, SA, Chun, LM, Parhizkar, B, Obeidy, WK. A freeze-object interaction technique for handheld augmented reality systems. Multimed Tools Appl. 2016;75:5819-5839.
69. Tang, JKT, Tewell, J. Emerging human-toy interaction techniques with augmented and mixed reality. New York: Springer International Publishing; 2015, p. 77-105.

Figure 1 The first described head-mounted display, with ceiling-mounted mechanical head position tracking mechanism.5


Figure 2 Current methods of overlaying virtual content. The example shown is a minimally invasive lumbar hemilaminectomy captured with a head-mounted camera. (A) No augmentation; (B) solid overlay; and (C) wire-mesh overlay.


Figure 3 Examples of current AR display methods. (A) Video see-through HMD, with head-mounted video camera35; (B) user's view of output from video see-through HMD, with augmentation calibration marker (gray) and overlaid vertebroplasty needle trajectories (red, yellow)35; and (C) image projection of cortex and deep lesion (red) onto skin surface for incision planning.15


Figure 4 Timeline of neurosurgical applications of augmented reality.


Table 1 Summary of studies on neurosurgical applications of AR