Neglect results in long-term cognitive disability
Unilateral spatial neglect is a lateralised attention disorder characterised by the failure to orient to, attend to, respond to, or report stimuli appearing in the contralesional hemispace (Buxbaum et al., 2004; Vuilleumier, 2013), despite intact sensory abilities (Howard & Rowe, 2018). It is common and often long-lasting, though highly variable, following stroke (Kaufmann, Cazzoli, Muri, Nef, & Nyffeler, 2020a; Kaufmann et al., 2020b; Harvey, Learmonth, Rossit, & Chen, 2021; Ringman, Saver, Woolson, Clarke, & Adams, 2004) and results in long-term, major disability (Checketts et al., 2020; Conti & Arnone, 2016; Wee & Hopman, 2008). Lateralised frontoparietal neuroanatomical networks, especially in the right hemisphere, are strongly implicated in the pathology (He et al., 2007; Pedrazzini & Ptak, 2020; Wu et al., 2016). The effectiveness of current treatment approaches is uncertain (Longley et al., 2021; Tavaszi, Nagy, Szabo, & Fazekas, 2021; Umeonwuka, Roos, & Ntsiea, 2020).
Classical clinical methods assess neglect incompletely
In clinical settings, neglect is typically assessed using simple pen-and-paper tests such as cancellation tasks (Gauthier, Dehaut, & Joanette, 1989; Halligan & Marshall, 1989), clock drawing (Freedman, Leach, Kaplan, Shulman, & Delis, 1994), and line bisection (Albert, 1973). However, pen-and-paper tests do not reliably identify moderate neglect, even when several tests are used (Buxbaum et al., 2004; Buxbaum, Dawson, & Linsley, 2012; Harvey et al., 2021). Consequently, several researchers have recommended computerised approaches that might detect neglect when classical methods cannot (Bonato, Priftis, Umilta, & Zorzi, 2013; Buxbaum et al., 2012; Ogourtsova, Souza Silva, Archambault, & Lamontagne, 2017), including virtual reality (VR) (Coyle, Traynor, & Solowij, 2015; Fordell, Bodin, Bucht, & Malm, 2011; Pedroli, Serino, Cipresso, Pallavicini, & Riva, 2015).
Virtual reality (VR) is useful for cognitive rehabilitation
Simulated and immersive technologies, such as VR, are often ill defined and confused with one another (Gorman & Gustafsson, 2020). Broadly speaking, VR replaces the perception of reality with a computer simulation. Modern VR is most commonly a computer-generated visual simulation of a 3D environment that can be interacted with naturalistically, in real time, using a headset and hand controllers. The headset occludes perception of the external world, so the user experiences a surrounding 3D virtual space, a quality of VR termed immersion. The key risk of VR is motion sickness, but feelings of motion sickness appear to be low in people with stroke (Laver et al., 2017).
VR is well suited to clinical settings. Within VR, we can build complex environments that allow patients to engage in activities that might be impossible or unsafe for them in the real world (Farrow & Reid, 2004; Kim et al., 2007, 2010). These activities can be delivered and monitored by clinicians remotely via telehealth (Burdea, 2003; Morse, Biggart, Pomeroy, & Rossit, 2020; Threapleton, Drummond, & Standen, 2016). VR is readily gamified and therefore highly engaging for patients (Pietrzak, Pullman, & McGuire, 2014; Thornton et al., 2005), which may facilitate longer rehabilitation sessions, greater adherence to treatment, and better outcomes (Adlakha, Chhabra, & Shukla, 2020; Lohse, Hilderman, Cheung, Tatla, & Van Der Loos, 2014; Parker, Lord, & Needham, 2013; Huygelier, Mattheus, Vanden Abeele, Van Ee, & Gillebert, 2021).
VR can map spatial attention and attention problems in neglect
VR can measure spatial attention and map spatial neglect (Buxbaum et al., 2012; Dvorkin, Bogey, Harvey, & Patton, 2012; Harada & Ohyama, 2019; Knobel et al., 2020). Knobel et al. (2020) evaluated the feasibility of a simple visual search task for neglect assessment. The VR game had players search for targets (20 white spheres) located among distractors (100 white cubes). Players were to find all spheres as quickly as possible by touching them with the handheld controller, changing their colour to red; cubes were to be avoided. Players stopped when they stated they had found all spheres. Participants reported the VR as usable, with minimal adverse effects. Compared with controls, those with neglect identified far more targets on the right than on the left and were slower overall. There was no significant difference in total right-side targets found between the neglect and control groups. The sensitivity of the VR game and pen-and-paper tests to detect neglect was statistically equivalent. Knobel et al. (2020) note that their study probed only peri-personal (reaching) space, and that employing available technology such as eye-tracking would provide further insight into patients' attentional maps. For an extended background on neglect, traditional assessments, and VR in neglect, see Appendix I.
Introducing the attention atlas (AA)
Overview
Against this background, we present a new VR platform for neglect assessment, the AA. The AA aims to provide accurate and detailed neglect diagnostics that are accessible to clinicians and patients, to advance our understanding of cognitive impairment following brain injury, and to lead to new rehabilitation opportunities.
We plan to critically evaluate the AA in comparison to pen-and-paper methods for accurate neglect detection and categorisation. Griffith University researchers and clinicians at Gold Coast University Hospital and Logan Hospital (Queensland, Australia) codesigned the VR game within The Hopkins Centre's Brain and Enriched Environment (BEEhive) Laboratory for cognitive rehabilitation. The AA arose directly from a clinical need for new and effective treatments for neglect. It represents a work in progress, informed by interdisciplinary discussions, extensive playtesting, patient testers within a pre-existing Gold Coast University Hospital Neurosciences Rehabilitation Unit Recreational Activity Program, and detailed clinical feedback at Logan Hospital.
Features and innovations
The AA presents participants with an immersive 3D virtual environment displaying a target amongst several distractors. Building on previous VR neglect assessments, the AA aims to create detailed maps of visuospatial attention and inattention that are precise, accurate, valid, and reliable (Ogourtsova et al., 2017). The AA adopts recommendations to use eye-tracking (Dvorkin et al., 2012; Knobel et al., 2020), quantifies attention in near and far space (Knobel et al., 2020), employs collaborative design (Morse et al., 2020), and aims to establish a reference database based on larger sample sizes (Dvorkin et al., 2012; Knobel et al., 2020). Finally, the psychometric properties (external validity and reliability) of the instrument will be investigated as part of the research programme through detailed quantitative analysis, directly addressing a major recommendation identified from previous VR attempts to assess and treat unilateral spatial neglect (Ogourtsova et al., 2017). Several innovations may maximise the AA's sensitivity, practicality, and overall effectiveness. These features and innovations include:
1. Basing the software on established visual search localisation paradigms
2. Mapping attention using raycasts for continuous and efficient assessment
3. Implementing eye-tracking (for the HTC Vive Pro Eye)
4. Using a variety of coordinates, search modes, and stimulus parameters
5. Calibrating an origin to standardise results across players
6. Incorporating game design principles, including level variety and progression
7. Allowing games to be based on time limits as well as trial counts
8. Analysing performance in near-time
9. Saving game data robustly
10. Opening software access to facilitate neurorehabilitation research
Visual search paradigm
The AA is depicted in Fig. 1. The player is seated in the real world and perceives a virtual space using a VR headset and hand controller. The AA uses the established cognitive psychology visual search paradigm of locating a single target among distractors (Wolfe & Horowitz, 2004). At the beginning of each trial, the target is cued centrally, serving both to remind the participant of the target and to recentre attention, allowing the identification of potential visuospatial attentional biases, for instance along the horizontal axis. The search array then appears, and the player is required to locate the target among distractors [see Fig. 1(a)]. The paradigm allows the player to move their head, eyes, and hand to find and point toward the target. In this example, the target is the letter 'T' located among distractor 'L's, a task requiring spatial attention to be allocated serially to each element in turn until the target is found; this goal-directed selective attention is affected in neglect [see Fig. 1(b)].
Attention mapping using raycasts
In addition to the traditional behavioural measures of RT (ms) and accuracy (% correct) for target localisation, which can be computed by target position, the AA uses raycasts. Raycasts are ray-to-surface collision tests that quantify the user's orientation in 3D space. Rays are cast (i.e., projected) in a straight line from each of the raycast sources (headset, controller, eye gaze) to collide with the raycast surface, a low-polygon icosphere that surrounds the player [see Fig. 1(c)]. Each raycast source has a ray transform consisting of its origin position 3D (x, y, z) vector and forward direction 3D vector. The raw raycast output is the 3D (x, y, z) hit position on the raycast surface, which is converted into spherical coordinates for analysis. The raycast surface is centred at the origin (headset position), determined via a calibration procedure described below. Because raycasts are sampled continuously at the display refresh rate of the VR device (e.g., 90 Hz), they document the search process implicitly and allow more efficient attentional mapping than is possible using traditional measures, which are acquired more slowly (e.g., at <1 Hz, depending on RTs). When required, raycasts can measure attention in 360° surrounding the player [see Fig. 1(d)].
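As a minimal sketch of this conversion (not the AA's actual implementation; the axis conventions and function names below are assumptions), a hit position relative to the calibrated origin can be expressed as a longitude (left/right of straight ahead) and a latitude (above/below the horizon):

```python
import numpy as np

def cartesian_to_spherical(hit_xyz, origin_xyz):
    """Convert a raycast hit position (x, y, z) to spherical coordinates
    (longitude, latitude, radius) relative to the calibrated origin.
    Assumes a y-up frame with +z straight ahead (as in Unity); conventions may differ."""
    x, y, z = np.asarray(hit_xyz, dtype=float) - np.asarray(origin_xyz, dtype=float)
    radius = np.sqrt(x**2 + y**2 + z**2)
    longitude = np.degrees(np.arctan2(x, z))      # degrees left (-) / right (+) of midline
    latitude = np.degrees(np.arcsin(y / radius))  # degrees below (-) / above (+) the horizon
    return longitude, latitude, radius

# Example: a hit slightly left of and above straight ahead, ~5 m from the origin
print(cartesian_to_spherical((-0.5, 0.4, 5.0), (0.0, 0.0, 0.0)))
```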
AA parameters
The AA incorporates a variety of parameters, including coordinate systems, depth configurations, search modes, and stimuli. These parameters are described in turn. The two coordinate systems position search array elements systematically and at a common radius with respect to the origin (headset position). The spherical coordinate system is based on latitude and longitude and produces horizontally and vertically symmetrical positions. The icosphere coordinate system positions search elements at the icosphere's vertex positions, which are approximately equally spaced in 360°, allowing attentional assessment in front of, behind, to the left and right of, and above and below the player. The icosphere recursion (i.e., subdivision) level can vary the density of the positions, and the element inclusion angle can adjust the sampled visual field extent from a central field of view [see Fig. 1(e)].
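To illustrate the idea of systematic element placement at a common radius, the hypothetical sketch below lays out positions as rings and radial arms around the straight-ahead direction, similar in spirit to the example game described later; the AA's actual layout code and parameters may differ.

```python
import numpy as np

def ring_arm_layout(n_rings=4, n_arms=8, radius=4.0, max_eccentricity=30.0):
    """Sketch of a search layout: n_rings x n_arms element positions at a common
    radius from the origin, within a central field of view.
    Returns (x, y, z) positions in a y-up frame with +z straight ahead."""
    positions = []
    for ring in range(1, n_rings + 1):
        eccentricity = np.radians(max_eccentricity * ring / n_rings)  # angle from centre
        for arm in range(n_arms):
            theta = 2 * np.pi * arm / n_arms                          # angle around centre
            x = radius * np.sin(eccentricity) * np.cos(theta)
            y = radius * np.sin(eccentricity) * np.sin(theta)
            z = radius * np.cos(eccentricity)
            positions.append((x, y, z))
    return np.array(positions)

print(ring_arm_layout().shape)  # (32, 3) for these illustrative defaults
```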
The AA is configured to compare attention at different depths, using two configurations that place elements symmetrically in near and far space and counterbalance polar longitude and latitude coordinates with depth radius. This allows attention (raycasts and traditional measures) to be compared for the same latitude and longitude coordinates at different depths, with minimal occlusion from foreground elements. Stereoscopic vision inherently provides a depth cue; maintaining the element's physical size across depths can provide an additional, optional depth cue [see Fig. 1(f)].
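The role of element size as a depth cue can be illustrated with a simple visual angle calculation; the sizes below are illustrative assumptions, not the AA's actual stimulus dimensions.

```python
import math

def visual_angle_deg(size_m, depth_m):
    """Visual angle (degrees) subtended by an element of a given physical size
    viewed from a given depth."""
    return math.degrees(2 * math.atan(size_m / (2 * depth_m)))

# Constant physical size (0.3 m): retinal size shrinks with depth, giving a size depth cue
print(visual_angle_deg(0.3, 2.0), visual_angle_deg(0.3, 4.0))  # ~8.6 deg vs ~4.3 deg

# Scaling size with depth equates visual angle, removing that cue
print(visual_angle_deg(0.3, 2.0), visual_angle_deg(0.6, 4.0))  # both ~8.6 deg
```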
To assess different characteristics of visuospatial attention, the colours of the elements can be varied. For example, presenting all elements in a common colour (e.g., white) assesses serial spatial attention (Wolfe & Horowitz, 2004). Presenting the target in a unique colour among homogeneously coloured distractors assesses bottom-up attention. Presenting half of the elements in the target colour and the other half in the distractor colour, a colour-shape conjunction task, assesses feature-based attentional filtering of the distractor colour (Painter, Dux, Travis, & Mattingley, 2014). Heterogeneously coloured distractors may increase search difficulty due to task-irrelevant featural variation (e.g., Wei, Yu, Müller, Pollmann, & Zhou, 2019) ['rainbow' mode; see Fig. 1(h)].
Various stimulus options are preconfigured that assess serial search to varying degrees: a target letter 'T' among distractor 'L's, as previously described; a target '6' among rotated distractor '6's; a target 'ψ' symbol among distractor Georgian characters; a target balloon without a string among distractor balloons with strings; and a target queen of diamonds playing card among royal cards (jacks, queens, and kings) from all suits (diamonds, hearts, spades, and clubs). Each of these stimuli, when presented uniformly in white, requires serial spatial attention to a varying extent.
Gameplay loop
As the AA is intended for attention quantification in brain injury populations, the game is designed to be as accessible as possible, requiring only that the player point to and select the target. As described, each trial consists of a cue followed by the search array, which is randomly generated. Both the cue and the target within the array are selected using the hand controller, which acts as a virtual laser pointer extending from the controller to the raycast surface, indicated as a yellow beam in-game. The laser pointer highlights the selected element by colouring it yellow. The player presses a button on the front of the controller to enter their selection for both the cue and array displays. Feedback rewards correct target localisation with a pleasant sound played through the VR headphones and colourful confetti that appears at the target location. After each response, a new cue appears. The player instructions, including element selection, are depicted in Fig. 2.
Origin calibration
Origin calibration quantifies the player's headset position and orientation (i.e., heading direction) within the physical space of the real world, which allows the search array and raycast surfaces to be placed at common positions relative to the player, irrespective of the headset's position and orientation. The player (and corresponding virtual VR camera rig) is located near the 3D virtual playing space origin (x = 0, y = 0, z = 0). Four origin targets (spheres, radius = 5 m) are presented, with corresponding superimposed arrows, at each of the four poles (north, south, east, and west), 100 m from the playing space origin. The player's task is to move their head (and thus the headset) to orient directly toward, and raycast collide with, one of the origin targets, with their chair in physical space facing toward one of the poles. The use of four poles allows the player to choose a convenient orientation within the physical space [see Fig. 3(a)].
The player views an origin target sphere (grey), and a superimposed arrow indicates which direction the player should move their head to face the origin target directly. If the angle between the headset raycast transform and the origin target position 3D vector is small, the arrow reduces in size, indicating that the player is within the intended calibration accuracy (<29.6°) and should press a button on the VR controller. The sphere changes colour to white, indicating that the player has pressed the button. This triggers a 1 s period during which the player's 3D headset position is recorded as a mean (white sphere). The sphere then translates in elevation (y) and in the lateral position corresponding to the calibration orientation (x or z) to match the headset position. The sphere then turns yellow, indicating that the headset raycast collides with the origin target. The player is required to hold this position for 4 s, during which time their headset position is recorded. The mean value is taken as the player's origin, and stimuli are presented with respect to this location [see Fig. 3(b)]. Origin calibration can be performed once at the start of the game or repeated at the start of each level (i.e., trial block) to account for drift in seated position and to provide rests between levels.
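A minimal sketch of the two calibration steps described above (checking that the headset is oriented toward the origin target within tolerance, then averaging the headset position over the hold window) is shown below; the function names and example values are illustrative assumptions, not the AA's code.

```python
import numpy as np

def angle_between_deg(forward_vec, target_vec):
    """Angle (degrees) between the headset's forward direction and the
    direction from the headset to the origin target."""
    f = np.asarray(forward_vec, float) / np.linalg.norm(forward_vec)
    t = np.asarray(target_vec, float) / np.linalg.norm(target_vec)
    return np.degrees(np.arccos(np.clip(np.dot(f, t), -1.0, 1.0)))

def within_calibration_tolerance(forward_vec, target_vec, tolerance_deg=29.6):
    """True when the player faces the origin target closely enough to proceed."""
    return angle_between_deg(forward_vec, target_vec) < tolerance_deg

def mean_origin(headset_positions):
    """Average the (x, y, z) headset samples recorded during the hold period;
    the mean is taken as the player's origin for stimulus placement."""
    return np.mean(np.asarray(headset_positions, float), axis=0)

# Example: headset roughly facing +z, origin target straight ahead
print(within_calibration_tolerance((0.05, 0.02, 1.0), (0.0, 0.0, 1.0)))  # True
print(mean_origin([(0.01, 1.20, 0.02), (0.00, 1.21, 0.01), (-0.01, 1.19, 0.03)]))
```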
Game design: level variety and progression
Early AA configurations seek to identify the optimal parameters that best distinguish gross neglect from non-neglect. We propose that efficient scanning of the stimulus parameter space will be made possible with raycasts due to their continuous and high sampling rate. Efficiency also allows us to incorporate principles of game design (Bavelier & Green, 2019; Deterding, Dixon, Khaled, & Nacke, 2011; Shah, Basteris, & Amirabdollahian, 2014), including variety and progression, which we implement as a series of levels with differing stimulus parameters. Consider an example game, depicted in Fig. 4. This example is based on the spherical coordinate system of 24 element positions arranged in four rings and eight radial arms. Positions that fall outside a central field of view, thus requiring the largest head movements for element identification, are depicted in dark grey [see Fig. 4(b)]. Using the element inclusion angle parameter, it is possible to select and present only a subset of positions on any given trial.
Each game comprises a series of levels, which can start simple and become progressively more challenging. In contrast to traditional attention studies, which use a set number of trials (e.g., Wolfe, 1998), each level can last for a fixed duration, which ensures that the AA accommodates players of all abilities within limited clinical schedules. The stimulus parameters within each level can involve randomisation of search conditions across trials, minimising the impact of practice effects on within-level performance comparisons.
The example game starts with a brief tutorial that familiarises users with the search task, with the target defined by a unique feature, making localisation simple. To assess attention along both the horizontal and vertical axes, Level 2 intermingles trials with elements presented individually on each of these axes. To assess which stimuli might be most sensitive to neglect, Level 2 also contrasts attention to balloons, cards, and symbols. To assess whether neglect affects attention in depth, Level 3 contrasts attention to elements positioned at the same polar coordinates at near (2 m) and far (4 m) depths. To map the visual space more completely, Level 4 uses a larger number of target elements. In this example, the duration of each level is proportional to the number of elements and conditions presented [see Fig. 4(b)].
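One way to express such a level sequence is as a simple configuration structure with per-level durations and randomised within-level conditions. The sketch below is hypothetical; its field names and durations are illustrative, not the AA's actual data structures.

```python
import random

# Hypothetical configuration for the example game described above.
example_game = [
    {"level": 1, "name": "Tutorial",     "duration_s": 60,
     "stimuli": ["letters"], "axes": ["horizontal"]},
    {"level": 2, "name": "Axes/stimuli", "duration_s": 180,
     "stimuli": ["balloons", "cards", "symbols"], "axes": ["horizontal", "vertical"]},
    {"level": 3, "name": "Depth",        "duration_s": 180,
     "stimuli": ["letters"], "depths_m": [2.0, 4.0]},
    {"level": 4, "name": "Full map",     "duration_s": 300,
     "stimuli": ["letters"], "axes": ["horizontal", "vertical"]},
]

def next_trial(level):
    """Randomise search conditions within a level to minimise practice effects."""
    return {"stimulus": random.choice(level["stimuli"]),
            "depth_m": random.choice(level.get("depths_m", [3.0]))}

print(next_trial(example_game[2]))
```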
Near-time attention maps and behavioural performance
The AA is designed to provide immediate feedback to clinicians on the player's attentional state, so that the AA might ultimately be adapted and personalised for each individual within a gameplay session. Note that we use the term near-time, as opposed to real-time, to indicate that the attention maps are created immediately after, rather than during, each level. Figure 5 shows an example output for one AA level. The results are collated at the game's end to provide a single .pdf document of all the level results in the sequence in which they were undertaken. The results show detailed behavioural performance, system performance, and individual-level significance tests for the spatial symmetry of attentional raycasts.
Each game is associated with a start time that acts as an anonymous timestamp and game identifier [see Fig. 5(a)]. A level descriptor shows the level name and the conditions within each level. Each level comprises a series of stimulus options presented in multiple repetitions and in random order, which minimises practice effects across levels and facilitates between-condition comparisons. Each level is plotted with all conditions combined and with each condition separately [see Fig. 5(b)]. Raw raycast hit positions are converted into spherical coordinates with a common field of view (rotated appropriately for each calibration orientation) to facilitate comparisons across individuals [see Fig. 5(c)]. Target localisation performance (accuracy and RT) is presented as a function of ordinal trial position to identify potential effects of fatigue [see Fig. 5(d)]. System performance is assessed by analysing the frame rate, which should be close to the maximum for the headset (e.g., 90 Hz); higher frame rates indicate less input lag, a better player experience, and more temporally detailed raycast data [see Fig. 5(e)]. RTs, accuracy, and the number of trials are mapped by target position to identify potential attentional biases, for example on the horizontal axis [see Fig. 5(f–h)]. Raycast heatmaps are plotted for each attentional source (headset, controller, and gaze), indicating frequently attended locations in 3D space [see Fig. 5(i–k)]. Latitudinal attentional distribution and symmetry tests for the raycast sources are conducted to identify potential attentional biases on these axes.
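As an illustration of the kind of individual-level symmetry test described above, the sketch below tests whether raycast hit longitudes are distributed symmetrically about the calibrated midline; the same approach applies to latitudes. SciPy and the simulated data are used purely for illustration, and the AA's actual statistics may differ.

```python
import numpy as np
from scipy import stats

def longitudinal_symmetry_test(longitudes_deg):
    """Test whether raycast hit longitudes (degrees; negative = left of midline)
    are symmetrically distributed about 0 for a single player.
    Returns the mean longitude and a one-sample t-test against 0."""
    longitudes = np.asarray(longitudes_deg, float)
    t, p = stats.ttest_1samp(longitudes, popmean=0.0)
    return {"mean_longitude_deg": longitudes.mean(), "t": t, "p": p}

# Example: simulated rightward bias, as might be seen with left-sided inattention
rng = np.random.default_rng(0)
simulated_hits = rng.normal(loc=12.0, scale=20.0, size=500)  # hits skewed rightward
print(longitudinal_symmetry_test(simulated_hits))
```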
Software architecture
The software comprises The Game and The Analyser, two integrated systems that work together in near-time. The Game is programmed in C# using custom-written namespaces, classes, methods, and extensions for the Unity game engine (2019.4.20f1); it creates the interactable visual search environments, presents the visual stimuli, and saves the data in a variety of formats, including human-readable, cross-software-compatible, and C#-native binary formats. The Analyser assesses the behavioural performance and raycast data to quantify attention during visual search and provides immediate feedback to clinicians. The Analyser uses Python 3.9.5 [MSC v.1928 64-bit (AMD64)]. The key components of The Game and The Analyser are outlined in Tables 1 and 2, respectively.
Plugins and modules
The SteamVR plugin is used to present the VR environment and to access the raycast source transforms. Other plugins include the TobiiXR plugin, which enables eye-tracking with the HTC Vive Pro Eye, and NumSharp, which allows C# arrays to be saved directly into Python numpy format. External Python modules include pandas, pyarrow, matplotlib, seaborn, astropy, PIL, fpdf, and keyboard. The AA has been tested with the HTC Vive, HTC Vive Pro, HTC Vive Pro Eye, and Oculus Rift CV1.
Save game system
Data are saved as gameplay recordings that are anonymised by date-time. Data are saved with the machine name to allow tracing back to the acquisition site. Each level is saved separately and grouped within a common game folder. The data and results folder and file structure is depicted in Fig. 6.
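A hypothetical illustration of this kind of anonymised, date-time-keyed layout is sketched below; the folder and file names are illustrative assumptions, and the AA's actual structure is the one shown in Fig. 6.

```python
import os
import platform
from datetime import datetime

def make_game_folder(root="results"):
    """Create a game folder keyed by an anonymous date-time identifier and the
    machine name, so data can be traced back to the acquisition site.
    The layout here is illustrative only (see Fig. 6 for the AA's structure)."""
    game_id = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    game_folder = os.path.join(root, f"{game_id}_{platform.node()}")
    os.makedirs(game_folder, exist_ok=True)
    return game_folder

def level_folder(game_folder, level_number):
    """Each level is saved separately, grouped within the common game folder."""
    path = os.path.join(game_folder, f"level_{level_number:02d}")
    os.makedirs(path, exist_ok=True)
    return path

print(level_folder(make_game_folder(), 1))
```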
Code availability
The version of the AA software described in this paper is, and will remain, publicly available on GitHub (https://github.com/davidrosspainter/TheAttentionAtlas). Additionally, the software will be made publicly available on the Open Science Framework (https://osf.io/pa96f/) prior to publication. The software will be made available under the CC BY-SA license, which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use and reuse under the same terms.
Research programme
Co-design, feasibility study with users in real-world settings
Our feasibility study employs six focus areas incorporating exploratory mixed-methods analyses in a collaborative design (Bowen et al., 2009). Mixed methods have been recommended for feasibility studies of VR use in people's homes (Laver et al., 2020), and the same applies to hospitals. Morse et al. (2020) identify a need for more mixed-methods studies of patient and clinician perceptions of VR use for neglect, as this approach can enhance our understanding of how to increase therapy engagement.
When creating a health tool for clinical use, Morse et al. (2020) recommend involving clinicians, patients, and carers as stakeholders in a collaborative design process; limited engagement with clinicians and patients has been identified in the current literature (Morse et al., 2020). Collaborative design ensures that the experiences and concerns of clinicians and patients are included (Santana et al., 2020) and that tools are personalised to the population of interest (Lange, Flynn, Proffitt, Chang, & Rizzo, 2010); VR needs to be specifically designed for the stroke population so that various physical and cognitive impairments can be accommodated (Huygelier et al., 2021).
A usability study should be the first step when testing new VR programmes (Morse et al., 2020), but to our knowledge only a few studies have measured the usability of immersive VR in a neglect population (Knobel et al., 2020; Morse et al., 2020; Ogourtsova, Archambault, & Lamontagne, 2019), finding high levels of usability. However, these studies were small scale and restricted to the patient experience, and their authors recommend further usability testing. We aim to test usability with a larger number of patients and with clinicians onsite in a hospital, allowing ecologically valid testing of the AA. We aim to develop a VR application that is usable and useful for medical and allied health staff and enjoyable and engaging for patients. The scale of the current study also allows for ongoing updates to the software based on patient and clinician feedback, in a form of quasi-action research, before full testing with people with and without neglect. Ultimately, the AA can be further developed into an immersive game experience with maximum usability and clinical function, thereby advancing the field of translational neuroscience.
Hypothesis
We hypothesise that system-level behavioural performance on the AA will produce distinct functional attention maps across one or more spatial dimensions (horizontal, vertical, and depth) for neurotypical individuals and those with brain injury, including people with neglect.
Design
Studies will address procedural, scientific, and clinical feasibility domains. Each domain is important and distinct, and findings from one domain are not required for another. We have incorporated a participatory approach to allow for direct involvement of end users (patients and clinicians) in game design, informing the real-world application of the AA as well as the look and feel of the visual search task through iterative feedback cycles.
The project will be conducted across six separate studies relating to each of the focus areas and three overall aims (see Table 3). Some studies will run in parallel. Findings from this feasibility study will inform the likelihood of successful implementation (and cost-planning) and focus areas for future full-scale efficacy trials and validation studies.
Aims and studies
Aim 1
Aim one is to ascertain procedural feasibility in a small-scale demonstration study. Tests with healthy subjects identify a leftward visual field bias, known as pseudoneglect (Friedrich, Hunter, & Elias, 2018; Jewell & McCourt, 2000; Ribolsi, Di Lorenzo, Lisi, Niolu, & Siracusano, 2015). For this aim, we will establish the normal variation (visual field variation, task performance variation) within two 25-min test sessions. A convenience sample of healthy participants aged between 18 and 65 years will trial the AA and provide before-and-after feedback on the task, allowing us to quantify normal variation and measure baseline visuospatial attention within the virtual space. Based on power analysis, we will recruit a sample of 27 (one-sample t-test against M = 0, α = 0.05, power = 0.80). Quantitative data will be analysed using descriptive methods (SD, mean, median, percentages), interindividual comparisons of visual field regions of interest, and histogram-based raycast attentional map analysis (Blascheck et al., 2017).
Study one will determine the acceptability of the AA regarding the experience of motion sickness [using the 12-item Simulator Sickness Questionnaire (Kennedy, Lane, Berbaum, & Lilienthal, 1993), administered before and after the session], usability [measured using the System Usability Scale (Brooke, 1996; Morse et al., 2020)], and gaming experience [using the Game Experience Questionnaire (Poels, de Kort, & IJsselsteijn, 2007)]. Short qualitative interviews will further explore user experience.
Study two will quantify inter- and intra-individual variation in visuospatial attention. For all participants, we will measure raycasts, RT, and accuracy. The outcomes of these studies will ascertain the overall functionality of the AA for healthy participants and define the AA parameters that inform sampling and error calculation for future validation studies. We will also identify the extent of the useful field of view for successful, accurate target searches.
Aim 2
Aim two is to determine scientific feasibility in a small-scale consecutive case series study. Functionally, we know that neglect varies between individuals (Dvorkin et al., 2012). Although some patients present with marked spatial inattention to their visual world, others present with more subtle but still problematic inattention deficits that remain undetected. This feasibility study will include a consecutive case series of N = 50 people with stroke (including left and right hemisphere lesions) identified within 4 weeks of their inpatient rehabilitation admission at the Gold Coast University Hospital Neurosciences Rehabilitation Unit over a consecutive 6-month intake period. A full medical file review and documentation of cognitive, sensory, and motor deficits, noting functional symptoms and any description of neglect and/or visual field difficulties, will allow the establishment of a clinical test database to archive symptom recordings and clinical performance data.
Study three will first establish the procedural feasibility and acceptability of the AA for clinical samples, applying the same methods as Study 1. We aim to determine whether participants with brain injury can orientate themselves and perform the instructed tasks (accounting for level of hand mobility, visual problems, etc.), and to capture their perceptions of acceptability.
Study four will examine the scientific efficacy of the AA by conducting a small-scale experiment with participants who have a diagnosis of stroke, with (>1) and without definite neglect as noted on the medical file (retrospective cases with internal controls). For all participants, we will conduct standard neglect assessments (line bisection, letter cancellation, clock drawing). Using the AA, we will measure visuospatial attention for each participant (RT, eye gaze, accuracy, and raycast heatmaps), in addition to the actual time to complete and any procedural variations noted. Quantitative performance data will be mapped and compared across participants (and compared to standard tests). To determine the capacity to identify potential neglect phenotypes and subgroups from demonstration data, if the sample allows, we will apply unsupervised K-means cluster analysis (Henry, Dymnicki, Mohatt, Allen, & Kelly, 2015). This exploratory data analysis will provide valuable insights for further hypothesis generation. Aim two will allow the creation of preliminary neglect performance datasets and accompanying patterns of symptoms for a consecutive sample. For each participant, we will create the first neglect maps by converting raycasts from Cartesian to spherical coordinates.
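As an illustration of the kind of unsupervised clustering described, the sketch below applies K-means to hypothetical per-participant AA features; the feature names, simulated values, and cluster count are illustrative assumptions, not the study's actual variables or analysis settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Simulated per-participant features (illustrative only, n = 50 participants)
rng = np.random.default_rng(1)
features = np.column_stack([
    rng.normal(0, 10, 50),     # mean raycast longitude (deg; + = rightward bias)
    rng.normal(1.2, 0.4, 50),  # left/right RT ratio for target localisation
    rng.uniform(0.5, 1.0, 50), # left-field accuracy (proportion correct)
])

# Standardise features, then search for candidate neglect phenotypes/subgroups
X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))  # participants per candidate cluster
```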
Aim 3
Aim three will establish clinical feasibility in a small-scale observational study and a qualitative value study. For a rehabilitation tool to be of value to rehabilitation services, patients and clinicians must trust its practical application and potential to guide clinical management (Zeeman, Kendall, & Wright, 2013). A separate study will be conducted with hospital inpatients to establish the clinical feasibility of the platform.
Study five will conduct a preliminary, small-scale case-control observational study to establish the clinical implementation feasibility of the AA. We aim to distinguish true positive from true negative test performance between neglect cases (a second consecutive sample) and a consecutive comparison group (neurologically intact inpatients matched for age and gender). Sample size estimates for Study 5 are based on existing computer-based studies of neglect (Bonato, Priftis, Marenzi, Umiltà, & Zorzi, 2010). Taking a conventional large effect size of d = 0.8 (α = 0.05, power = 0.80), the required sample size for this case-control study is N = 42 (21 patients and 21 controls). A large effect size is expected based on studies showing that computer-based testing is sensitive and specific in detecting neglect (Bonato, Priftis, Marenzi, Umiltà, & Zorzi, 2010, 2012; Bonato et al., 2013). For our purposes, the power analysis is based on a between-samples t-test comparison of neglect versus controls; for example, the mean horizontal raycast position for neglect versus controls. Quantitative data will be analysed using descriptive statistics and ANOVAs for independent-samples comparisons.
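The stated group size can be reproduced with standard power software. The sketch below uses statsmodels and assumes a one-sided independent-samples comparison; the software and settings used for the original calculation are not specified here, so this is an illustration rather than a replication of that calculation.

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

# One parameterisation consistent with 21 participants per group:
# independent-samples t-test, d = 0.8, alpha = 0.05, power = 0.80, one-sided.
n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05,
                                          power=0.80, ratio=1.0,
                                          alternative='larger')
print(ceil(n_per_group))  # approximately 21 per group, i.e., N = 42 in total
```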
Study six aims to enhance the task and the experiential value of the AA. We will work individually with a small number of participants (N = 5 inpatients who are within 1 month of discharge from hospital) to enhance the platform with additional graphics and narrative context. After a demonstration test, we will seek their feedback and suggestions and will incorporate video game approaches to guide the reward and engagement components of the task, ensuring that future patients are motivated to engage with the cognitive assessment platform. We will also work with the onsite clinical team to determine the value of the case report analyses and modifications for producing a clinically meaningful 'fingerprint' of the volumetric neglected space for individual patients, along with preliminary procedural steps for a clinician-operated mode (e.g., including a user-friendly graphical user interface). Aim three will establish the clinical value of the AA, accompanied by a full suite of testing options for a larger clinical trial and development into full game capability. We will engage the skills of an expert graphic narrative consultant, who will identify and develop opportunities to enrich the task graphics and immersion.
Conclusions and next steps
The AA builds on the foundation of research into VR for detecting neglect. It aims to create a neglect detection VR platform that is highly sensitive, enjoyable, and effective for clinical use. The AA uses the latest technology, integrates eye-tracking, and will provide clinicians with immediate, clinically relevant feedback on a patient's level of neglect. The research proposed is an iterative process in which feedback from patients and clinicians will guide the next phase of the AA. This paper outlines why we have developed the current application and our plans to develop it further. Better assessment leads to better treatment; more enjoyable treatment leads to better outcomes. The AA applies the principles of translational neuroscience, whereby new-generation cognitive assessment tools may ultimately be developed into VR games offering optimal scientific accuracy and patient engagement.
Supplementary materials
For supplementary material for this article, please visit https://doi.org/10.1017/BrImp.2022.15
Acknowledgements
Griffith University researchers and clinicians at Gold Coast University Hospital and Logan Hospital (Queensland, Australia) codesigned the VR game within The Hopkins Centre’s Brain and Enriched Environment (BEEhive) Laboratory for cognitive rehabilitation.
Financial support
The development work and research programme described in this paper was supported by a National Health and Medical Research Council Ideas Grant (APP2002362) 'DImensional Attention MOdelling for Neglect Detection (DIAMOND): A novel application for brain injury', a Metro South Health Research Support Scheme (MSH RSS) Project Grant (RSS_2021_173) '3D visuospatial attention mapping in patients with stroke and other neurological conditions' and First Prize in the Bionics Queensland Challenge 2020 in AI-Enabled Bionics 'AI-enabled spatial attention assessment and training system'.
DH is supported by an Early Career Research Fellowship from the National Health and Medical Research Council of Australia (GNT1142929).
Conflicts of interest
DSH receives royalties for books about pain and perception from Noigroup publications. He has also received travel support from Reality Health and speaking fees from professional and scientific bodies for lectures on pain.
All other authors have no conflicts of interest to declare that are relevant to the content of this article.
Ethical standards
The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.
Open practices statement
The version of the AA software described in this paper is, and will remain, publicly available on GitHub (https://github.com/davidrosspainter/TheAttentionAtlas). Additionally, the software will be made publicly available on the Open Science Framework (https://osf.io/pa96f/) prior to publication. The software will be made available under the CC BY-SA license, which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use and reuse under the same terms.
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.