Introduction
One of the key aims for the assessment of clinical skills in general is to assure competence in procedural tasks related to a specific role while at the same time differentiating between those individuals who possess those skills and those who do not.1 But what is understood about the exact construct and attributes of clinical competence? It has been proposed that it is the habitual and judicious combination of communication, knowledge, technical skills and clinical reasoning2 and that a competent professional should possess personal, cognitive and technical attributes.3
Learners need to demonstrate their knowledge and understanding of the underpinning principles of radiotherapy by applying them in a clinical context. Assessing these links operates at two levels. The first is formative assessment for learning, which can guide individual learner development and determine where knowledge has or has not been gained in relation to syllabus content and outcomes; these actions are normally student mediated. Summative assessment, by contrast, is assessment of learning: it is instructor or tutor mediated and is used to determine whether students have learned the material they have been taught, to what degree they understand it and whether they can apply it. In his pyramid of clinical competence, Miller4 referred to this as the ‘shows how’ and the ‘does’ levels of performance, and the direct observation of procedural skills in a clinical setting remains the primary and overarching method to assess these links in patient care. However, in radiotherapy, while advances in technology have led to the streamlining and optimisation of many processes, such as automatic parameter and couch position settings, they have also reduced opportunities for learners to synthesise, assimilate and demonstrate the application of fundamental principles in clinical practice. Two forces acting in combination have given rise to this reduced opportunity. The first is that linear accelerators now require fewer hands-on operations than previous generations because of automation; as a result, therapeutic radiographers now perform tasks more aligned to system programming. The second is that increasing the level of computerised control has increased the level of precision, with an associated reduction in the margins between the tumour target volume and surrounding healthy tissue.
So while automation has the potential to improve safety by reducing the risk of human error, it has also moved the emphasis away from hands-on (psychomotor) positioning of the patient at the linear accelerator isocentre.
These factors have the potential to widen the gap between the conceptual knowledge of radiotherapy and its practice.5 Learners need to recognise that automated systems can fail, and without experience it is difficult to recognise that an error has been made. It is therefore incumbent on educators to instil in learners a moment-to-moment appreciation of the potential for failures within the treatment pathway, a condition referred to as safety mindfulness.6 It has been suggested that the traditional apprenticeship model of learning and assessment in a clinical setting is becoming obsolete in the light of automation, since those actions that were once hands on have now become invisible. The well-known saying of ‘see one, do one, teach one …’,7 once the conventional wisdom of mastery learning, and the often subjective approach to performance assessment are therefore no longer appropriate.8 This is largely because the ‘see one’ activities have become invisible and the ‘do one’ processes have become button presses.
The introduction of the Virtual Environment for Radiotherapy Training (VERT) platform in 2008 provided opportunities for educators to rethink existing learning, teaching and assessment strategies, and a recent audit reported a wide variety of resource development and implementation internationally.9 These roles were brought into even sharper focus in the early part of 2020 when the COVID-19 pandemic resulted in restricted access to clinical placement sites and a reduction in clinical opportunities. This left clinical educators needing to rapidly re-evaluate the role of VERT and other simulation platforms as an alternative method of assessment in order to meet course and institution assessment requirements and regulatory body standards of proficiency.10 While there is a wealth of evidence to support the role of simulation-based education in the assessment of technical and interprofessional skills in other areas of health care and medical education, the evidence base for the use of simulation platforms for the assessment of clinical skills in Medical Radiation Science is limited.11 In radiotherapy specifically, the evidence base underpinning VERT pedagogy is still relatively small, but growing. Research has tended to focus on the educational effectiveness of VERT12,13 and its impact on confidence building, understanding of fundamental principles and clinical performance,14,15 with less emphasis on how the platform may be used as a summative assessment tool.16
The aim of this article is to contribute to this discussion by sharing early experiences of using VERT for the summative assessment of a palliative, parallel pair external beam radiotherapy treatment technique in the School of Radiology at AECC University College, Bournemouth, UK.
Method and Materials
Contextual background to the project
The inaugural intake of eight first-year students joined the BSc (Hons) Radiotherapy and Oncology course in the School of Radiology at AECC University College, Bournemouth, UK, in September 2020 and completed their first radiotherapy department-based clinical placement in December 2020. This placement acted primarily as an orientation and observation period, but with an expectation that students would progress into active participation in basic pretreatment and treatment delivery tasks towards the end of the 5-week period. During placement two, which was scheduled for, and took place in, February and March 2021, these early experiences would be consolidated and progress measured by participation in and successful completion of summative gateway or checkpoint assessments. These involved the delivery of a palliative parallel pair and a multi-field pelvis technique.
Although the second phase of COVID-19 restrictions in England had begun at the beginning of November, the first clinical placement went ahead as scheduled. As the placement progressed, in anticipation of the restrictions continuing and the possibility of additional measures which might impact on placement two, the academic course team discussed the option of moving the palliative, parallel pair assessment into VERT. The rationale for transferring this assessment into VERT was the desire to reduce scheduling pressures on patient workflows for our clinical partners, given the need to identify enough clinical opportunities and suitable patients for deliberate practice for eight students in the run-up to their summative assessments before the end of the clinical placement period. During the discussions, three questions were considered: could it be done; if so, how; and what would the reality look like?
Pre-assessment preparation
In January 2021, during the 5-week campus-based run-up to placement two, students had already been scheduled for 4 hours of guided practice with VERT each week to focus on palliative radiotherapy for metastatic disease. During these sessions, they were encouraged to work in pairs or groups of three or four to promote peer-to-peer exchange of experiences. As the sessions were supported with a palliative radiotherapy workbook, the incorporation of a summative assessment on its completion was felt to be a natural conclusion to these sessions. The course team selected virtual patient plans for brain, lung and pelvis from the VERT plan database to represent the diagnosis groups of metastatic brain disease, late-stage primary lung cancer and metastatic bone disease in the right or left hip. Each virtual patient had anterior midline and lateral skin marks, which allowed references from these initial positioning coordinates to the actual treatment field centres to be incorporated into the patient set-up and positioning instructions, testing students’ understanding and application of translational couch shifts. Eight fictional case scenarios (two for each diagnosis group) were developed; each had a brief case history, demographic details (name, full address and date of birth), a dose prescription with associated field sizes, inter-field (separation) measurements and immobilisation and positioning information for added authenticity and realism. The course team employed a blinded random number allocation method to assign a different virtual patient to each student, who then used their case scenario to compile a treatment sheet. As part of this process, and by working in pairs, they were required to manually calculate the daily dose and monitor units (MUs) for their patient, to cross-check their partner’s calculation and to compile their own treatment sheet.
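The blinded allocation step could be sketched as follows; the scenario and student labels, and the fixed seed, are illustrative placeholders rather than the actual method used:

```python
# Minimal sketch of a blinded random allocation of eight case scenarios
# to eight students; all names and the seed are hypothetical.
import random

scenarios = [f"case_{i}" for i in range(1, 9)]   # two per diagnosis group
students = [f"student_{i}" for i in range(1, 9)]

rng = random.Random(2021)      # seed known only to the allocator ("blinded")
shuffled = scenarios[:]
rng.shuffle(shuffled)
allocation = dict(zip(students, shuffled))

# each student receives a different virtual patient
assert len(set(allocation.values())) == len(students)
```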
While the accuracy of the calculation and treatment sheet information did not form part of the summative assessment, it was used as a formative exercise to link learning outcomes from the Radiotherapy Physics and Equipment module to safe and accurate treatment delivery and to support a parameter check and ‘beam-on’ safety discussion which was assessed.
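As an illustration of the kind of manual cross-check described above, a simplified parallel pair MU calculation might look like the sketch below; the factor names and values are hypothetical teaching placeholders, not clinical commissioning data:

```python
# Simplified monitor unit (MU) hand-calculation for one field of an
# equally weighted parallel pair; all factors are illustrative only.

def monitor_units(dose_per_field_cgy, output_cgy_per_mu, tissue_factor,
                  field_size_factor):
    """MU = dose per field / (output x tissue factor x field-size factor)."""
    return dose_per_field_cgy / (output_cgy_per_mu * tissue_factor
                                 * field_size_factor)

# e.g., a palliative prescription of 20 Gy in 5 fractions
daily_dose_cgy = 2000 / 5            # 400 cGy per fraction
dose_per_field = daily_dose_cgy / 2  # 200 cGy per field (equal weighting)
mu = monitor_units(dose_per_field, 1.0, 0.85, 0.98)
print(round(mu, 1))                  # 240.1
```

The point of the paired exercise is that two students computing this independently should arrive at the same MU value before it reaches the treatment sheet.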
Developing the assessment checklist
Performance during the assessment was recorded on a clinical task checklist (refer to supplementary information) which identified 25 component steps of the workflow; this was the same checklist that would be used for assessments undertaken in the clinical setting. Those activities identified as safety critical, for example, patient identification, consent checking, infection control measures, correct patient positioning and the completion of an appropriate parameter check, were designated as pass or fail and, if omitted or not fully completed, would incur an automatic technical fail classification. Workflow subtasks were divided into six pass or fail pre-procedure checks, 17 yes or no treatment delivery items and two pass or fail post-procedure items. While no numerical passing score was applied, each student was assigned an overall pass or fail classification which was communicated to them immediately on completion of the assessment.
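The pass/fail logic implied by the checklist, in which any omitted safety-critical item triggers an automatic technical fail, could be expressed as the following sketch; the item names are illustrative, not the actual checklist wording:

```python
# Hypothetical encoding of the checklist decision rule: a missed
# safety-critical item is an automatic technical fail; item names
# are placeholders, not the real 25-item checklist.

SAFETY_CRITICAL = {"patient_identification", "consent_check",
                   "infection_control", "patient_positioning",
                   "parameter_check"}

def classify(results):
    """results maps item name -> True (completed) / False (not completed)."""
    if any(not results.get(item, False) for item in SAFETY_CRITICAL):
        return "technical fail"
    return "pass" if all(results.values()) else "fail"

completed = {item: True for item in SAFETY_CRITICAL}
completed["couch_shift"] = True
print(classify(completed))  # pass
```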
The assessment process
All assessments were completed on the same day in the penultimate week before commencement of placement two, with students pre-booking a 30-min time slot for their assessment. The individual assessments were overseen by both members of the course team: one as lead assessor and the other initially acting as the patient for the pre-treatment three-point positive identification and health and well-being checks, then as observer and moderator for the lead assessor. Virtual plans were preloaded by the lead assessor prior to the student entering the room, with the couch set to a low position away from the gantry. Students were required to position the patient at the isocentre using skin marks and couch shifts according to the treatment sheet information. When students indicated an acceptable set-up, they were required to outline the final parameter checks and the safety procedure for leaving the room. In the absence of a real linear accelerator keypad and control panel emulator, a Microsoft PowerPoint mock-up of a control panel and CCTV monitors was used as the platform for each student to articulate their awareness of patient and radiation safety processes and control room checks. Once this was completed, students were required to move the treatment couch to a suitable (safe) position for patient offload and to identify the post-treatment patient checks and room cleaning procedures. At this point, the assessment concluded and each student was advised of the assessment outcome. This was followed by a short (10-min) verbal debrief during which the student was given the opportunity to reflect on their performance and to receive contingent feedback from the assessment team. This feedback highlighted action points which would feed forward into their learning contracts and action plans for placement two.
Ethical considerations
As the project did not involve an intervention which was considered to sit outside everyday clinical practice, it was classified as a service or process evaluation according to the AECC University College Research Ethics Policy and Guidance. As such it did not require Research Ethics Subcommittee approval.
Results
Observations on student performance
All eight students attended for their assessment as scheduled and seven (87·5%) achieved a pass classification at first attempt. The unsuccessful student received verbal feedback which highlighted a series of action points and activities to focus on during their second clinical placement. This would provide the opportunity for consolidation of skills during placement two, additional support and feedback from the clinical team and bespoke remedial tutorials as required. They also had the opportunity of a second attempt during the scheduled University second assessment period in June 2021.
The assessment also provided the assessment team with the opportunity to observe, compare and contrast the performance of the whole cohort in one session. It was particularly interesting to see how each participant was beginning to develop their own approach to patient positioning and the order of psychomotor actions employed to achieve an accurate set-up. Examples included raising the couch first or moving it towards the gantry first, and whether or not they used the field light as well as the lasers to determine the correct left-to-right couch position whilst setting up to the anterior positioning coordinate. It also provided an opportunity to gain an impression of the spatial and situational awareness of each student, particularly when applying translational couch shifts from the initial positioning coordinates to the final treatment isocentre.
The academic perspective
The assessment format utilised the same approach that would have been followed if the assessment had been completed in the clinical setting, using the same checklist. It was acknowledged that while VERT may not replicate all the actions required in an assessment of this type, the use of focused questions from the assessor provided opportunities for participants to think aloud and articulate their deeper understanding of those principles that could not be actively demonstrated; a prime example would be the control room processes for parameter checking, beam on and patient monitoring. During the planning stages, the inclusion of standardised patients or patient actors was considered, but as campus access was restricted this was not a viable option in this instance. There was a possibility that the role of the patient, as taken by one of the teaching and assessment team, could have impacted on the authenticity of the assessment. Reflecting on this risk, the course team held the view that as all participants actively immersed themselves in their roles, this was not a major issue. That said, there are plans to incorporate patient actors who are not part of the course teams or Radiography cohorts in future iterations of VERT-based assessments.
No separate count was made at the time of the preparation time required for the palliative radiotherapy learning strand and assessment materials, as this was felt to be part of the normal learning and teaching resource development for the semester. However, the course team’s experience of developing similar materials over recent years can serve as an approximate guide. Practical workshop time had already been scheduled, so the session content was adjusted to meet the assessment learning outcomes. The development of the palliative care workbook involved the collation of previously used material, which took approximately 3 hours. The assessor handbook and instructions, adapted from the existing clinical assessor guidelines in the placement handbook and produced as part of the quality assurance and monitoring cycle, took 1 hour. The generation and allocation of the patient scenarios and the design and production of the treatment sheet took somewhat longer, at an estimated 4 hours.
The clinical perspective
The radiotherapy education lead and practice educator team from the clinical placement department were contacted via email to ascertain the time commitment for staff involved in single procedure assessments. They identified that, for each individual student, they would expect to spend in the region of 2 hours supervising deliberate practice in preparation for an assessment. On the assessment day itself, their practice was to allocate 1 hour for conducting the assessment, debriefing and providing feedback. As the year 1 clinical assessment package includes two such assessments, this would amount to a total of 48 h (6·4 days or 26%) of the second 5-week placement. Therefore, moving the parallel pair assessment into the virtual environment produced an estimated time commitment saving in the region of 13% at a time when services were constrained.
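The staff-time figures above can be reproduced with a short calculation; note that the 7.5-hour clinical day and 5-day week used here are assumptions needed to recover the percentages, not figures stated by the clinical team:

```python
# Reproducing the reported staff-time commitment; the working-day
# length (7.5 h) and the 5-day week are assumed values.

students = 8
hours_per_assessment = 2 + 1       # deliberate practice + assessment/debrief
assessments_per_student = 2
total_hours = students * hours_per_assessment * assessments_per_student

placement_hours = 5 * 5 * 7.5      # 5-week placement, 5 days/week, 7.5 h/day

print(total_hours)                                   # 48
print(round(total_hours / 7.5, 1))                   # 6.4 days
print(round(100 * total_hours / placement_hours))    # 26 (%)
# moving one of the two assessments into VERT halves the commitment:
print(round(100 * (total_hours / 2) / placement_hours))  # 13 (%)
```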
Overall, the move to a virtual assessment has delivered time savings for the clinical department and produced reusable learning, teaching and assessment objects which can now be adapted to support future simulation-based assessments with little additional time requirement. In addition, all participants were assessed on similar parallel pair scenarios, which was felt to have a positive impact in terms of equity compared to the often variable opportunities arising in the clinical environment.
Discussion
In framing the discussion, the commonly held conceptions for assessment design in general have been viewed from the perspective of their application to simulation-based assessment. The primary considerations when blueprinting the new assessment format were constructive alignment to clinical learning outcomes17,18 and authenticity, which Gulikers, Bastiaens and Kirschner19 defined as an assessment requiring learners to use the same combinations of knowledge, skills and attitudes that they need to apply in the criterion situation in professional life. The assessment team’s previous experience of using VERT for summative assessment involved year 1 students performing a series of linear accelerator daily quality assurance routines, predominantly a test of psychomotor skills. It remains to be seen how the assessment of attitudes may be replicated and how the proposed new structure will reflect and align to all the learning outcomes for palliative radiotherapy treatment delivery. This is likely to become clearer as more experience of this type of assessment format is gained.
The Association for Simulated Practice in Healthcare standards for practitioners20 identified that the use of simulation in high-stakes summative assessment is increasing. However, they also highlighted the need for assessments to be based on the intended learning outcomes of the exercise and to include clarity about the knowledge, skills and attitudes to be demonstrated. The standards also state that participants must have experience and familiarity with simulation prior to summative evaluation, that minimum expected performance standards should be explicitly shared between participants and trainers and that summative assessment should be based on evaluation tools previously tested with similar populations for validity and reliability. Students had been introduced to VERT and had used it as part of clinical preparation for their first placement; this was followed by 4 weeks of facilitated practice prior to assessment. Performance standards and workflow requirements were integral to the palliative radiotherapy workbook, and the content of the evaluation checklist mirrored external beam workflows and had a similar structure to other single procedure checklists employed in the clinical setting.
As simulation has become increasingly important in supporting mastery learning for technical procedural skills and patient care, it has also been reported to be a powerful approach for observing and assessing skills and for providing feedback to learners.21 One of the benefits of VERT is that it allows a course team to control and standardise both the content of and the context for the assessment and to give an indication of end-to-end safety.8 Assessment tools also need to predict what a novice practitioner may do when functioning in a clinical team, but with a focus on the individual’s skills without the need to also coordinate the activities of other team members. While it is acknowledged that first-year students would not necessarily be expected to coordinate the activities of others, possibly a challenge for some, they would be expected to demonstrate an awareness of the actions of other team members and to communicate their own actions to others.
The renewed interest in simulation-based assessment tools in clinical education comes with a requirement to review conventional validation benchmarks. So, before choosing whether to move the assessment traditionally conducted in the radiotherapy clinic setting into VERT, important questions needed to be answered. These included: Is it valid? That is, will it measure what it purports to measure? Is it reliable? That is, will performance be measured consistently over time? And finally, is it feasible? The first is relatively straightforward; as the proposed assessment in the virtual environment mirrors a treatment delivery workflow for a specific technique, then yes, performance in the same components can be measured in VERT as they would be in the real clinic setting. The second point, concerning reliability, is less easy to answer at this moment in time. The Accreditation Council for Graduate Medical Education definition of reliability21 proposes that it is related to outcomes being stable and consistent when repeated under similar circumstances across different cohorts (at the same level) and at different time points in the same institution. As this was the first iteration of the assessment in this format, a wait-and-see stance will need to be adopted. The final point can be answered with an overwhelming yes: enough time was built into the student and course team timetable to accommodate all the required timeslots for deliberate practice and final assessment.
While radiotherapy practice has continued to evolve, the course team’s experience of assessment checklist design and content has changed little. Is this necessarily a problem, and does it depend on what an assessment checklist needs to record? In other words, is it the checklist or the assessment that needs to be valid and reliable? There appears to be a general culture of conservatism around testing which contrasts with the rapid pace of change in the technological aspects of health care and the accelerating pace of innovation in education and practice. This risks the creation of a widening gap between current approaches to assessment and what actually occurs in the clinic. One barrier, reported by Holmboe et al.22 a decade ago, was an apparent lack of courageous leadership to use simulation in high-stakes programmes because simulation-based assessment (SBA) was perceived as being not yet perfect. While some progress has been made, perhaps the reality is that SBA will always be a work in progress. In radiotherapy specifically, a 2017 international audit9 to establish the role of VERT found that 32 of 47 respondents (68%) used the platform for summative assessment and that for 7 (22%) its usage was common. Where the theory and evidence may not yet be sufficient for high-stakes summative decisions for credentialing, SBA can start as a potent and valuable formative component and as a checkpoint or gateway summative assessment for progression. Given that SBA has unique strengths and can help to improve care, can the radiotherapy community afford to wait for ‘perfect’ evidence before incorporating meaningful, simulation-based assessments into higher stakes assessment for final credentialing?
Conclusions and Recommendations
While it is acknowledged that the assessment process reported here is a single snapshot, it has demonstrated that a VERT-based summative assessment is possible, and several benefits have been identified. Students had the opportunity for end-to-end practice in treatment delivery and to refresh clinical skills gained during their first placement. In addition, these skills were assessed in an environment removed from day-to-day clinical workflows at a time of increased pressure on the clinical department. Students were introduced to the process of summative, single procedure assessment and provided with opportunities for think-aloud discussion to link and apply physical concepts learned in other units (e.g., calculation of dose, patient safety and radiation protection) to the clinical setting. The academic course team was able to observe the individual application of knowledge and skills in a structured way and to provide contingent feedback during debrief, and the approach reduced some of the pressure on staff scheduling for the clinical practice educators, since not all staff are clinical assessment supervisors.
However, there is more work to do, as identified by Bridge.10 Such work could include the identification of those clinical scenarios which may be suitable for SBA alongside clinical placement as an essential aspect of skill acquisition, and the impact that this might have on placement planning. There is also potential for further study into the role of patient actors, the psychometric qualities of SBA and its predictive value for clinic-based performance, and learner behaviours in more complex techniques. Consideration might also be given to near-peer observation and feedback as an alternative to tutor-led assessment of performance, and to whether technical performance needs to be classified by a numerical score or whether emphasis could instead be placed on identifying a failing performance.
Supplementary Material
For supplementary material accompanying this paper visit https://doi.org/10.1017/S1460396922000073
Acknowledgements
The participation and engagement of the 2020–21 cohort of year 1 Radiotherapy & Oncology students at AECC University College in the assessment process is acknowledged by the authors.
Financial Support
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Conflicts of Interest
The authors declare none.
Ethical Standards
The authors assert that all procedures contributing to this work did not require approval by the AECC University College Research Ethics Sub-Committee.