Introduction
Emergency hospital systems are expected to respond effectively to intentional or accidental mass casualty incidents (MCIs). Such incidents are largely unpredictable and are increasing worldwide.1-3 Mitigation of the harms associated with these disasters often requires appropriate, high-quality healthcare processes to be enacted rapidly, safely and without duplication or error.4 To prepare clinicians and health systems, effective training and rehearsal using the multiple systems involved in an actual response is necessary. Response frameworks (for example, the Homeland Security Exercise and Evaluation Program,5 the Disaster Management Indicator Model,6 and the Hospital Incident Command System7) and disaster preparedness courses (such as the Major Incident Medical Management Support (MIMMS) courses offered by MIMMS Australia8) have been created to address this need. Simulation exercises also aim to model surge capacity response,9 and to test the application of frameworks, plans and learnings. Some systems, such as the Emergo Train System® (ETS), are designed to enable such testing, but reviews of the literature reveal surprisingly little evidence of benefit for broad multidisciplinary response teams.10,11 There is little evidence that simulation translates into better preparedness, or that improved preparedness makes a difference to actual responses. Consensus is also lacking on which measures should be used to evaluate responses to infrequent community disasters. Given the critical importance of timely and effective health system responses to incidents, and the significant investment governments and communities make in funding simulation activities, these uncertainties need to be resolved. This paper uses incident simulation as an investigational method. The objective of our study was to assess the impact of simulation on staff perception of preparedness to respond to an MCI.
Method
Study design
This is a pretest–posttest study evaluating the effect of three mass casualty exercises on changes in the self-assessed skills, knowledge and confidence of participants in responding to one of three MCIs.
Setting
The study was set in three public teaching hospitals, two located in the City of Gold Coast, South East Queensland, and one located in Brisbane, Australia, serving an approximate total base population of 2.9 million in 2017. The first hospital (A) is a tertiary hospital and trauma centre with 750 beds and more than 100,000 emergency department (ED) presentations annually. The second (B) is a 200-bed hospital with 65,000 ED presentations annually. The third hospital (C) is also a tertiary hospital, with 85,000 ED presentations annually. Hospitals A and C had disaster management plans in place at the time of the study.
Participants and participant orientation
Participants were full-time or part-time staff at each site, invited to participate by the exercise organisers. Between three and 10 additional external clinicians with a special interest in disaster management were also invited to attend each exercise as unpaid observers.
A list of expected staff participants was available, and each participant signed in for the exercise upon arrival. Participating staff who were not on the list were included in the relevant denominator values. All participants and observers were given a pretest survey in a briefing area. The coordinators and architects of the exercise were excluded from the study. Participants were front-line nurses, doctors and ancillary staff (administrators, non-clinical ward staff, blood bank technicians, and pharmacists) from the disciplines of emergency medicine, intensive care, trauma, surgery, pathology or radiology, as well as hospital executives and observers. Participants did not attend more than one exercise. No real people acted as patients for this exercise, which employed the ETS methodology. Casualties from the incidents were represented by 10 cm high magnetic “patients” (called “gubers” – see Box 1) placed on whiteboards labelled as various areas of the ED, theatre and wards.11
At each exercise, staff were orientated to the simulation structure and rules (but not the actual scenario) during a 20-minute lecture immediately preceding the exercise. The layout of the whiteboards in different rooms, and their representation of the actual beds available within the ED, theatre, wards, ICU and imaging areas, was described.
The clinical information content on the guber-patients was explained: general appearance (obvious injuries, consciousness, age bracket) was listed on the front of the guber, and age and vital signs were listed on the back (Box 1). The rules for allocating resources (including staff, equipment and procedures) to each guber-patient were described, and an exercise moderator well versed in the rules was assigned to one or more areas to assist. The times assigned to each clinical procedure, imaging study and transport, and the summation of delays on the whiteboards, were explained.
Description of exercises
Each simulation activity aimed to test each hospital’s response to an intentional MCI, to identify systems and processes needing adjustment, and to improve staff response efficacy. For each exercise, a scenario plan was developed, including details of the disaster, the number of casualties, and (for each casualty) the expected treatments, destinations and outcomes. Each exercise was limited to testing the in-hospital response only, and excluded pre-hospital responses. During each exercise, the incident response was overseen and supported by a Health Emergency Operations Centre (HEOC). Each exercise ran for 3–4 h in a set of non-clinical rooms at each hospital. Each exercise had three to four facilitators, who rotated between the simulation areas. The facilitators were senior clinicians with experience in using the ETS.
For each exercise, the durations of clinical investigations (X-ray/computerised tomography (CT) scan, blood pathology), transfer times between areas (for example, ED to intensive care unit (ICU)) and transfer requirements (e.g., having appropriate staff and equipment to accompany the patient) were set out based on hospital policies and statistics of the actual times taken to perform imaging, tests and transfers.
Table 1 below describes the three Emergo Train System mass casualty incident simulation exercises.
Staff available to care for guber-patients were represented by corresponding nurse and doctor gubers, as was equipment (e.g. CT scanners and ventilators). Staffing at the start of the exercise, and the actual patients in each area of interest (ED, ICU and operating theatre (OT)), were based on census and staffing data for each hospital at 6 pm on 1 May 2017 (hospital A), 22 April 2018 (hospital B) and 14 November 2018 (hospital C), to ensure that each exercise reflected real and typical conditions of patient flow, bed capacity and staffing. A fuller description of how the ETS simulation exercise operates can be found on-line.12
The A Hospital exercise occurred in May 2017, simulating the response to an intentional vehicular attack at a local mass-gathering event. Victim numbers, injury severities and times of arrival were modelled on the experience of the Pasteur 2 Adult Hospital during the Nice terror attack of 14 July 2016.13 Injury types were modelled on a summary report of intentional vehicle attacks.14
The B Hospital exercise occurred in February 2018 and was modelled on a “bombs and blasts” scenario. Eighty-one guber victims presented to the ED, following an accident in which a train loaded with commuters derailed and slammed into a power plant, which exploded. During this exercise, the B Hospital response was supported by the HEOC, which in turn was supported by the State Health Emergency Coordination Centre (SHECC), the role of which is to coordinate provision of wider health system support to local hospitals responding to crisis. During this exercise SHECC coordinated transfer of “patients” to other health services, for example, paediatric cases to Queensland Children’s Hospital in Brisbane. Again, data around staffing, patient lengths of stay in each clinical zone and care processes (such as intubation, X-ray, suturing, and requirements for induction agents) were tracked. Movement into each secondary zone was limited by predetermined “bed” spaces and local staffing.
The C Hospital exercise occurred in November 2018 and involved an aviation incident resulting in a “bombs and blasts” set of injury types. The exercise included ED, ICU, ward and theatre staff.
Measures
The three outcome measures of interest for this study were the changes in staff perceptions of their confidence, knowledge and skills before, and immediately following, participation in an ETS exercise, measured using a survey tool. Each attribute was measured on a 5-point Likert scale ranging from 1 (not confident/knowledgeable/skilled) to 5 (very confident/knowledgeable/skilled), and the pre- and post-exercise responses were linked for each individual. Only those individuals who remained at the end of the exercise completed the post-exercise survey. The surveys are available on-line as appendices I and II with the HTML version of this paper.
Survey development and administration
A multi-stage survey development process was undertaken.15 The first stage was to establish the most appropriate underpinning educational theory to inform the survey. Given the focus on experiential learning and on moving staff to a reasonable level of competence in disaster management, a five-stage progression from beginner to expert was used to underpin the survey questions.16 These stages were reflected in a five-point Likert scale assessment for each element.15 The survey used the key simulation learning outcome components of confidence, knowledge and skill as the elements of investigation.11 Consultation with participant pools and the available literature determined that the survey needed to be brief, paper-based and administered on-site as part of the simulation exercise to optimise engagement with the components under investigation. A linking code was entered at the time of completion of both the pre- and post-survey.
Once drafted, the survey went through several phases of refinement, including input from experts in emergency disaster management, clinical education processes and clinical simulation. The wording, phrasing and length of questions were revised to help ensure content and face validity.15 A high level of consistency was maintained between the pre- and post-simulation versions of the survey to best capture the impact of the training. The post-survey at hospital B was modified to capture information specifically requested by that site; questions 1–4 remained the same.
Survey process
The participant surveys were administered immediately before and immediately after the exercises. Pre-simulation surveys (appendix I) were provided to staff at sign-on, prior to the team briefing. To ensure staff responses were immediate and not influenced by senior staff debrief content or overall simulation summaries, the post-simulation survey (appendix II) was provided at the end of the exercise, prior to the structured debriefing.
Data analysis
Simple descriptive statistics (counts and proportions) were produced in SPSS v24.0. Pre- and post-survey data were linked for each individual staff member, and paired t-test analyses were performed to assess changes in skills, confidence and knowledge. A field for open comments about the value of the simulation was also included in the survey. These comments were transcribed verbatim and analysed thematically by two authors using an open-ended interpretative approach.17 This methodology enabled the researchers to capture detailed, contextualised and rich descriptions of participants’ key personal experiences and perceptions.
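For readers unfamiliar with the paired design, the core of the analysis can be sketched in a few lines of Python. The Likert ratings below are invented purely for illustration and are not the study's data; the paired t statistic is computed directly from the per-participant pre/post differences, mirroring what a paired t-test in SPSS does.

```python
import math
import statistics

# Invented 5-point Likert ratings for ten linked participants (NOT study data).
# Each pre/post pair belongs to the same individual, matched by linking code.
pre  = [2, 3, 2, 4, 3, 2, 3, 1, 2, 3]
post = [4, 4, 3, 5, 4, 4, 4, 3, 3, 4]

diffs = [b - a for a, b in zip(pre, post)]            # per-participant change
mean_change = statistics.mean(diffs)                  # average improvement
se = statistics.stdev(diffs) / math.sqrt(len(diffs))  # standard error of mean diff
t_stat = mean_change / se                             # paired t statistic, df = n - 1

print(f"mean change = {mean_change:.2f}, t({len(diffs) - 1}) = {t_stat:.2f}")
```

The t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain the p-value (a step handled automatically by statistical packages such as SPSS or `scipy.stats.ttest_rel`).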
Results
Overview of participants
According to the participant lists, 232 individuals participated in one of the exercises (88 at A, 71 at B and 73 at C). Of these, 129 (55.6%) completed both the pre- and post-exercise surveys. Most participants (69%) were clinical staff (nurses, doctors, radiologists, pharmacists and pathology staff). The non-clinical staff (31%) included executive team members, observers, exercise support and other ancillary members. Characteristics of participants are presented in Table 1.
Survey results
Average self-assessed confidence, skills and knowledge scores increased significantly (p < 0.001) after the exercise (Fig. 1). At the two hospitals that asked whether participants found the exercise valuable, 113 of 116 respondents to the post-exercise survey (97%) marked “yes”.
Open-ended comments from the survey
Comments on the post-exercise survey were received from 29 staff at the A and B hospitals. The themes that emerged were:
1. Pre-briefing
Many staff expressed the importance of a comprehensive pre-exercise session on how the exercise works. In general, staff participating in the exercise needed a better understanding of how the exercise runs: how to interpret the gubers, how to attach clinical treatments to gubers and assign wait times on the exercise boards, when other specialty teams should be called for consultation and the telephone numbers to do this within the exercise, the “rules of engagement”, and what was “in-exercise” and what was “out-of-scope”.
2. Communications
Concern was raised about the inability to reach other key people for decision-making because telephone lines were constantly engaged. A clear hierarchy of communication, and telephone numbers for all team leaders, were required. Instructions were needed on how to call back large numbers of staff. The physical presence of certain specialty staff at triage (e.g. surgeon, ICU, paediatrics) would have been beneficial. The initial announcement of the MCI was delayed, and communication from the hospital command and control team into the clinical areas was poor. Communications to ancillary departments (pharmacy, social work, blood bank) needed improvement.
3. Realism
Some services were not realistically accounted for (e.g. blood bank, pathology, pharmacy and the ordering of controlled medications). The partnership with ambulance services is critical, but pre-hospital care was not part of the exercises. In settings that are highly electronic and computerised, the lack of computers to look up laboratory results or radiology reports was disconcerting. Some participants would have preferred actors over gubers, and some wounds that would have been obvious in a real patient were overlooked as text on gubers. At A, the inability to transfer casualties to other sites was frustrating, as transfers would happen in real life. At B, transfers occurred and were important, as B ran out of ventilators after 17 “intubations” and six patients had to be ventilated using a bag-valve mask. Many gubers went missing while being transferred to imaging or theatre.
Discussion
This study has demonstrated that a mass casualty disaster simulation may improve staff self-perceived efficacy, increasing perceived preparedness through increased self-assessed knowledge, confidence and skills in managing a mass casualty incident at their health facility. This is in keeping with other literature reporting the effects of disaster simulation on clinical staff performance.18
Our experience at the study sites suggests that ETS requires less space and fewer human resources than other types of simulation, but that it suffers from a lack of realism and from enforced timing requirements that reduce the flexibility of responses. Simulation exercises can increase a learner’s sense of knowledge of, and comfort with, disaster topics more than other learning and teaching methods, which may increase staff participation in the event of an actual disaster.19 While some data suggest that increasing realism adds to the learning outcomes of disaster simulation,20 combination activities,21,22 three-dimensional virtual reality23 and video training24 can all enhance engagement with the clinical systems and processes used in a disaster. It would be useful to compare and contrast the costs and benefits of the different types of exercise that can be used in this context. Regardless of the effect on individuals, however, a major strength of exercises such as this lies in identifying where systems would potentially fail before they do so in real life.
This paper adds to the evidence base that MCI exercises increase staff preparedness. However, there is scarce evidence that better preparedness makes any difference to actual responses. Furthermore, measuring performance during a real disaster requires an internationally accepted set of meaningful indicators that are not onerous to capture. Some authors have reported on real-life measures of performance in disasters, describing the requirements25 and/or the response.13,14,26-29 The information provided by clinicians involved in the 2016 Nice disaster allowed the development of scenarios to test other hospitals’ systems, including our own.13 Taking the time to document and share learnings from real incidents is invaluable and should be applauded. Meanwhile, in the absence of agreement on core performance measures, a list of the clinical competencies required to respond to disasters should be developed.30
Limitations
This pretest–posttest study is subject to several limitations. Threats to its internal validity include the inability to locate a validated tool for assessing knowledge, confidence and skills in simulation; although care was applied to the survey development, the reproducibility and validity of the instrument against the outcomes is uncertain. Secondly, the post-test occurred immediately after the intervention, with no further test administered at a later time; the observed increases in confidence, skills and knowledge may therefore not be retained. Participants were not randomly selected from staff rosters but were “hand-picked” by their management teams, introducing the possibility of selection bias. The staff completing these exercises were likely more experienced and more likely to be in leadership roles, and their improvements might not be transferable to more junior nursing and medical staff in general. Additionally, participation in the follow-up survey was relatively low (56%), and only those participants who stayed for the entire exercise were offered the follow-up survey. This is another potential source of bias, as these participants self-selected to remain and may have been more interested in the topic. Whilst this study showed improvements in confidence, knowledge and skills, without further research it is not possible to generalize these improvements to staff responses in actual MCIs.
This study had no control group of staff who did not participate in the exercise. A control group might have comprised clinicians who simply took the pre-test survey, spent a few hours doing their usual clinical shift, and then took the post-test survey. Future evaluations of exercises should include this design component as a minimum.
Conclusions
This study lends support to the assumption that the exercises applied have a positive effect on staff attitudes, self-awareness and knowledge, and hence may form the basis of an optimal hospital-wide response to a critical incident in practice; however, this remains to be proved. Organizational investment in simulation and thorough evaluation of disaster simulations can help identify system opportunities and improve staff knowledge of, skills for, and confidence in managing mass casualty incidents. Application of a short survey can help health organizations capture staff perceptions of simulation as well as valuable learning points that may enhance future simulated and real-life system responses.