GENESISS 2—Generating Standards for In-Situ Simulation project: a systematic mapping review

Abstract

Background

In-situ simulation is increasingly employed in healthcare settings to support learning and improve patient, staff and organisational outcomes. It can help participants to problem solve within real, dynamic and familiar clinical settings, develop effective multidisciplinary team working and facilitate the transfer of learning into practice. There is nevertheless a reported lack of a standardised and cohesive approach across healthcare organisations. The aim of this systematic mapping review was to explore and map the current evidence base for in-situ simulation interventions, identify gaps in the literature and inform future research and evaluation questions.

Methods

A systematic mapping review of the published in-situ simulation literature was conducted. Searches were conducted in MEDLINE, EMBASE, AMED, PsycINFO, CINAHL, MIDIRS and ProQuest databases to identify all relevant literature from inception to October 2020. Relevant papers were retrieved and reviewed, and extracted data were organised into broad themes.

Results

Sixty-nine papers were included in the mapping review. In-situ simulation is used 1) as an assessment tool; 2) to assess and promote system readiness and safety cultures; 3) to improve clinical skills and patient outcomes; 4) to improve non-technical skills (NTS), knowledge and confidence. Most studies included were observational and assessed individual, team or departmental performance against clinical standards. There was considerable variation in assessment methods, length of study and the frequency of interventions.

Conclusions

This mapping highlights various in-situ simulation approaches designed to address a range of objectives in healthcare settings; most studies report in-situ simulation to be feasible and beneficial in addressing various learning and improvement objectives. There is a lack of consensus for implementing and evaluating in-situ simulation and further studies are required to identify potential benefits and impacts on patient outcomes. In-situ simulation studies need to include detailed demographic and contextual data to consider transferability across care settings and teams and to assess possible confounding factors. Valid and reliable data collection tools should be developed to capture the complexity of team and individual performance in real settings. Research should focus on identifying the optimal frequency and length of in-situ simulations to improve outcomes and maximise participant experience.

Background

In-situ simulation (ISS) training enables teams to practice and be assessed in their own, familiar clinical environments [1, 2]. ISS is often focused on training for low-volume, high-impact emergencies involving multidisciplinary teams (MDTs), with the aim of reinforcing knowledge and improving the functioning of the clinical team as a whole [3,4,5]. The main reported benefit of ISS over traditional simulation approaches is that it allows participants to problem solve within their own dynamic setting, which supports the implementation of learning into practice [1, 2].

ISS has been identified as a useful mechanism to explore and learn from adverse events [6,7,8,9]. Embedding ISS activities underpinned by Human Factors principles can help to focus on the organisational, procedural and contextual influences on clinical reasoning and actions [10, 11]. ISS has also been developed to test the synergy or dissonance between micro and macro factors: task factors, organisational factors, internal environments and external environments [12]. ISS interventions have been reported as a mechanism to enhance patient flow, improve the design of clinical spaces, and identify latent safety threats (LSTs) within new clinical settings [13,14,15,16]. The ability to experiment and see what occurs through interactions, attunement and disturbances enables participants to try out different options and consider possible unintended outcomes [17].

Organisational resilience is focused on understanding how healthcare organisations can deliver standardised, replicable and predictable services while embracing inherent variations, disruptions and unexpected events [18]. During the Covid-19 pandemic, ISS proved useful in helping teams prepare in a rapidly emerging situation. ISS interventions included testing and implementing the use of personal protective equipment (PPE), infection control guidelines and supporting the operational readiness of intensive care units and operating rooms [19,20,21,22,23]. ISS interventions are employed to improve the acquisition of non-technical skills (NTS), task management, situation awareness, problem-solving, decision-making and teamwork while testing and probing real-world organisational systems [1, 18, 24,25,26,27].

ISS offers a feasible and acceptable approach through which individual and team competency can be assessed via simulated scenarios in controlled and standardised clinical settings [28]. Griswold et al. [29] identify that summative assessment using ISS is suited to clinical procedures with clear chains of action and well-defined processes and standards. Clinical competency measurement and assessment tools are less well-defined for ISS, and assessment is further complicated when individual performance needs to be isolated from the wider team. Concepts such as ‘effective communication’ are subject to interpretation, and clinical outcomes may be attributed to concepts such as teamwork and coordination in addition to individual clinical skills and knowledge [30].

Although ISS has been identified as a promising approach in healthcare settings, ISS terms and concepts require standardisation, and integrated models of learning are required to provide a more comprehensive and cohesive strategic approach [1, 31, 32]. The overall aim of the Generating Standards for In-Situ Simulation project phase 2 (GENESISS-2) was to develop evidence-based standards for healthcare professionals, educators and managers interested in developing and implementing ISS interventions in clinical practice. The project was commissioned by Health Education England working across the Midlands and East. A conceptual model of ISS was developed in phase one [33], which proposed four main ISS functions (Fig. 1). The aim of this systematic mapping review was to explore and map the current evidence base for ISS approaches, identify gaps in the literature and inform future research questions.

Fig. 1 Conceptual Model of In-Situ Simulation in Healthcare

Methods

We chose to conduct a systematic mapping review to capture the wide evidence base on the main uses of ISS in healthcare. Mapping reviews are specifically designed to describe the extent of research in a field, spanning broad topic areas and research objectives, and to identify evidence gaps to be addressed by future research [34]. The report follows the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement guidelines [35]. The review protocol was registered on the PROSPERO database (CRD42019128071). Recommendations for systematic mapping reviews [36,37,38] guided the conduct of the review.

Search

The search strategy was developed for the MEDLINE, EMBASE, AMED, PsycINFO, CINAHL, MIDIRS and ProQuest databases; the literature search was completed in March 2019 and updated in October 2020. A summary of the search terms is included in Table 1, and supplementary file 1 provides details of the full MEDLINE search strategy.

Table 1 Search terms

Papers were included in the review if they met the following criteria: (i) published in English; (ii) based in an Organisation for Economic Co-operation and Development (OECD) member country (to enable greater comparability between health systems and socio-economic contexts); (iii) reporting quantitative primary research, including randomised controlled trials, quasi-experimental studies, cohort studies, economic evaluations and observational quantitative studies; (iv) included healthcare practitioners as participants (individuals and teams); (v) reported simulation training or interventions conducted in any patient care setting; and (vi) reported quantitative measures of safety, governance, quality improvement, technical and non-technical skills performance, and educational or clinical outcomes. Exclusion criteria were (i) papers reporting simulation activities conducted in educational institutions and centres, simulation laboratories, training suites or non-patient areas, and (ii) qualitative studies, secondary data analyses and literature reviews. The timeframe for inclusion was from inception to October 2020.

Papers retrieved from the literature databases were imported into an EndNote library, and duplicate records were identified. Two researchers (KE, JW) independently screened the titles and abstracts against the review inclusion and exclusion criteria. Full text papers of the remaining citations were then retrieved and independently assessed by two researchers (first stage: KE, JW; updated search: KE, AC). A third researcher (BB) moderated any discrepancies until the final selection of papers was agreed.

Quality assessment

The quality of the studies included in the review was evaluated using a range of established critical appraisal tools selected according to study design: the Quality Assessment Tool for Before-After (Pre-Post) Studies with No Control Group [39]; the Cochrane Risk of Bias tool for randomised controlled trials [40]; the Joanna Briggs Institute (JBI) Checklist for Quasi-Experimental Studies [41]; and the CASP tool for cohort studies [42]. Two researchers independently assessed study quality (first stage: KE, JW; updated search: KE, AC) and banded studies as low, medium or high quality; there was consensus between the two researchers. Although no studies were excluded on the basis of quality, the quality assessment was used to identify the strengths and limitations of the review [43]. JBI levels of evidence [44] for the included studies were also reported.

Data extraction

Data extraction forms were designed and piloted before data extraction began; extraction was completed by two independent researchers. Data extraction tables consisting of numerical and textual data presented the study characteristics, results and quality assessments.

Data analysis and synthesis

Synthesis of the extracted data was conducted in a descriptive and tabular way [45]. Categories were developed through an iterative process, focusing on the main aims or purposes of ISS interventions and illustrating the range of methods, intervention components, duration, populations, outcome measures and gaps in the research within and between each category. A description of the quantitative data is presented in tables to enhance explanation, understanding and coherence of the findings [37].

Results

The search identified 6,105 potentially eligible papers. After duplicates were removed (n = 1,493), 4,612 papers were screened on the basis of title and abstract. Potentially eligible papers (n = 258) were retrieved for full text assessment by two independent reviewers (KE, JW), and any disagreements were resolved by discussion with a third reviewer (BB) until agreement was reached. Agreement between the two reviewers was very good (kappa = 0.9, p < 0.001). Papers were excluded (n = 189) because they a) did not include relevant outcome measures, b) did not report ISS activities or interventions, or c) were not conducted in OECD countries. The literature search and inclusion process are detailed in the PRISMA flow diagram [46] (Fig. 2). Sixty-nine papers met the inclusion criteria and were included in the mapping review.
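For readers who wish to check the screening arithmetic or the agreement statistic, the short sketch below reproduces both. It is illustrative only: the reviewer decision vectors are hypothetical rather than the review's actual screening data, and the example assumes scikit-learn is available.

```python
# Illustrative sketch only: hypothetical data, not the review's screening records.
from sklearn.metrics import cohen_kappa_score

# PRISMA flow arithmetic reported above
identified = 6105
duplicates = 1493
screened = identified - duplicates            # 4612 titles/abstracts screened
full_text = 258
excluded_at_full_text = 189
included = full_text - excluded_at_full_text  # 69 papers included
print(screened, included)

# Cohen's kappa for two reviewers' full-text decisions (1 = include, 0 = exclude).
# The vectors below are made up; a kappa near 0.9 indicates very good
# agreement beyond what would be expected by chance.
reviewer_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
reviewer_2 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(round(cohen_kappa_score(reviewer_1, reviewer_2), 2))
```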

Fig. 2 PRISMA diagram: In-situ simulation to improve safety, effectiveness and quality of care

Findings were organised into categories reflecting the aims and objectives of the included studies, which used ISS: 1) as an assessment tool; 2) to assess and promote system readiness and safety cultures; 3) to improve clinical skills and patient outcomes; and 4) to improve NTS, knowledge, comfort and confidence. The themes are presented below.

ISS to assess performance and identify risks

Eighteen studies conducted ISS as a method of assessment (Table 2). Studies were conducted in the US, Canada, Denmark, Sweden, the UK, Germany and Switzerland. Most studies were observational (n = 17), with one study reporting a quasi-experimental design to compare outcomes using different resuscitation equipment [47]. Sample sizes (where reported) ranged from 12 to 277 participants. Five studies reported ISS interventions to assess performance and identify risks: medication errors in emergency departments [48], LSTs in a children's medical centre [49] and in paediatric and neonatology departments [50], paediatric tracheostomy care management in Emergency Departments (EDs) and Intensive Care Units (ICUs) [51], and blood transfusion policies in the operating room [52]. Four studies reported ISS interventions to assess compliance against clinical guidelines and standards: cardiac arrest guidelines [53], sepsis guidelines [54], blood transfusion policy and identification [52] and cardiopulmonary resuscitation (CPR) performance [55]. Four studies reported ISS interventions to assess clinical response and task completion time [56,57,58,59], with three studies employing a pre/post ISS evaluation to evaluate the effectiveness of training programmes [60,61,62]. ISS was used to test and assess the safety of new equipment and procedures in two studies: the use of electronic health records in the ICU [63] and a comparison of traditional and automated external defibrillator supplemented responder models [47]. One study [64] conducted ISS to assess performance-relevant effects of task distribution and communication amongst emergency teams.

Table 2 ISS to assess performance and identify risks

Auerbach et al. [45] and Kessler et al. [50] employed voluntary participation for ISS assessments, although the authors discussed that selection bias may be introduced because individuals agreeing to participate may be more or less skilled than other staff [53]. In addition, the scheduling of ISS may have resulted in providers and departments preparing for the day (a training effect). Lipman et al. [53] reported that clinical timings may have been underestimated due to the participation of highly skilled teams, the close proximity of clinical departments and participants to the drill area, the absence of patient family members, participant knowledge of the imminent ISS activity and training conducted during daytime hours [55, 58]. Involvement of participants without other clinical duties at a scheduled, announced time may limit the generalisability of the findings [53].

ISS performance was assessed by direct observation and by feedback from participants. Two studies used evidence-based clinical standards to assess performance, quality and safety metrics [53, 54]. Outcome measures based on established standards were reported to be easily measurable, reproducible and reflective of clinical metrics and benchmarks. However, ISS assessment can be limited by the inability to reliably assess the impact on clinical outcomes due to the low occurrence of critical events [61], and by the poor sensitivity of outcome measures for assessing communication skills in functional teams [57]. Most of the included studies used locally developed checklists, developed through previous pilot testing or amended from checklists developed for other clinical settings. Studies which reported team and system level assessments used established outcome measures, including the Simulation Team Assessment Tool [53, 65], the Anaesthetists' Non-Technical Skills (ANTS) taxonomy and behaviour rating tool [66, 67] and the TeamSTEPPS Team Performance Observation Tool [60, 68].

Authors reported positive benefits of conducting ISS to identify risks and hazards in clinical environments and to improve the ability to detect errors. ISS was reported to help identify system susceptibilities, evaluate the effectiveness of training programmes and highlight variability in performance across different departments and systems. Overall, authors reported ISS to be a useful method of assessment, providing information to inform future improvement initiatives.

ISS to assess and promote system readiness and safety cultures

Nine studies conducted ISS interventions with the aim of improving system or departmental performance outcomes (Table 3). Studies were conducted in Denmark, the UK and the US. All studies were observational, and data were collected via participant questionnaires and/or direct observation (or a review of audio-visual recordings) by trained assessors or experienced clinicians. Five studies were conducted in EDs [69,70,71,72,73], two in operating theatres [74, 75], one in a neonatal ICU [13] and one in an obstetric unit [76]. Sample sizes (where reported) ranged from 14 to 289 participants. ISS interventions varied from single training sessions to regular training sessions over a period of months. All studies included participants from multi-professional healthcare teams. Studies used ISS to assess, prepare and orient staff to new facilities [70,71,72, 76, 77] and to promote safety cultures across departments or systems [69, 73,74,75]. All of the studies reported improvements in readiness scores and safety attitude outcomes.

Table 3 ISS to assess and promote system readiness and safety cultures

Data were mainly collected via pre and post participant self-assessment questionnaires. Outcomes included identification of LSTs, departmental readiness scores, safety cultures and attitudes, orientation, and team and departmental performance. LSTs were identified through observation and through participant feedback during ISS debriefing.

Ventre et al. [76] identified that although clinicians participated in a basic orientation to the new space, ISS provided additional opportunity to evaluate whether the electronic and information systems, equipment and devices performed adequately before opening. Kobayashi et al. [72] conducted ISS when a new ED was almost ready to open, yet with enough time remaining for adjustments and corrective actions on identified issues. However, ISS may assist not only in testing the new facility but also in designing the environments [78].

Three studies conducted ISS to improve safety compliance, cultures and attitudes [73,74,75]. Although safety and teamwork climates were reported as readily measured and amenable to improvement through ISS, it was difficult to demonstrate an association between team and safety training and patient outcomes because improved clinical outcomes are multifactorial [74], and evaluating the role of team versus organisational processes can be challenging [73]. Paltved et al. [73] discussed how prolonged engagement with ISS interventions and longer follow-up periods may be required, as safety attitudes do not suddenly appear but emerge over time. Jaffry et al. [75] reported that ISS emphasises the importance of safety measures and empowers participants to make changes and implement them effectively. ISS provides both a learning and a working environment which incorporates the complexity and resources found in the clinical environment and supports knowledge transfer to actual practice [73].

ISS to improve clinical skills, performance and clinical management

Seventeen studies conducted ISS interventions with the aim of improving clinical skills, performance and clinical management (Table 4). Studies were conducted in Australia, Israel, Italy, the UK and the US. Ten were pre/post observational studies which included ISS interventions, two were prospective cohort studies, two were RCTs, one was an observational study with a control group and one was a multicomponent quality improvement project. Studies were conducted in emergency and resuscitation teams and departments [79,80,81,82,83,84,85,86], paediatric and neonatal care settings [87,88,89], in-patient ward settings [90,91,92], coronary care [93], an obstetric unit [94] and a mental healthcare setting [2]. Where reported, the frequency of ISS interventions varied from single training sessions delivered over one day to repeated ISS training lasting 18 months, with sessions reported to last from 30 min to 3 h. Most studies included multi-professional healthcare teams as participants; two studies included doctors and one included only nurses. Sample sizes ranged from 22 to 303 participants. ISS frequency, outcomes and authors' conclusions are presented in Table 5.

Table 4 ISS to improve clinical skills and outcomes
Table 5 ISS to improve clinical skills and patient outcomes: ISS Frequency and authors conclusions

Some studies which involved more complex practices and clinical outcomes implemented regular ISS interventions over longer time periods. Andreatta et al. [87] conducted paediatric mock codes (resuscitation scenarios) on a monthly basis for 48 months and reported that hospital survival rates improved significantly over the study period. Knight et al. [84] conducted 16 paediatric ISS sessions over 18 months and reported that survival rates improved when compared to historical controls. Other studies reporting favourable outcomes for regular ISS training included anaphylaxis management [79], sepsis management [90], response times to hospital emergencies [91], detection of arrhythmias [81], management of medical deterioration [2, 89] and CPR performance [83, 86].

Studies which included more easily defined or isolated tasks reported one to three ISS sessions as effective in improving: infection control practices [26]; thoracotomy procedures [93]; response times and management of postpartum haemorrhage (PPH) [94]; sedation practices [80]; and resuscitation response times [82].

ISS to improve non-technical skills, knowledge and comfort and confidence

Non-technical skills (NTS) are individual and team social and cognitive skills that support technical skills when performing complex tasks. NTS can include planning and preparation for complex tasks, situation awareness, perception of risk, decision-making, communication, teamwork and leadership [95]. Twenty-seven studies reported ISS interventions to improve NTS, participant comfort and confidence (Table 6). Studies were conducted in Australia, Canada, Denmark, France, the UK and the US. Sixteen studies were observational; one was a prospective cohort study, five were RCTs and five were quasi-experimental studies. Studies were conducted in adult and paediatric emergency and resuscitation teams and departments [69, 71, 82, 96,97,98,99,100,101,102,103,104,105], paediatric and neonatal care [106,107,108,109,110,111,112], obstetric care [24, 113,114,115], ICUs [116, 117], a post-anaesthesia care unit [118] and a mental healthcare setting [2]. Where reported, ISS interventions were delivered over periods of one day to 18 months, with training lasting from 30 min to 3 h. Reported sample sizes ranged from 20 to 750 participants.

Table 6 ISS to improve non-technical skills, knowledge and comfort and confidence

Outcome measures included self-reported confidence scores, performance scores, management and leadership scores, communication, and self-reported anxiety and knowledge. Outcome measures, ISS frequency and outcomes scores are presented in Table 7.

  • Significant improvements in confidence scores were reported for single session [96, 98, 111, 114], three session [112, 117] or regular departmental training [2].

  • Improvements in participants’ performance scores were reported in six studies [24, 71, 96, 104, 108, 113], with most studies conducting a single ISS intervention.

  • Two studies reported significant improvements in participants' management and leadership scores following a single-session [111] and a three-session ISS intervention [112].

  • Two studies [71, 118] reported an improvement in communication scores following 1–3 ISS interventions.

  • Two studies reported significant improvement in anxiety scores following a single ISS intervention [104, 111].

  • Four studies reported a significant improvement in participants' knowledge scores following a brief ISS intervention [2, 101, 113, 115].

Table 7 Confidence, performance, management, communication, anxiety and knowledge scores reported in the included studies

Rubio-Gurung et al. [24] compared a four-hour ISS intervention to improve neonatal resuscitation across maternity units with control groups (n = 12; six units in each group). The median technical score was significantly higher for the ISS groups than for the control groups. In the ISS groups, the frequency of achieving a heart rate of 90 per minute at 3 min improved significantly and the number of hazardous events decreased significantly. Four studies which compared ISS groups with control or comparison groups reported no statistically significant differences in outcomes: Gundrosen et al. [28] compared nurses' one-hour lecture-based training with ISS training for situational awareness and team working (ANTS taxonomy); Crofts et al. [115] compared an ISS intervention for obstetric emergency management with training conducted in a simulation centre; Villemure et al. [118] compared ISS in post-anaesthesia care units with a control group (no particular interprofessional education); and Dowson et al. [112] compared regular ISS training to improve nurses' clinical confidence in the management of paediatric emergencies with a control group (mandatory resuscitation training).

ISS settings and methods

Studies conducted ISS interventions in in-patient care settings, predominantly adult and paediatric EDs, obstetric/maternity units, cardiac response teams, adult and paediatric ICUs, and operating rooms. Data collection methods included direct observation, video review and data collected from simulation or clinical equipment. Participants' knowledge, anxiety, comfort and safety attitudes were measured exclusively by self-reported questionnaires. A range of methods was used between and within studies to measure task performance, clinical management, teamwork and communication (including assessment from direct or video observation), alongside participants' self-reported outcomes and/or clinical outcomes data.

Studies used various tools to assess performance during ISS interventions including:

  • Teamwork and non-technical skills: Simulation Team Assessment Tool (STAT) [65], NOTECHS [119], Anaesthetists' Non-Technical Skills (ANTS) taxonomy and behaviour rating tool [67], TeamSTEPPS [68], TeamMonitor [120], Clinical Teamwork Scale [121], Team Emergency Assessment Measure (TEAM) [122]

  • Readiness scores: TESTPILOT [78], Emergency Medical Services for Children Readiness Survey [123]

  • Clinical performance: Clinical performance during Paediatric Advanced Life Support simulation scenarios [124], Self-Efficacy in Clinical Performance scale [125]

  • Confidence scale [126]

  • Communication and collaboration [127]

The benefits and limitations of conducting ISS reported across all included studies are summarised in Table 8.

Table 8 Benefits and limitations of ISS reported in the included studies

Discussion

This systematic mapping review found that ISS is reported to be feasible and beneficial in a variety of inpatient clinical settings. It is used to assess a number of different domains of practice including adherence to clinical guidelines and standards, task completion times, team performance, non-technical skills, detection of errors and latent safety threats.

Lamé and Dixon-Woods [128] make an important distinction between research conducted about simulation and research conducted through simulation. The findings from this review include both of these approaches, which at times overlap, studied through various experimental designs. Research conducted about ISS (where ISS was an active intervention) included studies exploring the acceptability and usefulness of ISS to clinicians and educators and evaluating the ability of ISS to identify LSTs and improve individual, team and system-level outcomes. Research conducted through ISS often included ISS as part of a multicomponent approach to improve clinical skills, performance and outcomes.

ISS outcomes were used to highlight where additional or new methods of training might be required to improve the quality of care, to identify LSTs and explore the accuracy and efficiency of task completion over the period of a working shift. Exploring the factors that can affect variations in adherence to clinical procedures, outcomes and performance may help to uncover where and why errors occur. ISS has the potential to reveal the constraining and facilitating mechanisms which impact performance and to identify modifiable factors at the individual, departmental, institutional level or system level [52,53,54].

Some multicentre studies conducted to assess clinical performance used validated tools to assess adherence to guidelines and departmental readiness scores. The ability to standardise simulation across participating sites can help to isolate independent variables and reduce the risk of bias introduced by variations in local contexts [129]. Differences in performance can be explored between sites and used to generate theory about why differences may occur. For example, Auerbach et al. [53] used ISS to explore the relationship between hospital characteristics and adherence to paediatric cardiac arrest guidelines across four paediatric EDs. ISS outcomes based on clinical standards can serve as a proxy for real performance, enhancing the external validity of the study findings [54].

There were considerable variations in the frequency and length of ISS sessions and in the use of announced and unannounced ISS; however, the length and frequency of ISS were not always reported. Studies focused on relatively straightforward, easily defined or isolated tasks reported improved outcomes after one to three ISS sessions [80, 82, 88, 93, 94]. Studies involving more complex practices or outcomes appeared to require interventions over longer time periods [2, 79, 84, 87]. This may indicate a potential benefit of ISS in supporting complex skills acquisition through behavioural learning strategies, where skills are developed through repetition and behaviour change occurs through feedback from the simulation activity and interaction between the task, the environment and the team.

Most of the studies included in the review used locally developed checklists, developed through previous pilot testing or amended from checklists developed for other clinical settings. In general, there was a paucity of reporting of the validity and reliability of assessment measures and tools. Studies which reported team and system level assessments adopted more established outcome measures [65, 67, 68, 120, 121, 123]. Measurement methods for assessing individual competencies involved in complex care processes are less well-defined, and further complicated when individual performance needs to be isolated from the wider team. Concepts such as ‘effective communication’ are subject to interpretation and clinical outcomes may be attributed to concepts such as teamwork, communication and leadership in addition to clinical skills and knowledge [30]. Griswold et al. [29] identify that for clinical procedures with clear chains of action and well-defined processes and standards, summative ISS assessment is much simpler than in more “dynamic, multifactorial practices in which cognitive, procedural, and communication skills are simultaneously applied in a team environment” (Griswold et al. 2017, page 170). Criterion standards and benchmarks of quality performance need to be further developed to reliably and accurately capture the individual performance which is linked to relevant clinical competencies.

Goldshtein et al. [130] stated that literature reporting the effect of ISS interventions on patient outcomes is scarce. Surrogate endpoints, such as response times, are frequently adopted, but these do not truly represent the complex factors that lead to improved patient outcomes [130]. In this review, ISS was often incorporated within larger, multi-component educational improvement projects. Most studies were observational, with only thirteen adopting experimental designs. Small observational studies are often limited by the potential for selection bias, observer bias and confounding. Lamé and Dixon-Woods [128] state that ISS which can reproduce situations identically before and after an intervention increases confidence that the intervention explains the variation in outcomes. Time-series designs which collect data at multiple points before and after the intervention, or controlled studies, are required to provide greater confidence in the findings of ISS interventions [128].

Unannounced ISS (or mock drills) was mainly conducted where studies sought to carry out a system audit or to assess clinical performance against a benchmark, whereas announced ISS, which gave participants varying levels of notice and access to supportive resources, was mainly conducted as part of improvement projects or clinical training. Posner et al. [32] highlight that both announced and unannounced ISS can be conducted to detect LSTs, although assessment of factors such as response times and leadership assignment is more suited to unannounced ISS [55, 58]. Freund et al. [105] compared unannounced with announced (one hour prior to ISS) team training and reported no significant differences in self-perceived learning or self-reported stress outcomes. It is reported that ISS can pose numerous threats to an individual's psychological safety, which can have a negative effect on learning. Participants may feel under increased scrutiny from colleagues or burdened by their other clinical work. Psychological safety can be supported by including a pre-simulation brief to discuss training objectives and expectations and to develop trust between educators and learners [32, 131, 132].

Cheng et al. [129] recommend an extension to the CONSORT guidelines for reporting simulation-based research to include the demographics and clinical characteristics of participants and the setting. This should include participants' previous experience with simulation, skill mix, staffing, capacity pressures and other relevant features to facilitate an assessment of the external validity of the findings [53]. A review by Goldshtein et al. [130] reported that it was difficult to assess who was participating in ISS and their prior experience of ISS participation. Lipman et al. [53] reported that the clinical timings evaluated in their study may have been underestimated due to the participation of highly skilled teams, the close proximity of clinical departments and participants to the drill area, the absence of patient family members, participant knowledge of the imminent ISS activity and training conducted during daytime hours [55]. In future studies, detailed information on other potential sources of bias and other confounding, contextual and system-level factors should be presented to assist researchers, educators and clinicians in assessing the relevance of the findings to other settings and participant groups [129].

ISS to help teams train, rehearse and practice for low-frequency, high-impact events was a frequently reported simulation activity in the review. The theoretical basis for ISS as a training intervention was not reported in many studies; however, ISS as a training intervention maps to concepts within cognitive learning approaches, where participants' preconceptions are explored and new or unexpected events are presented via the simulation activity to challenge those preconceptions [133]. ISS is also underpinned by situativity theory, in which knowledge transfer is considered optimal when the learning environment matches the environment in which the learning will be applied [28, 131, 134]. During the Covid-19 pandemic, ISS has been used to help staff prepare for emerging challenges. ISS interventions have helped to identify LSTs, highlight inadequacies in guidelines, protocols and policies, improve the correct use of PPE, and orientate staff to newly established Covid-19 intensive care units and wards [135, 136].

Study strengths and limitations

This review should be viewed in light of several limitations. It did not include grey literature, conference abstracts or academic theses. Grey literature is likely to include ISS practice-based improvement and educational projects which further illustrate the current uses of ISS in healthcare settings. However, this review highlights the lack of rigorous ISS intervention research and the urgent need to increase research output and methodological quality. The mapping review aimed to provide an overview of the broad published ISS literature and did not conduct an in-depth analysis of study outcomes to enable meaningful comparisons. The review has highlighted different categories of and approaches to ISS, identifying common outcome measures and measurement tools. Mapping reviews are distinguished by the presentation of data in a digestible format and an assessment of whether the total population of studies is similar enough to undertake a coherent synthesis of the current data [36]. Therefore, this review may provide a useful starting point for other researchers seeking to develop and define parameters for future ISS systematic reviews.

Conclusion

This review presents an overview of the literature on ISS interventions by mapping study objectives, methods, outcomes, barriers and facilitators across different settings. The mapping review provides a useful summary for healthcare educators and researchers seeking to develop ISS strategies in healthcare settings. Additionally, it highlights important evidence gaps, including the need to (1) identify appropriate tasks capable of standardisation and reproducibility in ISS assessment scenarios; (2) capture adequate demographic data from participants to assess the impact on outcomes (e.g. work patterns, skill mix, experience, ISS experience and exposure, willingness to participate); (3) explore different methodologies in an attempt to reduce bias and confounding factors; (4) develop and validate sensitive data collection methods and tools to capture the complexity of team and individual performance in real settings; and (5) identify the optimal frequency and length of ISS, considering feasibility and acceptability in the clinical setting. This systematic mapping review has provided a useful framework for navigating the expansive and diverse research literature on a relatively new and under-defined approach, in which ISS is used to assess individual, team and departmental performance. There is currently a lack of consensus on the rationale for conducting ISS interventions, and well-designed studies are required to identify the potential benefits of ISS and its impact on patient outcomes. Overall, studies reported ISS to be feasible and beneficial in addressing various learning and improvement objectives. The components and mechanisms employed across the included studies, which were designed to address a range of objectives, can inform the future design of ISS interventions to meet specific objectives.

Availability of data and materials

All data generated or analysed during this study are included in this published article [and its supplementary information files].

Abbreviations

ISS: In-situ simulation

MDT: Multidisciplinary team

LST: Latent safety threat

PPE: Personal protective equipment

ED: Emergency Department

ICU: Intensive Care Unit

CPR: Cardiopulmonary resuscitation

References

  1. Kelsey NC, Claus S. Embedded, in situ simulation improves ability to rescue. Clin Simul Nurs. 2016;12(11):522–7.

  2. Lavelle M, Attoe C, Tritschler C, Cross S. Managing medical emergencies in mental health settings using an interprofessional in-situ simulation training programme: a mixed methods evaluation study. Nurse Educ Today. 2017;59:103–9.

  3. Kurup V, Matei V, Ray J. Role of in-situ simulation for training in healthcare: opportunities and challenges. Curr Opin Anaesthesiol. 2017;30(6):755–60.

  4. Guise J-M, Mladenovic J. In situ simulation: identification of systems issues. Semin Perinatol. 2013;37(3):161–5.

  5. Pucher PH, Tamblyn R, Boorman D, Dixon-Woods M, Donaldson L, Draycott T, Forster A, Nadkarni V, Power C, Sevdalis N, et al. Simulation research to enhance patient safety and outcomes: recommendations of the Simnovate Patient Safety Domain Group. BMJ Simul Technol Enhanc Learn. 2017;3(Suppl 1):S3–7.

  6. Commission on Education and Training for Patient Safety: Improving Safety Through Education and Training. In.: Health Education England; 2016.

  7. Simms ER, Slakey DP, Garstka ME, Tersigni SA, Korndorffer JR. Can simulation improve the traditional method of root cause analysis: a preliminary investigation. Surgery. 2012;152(3):489–97.

  8. Slakey DP, Simms ER, Rennie KV, Garstka ME, Korndorffer JR Jr. Using simulation to improve root cause analysis of adverse surgical outcomes. Int J Qual Health Care. 2014;26(2):144–50.

  9. Yajamanyam PK, Sohi D. In situ simulation as a quality improvement initiative. Arch Dis Child Educ Pract Ed. 2015;100(3):162.

  10. Patterson MD, Blike GT, Nadkarni VM. In Situ Simulation: Challenges and Results. In: Henriksen K, Battles JB, Keyes MA, et al., editors. Advances in Patient Safety: New Directions and Alternative Approaches. Vol. 3. Rockville, US: Agency for Healthcare Research and Quality; 2008.

  11. Uttley E, Suggitt D, Baxter D, Jafar W. Multiprofessional in situ simulation is an effective method of identifying latent patient safety threats on the gastroenterology ward. Frontline Gastroenterol. 2020;11(5):351.

  12. Holden RJ, Carayon P, Gurses AP, Hoonakker P, Hundt AS, Ozok AA, Rivera-Rodriguez AJ. SEIPS 2.0: a human factors framework for studying and improving the work of healthcare professionals and patients. Ergonomics. 2013;56(11):1669–86.

  13. Bender GJ. In situ simulation for systems testing in newly constructed perinatal facilities. Semin Perinatol. 2011;35(2):80–3.

  14. Medwid K, Smith S, Gang M. Use of in-situ simulation to investigate latent safety threats prior to opening a new emergency department. Safety Sci. 2015;77:19–24.

  15. Chen PP, Tsui NT, Fung AS, Chiu AH, Wong WC, Leong HT, Lee PS, Lau JY. In-situ medical simulation for pre-implementation testing of clinical service in a regional hospital in Hong Kong. Hong Kong Med J. 2017;23(4):404–10.

  16. Combes J. 0121 Sequence simulation ‘the hyper acute stroke thrombolysis pathway’. BMJ Simul Technol Enhanc Learn. 2015;1.

  17. Lefroy J, Yardley S. Embracing complexity theory can clarify best practice frameworks for simulation education. Med Educ. 2015;49(4):344–6.

  18. Macrae C, Draycott T. Delivering high reliability in maternity care: in situ simulation as a source of organisational resilience. Safety Sci. 2016;117:490–500.

  19. Choi GYS, Wan WTP, Chan AKM, Tong SK, Poon ST, Joynt GM. Preparedness for COVID-19: in situ simulation to enhance infection control systems in the intensive care unit. Br J Anaesth. 2020;125(2):e236–9.

  20. Fregene TE, Nadarajah P, Buckley JF, Bigham S, Nangalia V. Use of in situ simulation to evaluate the operational readiness of a high-consequence infectious disease intensive care unit. Anaesthesia. 2020;75(6):733–8.

  21. Dharamsi A, Hayman K, Yi S, Chow R, Yee C, Gaylord E, Tawadrous D, Chartier LB, Landes M. Enhancing departmental preparedness for COVID-19 using rapid-cycle in-situ simulation. J Hosp Infect. 2020;105(4):604–7.

  22. Lie SA, Wong LT, Chee M, Chong SY. Process-oriented In Situ simulation is a valuable tool to rapidly ensure operating room preparedness for COVID-19 outbreak. Simul Healthc. 2020;15(4):225.

  23. Muret-Wagstaff SL, Collins JS, Mashman DL, Patel SG, Pettorini K, Rosen SA, Shaffer VO, Sumler ML, Sweeney JF, Sharma J. In Situ simulation enables operating room agility in the COVID-19 pandemic. Ann Surg. 2020;272(2):e148–50.

  24. Rubio-Gurung S, Putet G, Touzet S, Gauthier-Moulinier H, Jordan I, Beissel A, Labaune JM, Blanc S, Amamra N, Balandras C, et al. In situ simulation training for neonatal resuscitation: an RCT. Pediatrics. 2014;134(3):e790-797.

  25. Boet S, Bould MD, Fung L, Qosa H, Perrier L, Tavares W, Reeves S, Tricco AC. Transfer of learning and patient outcome in simulated crisis resource management: a systematic review. Can J Anaesth. 2014;61(6):571–82.

  26. Gibbs K. A Novel In Situ Simulation intervention used to mitigate an outbreak of Methicillin-Resistant Staphylococcus aureus in a Neonatal Intensive Care Unit. 2018.

  27. Murphy M, Curtis K, McCloughen A. What is the impact of multidisciplinary team simulation training on team performance and efficiency of patient care? An integrative review. Australas Emerg Nurs J. 2016;19(1):44–53.

  28. Gundrosen S, Solligard E, Aadahl P. Team competence among nurses in an intensive care unit: the feasibility of in situ simulation and assessing non-technical skills. Intensive Crit Care Nurs. 2014;30(6):312–7.

  29. Griswold S, Fralliccardi A, Boulet J, Moadel T, Franzen D, Auerbach M, Hart D, Goswami V, Hui J, Gordon JA. Simulation-based education to ensure provider competency within the health care system. Acad Emerg Med. 2018;25(2):168–76.

  30. Brunette V, Thibodeau-Jarry N. Simulation as a tool to ensure competency and quality of care in the cardiac critical care unit. Can J Cardiol. 2017;33(1):119–27.

  31. Pucher P, Tamblyn R, Boorman D, Dixon-Woods M, Donaldson L, Draycott T, Forster A, Nadkarni V, Power C, Sevdalis N, et al. Simulation research to enhance patient safety and outcomes: recommendations of the Simnovate Patient Safety Domain Group. BMJ Simul Technol Enhanc Learn. 2017;3:S3–7.

  32. Posner GD, Clark ML, Grant VJ. Simulation in the clinical setting: towards a standard lexicon. Adv Simul. 2017;2(1):15.

  33. Baxendale B, Evans K, Cowley A, Bramley L, Miles G, Ross A, Dring E, Cooper J. GENESISS 1 - Generating Standards for In-Situ Simulation project: a scoping review and conceptual model. in submission 2021.

  34. Sutton A, Clowes M, Preston L, Booth A. Meeting the review family: exploring review types and associated information retrieval requirements. Health Info Libr J. 2019;36(3):202–22.

  35. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

  36. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009;26(2):91–108.

  37. Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5(1):28.

  38. Wolffe TAM, Whaley P, Halsall C, Rooney AA, Walker VR. Systematic evidence maps as a novel tool to support evidence-based decision-making in chemicals policy and risk management. Environ Int. 2019;130: 104871.

  39. NIH. Study Quality Assessment Tools. In: National Heart, Lung and Blood Institute. 2014.

  40. Cochrane Handbook for Systematic Reviews of Interventions version 6.1 http://www.training.cochrane.org/handbook.

  41. Critical Appraisal Tools https://joannabriggs.org/critical-appraisal-tools

  42. CASP Checklists https://casp-uk.net/casp-tools-checklists/

  43. Joanna Briggs Institute Reviewer's Manual https://reviewersmanual.joannabriggs.org/

  44. Joanna Briggs Institute Levels of Evidence and Grades of Recommendation Working Party. JBI Levels of Evidence. University of Adelaide, Australia: Joanna Briggs Institute; 2013.

  45. Auerbach M, Brown L, Whitfill T, Baird J, Abulebda K, Bhatnagar A, Lutfi R, Gawel M, Walsh B, Tay K-Y, et al. Adherence to pediatric cardiac arrest guidelines across a spectrum of fifty emergency departments: a prospective, In Situ Simulation-based Study. Acad Emerg Med. 2018;25(12):1396–408.

  46. Calhoun AW, Boone MC, Dauer AK, Campbell DR, Montgomery VL. Using simulation to investigate the impact of hours worked on task performance in an intensive care unit. Am J Crit Care. 2014;23:387–95.

  47. Campbell DM, Poost-Foroosh L, Pavenski K, Contreras M, Alam F, Lee J, Houston P. Simulation as a toolkit-understanding the perils of blood transfusion in a complex health care environment. Adv Simul (Lond). 2016;1:32.

  48. Clapper TC, Ching K, Mauer E, Gerber LM, Lee JG, Sobin B, Ciraolo K, Osorio SN, DiPace JI. A Saturated Approach to the Four-Phase, Brain-Based Simulation Framework for TeamSTEPPS® in a Pediatric Medicine Unit. Pediatr Qual Saf. 2018;3(4):e086.

  49. Härgestam M. Trauma teams and time to early management during in situ trauma team training. 2016.

  50. Kessler DO, Walsh B, Whitfill T, Dudas RA, Gangadharan S, Gawel M, Brown L, Auerbach M. Disparities in Adherence to pediatric sepsis guidelines across a Spectrum of Emergency Departments: a multicenter, cross-sectional observational In Situ Simulation Study. J Emerg Med. 2016;50(3):403–415.e401–403.

  51. Kobayashi L, Dunbar-Viveiros JA, Sheahan BA, Rezendes MH, Devine J, Cooper MR, Martin PB, Jay GD. In situ simulation comparing in-hospital first responder sudden cardiac arrest resuscitation using semiautomated defibrillators and automated external defibrillators. Simul Healthc. 2010;5(2):82–90.

  52. Kozer E, Seto W, Verjee Z, Parshuram C, Khattak S, Koren G, Jarvis DA. Prospective observational study on the incidence of medication errors during simulated resuscitation in a paediatric emergency department. BMJ. 2004;329(7478):1321.

  53. Lipman SS, Carvalho B, Cohen SE, Druzin ML, Daniels K. Response times for emergency cesarean delivery: use of simulation drills to assess and improve obstetric team performance. J Perinatol. 2013;33(4):259–63.

  54. Lipman SS, Wong JY, Arafeh J, Cohen SE, Carvalho B. Transport decreases the quality of cardiopulmonary resuscitation during simulated maternal cardiac arrest. Anesth Analg. 2013;116(1):162–7.

  55. Lok A, Peirce E, Shore H. Identifying latent risks through in situ simulation training to improve patient safety. Arch Dis Child. 2014;99.

  56. March CA, Steiger D, Scholl G, Mohan V, Hersh WR, Gold JA. Use of simulation to assess electronic health record safety in the intensive care unit: a pilot study. BMJ Open. 2013;3(4):e002549.

  57. Mondrup F. In-hospital resuscitation evaluated by in situ simulation: a prospective simulation study. 2011.

  58. Sarfati L, Ranchon F, Vantard N, Schwiertz V, Gauthier N, He S, Kiouris E, Gourc-Berthod C, Guédat MG, Alloux C, et al. SIMMEON-Prep study: SIMulation of Medication Errors in ONcology: prevention of antineoplastic preparation errors. J Clin Pharm Ther. 2015;40(1):55–62.

  59. Shah SJ, Cusumano C, Ahmed S, Ma A, Jafri FN, Yang CJ. In Situ Simulation to assess pediatric tracheostomy care safety: a novel multicenter quality improvement program. Otolaryngol Head Neck Surg. 2020;163(2):250–8.

  60. Schmutz J, Hoffmann F, Heimberg E, Manser T. Effective coordination in medical emergency teams: the moderating role of task type. Eur J Work Organ Psy. 2015;24(5):761–76.

  61. Wheeler DS, Geis G, Mack EH, LeMaster T, Patterson MD. High-reliability emergency response teams in the hospital: improving quality and safety using in situ simulation training. BMJ Qual Saf. 2013;22(6):507–14.

  62. Zimmermann K, Holzinger IB, Ganassi L, Esslinger P, Pilgrim S, Allen M, Burmester M, Stocker M. Inter-professional in-situ simulated team and resuscitation training for patient safety: description and impact of a programmatic approach. BMC Med Educ. 2015;15:189–189.

  63. Peters MDJ, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. JBI Evidence Implementation 2015;13(3).

  64. Moher D, Liberati A, Tetzlaff J, Altman D, Group TP. Preferred reporting items for systematic reviews and metaanalyses: the PRISMA Statement. PLoS Med. 2009;6(7):e1000097.

  65. Reid J, Stone K, Brown J, Caglar D, Kobayashi A, Lewis-Newby M, Partridge R, Seidel K, Quan L. The Simulation Team Assessment Tool (STAT): development, reliability and validation. Resuscitation. 2012;83(7):879–86.

  66. Campbell DM, Poost-Foroosh L, Pavenski K, Contreras M, Alam F, Lee J, Houston P. Simulation as a toolkit—understanding the perils of blood transfusion in a complex health care environment. Adv Simul. 2016;1(1):32.

  67. Flin R, Patey R, Glavin R, Maran N. Anaesthetists’ non-technical skills. Br J Anaesth. 2010;105(1):38–44.

  68. Internet Citation: Team Performance Observation Tool. https://www.ahrq.gov/teamstepps/longtermcare/sitetools/tmpot.html

  69. Patterson MD, Geis GL, Falcone RA, LeMaster T, Wears RL. In situ simulation: detection of safety threats and teamwork training in a high risk emergency department. BMJ Qual Saf. 2013;22(6):468.

  70. Abulebda K, Lutfi R, Whitfill T, Abu-Sultaneh S, Leeper KJ, Weinstein E, Auerbach MA. A collaborative In Situ Simulation-based pediatric readiness improvement program for community emergency departments. Acad Emerg Med. 2018;25(2):177–85.

  71. Gardner AK, Ahmed RA, George RL, Frey JA. In Situ Simulation to assess workplace attitudes and effectiveness in a new facility. Simul Health. 2013;8(6):351.

  72. Kobayashi L, Shapiro MJ, Sucov A, Woolard R, Boss Iii RM, Dunbar J, Sciamacco R, Karpik K, Jay G. Portable advanced medical simulation for new emergency department testing and orientation. Acad Emerg Med. 2006;13(6):691–5.

  73. Paltved C, Bjerregaard AT, Krogh K, Pedersen JJ, Musaeus P. Designing in situ simulation in the emergency department: evaluating safety attitudes amongst physicians and nurses. Adv Simul (Lond). 2017;2:4.

  74. Hinde T, Gale T, Anderson I, Roberts M, Sice P. A study to assess the influence of interprofessional point of care simulation training on safety culture in the operating theatre environment of a university teaching hospital. J Interprof Care. 2016;30(2):251–3.

  75. Jaffry Z, Jaye P, Laws-Chapman C, Zhao J, Pontin L. Safer surgery through simulation: increasing compliance with the 5 Steps to Safer Surgery through an in-situ simulation based training programme at Guy’s and St Thomas’ NHS Foundation Trust. BMJ Simul Technol Enhanc Learn. 2019;5(4):196–7.

  76. Ventre KM, Barry JS, Davis D, Baiamonte VL, Wentworth AC, Pietras M, Coughlin L, Barley G. Using in situ simulation to evaluate operational readiness of a children’s hospital-based obstetrics unit. Simul Healthc. 2014;9(2):102–11.

  77. Bender GJ, Maryman JA. Clinical macrosystem simulation translates between organizations. Simul Healthc. 2018;13(2):96–106.

  78. Bender J, Shields R, Kennally K. Transportable enhanced simulation technologies for pre-implementation limited operations testing: neonatal intensive care unit. Simul Healthc. 2011;6(4):204–12.

  79. Barni S, Mori F, Giovannini M, de Luca M, Novembre E. In situ simulation in the management of anaphylaxis in a pediatric emergency department. Intern Emerg Med. 2019;14(1):127–32.

  80. Ben-Ari M, Chayen G, Steiner IP, Schinasi DA, Feldman O, Shavit I. The effect of in situ simulation training on the performance of tasks related to patient safety during sedation. J Anesth. 2018;32(2):300–4.

  81. Kobayashi A. Use of in situ simulation and human factors engineering to assess and improve emergency department clinical systems for timely telemetry-based detection of life-threatening arrhythmias. 2012.

  82. Steinemann S, Berg B, Skinner A, DiTulio A, Anzelon K, Terada K, Oliver C, Ho HC, Speck C. In situ, multidisciplinary, simulation-based teamwork training improves early trauma care. J Surg Educ. 2011;68(6):472–7.

  83. Sullivan NJ, Duval-Arnould J, Twilley M, Smith SP, Aksamit D, Boone-Guercio P, Jeffries PR, Hunt EA. Simulation exercise to improve retention of cardiopulmonary resuscitation priorities for in-hospital cardiac arrests: a randomized controlled trial. Resuscitation. 2015;86:6–13.

  84. Knight LJ, Gabhart JM, Earnest KS, Leong KM, Anglemyer A, Franzon D. Improving code team performance and survival outcomes: implementation of pediatric resuscitation team training. Crit Care Med. 2014;42(2):243–51.

  85. Josey K, Smith ML, Kayani AS, Young G, Kasperski MD, Farrer P, Gerkin R, Theodorou A, Raschke RA. Hospitals with more-active participation in conducting standardized in-situ mock codes have improved survival after in-hospital cardiopulmonary arrest. Resuscitation. 2018;133:47–52.

  86. Coggins AR, Nottingham C, Byth K, Ho KR, Aulia FA, Murphy M, Shetty AL, Todd A, Moore N. Randomised controlled trial of simulation-based education for mechanical cardiopulmonary resuscitation training. Emerg Med J. 2019;36(5):266–72.

  87. Andreatta P, Saxton E, Thompson M, Annich G. Simulation-based mock codes significantly correlate with improved pediatric patient cardiopulmonary arrest survival rates. Pediatr Crit Care Med. 2011;12(1):33–8.

  88. Gibbs K, DeMaria S, McKinsey S, Fede A, Harrington A, Hutchison D, Torchen C, Levine A, Goldberg A. A novel In Situ simulation intervention used to mitigate an outbreak of methicillin-resistant Staphylococcus aureus in a Neonatal Intensive Care Unit. J Pediatr. 2018;194:22-27.e25.

  89. Theilen U, Leonard P, Jones P, Ardill R, Weitz J, Agrawal D, Simpson D. Regular in situ simulation training of paediatric Medical Emergency Team improves hospital response to deteriorating patients. Resuscitation. 2013;84(2):218–22.

  90. Braddock CH 3rd, Szaflarski N, Forsey L, Abel L, Hernandez-Boussard T, Morton J. The TRANSFORM Patient Safety Project: a microsystem approach to improving outcomes on inpatient units. J Gen Intern Med. 2015;30(4):425–33.

  91. Generoso JR Jr, Latoures RE, Acar Y, Miller DS, Ciano M, Sandrei R, Vieira M, Luong S, Hirsch J, Fidler RL. Simulation Training in Early Emergency Response (STEER). J Contin Educ Nurs. 2016;47(6):255–63.

  92. Sleeman K, Davis A, Veall A. Point-of-care simulation training to address serious untoward incidence of hypoglycaemia. J Diabetes Nurs. 2018;22(1):5–10.

  93. Hamilton AJ, Prescher H, Biffar DE, Poston RS. Simulation trainer for practicing emergent open thoracotomy procedures. J Surg Res. 2015;197(1):78–84.

  94. Marshall NE, Vanderhoeven J, Eden KB, Segel SY, Guise JM. Impact of simulation and team training on postpartum hemorrhage management in non-academic centers. J Matern Fetal Neonatal Med. 2015;28(5):495–9.

  95. Prineas S, Mosier K, Mirko C, Guicciardi S. Non-technical skills in healthcare. In: Donaldson L, Ricciardi W, Sheridan S, Tartaglia R, editors. Textbook of patient safety and clinical risk management. Cham: Springer International Publishing; 2021. p. 413–34.

  96. Saqe-Rockoff A, Ciardiello AV, Schubert FD. Low-fidelity, in-situ pediatric resuscitation simulation improves RN competence and self-efficacy. J Emerg Nurs. 2019;45(5):538-544.e531.

  97. Cepeda Brito JR, Hughes PG, Firestone KS, Ortiz Figueroa F, Johnson K, Ruthenburg T, McKinney R, Gothard MD, Ahmed R. Neonatal resuscitation program rolling refresher: maintaining chest compression proficiency through the use of simulation-based education. Adv Neonatal Care. 2017;17(5):354–61.

  98. Davison M, Kinnear FB, Fulbrook P. Evaluation of a multiple-encounter in situ simulation for orientation of staff to a new paediatric emergency service: a single-group pretest/post-test study. BMJ Simul Technol Enhanc Learn. 2017;3(4):149.

  99. Katznelson JH, Mills WA, Forsythe CS, Shaikh S, Tolleson-Rinehart S. Project CAPE: a high-fidelity, in situ simulation program to increase Critical Access Hospital Emergency Department provider comfort with seriously ill pediatric patients. Pediatr Emerg Care. 2014;30(6):397–402.

  100. Katznelson JH, Wang J, Stevens MW, Mills WA. Improving pediatric preparedness in Critical Access Hospital Emergency departments: impact of a longitudinal in situ simulation program. Pediatr Emerg Care. 2018;34(1):17–20.

  101. Patterson MD, Geis GL, LeMaster T, Wears RL. Impact of multidisciplinary simulation-based training on patient safety in a paediatric emergency department. BMJ Qual Saf. 2013;22(5):383.

  102. Siegel NA, Kobayashi L, Dunbar-Viveiros JA, Devine J, Al-Rasheed RS, Gardiner FG, Olsson K, Lai S, Jones MS, Dannecker M, et al. In situ medical simulation investigation of emergency department procedural sedation with randomized trial of experimental bedside clinical process guidance intervention. Simul Healthc. 2015;10(3):146–53.

  103. van Schaik SM, Plant J, Diane S, Tsang L, O’Sullivan P. Interprofessional team training in pediatric resuscitation: a low-cost, in situ simulation program that enhances self-efficacy among participants. Clin Pediatr (Phila). 2011;50(9):807–15.

  104. Boyde M, Cooper E, Putland H, Stanton R, Harding C, Learmont B, Thomas C, Porter J, Thompson A, Nicholls L. Simulation for emergency nurses (SIREN): a quasi-experimental study. Nurse Educ Today. 2018;68:100–4.

  105. Freund D, Andersen PO, Svane C, Meyhoff CS, Sørensen JL. Unannounced vs announced in situ simulation of emergency teams: Feasibility and staff perception of stress and learning. Acta Anaesthesiol Scand. 2019;63(5):684–92.

  106. Bayouth L, Ashley S, Brady J, Lake B, Keeter M, Schiller D, Robey WC 3rd, Charles S, Beasley KM, Toschlog EA, et al. An in-situ simulation-based educational outreach project for pediatric trauma care in a rural trauma system. J Pediatr Surg. 2018;53(2):367–71.

  107. Vos T, Lim SS, Abbafati C, Abbas KM, Abbasi M, Abbasifard M, Abbasi-Kangevari M, Abbastabar H, Abd-Allah F, Abdelalim A, et al. Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet. 2020;396(10258):1204–22.

  108. Kurosawa H, Ikeyama T, Achuff P, Perkel M, Watson C, Monachino A, Remy D, Deutsch E, Buchanan N, Anderson J, et al. A randomized, controlled trial of in situ pediatric advanced life support recertification (“pediatric advanced life support reconstructed”) compared with standard pediatric advanced life support recertification for ICU frontline providers*. Crit Care Med. 2014;42(3):610–8.

  109. Stocker M, Allen M, Pool N, De Costa K, Combes J, West N, Burmester M. Impact of an embedded simulation team training programme in a paediatric intensive care unit: a prospective, single-centre, longitudinal study. Intensive Care Med. 2012;38(1):99–104.

  110. von Arx D, Pretzlaff R. Improved nurse readiness through pediatric mock code training. J Pediatr Nurs. 2010;25(5):438–40.

  111. Allan CK, Thiagarajan RR, Beke D, Imprescia A, Kappus LJ, Garden A, Hayes G, Laussen PC, Bacha E, Weinstock PH. Simulation-based training delivered directly to the pediatric cardiac intensive care unit engenders preparedness, comfort, and decreased anxiety among multidisciplinary resuscitation teams. J Thorac Cardiovasc Surg. 2010;140(3):646–52.

  112. Dowson A, Russ S, Sevdalis N, Cooper M, De Munter C. How in situ simulation affects paediatric nurses’ clinical confidence. Br J Nurs. 2013;22(11):610, 612–617.

  113. Nickerson JE, Webb T, Boehm L, Neher H, Wong L, LaMonica J, Bentley S. Difficult delivery and neonatal resuscitation: a novel simulation for emergency medicine residents. West J Emerg Med. 2019;21(1):102–7.

  114. Surcouf JW, Chauvin SW, Ferry J, Yang T, Barkemeyer B. Enhancing residents’ neonatal resuscitation competency through unannounced simulation-based training. Med Educ Online. 2013;18:1–7.

  115. Crofts JF, Ellis D, Draycott TJ, Winter C, Hunt LP, Akande VA. Change in knowledge of midwives and obstetricians following obstetric emergency training: a randomised controlled trial of local hospital, simulation centre and teamwork training. BJOG. 2007;114(12):1534–41.

  116. Gundrosen S, Solligård E, Aadahl P. Team competence among nurses in an intensive care unit: the feasibility of in situ simulation and assessing non-technical skills. Intensive Crit Care Nurs. 2014;30(6):312–7.

  117. Nunnink L, Welsh AM, Abbey M, Buschel C. In Situ Simulation-based team training for post-cardiac surgical emergency chest reopen in the intensive care unit. Anaesth Intensive Care. 2009;37(1):74–8.

  118. Villemure C, Georgescu LM, Tanoubi I, Dube J-N, Chiocchio F, Houle J. Examining perceptions from in situ simulation-based training on interprofessional collaboration during crisis event management in post-anesthesia care. J Interprof Care. 2019;33(2):182–9.

  119. Mishra A, Catchpole K, McCulloch P. The Oxford NOTECHS System: reliability and validity of a tool for measuring teamwork behaviour in the operating theatre. Qual Saf Health Care. 2009;18(2):104–8.

  120. Stocker M, Menadue L, Kakat S, De Costa K, Combes J, Banya W, Lane M, Desai A, Burmester M. Reliability of team-based self-monitoring in critical events: a pilot study. BMC Emerg Med. 2013;13:22–22.

  121. Guise JM, Deering SH, Kanki BG, Osterweil P, Li H, Mori M, Lowe NK. Validation of a tool to measure and promote clinical teamwork. Simul Healthc. 2008;3(4):217–23.

  122. Cooper S, Cant R, Porter J, Sellick K, Somers G, Kinsman L, Nestel D. Rating medical emergency teamwork performance: development of the Team Emergency Assessment Measure (TEAM). Resuscitation. 2010;81(4):446–52.

  123. Remick K, Gausche-Hill M, Joseph M. American Academy of Pediatrics Committee on Pediatric Emergency Medicine and Section on Surgery, American College of Emergency Physicians Pediatric Emergency Medicine Committee, Emergency Nurses Association Pediatric Committee. Pediatric Readiness in the Emergency Department. Pediatrics. 2018;142(5):e20182459.

  124. Donoghue A, Nishisaki A, Sutton R, Hales R, Boulet J. Reliability and validity of a scoring instrument for clinical performance during Pediatric Advanced Life Support simulation scenarios. Resuscitation. 2010;81(3):331–6.

  125. Munroe B, Buckley T, Curtis K, Murphy M, Strachan L, Hardy J, Fethney J. The impact of HIRAID on emergency nurses’ self-efficacy, anxiety and perceived control: a simulated study. Int Emerg Nurs. 2016;25:53–8.

  126. Grundy SE. The confidence scale: development and psychometric characteristics. Nurse Educ. 1993;18(1):6–9.

  127. Chiocchio F, Grenier S, O’Neill T, Savaria K, Willms J. The effects of collaboration on performance: a multilevel validation in project teams. Int J Proj Organ Manag. 2012;4:1–37.

  128. Lamé G, Dixon-Woods M. Using clinical simulation to study how to improve quality and safety in healthcare. BMJ Simul Technol Enhanc Learn. 2018;6:87–94.

  129. Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, et al. Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements. Adv Simul. 2016;1(1):25.

  130. Goldshtein D, Krensky C, Doshi S, Perelman VS. In situ simulation and its effects on patient outcomes: a systematic review. BMJ Simul Technol Enhanc Learn. 2020;6(1):3–9.

  131. Bonfield A, Cusack J. Effective training in neonatal medicine. In: Boyle E, Cusack J, editors. Emerging topics and controversies in neonatology. Switzerland: Springer; 2020.

  132. Rudolph JW, Raemer DB, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9(6):339–49.

  133. Fenwick T, Dahlgren MA. Towards socio-material approaches in simulation-based education: lessons from complexity theory. Med Educ. 2015;49(4):359–67.

  134. Durning SJ, Artino AR. Situativity theory: a perspective on how participants and the environment can interact: AMEE Guide no. 52. Med Teach. 2011;33(3):188–99.

  135. Sharara-Chami R, Sabouneh R, Zeineddine R, Banat R, Fayad J, Lakissian Z. In Situ Simulation: an essential tool for safe preparedness for the COVID-19 pandemic. Simul Healthc. 2020;15(5):303.

  136. Lakissian Z, Sabouneh R, Zeineddine R, Fayad J, Banat R, Sharara-Chami R. In-situ simulations for COVID-19: a safety II approach towards resilient performance. Adv Simul. 2020;5(1):15.

  137. Josey K. Hospitals with more-active participation in conducting standardized in-situ mock codes have improved survival after in-hospital cardiopulmonary arrest. 2018.

  138. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Acknowledgements

Not applicable

Funding

The project was commissioned by Health Education England, Midlands and East.

Author information

Contributions

Conception and design of the study (BB, KE, LB, GM, JC), data collection and analysis (KE, JW, AC, LB), initial drafting of the manuscript (BB, AR, KE, AC, JW, LB, JC) and critical review of the manuscript (BB, AR, KE, AC, LB, JC). All authors provided final approval of the submitted manuscript.

Corresponding author

Correspondence to Kerry Evans.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable as all images within the manuscript were created by the research team.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Evans, K., Woodruff, J., Cowley, A. et al. GENESISS 2—Generating Standards for In-Situ Simulation project: a systematic mapping review. BMC Med Educ 22, 537 (2022). https://doi.org/10.1186/s12909-022-03401-y


Keywords