The effectiveness of simulation-based learning (SBL) on students’ knowledge and skills in nursing programs: a systematic review

Abstract

Background

Simulation-Based Learning (SBL) serves as a valuable pedagogical approach in nursing education, encompassing varying levels of fidelity. While previous reviews have highlighted the potential effectiveness of SBL in enhancing nursing students’ competencies, a gap persists in the evidence base addressing the long-term retention of these competencies. This systematic review aimed to evaluate the impact of SBL on nursing students’ knowledge and skill acquisition and retention.

Method

A comprehensive search of electronic databases, including CINAHL, PubMed, Embase, Scopus, and ERIC, was conducted from 2017 to 2023 to identify relevant studies. The Joanna Briggs critical appraisal tools were used to assess the methodological quality of the included studies. A total of 33 studies (15 RCTs and 18 quasi-experimental) met the inclusion criteria and were included in the review. A descriptive narrative synthesis method was used to analyse the extracted data.

Results

The cumulative sample size of participants across the included studies was 3,670. Most of the studies focused on the impact of SBL on life-saving skills like cardiopulmonary resuscitation (CPR) or other life-support skills. The remaining studies examined the impact of SBL on critical care skills or clinical decision-making skills. The analysis highlighted consistent and significant improvements in knowledge and skills. However, the evidence base had several limitations, including the heterogeneity of study designs, risk of bias, and lack of long-term follow-up.

Conclusion

This systematic review supports the use of SBL as a potent teaching strategy within nursing education and highlights the importance of the ongoing evaluation and refinement of this approach. While current evidence indicates enhanced knowledge and skill acquisition, few studies evaluated retention beyond five months, constraining generalisable claims regarding durability. Further research is essential to build on the current evidence and address gaps in knowledge related to the retention, optimal design, implementation, and evaluation of SBL interventions in nursing education.

Background

Simulation-Based Learning (SBL) is an educational approach which has been widely adopted in nursing and medical education [1]. The predominance of this approach can be understood to relate to the way in which SBL seeks to replicate aspects of real-world situations, allowing students to apply knowledge and develop their practical skills in a safe environment [2]. This is valuable for nursing, a field which relies on the practical application of skills [2]. The intended outcomes of SBL are to enhance the acquisition of knowledge and skills as well as their retention over time [3]. These outcomes reflect both the immediate effectiveness of SBL and its long-term impact on students’ competence. Despite its widespread use, there is a lack of evaluation of the efficacy of SBL within nursing education in achieving immediate and long-term gains in knowledge and skills. Therefore, this review aims to evaluate how SBL impacts knowledge and skills among nursing students.

Simulation training has demonstrated substantial value in developing nurses’ resuscitation and critical care abilities. Performing high-quality cardiopulmonary resuscitation (CPR), responding to patient deterioration events, and managing crisis situations require sophisticated psychomotor and clinical judgment proficiencies [4]. Additionally, critical care environments involve complex technologies and rare emergency scenarios that learners may inconsistently encounter through conventional clinical education alone [5]. Thus, simulation-based mastery learning has emerged as an efficacious approach for standardising novice nurses’ exposure to low-frequency, high-risk contexts requiring rapid emergency response capabilities and proficient use of specialised equipment.

SBL can be delivered via several modalities: high, medium, and low fidelity [6, 7]. High-fidelity SBL sessions seek to recreate a patient scenario with a high degree of realism [8], whereas low-fidelity SBL still focuses on practising the target skills, but in an environment which is less reminiscent of the dynamics or pressures of real-world practice [9]. High-fidelity SBL might therefore be expected to be more educationally valuable than low-fidelity SBL; however, the evidence does not entirely support this supposition. For example, a recent study conducted by Massoth et al. [10] compared high- and low-fidelity SBL approaches for an advanced life support training session. Their findings indicated that improvements in knowledge and skills for those who experienced high-fidelity SBL were not significantly different from those who had undertaken low-fidelity SBL [10]. Furthermore, Massoth et al.’s sub-item analysis indicated that high-fidelity SBL participants were prone to becoming overconfident with the given task, which the authors view as an undesirable side effect of such an approach.

It is, therefore, important to examine SBL in more detail through a thorough review of the literature. This will address the question of whether SBL meets its objectives for knowledge and skills acquisition and retention and may also help resolve ongoing debates relating to fidelity. Recent reviews have undertaken important preparatory work in examining this area: the integrative review of Al Gharibi and Arulappan [11] evaluated SBL on a range of outcomes for nursing students, as did the systematic review conducted by Labrague et al. [12]. However, both reviews focussed on core outcomes relating primarily to the confidence of learners and did not specifically examine issues of knowledge and skills acquisition or retention.

This systematic review aims to critically appraise and synthesize the published evidence on the effectiveness of SBL on students’ knowledge and skills acquisition and retention in nursing programs.

Methods

The protocol for this systematic review was developed and registered on PROSPERO (registration number CRD42021284544), and the review was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) quality requirements [13].

Search strategy

The article selection entailed two phases: initial scoping and a strategic search [14]. The systematic search utilized five leading academic databases: CINAHL, PubMed, Embase, Scopus, and ERIC, employing the Population, Intervention, Comparison, and Outcomes (PICO) framework to establish precise inclusion criteria [14, 15]. Only English-language studies were included, with key terms such as simulation-based learning, education, training, and related synonyms. The terms were merged using Boolean operators and tailored for each database as needed. The databases indexed all major relevant journals, eliminating the need for manual searches. Reference lists of the included studies were examined for further sources. All chosen articles were published within seven years of this study, aligning with the field’s rapidly evolving nature and ensuring the incorporation of recent advancements [16]. This timeframe strikes a balance between recency and a sufficient depth of literature, offering a comprehensive overview of current trends and methodologies (Table 1).

Table 1 Literature search strategy
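
For illustration only, the key terms named above might be combined into a Boolean query along the following lines; the exact synonyms, truncation, and database-specific field tags shown here are assumptions, and the review’s actual search strategy is the one summarised in Table 1.

```text
("simulation-based learning" OR "simulation-based education" OR "simulation training")
AND (nurs* OR "nursing student*" OR "nursing education")
AND (knowledge OR skill* OR competenc*)
```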

Screening process

References obtained from the database search were organized using EndNote X9 reference management software and imported into Covidence systematic review software, which was employed to streamline the screening and selection procedures [17]. Two reviewers, AA and AN, independently examined the titles, abstracts, and full texts of all records to ascertain eligibility based on the inclusion criteria. A third reviewer (RM) was consulted for consensus in cases of discrepancies regarding study eligibility.

Data extraction

Upon securing the final articles, an extraction form was devised and pilot-tested to abstract salient study characteristics and outcomes [18], in compliance with rigorous guidelines for systematic reviews [19]. Data pertinent to the PICO framework and encompassing both cognitive and psychomotor domains were extracted. Reviewers AA and AN evaluated the form’s feasibility. To ensure data integrity and mitigate bias, AA and AN undertook the data extraction, which was subsequently corroborated by an additional pair of reviewers (RM and WM). The characteristics of the studies incorporated in this review are succinctly encapsulated in the supplementary file, which delineates authorship, publication year, geographic origin, objectives, methodology, participant demographics, simulation activities, and key findings germane to the review.

Assessment of the risk of bias in included studies

Reviewers AA and AN independently scrutinized full-text articles utilizing the Joanna Briggs Institute’s (JBI) critical appraisal tools [20]. JBI, an independent, international, non-profit research entity affiliated with the University of Adelaide’s Faculty of Health and Medical Sciences, has devised an array of critical appraisal checklists to assess healthcare interventions’ feasibility, appropriateness, meaningfulness, and efficacy [21]. The JBI checklists for Randomised Controlled Trials and Quasi-Experimental Studies were selected for their relevance to the study designs targeted in this review, encompassing thirteen and nine items respectively and addressing aspects such as design, sample selection, and comparison. Items are scored dichotomously, with a maximum aggregate score of 13 for RCTs and 9 for quasi-experimental studies, facilitating a holistic assessment of each study’s quality. Further, methodological judgement was also incorporated into the quality assessment.

Irrespective of the methodological quality, all selected studies were integrated into the review. This approach was adopted to ensure a comprehensive synthesis of the available evidence on SBL concerning knowledge and skill in nursing education. Excluding studies based on methodological quality alone might omit potentially valuable insights. Including a range of studies allows for an understanding of the current evidence base and highlights areas needing further methodological refinement. This inclusive strategy enables a holistic view of the research landscape [22]. Reviewers AA and AN independently performed the quality assessment, with discrepancies adjudicated by a third reviewer (RM). Quality scores were tabulated utilizing a spreadsheet template in Microsoft Excel, deploying a categorical response set (“yes”, “no”, “can’t tell”, or “not applicable”).

Data synthesis strategy

This review adopted a descriptive narrative synthesis approach [23], which systematically outlines and synthesises key characteristics and evidence across the selected studies, offering a comprehensive summary of the findings. All the included studies applied simulation to a range of clinical topics using a variety of methods, but similar outcomes (knowledge and skill acquisition and retention) and interventions were used. This involved the discussion and reporting of critical and comparative details about the simulation interventions as well as the characteristics of the focus population, the types of outcomes measured and the overall quality of the study. The narrative synthesis also included a textual description of the simulation intervention methodology. The approach allows for articulating the congruities and disparities among the studies concerning methodological quality, design, methodology, outcome measures, and findings [24].

Results

Study selection

The database searches returned 14,451 articles, of which 6,213 duplicates were excluded. Titles and abstracts of the remaining 8,238 articles were assessed for eligibility using Covidence systematic review software. This initial screening excluded 8,162 articles due to irrelevance. Subsequently, 76 articles underwent full-text screening; 33 met the inclusion criteria and were further assessed by AA and AN, while 29 were excluded. The PRISMA flow diagram [25] in Fig. 1 summarizes the search and selection process.

Fig. 1 PRISMA Flow Diagram

Study characteristics

A total of 33 papers, published between 2017 and 2023, met the inclusion criteria (Table 2). Table 3 delineates the studies’ attributes, encompassing research design, simulation interventions, and the results of the appraisal and quality rating. These studies exclusively assessed SBL in augmenting nursing students’ knowledge and skills. Among them, 31 utilized a quantitative approach, while Demirtas et al. [26] and Zieber and Sedgewick [27] employed mixed methods; however, only their quantitative results are discussed in this review.

Fifteen studies employed randomised controlled trial (RCT) designs [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42], while the remainder (n = 18) adopted quasi-experimental designs [26, 27, 43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58].

The studies were globally distributed, comprising eight from Europe, two from South America, one from Australia, four from North America, seven from the Middle East, and eleven from Asia; collectively, these encompassed 19 countries.

All 33 studies focused on student nurses, resulting in an aggregated sample size of 3,670 participants. Sample sizes varied considerably, ranging from 16 in Hardenberg et al. [30] to 479 in Requena-Mullor et al. [51]. Notably, nine studies had over 100 participants. Only Hardenberg et al. [30] exclusively involved postgraduate students, while the remainder encompassed a range of undergraduate levels. Although Ka Ling et al. [33] and Kardong-Edgren et al. [50] recruited from multiple nursing schools, the others were confined to single institutions.

Simulation fidelity varied across studies, which created difficulties when comparing and synthesising results between simulations that were not standardised. For example, different topics were taught to different groups of participants, different time allocations were made, different roles were played, a variety of data collection tools were employed, and different levels of fidelity were used.

Regarding the SBL interventions, a majority (n = 23) examined cardio-pulmonary resuscitation (CPR) or other life support skills [26, 27, 29, 31,32,33,34, 36, 37, 40,41,42,43,44,45, 47,48,49,50,51, 53, 55, 56]. The rest focused on critical care skills [30, 35, 38, 39, 46, 52, 54, 58] or clinical decision-making skills [28].

Table 2 Characteristics of included studies

Methodological quality of the included studies

The appraisal outcomes are detailed in Appendix A & B. Following Reilly et al. [59] and Munn et al. [60], the overall quality was classified based on the proportion of criteria met (< 50% as poor, 50–80% as moderate, > 80% as good) (See Table 3). Discrepancies between reviewers were reconciled through team discussions.
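
As a minimal illustrative sketch (not part of the review’s actual tooling), the thresholds above can be expressed as a simple rule mapping the proportion of JBI checklist criteria met to a quality rating; the function name and worked examples are assumptions for illustration only.

```python
def jbi_quality_rating(criteria_met: int, total_criteria: int) -> str:
    """Classify overall study quality from the proportion of JBI criteria met,
    using the cut-offs reported in this review: <50% poor, 50-80% moderate, >80% good."""
    proportion = criteria_met / total_criteria
    if proportion < 0.5:
        return "poor"
    if proportion <= 0.8:
        return "moderate"
    return "good"

# Illustrative examples: an RCT meeting 11 of the 13 JBI items (~85%) would be rated
# "good", while a quasi-experimental study meeting 6 of 9 items (~67%) would be "moderate".
print(jbi_quality_rating(11, 13))  # good
print(jbi_quality_rating(6, 9))    # moderate
```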

Risk of bias in studies

The studies were evaluated across six domains: selection bias, performance bias, detection bias, attrition bias, reporting bias, and other biases as per Higgins et al. [61], utilizing the JBI appraisal tool [20] for RCT and quasi-experimental studies. Measures taken to mitigate bias are reported in line with the criteria by Omura et al. [62] and Mbuzi et al. [63].

Among the 15 RCTs, all of the relevant studies ensured that treatment groups were similar at baseline (selection bias) and addressed the potential risk of bias through randomisation. Only five studies [35, 38, 40,41,42] reported the concealment of treatment allocation.

Given the educational nature of interventions, instructors and assessors were often not blinded. A blind design was employed in five RCTs, with three blinding instructors and outcome assessors [29, 30, 32], and two blinding participants, instructors, and assessors [33, 41]. All RCTs completed follow-ups and used appropriate statistical analyses based on consistent participant groups pre-and post-intervention.

Regarding the 18 quasi-experimental studies, which lacked random allocation, five were pre- and post-intervention studies with a control group [46, 48, 56,57,58]. Overall, the 18 quasi-experimental studies employed identical measures on the same participants before and after exposure to the intervention; follow-up was completed, and appropriate statistical analyses were conducted. However, these designs were not randomised and could therefore contribute to potential bias. See Appendix A and B in the additional files for the full quality assessment table and the risk of bias judgements.

According to the JBI Evidence criteria [22], evidence from the 15 RCTs was rated Level 1 C, while evidence from the 18 quasi-experimental studies was rated Level 2 C.

Table 3 Summary of characteristics of included studies and the result of the appraisal

Results of syntheses

In the following narrative analysis, findings are categorised into three themes interpreting the impact of SBL on nursing students. The first theme explores the ‘Impact of SBL on knowledge and skills acquisition,’ probing the immediate effects of SBL on the learner’s capabilities. The second theme, ‘Impact of SBL on retention of knowledge and skills,’ examines the durability of these learning outcomes over a span of time. Finally, the third theme focuses on the ‘Impact of SBL on wider clinical performance,’ which concerns the impact of SBL on broader clinical practice (self-confidence, satisfaction, clinical reasoning, self-efficacy, and problem-solving). These themes offer a comprehensive understanding of the influences of SBL in nursing education.

Theme 1: Impact of SBL on knowledge and skills acquisition

The positive effect of SBL on knowledge and skills acquisition emerged as the most prominent observation from this review. Despite the diversity in simulation environments, methodologies, and outcomes, which introduced a notable level of heterogeneity among the included studies, there was steady and credible agreement that SBL is a potent method for enhancing both the knowledge and the skills of nursing students. This was observed in the context of CPR-based skills [26, 29, 42, 44], along with other vital skills [52, 58], and decision-making tasks [28].

Sarvan and Efe [41] utilised serious game simulation (SGS) in neonatal resuscitation training, employing a randomized controlled pre-test and post-test design. This approach led to notable improvements in practical skills, indicating the efficacy of SGS in enhancing skill acquisition. Similarly, Li et al. [40] implemented a blended learning approach integrating online virtual simulation with traditional methods in CPR training. This study observed marked improvements in self-directed learning capabilities and CPR skills, demonstrating the role of SBL in facilitating the acquisition of both cognitive and practical skills.

In some studies, the experimental groups demonstrated statistically significantly higher mean scores for knowledge and skills acquisition than the control groups, with improvements in performance ranging from 10 to 35%. Hardenberg et al. [30] reported a statistically significant (p < 0.05) enhancement of skills in the simulation training group. Keys et al. [32] reported that overall performance during CPR was significantly higher (p = 0.003) for participants in the intervention group (simulation) compared to the control group (10/10 and 5/10, respectively).

Furthermore, knowledge and skill acquisition improved consistently across studies, with post-intervention scores increasing by 20 to 50% compared to pre-intervention scores. D’Cunha et al. [44] noted a significant increase in mean scores for clinical reasoning, knowledge, and skills from the pre-test to the post-test (55.69–77.33%) following the simulation drills. Demirtas et al. [26] found a significant improvement in students’ knowledge and skills of CPR following simulation training (p = 0.001). The mean pre-test CPR knowledge score was 5.66 ± 1.97, which increased significantly to 8.38 ± 1.30 after the simulation-based CPR training. Additionally, the mean CPR skills score improved from 22.29 ± 5.07 pre-test to 32.51 ± 1.80 post-test, a 45.9% improvement in CPR skills, indicating that the intervention had a significant impact on students’ learning outcomes. Kardong-Edgren et al. [50] reported a significant improvement in overall compression scores from the pre-test (M = 42.76) to the post-test (M = 77.87), demonstrating the effectiveness of the intervention in enhancing CPR compression performance and highlighting the critical role of SBL in improving resuscitation skills. Further, Filomeno et al. [45] observed a significant post-test improvement (p = 0.01) in critical care scenarios such as ABCDE assessment, disturbance identification, prioritisation, and application of algorithms.
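
As a worked check on the reported figure, the 45.9% improvement in Demirtas et al.’s CPR skills scores corresponds to the relative change between the pre- and post-test means:

```latex
\frac{32.51 - 22.29}{22.29} \times 100\% \approx 45.9\%
```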

It is worth highlighting that these outcomes were consistently observed across studies with varying degrees of rigour in their design, including quasi-experimental studies such as Kardong-Edgren et al. [50] and Zieber and Sedgewick [27] and RCT designs such as those conducted by Kim et al. [37] and Hardenberg et al. [30]. This is a crucial detail because RCTs seek to minimise the impact of confounding variables on the results of intervention studies and are hence considered a more robust paradigm [64]. The range of RCTs incorporated within this review supplies evidence that consistently supports the findings of the non-randomised studies and therefore enables a greater degree of confidence to be assigned to this theme. The consistently positive findings throughout the selected SBL studies support the conclusion that this method is an effective means of boosting the skills and knowledge of nursing students.

Theme 2: Impact of SBL on retention of knowledge and skills

The second theme, the impact of SBL on long-term retention of clinical knowledge and skills, extends the first theme by examining the sustainability of the acquired knowledge and skills beyond the initial SBL session. Whereas the first theme arose from consistent findings across varied settings, outcomes, and designs, the second theme lacks the same universality because few of the 33 included studies comprehensively analysed meaningful long-term retention and because results were mixed. Eight studies assessed retention: six reported 15% improvements, while two observed a 5% decline following the intervention. Notably, the follow-up period across these studies varied from two to five months, so the findings speak mainly to the short-term impact of the interventions on retention. For instance, Charlier et al. [43], through a quasi-experimental study, demonstrated sustained retention of Basic Life Support (BLS) skills and knowledge at four-month follow-up among participants who underwent SBL. Similar results were reported by Arrogante et al. [29], who found superior CPR performance in the SBL group compared to controls after three months. Further, RCTs by Padilha et al. [35] and Araújo et al. [34] also supported the hypothesis that SBL fosters improved cognitive retention and performance over traditional methods. Habibli et al. [31] observed enhanced nursing students’ BLS-CPR knowledge and performance due to SBL; at three-month follow-up, the intervention group scored higher (15.07, 16.57) than the control group (13.33, 14.76). Zieber and Sedgewick [27] corroborated these findings.

However, Seol and Lee [49] and Tuzer et al. [47] did not find retention of knowledge and skills acquired through SBL at 20-week follow-up. Though these findings contradict others, it is important to consider that Seol and Lee’s study had a small sample size (n = 32), potentially affecting its validity and reliability. These discrepancies suggest a need for further research on the long-term retention of knowledge and skills through SBL.

Theme 3: Impact of SBL on wider clinical performance

SBL’s impact extends beyond skills and knowledge to broader clinical performance, including enhancements in self-confidence, satisfaction, clinical reasoning, self-efficacy, and problem-solving. Tucker et al. [56] investigated the effect of simulated scenarios on nursing students’ self-efficacy in resuscitation and found that such simulations bolster student confidence, a crucial component of clinical performance. Moreover, the study by Meneghesso et al. [55], using the ‘blindfolded’ technique in clinical simulations, highlighted an increase in self-confidence and knowledge, which are vital for effective clinical performance. Additionally, the research by Li et al. [40] and Yang and Oh [57] demonstrated improvements in problem-solving abilities and learning motivation, underscoring the positive influence of SBL on various facets of clinical performance. These improvements were highlighted in RCTs conducted by Padilha et al. [35], Kim et al. [37], Svellingen et al. [28], Seo and Eom [39], and Tawalbeh [38], and in non-randomised studies by Charlier et al. [43], Filomeno et al. [45], Demirtas et al. [26], Goldsworthy et al. [46], and Lau et al. [53]. Additionally, studies by Roh et al. [54] and Lau et al. [53] indicated that SBL is correlated with improved teamwork and collaboration, crucial elements in clinical practice [65]. These findings represent insightful benefits of SBL and suggest an area that merits further exploration in future research.

Risk of bias across studies

The studies reported demographic information, and no significant differences concerning age, education level, or gender were detected. All RCTs employed randomization to counter selection bias, so selection bias did not pose a significant concern.

Only five studies adopted blind designs: Keys et al. [32], Sarvan and Efe [41], Ka Ling et al. [33], Arrogante et al. [29], and Hardenberg et al. [30]. Blind design, wherein neither participants nor experimenters know group allocations, minimizes bias and bolsters result validity [19]. Ka Ling et al. [33] and Sarvan and Efe [41] utilised a double-blind approach, keeping both participants and assessors unaware of study aims and group memberships, curbing assessment biases. Conversely, Keys et al. [32], Arrogante et al. [29], and Hardenberg et al. [30] applied single-blind designs for course deliverers and assessors, minimizing biases in course delivery and learning outcome assessments.

Implementing blind design in simulation studies can be complex due to the simulation environment’s nature, where experimenters might access information revealing group assignments, and ensuring participant blinding may be challenging. However, careful planning can facilitate blind design integration in simulation studies.

Discussion

This review aimed to systematically evaluate the literature to determine the effectiveness of SBL in knowledge and skills acquisition and retention. While certain topics, such as CPR skills or specific simulation interventions, could potentially have been suitable for meta-analysis, the overall heterogeneity of the included studies, particularly in terms of intervention contexts, outcome measures, and reporting, limited the feasibility of conducting a meta-analysis across all studies.

Evidence from 33 primary research studies indicates a positive association between SBL and improvements in knowledge and skills. The majority of these studies focused on skills related to CPR and basic life support [26, 45]. Other studies examined different clinical skills, such as critical care skills [30, 46] and clinical decision-making [28]. As noted previously, simulations facilitate practice of emergency response procedures and specialised equipment operation that may not be sufficiently encountered through standard clinical placements [5]. Performing high-fidelity CPR and managing dynamic patient deterioration events further require sophisticated clinical judgement and psychomotor proficiencies that simulation-based mastery learning allows novice nurses to acquire [4].

While SBL demonstrated enhanced clinical skills and knowledge acquisition within the included studies, evidence supporting retention remains preliminary and constrained by limited longitudinal follow-up. Only 8 of the 33 included studies measured outcome durability over time, with retention follow-up assessments spanning just two to five months post-intervention. Because this time period is narrowly constrained, it does not provide adequate opportunity to assess long-term knowledge and skill maintenance [49].

Furthermore, certain studies revealed additional benefits of SBL. For instance, Demirtas et al. [26], Tawalbeh [38], and Goldsworthy et al. [46] reported enhancements in learners’ confidence and self-efficacy, which might lead to improved clinical performance and patient outcomes. Roh et al. [54] and Lau et al. [53] indicated that SBL sessions were linked with enhanced teamwork and collaboration, which are vital in clinical practice [65]. These results imply that the impact of SBL may extend beyond knowledge and skills acquisition and retention, meriting further investigations to understand the underlying mechanisms in various healthcare settings.

While the current systematic review aligns with prior evidence syntheses in finding SBL effective for developing nursing knowledge and skills, key differences in scope and methodology underpin its unique contributions. Al Gharibi and Arulappan’s [11] review of 11 studies published between 2011 and 2019 focused specifically on improved confidence and competence regarding clinical skills. In contrast, this review captured a rapidly expanding literature base of over 30 studies through near-current 2023 searches. This enabled a more comprehensive evaluation across multidimensional impacts, including knowledge acquisition, psychomotor skill development, clinical judgement, and long-term retention, with 15 randomised controlled trials denoting higher quality evidence.

Additionally, Labrague et al.’s [12] descriptive review examined simulation’s effect on anxiety but was restricted to correlational inquiries rather than experimental research evaluating intra-individual skill and knowledge growth. Furthermore, neither of these prior reviews substantially addressed sustainability questions regarding simulation training’s enduring effects on retention over extended periods. Thus, the current systematic review significantly builds upon preceding evidence by consolidating demonstrable knowledge and skill-based effectiveness data across a substantial set of controlled interventions. However, it identified only eight longitudinal studies analysing retention outcomes across a two to five-month timeframe. Therefore, the review highlights long-term retention as a critical yet understudied domain warranting markedly expanded ongoing investigation through longitudinal inquiries to firmly determine the sustainability of simulation training impacts.

Although some studies addressed fidelity variations, only Tuzer et al. [47] compared fidelity forms and did not find conclusive evidence for the superiority of one approach over another. Massoth et al. [10] had similar findings but reported overconfidence in high-fidelity SBL participants, a factor not examined in the present review.

Regarding study quality, the inclusion of 15 RCTs lends credibility to the review, but it is noteworthy that the remaining 18 studies did not employ randomization, which may impact the quality of evidence [64]. Additionally, large sample sizes in some non-randomized trials, such as Requena-Mullor et al. [51], could offset limitations by providing statistical power [66]. Conversely, some RCTs had small sample sizes, such as Hardenberg et al. [30], raising concerns about statistical power and reliability [67]. This emphasizes the need for rigorous evaluation during the research design phase to ensure both scientific and ethical integrity [68].

Strengths and limitations

This systematic review adopted a rigorous approach aligned with best-practice standards, as reinforced through its registration with PROSPERO and adherence to the PRISMA 2020 guidance [13]. Comprehensive searches of major databases identified relevant literature without geographical constraints, facilitating the inclusion of 33 recent experimental studies from 2017 to 2023. This selective date range allowed targeted insight into the maturation of simulation pedagogy. Additionally, the review exclusively synthesized quantitative experimental research across 15 randomized controlled trials and 18 quasi-experimental designs to optimize internal validity in assessing simulation effectiveness. Paired screening and duplicate data extraction further minimized subjectivity and bias.

This systematic review sheds light on SBL within nursing education but is subject to limitations in both the included studies and the review process. The included evidence is limited by variability in study quality, the absence of blinding in many studies, and small sample sizes, potentially undermining reliability and generalizability. Additionally, heterogeneity in SBL intervention characteristics, such as duration, intensity, and design, renders comparison challenging. Turning to the review methodology, its scope is curtailed by the exclusion of non-English-language studies, introducing potential language bias. Furthermore, the review’s narrow focus on SBL and its temporal restriction to recent publications may bypass valuable past findings.

Implications for nursing education and future research

Despite demonstrating effectiveness for knowledge and skill acquisition, constraints remain with regard to enduring retention. With few studies assessing retention beyond five months and an absence of longer-term longitudinal studies, the long-term sustainability of learning benefits remains inadequately elucidated. Given substantial knowledge gaps regarding the long-term impact of simulation, nurse educators should hold realistic expectations for knowledge and skills retention when incorporating simulation methodologies. To maximize efficacy, it is imperative that educators integrate a diversity of SBL modalities, such as high-fidelity simulation and role-playing, tailored to distinct learning needs and curricular objectives.

Academic institutions, including universities and nursing colleges, must foster collaboration with nursing educators and be committed to the integration of SBL into curricula. This necessitates investments in faculty training, simulation equipment, and technology. It is incumbent upon these institutions to ensure that curricular integration is resource-supported, aligning with educational objectives and addressing the distinctive learning requisites of nursing students.

Future studies should scrutinize the optimal utilisation of SBL, including efficacious teaching methodologies, the correlation between SBL duration and knowledge retention, and the transferability of knowledge and skills to clinical contexts. Longitudinal studies could elucidate the long-term implications of SBL for students’ competency and its impact on diverse cohorts. Moreover, the adoption of SBL necessitates significant resource investments in equipment, facilities, technologies, and specialized staff. Without evidence of a favourable return on investment, simulation could be priced out of reach. Therefore, an examination of cost-effectiveness relative to traditional methods, together with further analysis of barriers and facilitators to SBL implementation, would be invaluable in optimizing the quality of nursing education and preparing adept nursing professionals for the dynamic healthcare landscape.

Conclusions

This systematic review suggests that SBL is an effective pedagogical approach for promoting knowledge and skill acquisition and retention across a range of nursing education topics, including cardiopulmonary resuscitation and critical care. However, evidence gaps persist regarding enduring skill and knowledge retention outcomes.

With fewer than 25% of included studies assessing retention beyond five months post-intervention, current findings lack generalizability concerning the long-term sustainability of simulation’s learning impacts. Furthermore, this review provides a set of findings which both support and extend previous work in this area. The analysis of both randomized controlled and quasi-experimental studies demonstrated consistent and significant improvements in various measures of learning outcomes, including knowledge, skills, and self-confidence. These findings are particularly relevant given the increasing demand for nursing education programmes to prepare students for the complexities and challenges they will face in contemporary healthcare environments. Nonetheless, the evidence is tempered by limitations including heterogeneity in study designs and risk of bias. These constraints highlight the imperative for rigorous research to clarify the optimal parameters for SBL deployment and to investigate its applicability to diverse cohorts and clinical environments. Overall, this systematic review lends qualified support for simulation-based learning as a potentially valuable experiential teaching strategy within nursing education, though efficacy conclusions must be interpreted cautiously given considerable evidence gaps, particularly regarding enduring knowledge and skill retention impacts.

Availability of data and materials

The data that supports the results and findings of this systematic review can be found in either the main paper or the additional supporting files. Any other data from the current study are available from the corresponding author upon reasonable request.

References

  1. Chernikova O, Heitzmann N, Stadler M, Holzberger D, Seidel T, Fischer F. Simulation-based learning in higher education: a meta-analysis. Rev Educ Res. 2020;90(4):499–541.

  2. Pilcher J, Heather G, Jensen C, Huwe V, Jewell C, Reynolds R, et al. Simulation-based learning: it’s not just for NRP. Neonatal Netw. 2012;31(5):281–8.

  3. Steadman RH, Coates WC, Huang YM, Matevosian R, Larmon BR, McCullough L, et al. Simulation-based training is superior to problem-based learning for the acquisition of critical assessment and management skills. Crit Care Med. 2006;34(1):151–7.

  4. McGaghie WC, Issenberg SB, Barsuk JH, Wayne DB. A critical review of simulation-based mastery learning with translational outcomes. Med Educ. 2014;48(4):375–85.

  5. Rosen MA, Hunt EA, Pronovost PJ, Federowicz MA, Weaver SJ. In situ simulation in continuing education for the health care professions: a systematic review. J Continuing Educ Health Professions. 2012;32(4):243–54.

  6. Munshi F, Lababidi H, Alyousef S. Low-versus high-fidelity simulations in teaching and assessing clinical skills. J Taibah Univ Med Sci. 2015;10(1):12–5.

  7. Yuan HB, Williams BA, Man CY. Nursing students’ clinical judgment in high-fidelity simulation based learning: a quasi-experimental study. J Nurs Educ Pract. 2014;4:7.

  8. Reilly A, Spratt C. The perceptions of undergraduate student nurses of high-fidelity simulation-based learning: a case report from the University of Tasmania. Nurse Educ Today. 2007;27(6):542–50.

  9. Butler KW, Veltre DE, Brady D. Implementation of active learning pedagogy comparing low-fidelity simulation versus high-fidelity simulation in pediatric nursing education. Clin Simul Nurs. 2009;5(4):e129–36.

  10. Massoth C, Röder H, Ohlenburg H, Hessler M, Zarbock A, Pöpping DM, et al. High-fidelity is not superior to low-fidelity simulation but leads to overconfidence in medical students. BMC Med Educ. 2019;19:1–8.

  11. Al Gharibi KA, Arulappan J. Repeated simulation experience on self-confidence, critical thinking, and competence of nurses and nursing students—An integrative review. SAGE Open Nurs. 2020;6:2377960820927377.

  12. Labrague LJ, McEnroe-Petitte DM, Bowling AM, Nwafor CE, Tsaras K. High-fidelity simulation and nursing students’ anxiety and self-confidence: A systematic review. Nurs Forum. 2019;54(3):358–68.

  13. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

  14. Booth A, Sutton A, Papaioannou D. Systematic approaches to a successful literature review. 2nd ed. Washington, D.C: SAGE Publications Ltd; 2016.

  15. Polit D, Beck C. Essentials of nursing research: appraising evidence for nursing practice. Lippincott Williams & Wilkins; 2020.

  16. Garner P, Hopewell S, Chandler J, MacLehose H, Akl EA, Beyene J et al. When and how to update systematic reviews: consensus and checklist. BMJ. 2016;354:354.

  17. Babineau J. Product review: covidence (systematic review software). J Can Health Libr Assoc/J l’Assoc bibliothèques de la santé du Can. 2014;35(2):68–71.

  18. Boland A, Cherry M, Dickson R. Doing a systematic review: a student’s guide. Sage Publications Limited; 2017.

  19. Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al. Cochrane handbook for systematic reviews of interventions. Wiley; 2019.

  20. JBI. Critical appraisal tools Joanna Briggs Institute. 2018. Updated 28 May 2022.  http://joannabriggs.org/research/critical-appraisal-tools.html.

  21. Barker TH, Stone JC, Sears K, Klugar M, Leonardi-Bee J, Tufanaru C, Aromataris E, Munn Z. Revising the JBI quantitative critical appraisal tools to improve their applicability: an overview of methods and the development process. JBI Evid Synth. 2023;21(3):478–93.

  22. Joanna Briggs Institute. The Joanna Briggs Institute Levels of Evidence and Grades of Recommendation Working Party*. Supporting Document for the Joanna Briggs Institute Levels of Evidence and Grades of Recommendation. Austrália: Joanna Briggs Institute; 2014. p. 2019–05.

  23. Rodgers M, Sowden A, Petticrew M, Arai L, Roberts H, Britten N, et al. Testing methodological guidance on the conduct of narrative synthesis in systematic reviews: effectiveness of interventions to promote smoke alarm ownership and function. Evaluation. 2009;15(1):49–73.

  24. Bettany-Saltikov J. EBOOK: how to do a systematic literature review in nursing: a step-by-step guide. 2016.

  25. Moher D, Liberati A, Tetzlaff J, Altman DG, Group* P. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.

  26. Demirtas A, Guvenc G, Aslan Ö, Unver V, Basak T, Kaya C. Effectiveness of simulation-based cardiopulmonary resuscitation training programs on fourth-year nursing students. Australasian Emerg Care. 2021;24(1):4–10.

  27. Zieber M, Sedgewick M. Competence, confidence and knowledge retention in undergraduate nursing students—a mixed method study. Nurse Educ Today. 2018;62:16–21.

  28. Svellingen AH, Forstrønen A, Assmus J, Røykenes K, Brattebø G. Simulation-based education and the effect of multiple simulation sessions-A randomised controlled study. Nurse Educ Today. 2021;106:105059.

  29. Arrogante O, Rios-Diaz J, Carrion-Garcia L, Samith S, Gonzalez-Romero GM, Caperos JM. Deliberate practice in resuscitation training using a feedback device, and the effects of the physical characteristics of the rescuer on the acquisition and retention of cardiopulmonary resuscitation skills: randomized clinical trial. Int Emerg Nurs. 2021;58:101037.

  30. Hardenberg J, Rana I, Tori K. Simulation exposure improves clinical skills for postgraduate critical care nurses. Clin Simul Nurs. 2019;28:39–45.

  31. Habibli T, Ghezeljeh TN, Haghani S. The effect of simulation-based education on nursing students’ knowledge and performance of adult basic cardiopulmonary resuscitation: a randomized clinical trial. Nurs Pract Today. 2020;7(2):87–96.

  32. Keys E, Luctkar-Flude M, Tyerman J, Sears K, Woo K. The integration of virtual simulation gaming into undergraduate nursing resuscitation education: a pilot randomised controlled trial. Clin Simul Nurs. 2021;54:54–61.

  33. Ka Ling F, Lim Binti Abdullah K, Seng Chiew G, Danaee M, Chan CMH. The impact of high fidelity patient simulation on the level of knowledge and critical thinking skills in code blue management among undergraduate nursing students in Malaysia. SAGE Open. 2021;11(2):21582440211007124.

  34. Araújo MS, Medeiros SM, Costa RR, Coutinho VR, Mazzo A, Sousa YG. Effect of clinical simulation on the knowledge retention of nursing students. Acta Paulista de Enfermagem. 2021;34:eAPE000955.

  35. Padilha JM, Machado PP, Ribeiro A, Ramos J, Costa P. Clinical virtual simulation in nursing education: randomized controlled trial. J Med Internet Res. 2019;21(3):e11529.

  36. Saeidi R, Gholami M. Comparison of effect of simulation-based neonatal resuscitation education and traditional education on knowledge of nursing students. 2017.

  37. Kim SH, Issenberg B, Roh YS. The effects of simulation-based advanced life support education for nursing students. CIN: Comput Inf Nurs. 2020;38(5):240–5.

  38. Tawalbeh LI. Effect of simulation modules on Jordanian nursing student knowledge and confidence in performing critical care skills: a randomized controlled trial. Int J Afr Nurs Sci. 2020;13:100242.

  39. Seo YH, Eom MR. The effect of simulation nursing education using the outcome-present state-test model on clinical reasoning, the problem-solving process, self-efficacy, and clinical competency in Korean nursing students. Healthcare. 2021;9(3):243.

  40. Li Y, Lv Y, Dorol RD, Wu J, Ma A, Liu Q, Zhang J. Integrative virtual nursing simulation in teaching cardiopulmonary resuscitation: A blended learning approach. Australas Emerg Care. 2024;27(1):37–41.

  41. Sarvan S, Efe E. The effect of neonatal resuscitation training based on a serious game simulation method on nursing students’ knowledge, skills, satisfaction and self-confidence levels: a randomized controlled trial. Nurse Educ Today. 2022;111:105298.

  42. Farsi Z, Yazdani M, Butler S, Nezamzadeh M, Mirlashari J. Comparative effectiveness of simulation versus serious game for training nursing students in cardiopulmonary resuscitation: a randomized control trial. Int J Comput Games Technol. 2021;2021:6695077.

  43. Charlier N, Van Der Stock L, Iserbyt P. Comparing student nurse knowledge and performance of basic life support algorithm actions: an observational post-retention test design study. Nurse Educ Pract. 2020;43:102714.

  44. D’Cunha RJ, Fernandes SF, Sherif L. Utility of simulation as a teaching tool for nursing staff involved in code blue management. Indian J Crit Care Med: Peer-reviewed Off Public Indian Soc Crit Care Med. 2021;25(8):878.

  45. Filomeno L, Renzi E, Insa-Calderón E. Effectiveness of clinical simulation on nursing student’s improving critical care knowledge: a pretest-posttest study. Clin Ter. 2020;171(6):e501–8.

  46. Goldsworthy S, Patterson JD, Dobbs M, Afzal A, Deboer S. How does simulation impact building competency and confidence in recognition and response to the adult and paediatric deteriorating patient among undergraduate nursing students? Clin Simul Nurs. 2019;28:25–32.

  47. Tuzer H, Inkaya B, Yilmazer T, Sarikoc G, Elcin M. The effect of high and medium fidelity simulator in cardiopulmonary resuscitation training on nursing students’ knowledge and performances. Int J Caring Sci. 2020;13(2):1250–6.

  48. Chen J, Yang J, Hu F, Yu S-H, Yang B-X, Liu Q, et al. Standardised simulation-based emergency and intensive care nursing curriculum to improve nursing students’ performance during simulated resuscitation: a quasi-experimental study. Intensive Crit Care Nurs. 2018;46:51–6.

  49. Seol J, Lee O. Effects of cardiopulmonary resuscitation training for Mozambican nursing students in a low-resource setting: an intervention study. Nurse Educ Today. 2020;90:104433.

  50. Kardong-Edgren S, Oermann MH, Jastrzembski TS, Krusmark MA, Gluck KA, Molloy MA, et al. Baseline cardiopulmonary resuscitation skill performance of nursing students is improved after one resuscitation quality improvement skill refresher. J Nurses Prof Dev. 2020;36(2):57–62.

  51. Requena-Mullor MM, Alarcón-Rodríguez R, Ventura-Miranda MI, García-González J. Effects of a clinical simulation course about basic life support on undergraduate nursing students’ learning. Int J Environ Res Public Health. 2021;18(4):1409.

  52. Sapiano AB, Sammut R, Trapani J. The effectiveness of virtual simulation in improving student nurses’ knowledge and performance during patient deterioration: a pre and post test design. Nurse Educ Today. 2018;62:128–33.

  53. Lau Y, Chee DGH, Ab Hamid ZB, Leong BS-H, Lau ST. Interprofessional simulation–based advanced cardiac life support training: video-based observational study. Clin Simul Nurs. 2019;30:16–24.

  54. Roh YS, Kim SS, Park S, Ahn J-W. Effects of a simulation with team-based learning on knowledge, team performance, and teamwork for nursing students. CIN: Comput Inf Nurs. 2020;38(7):367–72.

  55. Meneghesso I, Marcatto IF, Wada BF, Guermandi M, Girão FB. Self-confidence and knowledge in leadership in critical care: simulation with the blindfolded technique. Rev Gaúcha Enferm. 2023;43:43.

  56. Tucker G, Urwin C, Unsworth J. The impact of unsuccessful resuscitation and manikin death during simulation on nursing student’s resuscitation self-efficacy: a quasi-experimental study. Nurse Educ Today. 2022;119:105587.

  57. Yang SY, Oh YH. The effects of neonatal resuscitation gamification program using immersive virtual reality: a quasi-experimental study. Nurse Educ Today. 2022;117:105464.

  58. Tseng L-P, Hou T-H, Huang L-P, Ou Y-K. Effectiveness of applying clinical simulation scenarios and integrating information technology in medical-surgical nursing and critical nursing courses. BMC Nurs. 2021;20(1):1–14.

  59. Reilly R, Evans K, Gomersall J, Gorham G, Peters MD, Warren S, et al. Effectiveness, cost effectiveness, acceptability and implementation barriers/enablers of chronic kidney disease management programs for indigenous people in Australia, New Zealand and Canada: a systematic review of mixed evidence. BMC Health Serv Res. 2016;16:1–15.

  60. Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, et al. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evid Synthesis. 2020;18(10):2127–33.

  61. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343(7829):d5928.

  62. Omura M, Maguire J, Levett-Jones T, Stone TE. The effectiveness of assertiveness communication training programs for healthcare professionals and students: a systematic review. Int J Nurs Stud. 2017;76:120–8.

  63. Mbuzi V, Fulbrook P, Jessup M. Effectiveness of programs to promote cardiovascular health of indigenous australians: a systematic review. Int J Equity Health. 2018;17(1):1–17.

  64. Abbott ML, McKinney J. Understanding and applying research design. John Wiley & Sons; 2012.

  65. Reeves S, Pelone F, Harrison R, Goldman J, Zwarenstein M. Interprofessional collaboration to improve professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2017;6(6):CD000072.

  66. Kadam P, Bhalerao S. Sample size calculation. Int J Ayurveda Res. 2010;1(1):55.

  67. Das S, Mitra K, Mandal M. Sample size calculation: basic principles. Indian J Anaesth. 2016;60(9):652.

  68. Koepsell D. Scientific integrity and research ethics: an approach from the ethos of science. Amsterdam: Springer; 2016.

Acknowledgements

Not applicable.

Funding

The authors did not receive any funding for this work.

Author information

Contributions

AA, RM, and WM collaborated on the study protocol. AA completed the search strategy, reviewed by WM and RM. AA and AN handled screening, quality appraisal, and data extraction, supervised by RM and WM. AA drafted the manuscript, with input from RM, JM and WM. All authors approved the final manuscript.

Corresponding author

Correspondence to Ali Alharbi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Alharbi, A., Nurfianti, A., Mullen, R.F. et al. The effectiveness of simulation-based learning (SBL) on students’ knowledge and skills in nursing programs: a systematic review. BMC Med Educ 24, 1099 (2024). https://doi.org/10.1186/s12909-024-06080-z
