Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis
BMC Medical Education, Volume 16, Article number: 152 (2016)
Simulation-based nursing education is an increasingly popular pedagogical approach. It provides students with opportunities to practice their clinical and decision-making skills through various real-life situational experiences. However, simulation approaches fall along a continuum ranging from low-fidelity to high-fidelity simulation. The purpose of this study was to determine the effect size of simulation-based educational interventions in nursing and compare effect sizes according to the fidelity level of the simulators through a meta-analysis.
This study explores the quantitative evidence published in the electronic databases EBSCO, Medline, ScienceDirect, ERIC, RISS, and the National Assembly Library of Korea database. Using a search strategy including the search terms “nursing,” “simulation,” “human patient,” and “simulator,” we identified 2279 potentially relevant articles. Forty studies met the inclusion criteria and were retained in the analysis.
This meta-analysis showed that simulation-based nursing education was effective in various learning domains, with a pooled random-effects standardized mean difference of 0.70. Subgroup analysis revealed that effect sizes were larger for high-fidelity simulation (0.86), medium-fidelity simulation (1.03), and standardized patients (0.86) than they were for low-fidelity and hybrid simulations. In terms of cognitive outcomes, the effect size was the largest for high-fidelity simulation (0.50). Regarding affective outcome, high-fidelity simulation (0.80) and standardized patients (0.73) had the largest effect sizes.
These results suggest that simulation-based nursing educational interventions have strong educational effects, with particularly large effects in the psychomotor domain. Since the effect is not proportional to fidelity level, it is important to use a variety of educational interventions to meet all of the educational goals.
Clinical education in nursing aims to integrate theoretical knowledge from books into practical knowledge in real-life situations and to help students develop their problem-solving skills. Due to rapid changes in clinical placements, patient safety issues, and ethical concerns, students’ direct experience with patient care and opportunities to handle problem-based clinical situations have been diminished. Simulation-based clinical education is a useful pedagogical approach that provides nursing students with opportunities to practice their clinical and decision-making skills through varied real-life situational experiences, without compromising the patient’s well-being.
Simulation-based clinical education in nursing refers to a variety of activities using patient simulators, including devices, trained persons, lifelike virtual environments, and role-playing, not just the handling of mannequins. With realistic clinical scenarios, simulation-based educational interventions in nursing can train novice as well as experienced nurses, helping them develop effective non-technical skills, practice rare emergency situations, and experience a variety of authentic life-threatening situations. The advantages of simulation-based educational interventions include the ability to provide immediate feedback, repetitive practice, integration of simulation into the curriculum, the ability to adjust the difficulty level, opportunities for individualized learning, and adaptability to diverse learning strategies.
Simulation can be described as a continuum ranging from low-fidelity simulation (LFS) to high-fidelity simulation (HFS). Various simulation methods can be adapted according to specific learning outcomes and educational levels. Dieckmann warns against placing too much emphasis on having optimal equipment and surroundings that realistically replicate the clinical setting. The required learning outcomes must govern the choice of simulation method.
A number of research studies in nursing have evaluated the effectiveness of simulation-based educational interventions. However, the reported effectiveness has varied according to the fidelity level of the simulators and the outcome variables. Issenberg et al. found that HFS was effective for learning in medicine. However, their review was limited to HFS, medical education, and learner outcome variables, and did not compare simulation methods. Therefore, a meta-analysis synthesizing the results of these studies is needed to provide important insights into the level of simulation fidelity that is most effective for educational use.
The aims of this study were to determine the effect size of a simulation’s impact on nursing education and compare effect sizes according to the fidelity level of the simulators used.
This study was planned and conducted in adherence to the PRISMA standards of quality for reporting meta-analyses. We also followed the PRISMA 2009 checklist in reporting each section, including the introduction, methods, results, and discussion.
Studies published between January 1995 and July 2013 were identified by conducting an electronic search of the following databases: EBSCO, Medline, ScienceDirect, ERIC, RISS, and the National Assembly Library of Korea database. The literature search was limited to articles published in English or Korean and was conducted using combinations of the keyword phrases nursing, simulation, human patient, and simulator. A total of 2279 potential studies were identified. Titles and abstracts were reviewed for eligibility.
Relevant studies were screened for inclusion based on the following criteria: 1) the study aimed to evaluate the effectiveness of simulation-based education for nursing students, and 2) an experimental or quasi-experimental design was used. We excluded articles that did not report a control group or that tested the effectiveness of computer-based virtual patients. For abstracts that did not provide sufficient information to determine eligibility, full-length articles were retrieved. Disagreements on the inclusion or exclusion of articles were resolved by consensus. Of the 2279 potentially relevant articles, screening of titles and abstracts yielded 317 relevant studies. After review of these articles, 96 studies were retained, and three additional articles were identified through a hand search. These 99 full-text articles were reviewed systematically to confirm their eligibility (Fig. 1).
Criteria for considering studies for this review
In this study, the methodological quality of the selected studies was assessed using the Case Control Study Checklist developed by the Critical Appraisal Skills Programme (CASP). The CASP appraisal tool was designed to facilitate systematic thinking about educational studies. It contains 11 questions in three sections: (1) Are the results of the trial valid? (2) What are the results? (3) Will the results help locally? Most items were answered with “yes,” “no,” or “can’t tell.” The papers were assessed by two independent reviewers using the CASP checklist, and any disagreement between the reviewers was resolved through discussion and consensus with a third reviewer. Forty studies met the quality criterion of nine or more of the 11 questions answered with “yes” and were consequently included in this review. The inclusion criteria for this review were as follows:
The studies sampled pre-licensure nursing students, licensed nurses, or nurse practitioners.
Types of interventions
We defined simulation-based educational intervention as education involving one or more of the following modalities: partial-task trainers, standardized patients (SPs), full-body task trainers, and high-fidelity mannequins.
Types of outcome variables
Study outcomes included learning and reaction outcomes. Learning outcomes were categorized into three domains: cognitive, psychomotor, and affective.
The level of fidelity is determined by the environment, the tools and resources used, and other factors associated with the participants. Because several of the selected studies did not indicate the method of debriefing used, it was difficult to categorize and discuss the effects of each debriefing method; we therefore categorized fidelity level according to the physical equipment used. Fidelity level was coded as low, medium, or high according to the extent to which the simulation model resembled a human being, or as hybrid or SP. LFSs were defined as static models or task trainers primarily made of rubber body parts [9, 10]. Medium-fidelity simulators (MFSs) were full-body manikins with embedded software that can be controlled by an external, handheld device. HFSs were life-sized computerized manikins with realistic anatomical structures and high response fidelity. Hybrid simulators combined two or more fidelity levels of simulation. Because an SP is a person trained to portray an individual in a scripted scenario for the purposes of instruction, practice, or evaluation, SPs were considered separately: they provide types of fidelity responses, such as body expressions and verbal feedback, that cannot be obtained from other simulation models.
The extracted data were coded by two researchers. A coding manual was developed in order to maintain the reliability of coding. The manual included information regarding effect size calculations and the characteristics of the study and the report. Differences between coders were resolved by discussion until a consensus was achieved.
Data synthesis and analysis
The software Comprehensive Meta-Analysis version 2 (Biostat, Englewood, New Jersey) was used to conduct the data analysis. Effect size estimates were adjusted for sample size (Cohen’s d), and 95 % confidence intervals were calculated to assess the statistical significance of average effect sizes.
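As an illustration of the effect size computation described above, the following sketch computes a standardized mean difference (Cohen's d) and its 95 % confidence interval for a single two-group study. The numbers are hypothetical, invented purely for demonstration; the variance formula is the standard large-sample approximation for d.

```python
import math

def cohens_d(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_exp + n_ctrl - 2))
    return (mean_exp - mean_ctrl) / pooled_sd

def variance_d(d, n_exp, n_ctrl):
    """Approximate sampling variance of d (large-sample formula)."""
    return (n_exp + n_ctrl) / (n_exp * n_ctrl) + d**2 / (2 * (n_exp + n_ctrl))

# Hypothetical study: simulation group vs. control group post-test scores
d = cohens_d(80.0, 72.0, 10.0, 12.0, 30, 30)
ci_half = 1.96 * math.sqrt(variance_d(d, 30, 30))
print(f"d = {d:.2f}, 95% CI = [{d - ci_half:.2f}, {d + ci_half:.2f}]")
```

In practice this per-study computation is handled internally by the Comprehensive Meta-Analysis software; the sketch only makes the arithmetic explicit.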
Fixed effects models assume that the primary studies have a common effect size. In contrast, random effects models attempt to estimate the distribution of the mean effect size, assuming that each primary study has a different population. A test for heterogeneity of the intervention effects was performed using the Q statistic. As the test for heterogeneity was statistically significant, we used random effects models to accommodate this heterogeneity in the main effect and sub-group analyses. The planned subgroup analyses were conducted on evaluation outcomes.
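The random-effects pooling and Q-statistic heterogeneity test described above can be sketched as follows. This is a minimal DerSimonian-Laird implementation; the per-study effect sizes and variances are hypothetical, not the actual study data.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling with Cochran's Q heterogeneity statistic."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_star = [1 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, tau2

# Hypothetical per-study standardized mean differences and variances
effects = [0.9, 0.4, 1.1, 0.2, 0.7]
variances = [0.05, 0.08, 0.06, 0.07, 0.04]
pooled, ci, q, tau2 = random_effects_pool(effects, variances)
print(f"pooled = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), Q = {q:.2f}")
```

When Q substantially exceeds its degrees of freedom, tau² is positive and the random-effects weights down-weight differences in study precision, which is why the random effects model was appropriate for the heterogeneous studies pooled here.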
We identified 2279 potentially relevant articles using the search strategy described above, of which 40 met the inclusion criteria. The characteristics of the 40 studies included in this meta-analysis are listed in Table 1. Twenty-five of the 40 studies (62.5 %) used random assignment, whereas the remaining 15 (37.5 %) were nonrandomized. Half of the studies compared education using high-fidelity simulators with a control group. Ten studies (25 %) utilized standardized patients. Learners at various levels of training were represented.
When the studies were combined in the meta-analysis, high heterogeneity was observed (Q = 253.22, P < .001) (Table 2). The overall effect size for the random effects model was 0.70, with a 95 % confidence interval of 0.58–0.83 (Table 3) (Fig. 2). The possibility of publication bias appeared minimal, as the funnel plot was symmetrical.
Effect sizes by level of simulation fidelity
Studies using HFSs (0.86), MFSs (1.03), and SPs (0.86) had large effect sizes, whereas low-fidelity (0.35) and hybrid (0.34) simulation studies had smaller effect sizes.
Reaction outcome according to fidelity level
Learning outcome according to fidelity level
The results of the sub-group analysis for learning outcomes according to fidelity level are shown in Table 4. For cognitive outcome, which is a sub-domain of learning, the effect size was the highest for HFS (0.50), followed by LFS (0.47), SP (0.32), and MFS (0.06).
Regarding affective outcome, HFS (0.80) and SP (0.73) had the largest effect sizes, whereas LFS (0.39) and hybrid (0.35) simulation studies had smaller effect sizes. MFS (1.76), SP (1.27), and HFS (1.03) showed large effect sizes in the psychomotor domain (Table 4).
The present study provided meta-analytical data for evidence-based education through a comprehensive analysis of simulation-based nursing education studies with diverse backgrounds and characteristics. Compared with our previous article, “Effectiveness of patient simulation in nursing education: meta-analysis”, the current study added an electronic search of Korean databases, namely RISS and the National Assembly Library of Korea database. Through this process, 20 additional Korean papers were included, so that half of the included papers were Korean; this may account for differences between the current results and those of the previous study. In addition to including a reaction outcome according to fidelity levels, effect sizes based on outcomes and fidelity level were identified. A systematic search of the literature resulted in 40 published studies that were eligible for inclusion in this meta-analysis. These primary studies provided evidence of the effects of simulation-based nursing education in various evaluation and learning environments.
Random assignment studies accounted for 62.5 % of the studies included. This represents a noticeable increase in randomized research designs, which made up less than 30 % of studies in a systematic review conducted 10 years ago on HFS in medical education. That review found that HFSs were used in 50 % of studies and SPs in 25 %, which is similar to the findings of the study by Kim, Park, and Shin. This confirms the relatively high usage of HFSs and SPs in nursing education.
The medium-to-large effect size (0.70) suggests that simulation-based nursing education is effective. This is consistent with the findings of a study on health professional education, which reported that technology-enhanced simulation training produced moderate to large effects.
Regarding simulator fidelity level, HFS (0.86), MFS (1.03), and SP (0.86) displayed larger effect sizes than LFS or hybrid simulation. This result supports the findings of a previous meta-analysis of simulation in the health professions, showing that HFS offers benefits over LFS. However, these findings should be interpreted with caution. Recent studies suggest that the degree of realism required of a simulation is a function of the learning task and context, and can therefore vary widely across areas of educational outcomes.
In the reaction domain, which includes satisfaction and learning attitudes, HFS had a larger effect size than LFS. Satisfaction levels are high among students participating in simulation learning that utilizes human simulators or SPs. Considering that problem-based learning (PBL) lessons were found to enhance student attitudes more than traditional lectures, student participation and active engagement appear to have positive effects on satisfaction and learning attitudes.
In the sub-group analysis of learning outcomes according to fidelity level, the effect size was the largest for psychomotor outcomes, followed by affective and cognitive outcomes. This result differs somewhat from the meta-analysis of the effects of PBL, in which effect sizes were the largest for psychomotor outcomes, followed by the cognitive and affective domains. This difference may reflect PBL’s emphasis on reasoning based on problems and cases, compared with the actual clinical practice emphasized in simulation-based learning.
Specifically, the effect size for cognitive outcomes was the largest for HFS (0.50), while for affective outcomes the order was HFS (0.80) followed by SP (0.73). In the psychomotor domain, the order was MFS (1.76), SP (1.27), and HFS (1.03). These results demonstrate that HFS and SP are effective in producing cognitive and affective outcomes; however, to achieve psychomotor learning outcomes, technical training using MFS may be more helpful, which concurs with the reported lack of a positive association between fidelity and process skills.
However, the present study has the limitation of not considering learning-related factors in the analyses based on the fidelity level of simulators. Even though debriefing has become increasingly crucial in simulation-based learning and its methods have diversified over the years, several of the selected studies did not indicate the debriefing method used, making it difficult to categorize and discuss the effects of each method. This may be because it is customary to omit debriefing when learning with low-fidelity simulations, especially for training simple nursing skills. Consequently, we could not account for learning-related factors tied to debriefing at each fidelity level, including reflection, feedback, and the range of debriefing methods (self-debriefing, multimedia debriefing, and/or in-simulation instructor-facilitated debriefing). In addition, we did not include studies published in languages other than English or Korean.
Despite these limitations, this study demonstrated that simulation-based nursing education has an educational effect, with particularly strong effects in the psychomotor domain. Since the effects are not proportional to fidelity level, educational interventions should be varied enough to satisfy all educational goals. In addition, a recent study reported that debriefing was the most important factor in simulation, with positive effects from self-debriefing and video-facilitated instructor debriefing. Based on these findings, the clinical reflection process needs to be improved to increase learning effects in the cognitive domain.
Our results indicated that simulation-based nursing educational interventions were effective with particularly large effects in the psychomotor domain. In addition, the effect of simulation-based nursing education was not proportional to fidelity level. Therefore, it is important to use an appropriate level of simulation to meet all of the educational goals and outcomes.
Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27:10–28. http://dx.doi.org/10.1080/01421590500046924.
Hovancsek MT. Using simulations in nursing education. In: Jeffries PR, editor. Simulation in nursing education: from conceptualization to evaluation. New York: National League for Nursing; 2007. p. 1–9.
Dieckmann P. Simulation settings for learning in acute medical care. In: Dieckmann P, editor. Using simulations for education, training and research. Lengerich: Pabst; 2009. p. 40–138.
Tosterud R, Hedelin B, Hall-Lord ML. Nursing students’ perception of high- and low-fidelity simulation used as learning methods. Nurse Educ Pract. 2013;13:262–70. http://dx.doi.org/10.1016/j.nepr.2013.02.002.
Laschinger S, Medves J, Pulling C, McGraw R, Waytuck B, Harrison MB, et al. Effectiveness of simulation on health profession students’ knowledge, skills, confidence and satisfaction. Int J Evid Based Healthc. 2008;6:278–302. http://dx.doi.org/10.1111/j.1744-1609.2008.00108.x.
Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151:264–9. http://dx.doi.org/10.7326/0003-4819-151-4-200908180-00135.
Critical Appraisal Skills Programme (CASP). Case control study checklist. Available from: http://www.casp-uk.net/.
Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc. 2007;2:183–93.
Issenberg SB, Gordon MS, Gordon DL, Safford RE, Hart IR. Simulation and new learning technologies. Med Teach. 2001;23:16–23.
Seropian MA, Brown K, Gavilanes JS, Driggers B. Simulation: not just a manikin. J Nurs Educ. 2004;43:164–9.
Alinier G, Hunt B, Gordon R, Harwood C. Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. J Adv Nurs. 2006;54:359–69.
Robinson-Smith G, Bradley P, Meakim C. Evaluating the use of standardized patients in undergraduate psychiatric nursing experiences. Clin Simul Nurs. 2009;5:e203–11. http://dx.doi.org/10.1016/j.ecns.2009.07.001.
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. West Sussex: Wiley; 2009.
Shin S, Park J, Kim JH. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–82. http://dx.doi.org/10.1016/j.nedt.2014.09.009.
Kim JH, Park I, Shin S. Systematic review of Korean studies on simulation within nursing education. J Kor Acad Soc Nurs Educ. 2013;19:307–19. http://dx.doi.org/10.5977/jkasne.2013.19.3.307.
Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306:978–88. http://dx.doi.org/10.1001/jama.2011.1234.
Ilgen JS, Sherbino J, Cook DA. Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis. Acad Emerg Med. 2013;20:117–27. http://dx.doi.org/10.1111/acem.12076.
Marken PA, Zimmerman C, Kennedy C, Schremmer R, Smith KV. Human simulators and standardized patients to teach difficult conversations to interpersonal health care teams. Am J Pharm Educ. 2010;74:1–8.
Shin I, Kim JH. The effect of problem-based learning in nursing education: a meta-analysis. Adv Health Sci Educ Theory Pract. 2013;18:1103–20. http://dx.doi.org/10.1007/s10459-012-9436-2.
Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58–63. http://dx.doi.org/10.1016/j.nedt.2013.09.020.
We appreciate the support of Soonchunhyang University Library.
No funding was received.
Availability of data and materials
Data from the journals used in this work are available in publicly accessible repositories.
All authors contributed to the design of the study. JK performed the statistical analysis and wrote the first draft. JP carried out data collection and data coding. SS participated in its design and coordination, helped to draft the manuscript, and revised the manuscript. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Consent for publication
Consent for publication is not applicable for this work.
Ethical approval and consent to participate
Ethical approval and consent to participate are not applicable for this study.