
Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis



Background

Simulation-based nursing education is an increasingly popular pedagogical approach. It provides students with opportunities to practice their clinical and decision-making skills through various real-life situational experiences. However, simulation approaches fall along a continuum ranging from low-fidelity to high-fidelity simulation. The purpose of this study was to determine the effect size of simulation-based educational interventions in nursing and compare effect sizes according to the fidelity level of the simulators through a meta-analysis.


Methods

This study explores the quantitative evidence published in the electronic databases EBSCO, Medline, ScienceDirect, ERIC, RISS, and the National Assembly Library of Korea database. Using a search strategy including the search terms “nursing,” “simulation,” “human patient,” and “simulator,” we identified 2279 potentially relevant articles. Forty studies met the inclusion criteria and were retained in the analysis.


Results

This meta-analysis showed that simulation-based nursing education was effective in various learning domains, with a pooled random-effects standardized mean difference of 0.70. Subgroup analysis revealed that effect sizes were larger for high-fidelity simulation (0.86), medium-fidelity simulation (1.03), and standardized patients (0.86) than for low-fidelity and hybrid simulations. In terms of cognitive outcomes, the effect size was the largest for high-fidelity simulation (0.50). Regarding affective outcomes, high-fidelity simulation (0.80) and standardized patients (0.73) had the largest effect sizes.


Conclusions

These results suggest that simulation-based nursing educational interventions have strong educational effects, with particularly large effects in the psychomotor domain. Since the effect is not proportional to fidelity level, it is important to use a variety of educational interventions to meet all of the educational goals.



Background

Clinical education in nursing aims to integrate theoretical knowledge from books into practical knowledge in real-life situations and to help students develop their problem-solving skills. Due to rapid changes in clinical placements, patient safety issues, and ethical concerns, students’ direct experience with patient care and opportunities to handle problem-based clinical situations have diminished. Simulation-based clinical education is a useful pedagogical approach that provides nursing students with opportunities to practice their clinical and decision-making skills through varied real-life situational experiences, without compromising the patient’s well-being.

Simulation-based clinical education in nursing refers to a variety of activities using patient simulators, including devices, trained persons, lifelike virtual environments, and role-playing, not just handling mannequins [1]. With realistic clinical scenarios, simulation-based educational interventions in nursing can train novice as well as experienced nurses, helping them develop effective non-technical skills, practice rare emergency situations, and providing a variety of authentic life-threatening situations. The advantages of simulation-based educational interventions include the ability to provide immediate feedback, repetitive practice learning, the integration of simulation into the curriculum, the ability to adjust the difficulty level, opportunities to individualize learning, and the adaptability to diverse types of learning strategies [1].

Simulation can be described as a continuum ranging from low-fidelity simulation (LFS) to high-fidelity simulation (HFS) [2]. Various simulation methods can be adapted according to specific learning outcomes and educational levels. Dieckmann [3] warns against placing too much emphasis on having optimal equipment and surroundings that realistically replicate the clinical setting. The required learning outcomes must govern the choice of simulation method [4].

A number of research studies in nursing have evaluated the effectiveness of simulation-based educational interventions [5]. However, the reported effectiveness has varied according to the fidelity level of the simulators and the outcome variables. Issenberg et al. [1] found that HFS was effective for learning in medicine. However, their review was limited to HFS, medical education, and learner outcome variables, and did not compare simulation methods. Therefore, a meta-analysis synthesizing the results of these studies is needed to provide important insights into the level of simulation fidelity that is most effective for educational use.

The aims of this study were to determine the effect size of a simulation’s impact on nursing education and compare effect sizes according to the fidelity level of the simulators used.


Methods

This study was planned and conducted in adherence to the PRISMA standards [6] for the quality of meta-analysis reporting. Each section, including the introduction, methods, results, and discussion, was reported against the criteria of the PRISMA 2009 checklist.

Study selection

Studies published between January 1995 and July 2013 were identified by conducting an electronic search of the following databases: EBSCO, Medline, ScienceDirect, ERIC, RISS, and the National Assembly Library of Korea database. The literature search was limited to articles published in English or Korean and was conducted using combinations of the keyword phrases nursing, simulation, human patient, and simulator. A total of 2279 potential studies were identified. Titles and abstracts were reviewed for eligibility.

Relevant studies were screened for inclusion based on the following criteria: 1) the study aimed to evaluate the effectiveness of simulation-based education for nursing students, and 2) an experimental or quasi-experimental design was used. We excluded articles that did not report a control group or that tested the effectiveness of computer-based virtual patients. For abstracts that did not provide sufficient information to determine eligibility, full-length articles were retrieved. Disagreements on the inclusion or exclusion of articles were resolved by consensus. Of the 2279 potentially relevant articles, screening of titles and abstracts yielded 317 relevant studies. After a review of these articles, 96 studies were retained, and three additional articles were identified via hand search. These 99 full-text articles were reviewed systematically to confirm their eligibility (Fig. 1).

Fig. 1

Flow of study analysis through different phases of the meta-analysis

Fig. 2

Forest plots for primary studies

Criteria for considering studies for this review

In this study, the methodological quality of the selected studies was assessed using the Case Control Study Checklist developed by the Critical Appraisal Skills Programme (CASP) [7]. The CASP appraisal tool was designed to facilitate systematic thinking about educational studies. It contains 11 questions in three sections: (1) Are the results of the trial valid? (2) What are the results? (3) Will the results help locally? Most items were answered with “yes,” “no,” or “can’t tell.” The papers were assessed by two independent reviewers using the CASP checklist, and any disagreement between the reviewers was resolved through discussion and consensus with a third reviewer. Forty studies met the inclusion criterion of at least nine of the 11 questions answered “yes” and were consequently considered applicable to this review. The inclusion criteria for this review were as follows:

Study participants

The included studies sampled pre-licensure nursing students, licensed nurses, or nurse practitioners.

Type of interventions

We defined simulation-based educational intervention as education involving one or more of the following modalities: partial-task trainers, standardized patients (SPs), full-body task trainers, and high-fidelity mannequins.

Types of outcome variables

Study outcomes included learning and reaction outcomes. Learning outcomes were categorized into three domains: cognitive, psychomotor, and affective.

Data coding

The level of fidelity was determined by the environment, the tools and resources used, and other factors associated with the participants [8]. However, several of the selected studies did not report such contextual details (for example, the debriefing method used), making it difficult to categorize and compare effects on that basis. We therefore categorized fidelity level according to the physical equipment used. Fidelity level was coded as low, medium, or high according to the extent to which the simulation model resembled a human being, or as hybrid or SP. LFSs were defined as static models or task trainers primarily made of rubber body parts [9, 10]. Medium-fidelity simulators (MFSs) were full-body manikins that have embedded software and can be controlled by an external, handheld device [10]. HFSs were life-sized computerized manikins with realistic anatomical structures and high response fidelity [11]. Hybrid simulators combined two or more fidelity levels of simulation. Because an SP is a person trained to portray an individual in a scripted scenario for the purposes of instruction, practice, or evaluation [12], the use of SPs was considered a separate category: SPs provide types of fidelity, such as body expressions and verbal feedback, that cannot be achieved with other simulation models.

The extracted data were coded by two researchers. A coding manual was developed in order to maintain the reliability of coding. The manual included information regarding effect size calculations and the characteristics of the study and the report. Differences between coders were resolved by discussion until a consensus was achieved.

Data synthesis and analysis

The software Comprehensive Meta-Analysis version 2 (Biostat, Englewood, New Jersey) was used to conduct the data analysis. Effect size estimates were adjusted for sample size (Cohen’s d), and 95 % confidence intervals were calculated to assess the statistical significance of average effect sizes.
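As a minimal sketch of the standardized mean difference computation described above (the group means, standard deviations, and sample sizes below are hypothetical, and this is not the Comprehensive Meta-Analysis software's implementation):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def d_variance(d, n1, n2):
    """Approximate sampling variance of d (see Borenstein et al. [13])."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# Hypothetical skill scores: simulation group vs. control group
d = cohens_d(80.0, 74.0, 8.0, 9.0, 30, 30)
se = math.sqrt(d_variance(d, 30, 30))
print(round(d, 2), (round(d - 1.96 * se, 2), round(d + 1.96 * se, 2)))  # 0.7 (0.18, 1.23)
```

A study-level 95 % confidence interval that excludes zero, as here, indicates a statistically significant effect for that study.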

Fixed effects models assume that the primary studies share a common effect size. In contrast, random effects models estimate the distribution of the mean effect size, assuming that each primary study has a different population [13]. A test for heterogeneity of the intervention effects was performed using the Q statistic. As the result of the test for heterogeneity was statistically significant, we used random effects models to accommodate this heterogeneity in the main-effect and sub-group analyses. The planned subgroup analyses were conducted on evaluation outcomes.
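The random-effects pooling can be illustrated with a minimal DerSimonian-Laird sketch (the study effects and variances in the usage line are made up; this is not the Comprehensive Meta-Analysis software itself):

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate, Q statistic,
    and between-study variance tau^2."""
    w = [1 / v for v in variances]                      # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, q, tau2

# Hypothetical per-study effect sizes and sampling variances
pooled, q, tau2 = random_effects_pool([0.3, 0.9, 1.2], [0.05, 0.04, 0.06])
```

A Q statistic well above its degrees of freedom, as in the present data, signals heterogeneity and motivates the random effects model.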


Results

Study characteristics

We identified 2279 potentially relevant articles using the search strategy described above, of which 40 met the inclusion criteria. The characteristics of the 40 studies included in this meta-analysis are listed in Table 1. Twenty-five of the 40 studies (62.5 %) used random assignment, whereas the remaining 15 (37.5 %) were nonrandomized. Half of the studies compared education using high-fidelity simulators with a control group. Ten studies (25 %) utilized standardized patients. Learners at various levels of training were represented.

Table 1 Characteristics of studies included in the analysis

Overall analysis

When the studies were combined in the meta-analysis, high heterogeneity was observed (Q = 253.22, P < .001) (Table 2). The overall effect size under the random effects model was 0.70, with a 95 % confidence interval of 0.58–0.83 (Table 3, Fig. 2). The possibility of publication bias was minimal because the funnel plot appeared symmetrical.
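Funnel-plot symmetry was judged visually here. A common numerical complement, not reported in this study but useful as an illustration, is Egger's regression of the standardized effect on precision; the data below are hypothetical:

```python
def egger_intercept(effects, ses):
    """Egger's test sketch: regress effect/SE on 1/SE by ordinary least
    squares; an intercept far from zero suggests funnel-plot asymmetry
    (possible publication bias)."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1 / s for s in ses]                    # precision
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx                      # regression intercept

# Hypothetical study effects and standard errors
print(egger_intercept([0.6, 0.7, 1.0], [0.1, 0.2, 0.5]))
```

An intercept near zero is consistent with the symmetric funnel plot described above.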

Table 2 Results of the homogeneity test
Table 3 Overall result of the meta-analysis, using a random effects model

Effect sizes by level of simulation fidelity

Studies using HFSs (0.86), MFSs (1.03), and SPs (0.86) had large effect sizes, whereas low-fidelity (0.35) and hybrid (0.34) simulation studies had smaller effect sizes.

Reaction outcome according to fidelity level

The results of the sub-group analysis for reaction outcome according to fidelity level are shown in Table 4. The effect size of HFS on reaction was larger than that of LFS (Table 4).

Table 4 Effect sizes by level of fidelity and evaluation level

Learning outcome according to fidelity level

The results of the sub-group analysis for learning outcomes according to fidelity level are shown in Table 4. For cognitive outcome, which is a sub-domain of learning, the effect size was the highest for HFS (0.50), followed by LFS (0.47), SP (0.32), and MFS (0.06).

Regarding affective outcome, HFS (0.80) and SP (0.73) had the largest effect sizes, whereas LFS (0.39) and hybrid (0.35) simulation studies had smaller effect sizes. MFS (1.76), SP (1.27), and HFS (1.03) showed large effect sizes in the psychomotor domain (Table 4).


Discussion

The present study provided meta-analytical data for evidence-based education through a comprehensive analysis of simulation-based nursing education studies with diverse backgrounds and characteristics. Compared with our previous article, “Effectiveness of patient simulation in nursing education: meta-analysis” [14], the current study added an electronic search of Korean databases, namely RISS and the National Assembly Library of Korea database. Through this process, 20 additional Korean papers were included, so that half of the included papers were Korean; this may account for differences from the previous results. In addition to including a reaction outcome according to fidelity levels, effect sizes based on outcomes and fidelity level were identified. A systematic search of the literature resulted in 40 published studies that were eligible for inclusion in this meta-analysis. These primary studies provided evidence of the effects of simulation-based nursing education in various evaluation and learning environments.

Random assignment studies accounted for 62.5 % of the studies included. This represents a noticeable increase in randomized research designs, which made up less than 30 % of the studies in a systematic review of HFS in medical education conducted 10 years ago [1]. HFSs were used in 50 % of studies and SPs in 25 %, which is similar to the findings of the study by Kim, Park, and Shin [15]. This confirms the relatively high usage of HFSs and SPs in nursing education.

The medium-to-large effect size (0.70) suggests that simulation-based nursing education is effective. This is consistent with the findings of a study on health professional education [16], which reported that technology-enhanced simulation training produced moderate to large effects.

Regarding simulator fidelity level, HFS (0.86), MFS (1.03), and SP (0.86) displayed larger effect sizes than LFS or hybrid simulation. This result supports the findings of a previous meta-analysis of simulation in the health professions, which showed that HFS offers benefits over LFS [17]. However, these findings should be interpreted with caution: recent studies suggest that the degree of realism required of a simulation is a function of the learning task and context, and can therefore vary widely across areas of educational outcomes [17].

In the reaction domain, which includes satisfaction and learning attitudes, HFS had a larger effect size than LFS. Satisfaction levels are high among students participating in simulation learning that utilizes human simulators or SP [18]. Considering that problem-based learning (PBL) lessons were found to enhance student attitudes more than traditional lectures [19], student participation and actual activity appear to have positive effects on satisfaction and learning attitudes.

In the sub-group analysis of learning outcomes according to fidelity level, the effect size was the largest for psychomotor outcomes, followed by affective and cognitive outcomes. This result differs somewhat from the meta-analysis of the effects of PBL [19], in which effect sizes were also largest for psychomotor outcomes but the cognitive domain ranked ahead of the affective domain. This difference may reflect PBL’s emphasis on reasoning based on problems and cases, compared to the actual clinical practice emphasized in simulation-based learning.

Specifically, the effect size for cognitive outcomes was largest for HFS (0.50), while for affective outcomes the order was HFS (0.80) followed by SP (0.73). In the psychomotor domain, the order was MFS (1.76), SP (1.27), and HFS (1.03). These results demonstrate that HFS and SP are effective in producing cognitive and affective outcomes; to achieve psychomotor learning outcomes, however, technical training using MFS may be more helpful, which concurs with the reported lack of a positive association between fidelity and process skills [17].

However, the present study has the limitation of not considering learning-related factors in the analyses based on the fidelity level of simulators. Even though debriefing has become more crucial in simulation-based learning and its methods have diversified over the years, several of the selected studies did not indicate the debriefing methods they had used, making it difficult to categorize and discuss the effects of each debriefing method. This may be because it is customary to omit debriefing when learning from low-fidelity simulations, especially for training simple nursing skills. As such, the present study could not consider learning-related factors arising from debriefing at each fidelity level, including reflection, feedback, and the range of debriefing methods (self-debriefing, multimedia debriefing, and/or in-simulation instructor-facilitated debriefing). In addition, we did not include studies published in languages other than English or Korean.

Despite these limitations, this study demonstrated that simulation-based nursing education has an educational effect, with particularly strong effects in the psychomotor domain. Since the effects are not proportional to fidelity level, educational interventions should be chosen broadly enough to satisfy the full range of educational goals, as the results presented above indicate. In addition, a recent study reported that debriefing was the most important factor in simulation, with positive effects from self-debriefing and video-facilitated instructor debriefing [20]. Based on these findings, the clinical reflection process needs to be improved to increase learning effects in the cognitive domain.


Conclusions

Our results indicated that simulation-based nursing educational interventions were effective, with particularly large effects in the psychomotor domain. In addition, the effect of simulation-based nursing education was not proportional to fidelity level. Therefore, it is important to use an appropriate level of simulation to meet all of the educational goals and outcomes.



Abbreviations

HFS: high-fidelity simulation

LFS: low-fidelity simulation

MFS: medium-fidelity simulation

PBL: problem-based learning

SP: standardized patients


References

1. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27:10–28.

2. Hovancsek MT. Using simulations in nursing education. In: Jeffries PR, editor. Simulation in nursing education: from conceptualization to evaluation. New York: National League for Nursing; 2007. p. 1–9.

3. Dieckmann P. Simulation settings for learning in acute medical care. In: Dieckmann P, editor. Using simulations for education, training and research. Lengerich: Pabst; 2009. p. 40–138.

4. Tosterud R, Hedelin B, Hall-Lord ML. Nursing students’ perception of high- and low-fidelity simulation used as learning methods. Nurse Educ Pract. 2013;13:262–70.

5. Laschinger S, Medves J, Pulling C, McGraw R, Waytuck B, Harrison MB, et al. Effectiveness of simulation on health profession students’ knowledge, skills, confidence and satisfaction. Int J Evid Based Healthc. 2008;6:278–302.

6. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151:264–9.

7. Critical Appraisal Skills Programme (CASP). Case control study checklist.

8. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc. 2007;2:183–93.

9. Issenberg SB, Gordon MS, Gordon DL, Safford RE, Hart IR. Simulation and new learning technologies. Med Teach. 2001;23:16–23.

10. Seropian MA, Brown K, Gavilanes JS, Driggers B. Simulation: not just a manikin. J Nurs Educ. 2004;43:164–9.

11. Alinier G, Hunt B, Gordon R, Harwood C. Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. J Adv Nurs. 2006;54:359–69.

12. Robinson-Smith G, Bradley P, Meakim C. Evaluating the use of standardized patients in undergraduate psychiatric nursing experiences. Clin Simul Nurs. 2009;5:e203–11.

13. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. West Sussex: Wiley; 2009.

14. Shin S, Park J, Kim JH. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–82.

15. Kim JH, Park I, Shin S. Systematic review of Korean studies on simulation within nursing education. J Kor Acad Soc Nurs Educ. 2013;19:307–19.

16. Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306:978–88.

17. Ilgen JS, Sherbino J, Cook DA. Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis. Acad Emerg Med. 2013;20:117–27.

18. Marken PA, Zimmerman C, Kennedy C, Schremmer R, Smith KV. Human simulators and standardized patients to teach difficult conversations to interpersonal health care teams. Am J Pharm Educ. 2010;74:1–8.

19. Shin I, Kim JH. The effect of problem-based learning in nursing education: a meta-analysis. Adv Health Sci Educ Theory Pract. 2013;18:1103–20.

20. Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58–63.



Acknowledgements

We appreciate the support of Soonchunhyang University Library.


Funding

No funding was received.

Availability of data and materials

The data from the journals used in this work can be found in publicly available repositories.

Authors’ contributions

All authors contributed to the design of the study. JK performed the statistical analysis and wrote the first draft. JP carried out data collection and data coding. SS participated in its design and coordination, helped to draft the manuscript, and revised the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Consent for publication is not applicable for this work.

Ethical approval and consent to participate

Ethical approval and consent to participate are not applicable for this study.

Author information



Corresponding author

Correspondence to Sujin Shin.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


Cite this article

Kim, J., Park, J. & Shin, S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ 16, 152 (2016).



Keywords

  • Nursing education
  • Patient simulation
  • Educational models
  • Meta-analysis