
Development and validation of Simulation Scenario Quality Instrument (SSQI)

Abstract

Background

Due to the unmet need for valid instruments that evaluate critical components of simulation scenarios, this research aimed to develop and validate an instrument that measures the quality of healthcare simulation scenarios.

Methods

A sequential transformative mixed-method research design was used to conduct the study. The development and validation of the instrument involved two phases: a qualitative phase, which included defining the instrument's theoretical background and constructing the instrument, followed by a quantitative phase, in which the instrument was piloted and validated. The qualitative study included 17 healthcare simulation experts across three focus groups; the first version of the instrument was constructed based on the focus group analysis and a theoretical framework built from a literature review. During the quantitative phase, the instrument's piloting included 125 healthcare simulation scenarios; the instrument then underwent construct validity and reliability testing.

Results

Content experts confirmed the theoretical model and instrument framework. The average of the item content validity index (I-CVI) scores across all items on the scale (S-CVI/Ave), or the average proportion of relevance judged by all experts, was 0.87. The confirmatory factor analysis results showed a good fit for the proposed 10-factor model (comparative fit index (CFI) = 0.998, Tucker-Lewis index (TLI) = 0.998, root mean square error of approximation (RMSEA) = 0.061). The final instrument included ten domains: 1. Learning objectives, 2. Target group, 3. Culture, 4. Scenario case, 5. Scenario narrative briefing, 6. Scenario complexity, 7. Scenario flow, 8. Fidelity, 9. Debriefing, and 10. Assessment. The SSQI included 44 items rated on a 3-point scale (Meets expectations (2), Needs improvement (1), Inadequate (0)).

Conclusion

This validated and reliable instrument will be helpful to healthcare educators and simulation experts who want to develop simulation-based training scenarios and ensure the quality of written scenarios.


Background

Simulation-based training (SBT) was successfully implemented in aviation and military education and is now used in healthcare to improve patient care and safety [1,2,3]. Over the last 20 years, the use of simulation in healthcare education has increased [4]. Several studies have reported positive outcomes for healthcare students' and learners' knowledge and skills [2, 5, 6]. The success of SBT depends on the careful and robust development of simulation scenarios based on critical needs and on assessment instruments used to guide the delivery of specific debriefing [7, 8].

A Simulation-Based Learning (SBL) experience in healthcare education is defined as “An array of structured activities that represent actual or potential situations in education and practice. These activities allow participants to develop or enhance their knowledge, skills, and attitudes, or to analyze and respond to realistic situations in a simulated environment” [9, 10]. SBL is structured based on a needs assessment to identify the learning objectives and outcomes needed for the learners.

Healthcare simulation scenarios are defined as “modeled on real-life situations that often include a sequence of learning activities that involve complex decision making, problem-solving strategies, intelligent reasoning, and other complex cognitive skills” [11]. Healthcare simulation scenarios usually include the goals, learning objectives, debriefing, scenario narrative, description of the clinical simulation encounter, staff requirements and instructions, simulation theater setup, simulation modality and operation, scenario props, and instructions for standardized patients [1]. Figure 1 identifies the significant steps and stages required to write a healthcare simulation scenario based on Alinier (2011) and Seropian's work (2003) [1, 12].

Fig. 1 Stages of healthcare simulation scenario writing. The figure is designed based on the information provided in Alinier (2011) and Seropian's work (2003) [1, 12]

Any simulation scenario utilized for education is expected to be evidence-based in design and high quality [3, 13]. There are resources and templates published and available online to assist educators in healthcare education in developing and writing simulation scenarios [1, 13,14,15,16,17,18].

Ensuring the quality of simulation scenarios is difficult, as several elements affect the simulation experience [19]. Based on the scenario development stages and steps shown in Fig. 1, rigorous and professional training of simulation educators and simulationists is required to ensure that they can develop and implement high-quality simulation scenarios and curricula [20]. However, such training modules and programs are not monitored by an accreditation entity, and their outcomes, assessments, and methods are rarely reported [21]. Few instruments evaluate aspects of the simulation experience during its conduct, namely debriefing and feedback; only one validated instrument, the “Simulation Scenario Evaluation Tool (SSET),” evaluates the simulation scenario components and scenario design [22, 23]. That instrument was recently developed using a modified Delphi method and focused on defining expectations for developing quality scenarios [22]. It was developed based on the available literature and a subsequent review of six published simulation scenario templates. The instrument consists of six components that determine scenario quality, each rated against a corresponding anchor and scale: learning objectives, critical context/scenario overview, critical actions, patient states, scenario materials and resources, and a debriefing plan [22].

This instrument is considered the first to evaluate the quality of simulation scenarios; however, the authors reported some limitations in their study [22]. It included a limited number of participants during the first and second rounds of the survey. Selection bias was identified as a limitation, and the partial response in the second survey round might have affected the analysis of certain items [22, 24]. Due to the unmet need for valid instruments that evaluate critical components of simulation scenarios, developing instruments that measure and assess the quality of the components of the healthcare simulation scenarios is vital [22]. This research aimed to develop and validate an instrument that measures the quality of the components included in the healthcare simulation scenarios.

Methods

A sequential transformative mixed-method approach was used to develop and validate an instrument that measures the quality of healthcare simulation scenarios. The study followed two phases to create and validate the instrument: the qualitative phase, followed by the quantitative phase. The method of development and validation was adapted from Benson and Clark's work [25].

Phase I: qualitative phase

Phase I is the qualitative phase of instrument development. It involved two steps: planning and developing the theoretical background, followed by instrument construction. In the first step, the instrument's aim, domains, and framework were defined and established based on a literature review. An extensive review covered available literature that discussed or reported on two areas: quality evaluation or assessment instruments for simulation scenarios, and healthcare simulation scenario guidelines.

After the literature review was conducted and the evidence critically appraised by two reviewers from the research team, all constructs, domains, and operational definitions of components that define the quality of simulation scenarios were summarized into overarching domains and subdomains to set up a framework for the instrument. Keywords used for the search were: healthcare simulation scenario, quality healthcare simulation scenario, simulation scenario guidelines, simulation scenario quality, and simulation scenario procedure. The databases included in the literature review were the Cochrane Library, PubMed, Medline, Joanna Briggs Institute EBP Database, and Web of Science Core Collection.

Results of the literature review were also used to construct the script for the focus groups of experienced simulation educators [25]. The focus groups were conducted to discuss the proposed framework and to investigate new themes determining the quality of healthcare simulation scenarios. The focus groups were recorded, the recordings were analyzed, and the resulting themes and concepts were combined with the literature review findings to finalize the instrument's framework.

The second step involved writing the instrument's items based on the framework established in the first step. After the items were written and the instrument was reviewed by the research team, its content validity was determined by healthcare simulation experts; five experts were included in the content validity process [26]. Content and face validation was done by providing a copy of the instrument to the experts, who evaluated whether it accurately assessed the quality of healthcare simulation scenarios and provided feedback on each item. The last step was revising the instrument and developing new items based on the experts' validation reports.

Phase (II): quantitative phase

The second phase included two steps: instrument piloting, followed by instrument validation. The instrument was piloted among healthcare simulation educators. It was sent to them as a hard copy and as an online survey, with instructions to use scenarios from the scenario library of the Simulation and Skills Development Center (SSDC) at Princess Nourah University. The scenarios in the library target different healthcare specialties. A total of 129 scenarios were evaluated using the instrument in the pilot stage; those scenarios had previously been piloted in the SSDC and archived in the library afterward. The educators included in the piloting were clinical simulation educators with experience in writing and conducting health simulation scenarios: staff or faculty with an educational training background who had more than one year of involvement in simulation activities or who had undergone training in writing health simulation scenarios.

After that, during the instrument validation step, the instrument underwent exploratory and confirmatory factor analysis to identify the underlying components and factors; items pointing to the same dimensions should load onto the same factors. The internal consistency of the factors was checked using Cronbach's alpha coefficient. Additionally, the correlation between questions loading on the same factor was examined to ensure the instrument's answers were consistent. The reliability and validity test results, together with the qualitative analysis of participants' feedback, were used to determine whether the instrument's items should be revised, deleted, or reduced. Changes were made to the evaluated dimensions of simulation scenarios based on these results and the theoretical background formulated in phase I. Following these revisions, the final content of the instrument for evaluating healthcare simulation scenarios was formulated and finalized.

Statistical analysis

The qualitative analysis method for the focus group results was “constant comparison analysis” [27]. Constant comparison analysis is characterized by three stages: in the first stage (open coding), the data are chunked into small units and the researcher attaches a code to each unit; these codes are grouped into categories during the second stage (axial coding); finally, in the third stage (selective coding), the researcher develops themes that express the content of each group based on the categories and codes from the first and second stages [27]. Structural validity of the instrument was assessed using exploratory and confirmatory factor analysis, and Cronbach's alpha coefficients were calculated to measure the instrument's reliability. For content validity, after summarizing the reviewers' comments, the item content validity index (I-CVI) was calculated for each item. The I-CVI is defined as the proportion of content experts giving the item a relevance rating of 3 or 4: I-CVI = (number of experts rating the item 3 or 4) / (number of experts). Items with an I-CVI below 0.8 were deleted. Additionally, the average of the I-CVI scores for all items on the scale (S-CVI/Ave), or the average proportion of relevance judged by all experts, was calculated as: S-CVI/Ave = (sum of I-CVI scores) / (number of items). The construct validity was investigated with three different confirmatory factor analysis (CFA) models: a one-factor model, a ten-factor model, and a second-order ten-factor model. The lavaan R package was used to conduct the CFA analyses. The construct validity was also investigated using principal axis factoring with Kaiser normalization.
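
To make the content validity calculation concrete, the following is a minimal sketch in R (the language used for the study's factor analyses). The ratings matrix and object names are illustrative assumptions, not the study's data; only the formulas mirror the definitions above.

```r
# Hypothetical relevance ratings: 55 items (rows) x 5 experts (columns),
# each rated 1 (not relevant) to 4 (highly relevant). Illustrative data only.
set.seed(1)
ratings <- matrix(sample(1:4, 55 * 5, replace = TRUE,
                         prob = c(0.05, 0.10, 0.35, 0.50)),
                  nrow = 55, ncol = 5)

# I-CVI: proportion of experts rating an item 3 or 4
i_cvi <- rowMeans(ratings >= 3)

# S-CVI/Ave: average of the I-CVI scores across all items
s_cvi_ave <- mean(i_cvi)

# Items below the 0.8 retention threshold would be flagged for deletion
flagged <- which(i_cvi < 0.8)
```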

Ethical considerations

The study protocol was approved by the Princess Nourah University Institutional Review Board (IRB); the IRB number was 19–0105. Participants' consent for the focus groups was acquired verbally and in writing. The research data were kept secure, and only the research team could access the focus group recordings.

Results

The study results are divided into two sections based on the pre-defined phases of the development and validation of the instrument. Each phase was divided into two steps.

Phase I: qualitative phase

Step 1: Instrument theoretical background

The purpose of the instrument was to assess the quality of healthcare simulation scenario components. An extensive review of all literature discussing the quality of healthcare simulation scenarios was done to establish a framework for the instrument. Additionally, published or otherwise available templates were included in the literature review. The literature review findings indicated that six major domains determine the quality of a scenario: learning objectives, patient's case, scenario setting, scenario flow, critical actions, and debriefing. Each domain was divided into sub-domains. The first domain, learning objectives, specified that learning objectives must be written in SMART format (Specific, Measurable, Achievable, Relevant, and Time-Bound) [28,29,30]. A critical part of writing learning objectives was utilizing Bloom's taxonomy [31, 32]. Additionally, learning objectives must be aligned with the learner's level [1, 14]. The second domain was the patient case, which focused on the patient's medical history, diagnosis, and demographic data [1, 33, 34].

The third domain was the scenario setting, which included fidelity, defined by how well the environment of the simulation event matches the actual clinical setting and how the equipment and simulation modality utilized for the scenario imitate the clinical setting [1, 13, 28]. The fourth domain, scenario flow, focused on progressing patient parameters appropriately in response to the learners' actions and prompting, ensuring a smooth transition of the scenario flow. The fifth domain (critical actions) was defined by the learner actions required to achieve the scenario objectives and simulation outcomes [1, 12, 14,15,16,17]. The final domain was debriefing; the literature stated that the debriefing time and method should be appropriate for the progression and level of complexity of the scenario [13, 20]. Additionally, much of the literature focused on the facilitator's experience and its effect on the scenario outcome and learning experience [13, 20].

Virtual focus groups were conducted with experienced simulation educators to discuss their opinions on the quality indicators of healthcare simulation scenarios and the domains found in the literature review. Three focus groups were conducted with 17 simulation educators whose experience in simulation-based education ranged from 2 to 15 years. Participants involved exclusively in the operation of simulation activities were excluded from the study [35]. The focus group questions were written based on an in-depth literature review of available articles describing the quality indicators of simulation scenarios in healthcare education [2, 6, 13]. The aim was to discuss quality indicators of simulation scenarios; the questions revolved around the participants' experience in conducting simulation scenarios and their opinions of the factors considered important to scenario design. Mujlli et al. detail the qualitative study protocol and steps [35].

Participants were selected from the LinkedIn website based on the information provided on their public page and by recommendations from local simulation experts [36]. Constant comparison analysis was used to analyze the focus group audio recordings [27]. The analysis was done by two researchers and was reviewed by the research team during and after completion to detect inconsistency in findings [37].

The following themes were found after analyzing the focus group transcripts: learning objectives, required pre-reading, target group, culture, scenario case, briefing, scenario complexity, fidelity, scenario flow, debriefing, and assessment. Figure 2 shows the results of the constant comparison analysis of the focus groups.

Fig. 2 Constant comparison analysis results from the focus groups

Step 2: instrument construction

In the second step, the measurement instrument was written based on the framework established in the first step. Items were written according to the established framework, and the instrument's rating scale was chosen to reflect achievement of each quality domain: Meets expectations (2), Needs improvement (1), Inadequate (0). The instrument underwent four rounds of review by the research team before the first version was finalized. The first version had 55 items and 12 sections (Additional file 1: Appendix A).

After the first version of the instrument was finalized, it was sent to five experts in healthcare simulation for face and content validation. A copy of the instrument was sent to each expert, and a virtual interview was scheduled to discuss the expert's feedback. The interviews thoroughly discussed each reviewer's feedback on every item and section of the instrument. Each expert was asked to judge the relevancy of each item on the following scale: highly relevant (4), relevant (3), somewhat relevant (2), and not relevant (1). After summarizing the reviewers' comments (Additional file 2: Appendix B), the S-CVI/Ave of the instrument was 0.87 (Additional file 3: Appendix C). The final step was revising the instrument and developing new items based on the experts' validation reports. The scoring scale for the instrument was: Meets expectations (2), Needs improvement (1), Inadequate (0).

Phase (II): quantitative phase

Step 1: instrument piloting

The instrument was piloted among simulation educators in the SSDC and one educator from outside the organization. The educators were assigned a specific number of scenarios to review and were free to choose from the scenarios in the SSDC scenario library or from scenarios they had implemented in their own simulation activities. Seven educators from different specialties were included in the piloting; their experience in simulation design and the conduct of simulation activities ranged from 2 to 15 years. Table 1 shows the piloting report of the SSQI.

Table 1 Piloting report of the simulation scenario quality instrument (SSQI)

Step 2: Instrument validation and reliability test

Construct validity and reliability analysis

The construct validity was investigated using principal axis factoring with Kaiser normalization. For this analysis, the first step involved running a factor analysis on the items to ascertain the covariation among the items and whether the patterns fit well into the SSQI constructs. Based on the exploratory factor analysis (EFA) results, nine items with factor loadings of less than 0.3 were excluded from the instrument [38]. The factor analysis yielded eleven variance-explaining factors, fewer than the number of sections in the original framework of the instrument.
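
As a rough sketch of this step, the EFA could be run in R with the psych package; `ssqi` is a hypothetical data frame of item scores (one row per evaluated scenario), and the rotation setting is an assumption about the exact configuration used, with `fm = "pa"` requesting principal axis factoring.

```r
library(psych)

# Hypothetical data frame `ssqi`: one row per evaluated scenario,
# one column per instrument item (scored 0, 1, or 2)
efa <- fa(ssqi, nfactors = 11, fm = "pa", rotate = "varimax")

# Print the loadings, suppressing those below the 0.3 cutoff
print(efa$loadings, cutoff = 0.3)

# Flag items whose largest absolute loading falls below 0.3 for exclusion
max_load <- apply(abs(unclass(efa$loadings)), 1, max)
names(max_load)[max_load < 0.3]
```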

The internal reliability of the instrument was investigated using Cronbach's alpha [39]. Results indicated that the alpha for the total scale was 0.92. Examination of individual item statistics did not show the need to eliminate items to increase the scale's reliability (Additional file 4: Appendix D). After reviewing and editing the instrument, the factors were revised and renamed based on the general context of the items and the research team's input. Then, confirmatory factor analyses were conducted to investigate the construct validity of the revised instrument.
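
A sketch of the reliability check, assuming the retained items sit in a hypothetical data frame `ssqi_items`: the psych package reports both the overall Cronbach's alpha and the alpha-if-item-deleted statistics referenced above.

```r
library(psych)

rel <- alpha(ssqi_items)   # Cronbach's alpha for the retained items
rel$total$raw_alpha        # overall scale alpha (reported as 0.92 in this study)
rel$alpha.drop             # scale alpha if each item were deleted (cf. Appendix D)
```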

Confirmatory Factor Analyses (CFA) results

In this section, the construct validity of the SSQI was examined using different confirmatory factor analysis (CFA) models: the one-factor CFA model, where all items in the survey load on one latent factor; the ten-factor CFA model, where each survey domain was treated as a factor; and the second-order CFA model. Different fit measures were reported and used to assess model-data fit and to determine the CFA model that best fits the data.

The most commonly used fit measures are the chi-square statistic, CFI (the comparative fit index), TLI (the Tucker-Lewis index), and RMSEA (root mean square error of approximation), which provide insight into the degree of data fit for a given model. Different criteria have been proposed to evaluate the degree of model fit. Hu and Bentler (1999) proposed that an RMSEA of at most 0.06 together with CFI and TLI fit measures of at least 0.95 (RMSEA ≤ 0.06, CFI ≥ 0.95, and TLI ≥ 0.95) indicates a good fit [40]. Additionally, less stringent criteria were proposed by Marsh, Hau, and Wen (2004), in which CFI ≥ 0.90, TLI ≥ 0.90, and RMSEA ≤ 0.08 indicate an acceptable model-data fit [41].
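
The paper reports using the lavaan R package for these models; the following is a minimal sketch of a ten-factor specification and the fit measures above. The item names (LO1, LO2, ...), the data frame `ssqi`, and the `ordered = TRUE` treatment of the 0–2 item scale are illustrative assumptions, not the study's actual model syntax.

```r
library(lavaan)

# Ten-factor model: one latent factor per SSQI domain; item labels are placeholders
model10 <- '
  LO =~ LO1 + LO2 + LO3 + LO4
  TG =~ TG1 + TG2
  # ... one "factor =~ items" line for each remaining domain
  # (Cu, SCa, SN, SCm, SF, Fd, Db, At)
'
fit10 <- cfa(model10, data = ssqi, ordered = TRUE)

# Fit measures compared against Hu and Bentler's (1999) cutoffs:
# CFI/TLI >= 0.95 and RMSEA <= 0.06 indicate good fit
fitMeasures(fit10, c("chisq", "df", "cfi", "tli", "rmsea"))
```

A one-factor model is specified analogously by loading every item on a single latent factor, and a second-order model by regressing the ten domain factors on one general quality factor.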

The CFA analyses were first conducted based on the initial factorial structure of the SSQI, which contained 40 items distributed across ten domains (Table 2). According to the one-factor and ten-factor CFA results, two items showed a poor fit to the model, with factor loadings less than 0.30. These poorly fitting items were excluded from the instrument, and the CFA models were then tested again. Table 2 provides the results of the CFA models.

Table 2 The results of one-factor, ten-factor, and second-order CFA models

The fit measures in Table 2 show that the CFI and TLI statistics were above 0.95 for all CFA models. However, the RMSEA values for the one-factor and second-order models were higher than 0.06, whereas the ten-factor model (RMSEA = 0.052) showed a better fit than the other two models. Additionally, investigating the factor loadings for the ten-factor model revealed that all items had factor loadings greater than 0.30 (Table 3). Moreover, since the ten-factor model had CFI and TLI fit measures greater than 0.95 and an RMSEA value less than 0.06, a good fit between the ten-factor CFA model and the data was achieved, as Hu and Bentler (1999) suggested. These results indicate that the ten-factor CFA model with 38 items achieved robust construct validity. The factor loadings and path diagram of the ten-factor model are provided in Table 3 and Fig. 3, respectively. Additionally, Table 4 provides the final version of the SSQI after the CFA analyses.

Table 3 Factor loadings of the ten-factor CFA model

Fig. 3 Path diagram of the ten-factor CFA model. Abbreviations: LO = Learning objectives; TG = Target group; Cu = Culture; SCa = Scenario case; SN = Scenario narrative briefing; SCm = Scenario complexity; SF = Scenario flow; Fd = Fidelity; Db = Debriefing; At = Assessment

Table 4 Final version of Simulation Scenarios Quality Instrument (SSQI)

Discussion

This study described the process of developing and validating the SSQI for evaluating the quality of healthcare simulation scenarios [25]. Only one previously published tool, the SSET, was developed to evaluate simulation scenarios [22]; to the authors' knowledge, this is the second attempt to develop and validate an instrument for assessing the quality of healthcare scenarios. The internal reliability of the instrument was measured using Cronbach's alpha, and the results indicated that the alpha for the total scale was 0.92 [39]. The results of the content validation showed moderate agreement on the components of the healthcare simulation scenario that the instrument should assess: scenario case, culture, patient demographic information, patient medical information, environment fidelity, patient fidelity, and debriefing. The final version of the instrument included factors and items consistent with several guidelines and studies that investigated the elements of simulation scenarios [2, 10, 42].

According to Lioce et al.'s best practice of simulation design, the SSQI covers 8 of the 11 elements listed as a framework for developing effective simulation scenarios [10]: measurable objectives, simulation format, clinical scenario or case, fidelity, facilitator and facilitative approach, briefing, debriefing, and evaluation [10]. Another study investigated the quality indicators for developing and implementing simulation experiences using the Delphi method [28]. Two of those quality indicators aligned with the final elements of the SSQI. The “Pedagogical principles” indicator stated that simulation experiences should align with the curriculum, the program, and the learning objectives, which is reflected in the first two SSQI elements: learning objectives and target group. The second indicator, “Fidelity,” noted that the simulation technology and environmental fidelity should be aligned with the learning objectives, as stated in the SSQI items under the same name [28].

Recent studies have described similar frameworks for developing simulation experiences. Hewat et al. listed the process of designing simulation-based experiences for speech-language pathology, which included developing simulation scenarios based on Lioce et al.'s 2015 work referenced above, and recommended the framework for other disciplines [43]. Another recent study described the steps required to develop simulation scenarios, emphasizing the most relevant aspects of the design [44]. The steps listed were all found in the SSQI tool: objectives, simulation format, case description, realism, pre-debriefing, debriefing, and evaluation [44].

Multiple simulation scenario templates have been developed to assist educators with developing evidence-based simulation scenarios, and the SSQI has similar elements to these templates. In Munroe et al.'s study, the authors devised a new simulation scenario template for research purposes [45]. Elements included in the new template were similar to the SSQI quality indicators: modality and room setup (defined in the SSQI as fidelity), patient profile (scenario case in the SSQI), narrative description of the scenario, physiological parameters and patient progress (scenario flow in the SSQI), and post-simulation debriefing [45]. Another template, the Template of Events for Applied and Critical Healthcare Simulation (TEACH Sim), was developed in 2015 for critical healthcare simulation [14]. This template aimed to assist educators and clinicians in developing simulation scenarios and overcoming the potential challenges they might face. The template sections were designed similarly to the SSQI but used different phrasing: the learning objectives section in the SSQI was the same as in TEACH Sim, while the SSQI's scenario case corresponded to TEACH Sim's clinical context and patient profile, and fidelity was divided into modality and equipment props [14].

A similar tool, the “Simulation Scenario Evaluation Tool (SSET),” was developed in 2019 to evaluate the quality of simulation scenarios using the modified Delphi method. That instrument was developed by reviewing the literature and published simulation scenario design templates, and it includes six components of scenario quality with corresponding scores and anchors. The tool was then sent to a national group of experts to reach consensus on the final assessment instrument. It was validated by simulation educators using content validity and showed a significant level of agreement (p < 0.05); it went through a two-round Delphi approach, with 38 complete responses in the first round and 22 in the second. The SSQI was developed using a different method, and both content and construct validity were tested. Content validity was established using the average content validity index, which was 0.87. The SSQI was also tested for construct validity and showed a good fit to the proposed model, which was developed by researching simulation design best practices and consulting content experts in simulation from different experience levels and clinical backgrounds. The Cronbach's alpha of the instrument was 0.92.

Scenario design is a complex process, and it is recommended that simulation experts use published templates to assist in writing healthcare simulation scenarios [1]. The majority of the feedback and reviews of scenarios are subjective and unstructured [22]. The only instrument found in the literature that evaluates written simulation scenarios was the SSET; while it is the first instrument to assess simulation scenario quality, its authors noted that it was validated by content experts only, whereas the current instrument draws on multiple sources of validity evidence: content validity and construct validity [22]. This gap supports the importance of developing a validated assessment instrument to determine the quality of healthcare simulation scenarios.

Limitations

The study has some limitations that need to be addressed. First, the instrument was not piloted again after the construct validity analysis in phase II. Second, the number of simulation scenarios evaluated in the pilot and the number of reviewers who utilized the instrument were limited. Finally, no cut-off points were established to determine the level of quality that each final score indicates.

Conclusions

The validity and reliability analysis results imply that the SSQI is a valid and reliable instrument for assessing the quality of healthcare simulation scenarios. The tool provides simulation educators and scenario writers with the elements essential to designing high-quality scenarios. Future research should conduct a second pilot of this instrument with a larger pool of subjects to investigate inter-rater reliability among raters.

Availability of data and materials

The databases used and analyzed during the study are available from the corresponding author upon reasonable request.

Abbreviations

At: Assessment
CFA: Confirmatory factor analysis
CFI: Comparative fit index
Cu: Culture
Db: Debriefing
df: Degrees of freedom
EFA: Exploratory factor analysis
Fd: Fidelity
I-CVI: Item content validity index
IRB: Institutional Review Board
LO: Learning objectives
RMSEA: Root mean square error of approximation
SBL: Simulation-based learning
SBT: Simulation-based training
SCa: Scenario case
SCm: Scenario complexity
S-CVI/Ave: Average of the item content validity index scores
SF: Scenario flow
SMART: Specific, Measurable, Achievable, Relevant, Time-Bound
SN: Scenario narrative briefing
SSDC: Simulation and Skills Development Center
SSET: Simulation Scenario Evaluation Tool
SSQI: Simulation Scenario Quality Instrument
TG: Target group
TLI: Tucker-Lewis index
χ2: Chi-square

References

1. Alinier G. Developing high-fidelity health care simulation scenarios: a guide for educators and professionals. Simul Gaming. 2011;42(1):9–26. Available from: http://journals.sagepub.com/doi/10.1177/1046878109355683.
2. Barry Issenberg S, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27(1):10–28. Available from: http://www.tandfonline.com/doi/full/10.1080/01421590500046924.
3. Sørensen JL, Østergaard D, LeBlanc V, Ottesen B, Konge L, Dieckmann P, et al. Design of simulation-based medical education and advantages and disadvantages of in situ simulation versus off-site simulation. BMC Med Educ. 2017;17(1):20. Available from: http://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-016-0838-3.
4. Bradley P. The history of simulation in medical education and possible future directions. Med Educ. 2006;40(3):254–62. Available from: https://onlinelibrary.wiley.com/doi/10.1111/j.1365-2929.2006.02394.x.
5. Nagendran M, Gurusamy KS, Aggarwal R, Loizidou M, Davidson BR. Virtual reality training for surgical trainees in laparoscopic surgery. Cochrane Database Syst Rev. 2013;2013(8):CD006575. https://doi.org/10.1002/14651858.CD006575.pub3.
6. Hayden J. Use of simulation in nursing education: national survey results. J Nurs Regul. 2010;1(3):52–7. Available from: https://linkinghub.elsevier.com/retrieve/pii/S2155825615303355.
7. Nguyen N, Watson WD, Dominguez E. An event-based approach to design a teamwork training scenario and assessment tool in surgery. J Surg Educ. 2016;73(2):197–207. https://doi.org/10.1016/j.jsurg.2015.10.005.
8. Gaba DM. The future vision of simulation in health care. Qual Saf Health Care. 2004;13(Suppl 1):i2–10. Available from: https://qualitysafety.bmj.com/lookup/doi/10.1136/qshc.2004.009878.
9. Pilcher J, Heather G, Jensen C, Huwe V, Jewell C, Reynolds R, et al. Simulation-based learning: it's not just for NRP. Neonatal Netw. 2012;31(5):281–8. Available from: http://connect.springerpub.com/lookup/doi/10.1891/0730-0832.31.5.281.
10. Lioce L, Meakim CH, Fey MK, Chmil JV, Mariani B, Alinier G. Standards of best practice: simulation standard IX: simulation design. Clin Simul Nurs. 2015;11(6):309–15. https://doi.org/10.1016/j.ecns.2015.03.005.
11. Nadolski RJ, Hummel HGK, van den Brink HJ, Hoefakker RE, Slootmaker A, Kurvers HJ, et al. EMERGO: a methodology and toolkit for developing serious games in higher education. Simul Gaming. 2008;39(3):338–52. Available from: http://journals.sagepub.com/doi/10.1177/1046878108319278.
12. Seropian MA. General concepts in full scale simulation: getting started. Anesth Analg. 2003;97(6):1695–705. Available from: http://journals.lww.com/00000539-200312000-00030.
13. Waxman KT. The development of evidence-based clinical simulation scenarios: guidelines for nurse educators. J Nurs Educ. 2010;49(1):29–35. Available from: https://journals.healio.com/doi/10.3928/01484834-20090916-07.
14. Benishek LE, Lazzara EH, Gaught WL, Arcaro LL, Okuda Y, Salas E. The template of events for applied and critical healthcare simulation (TEACH Sim). Simul Healthc. 2015;10(1):21–30. Available from: https://journals.lww.com/01266021-201502000-00004.
15. Dieckmann P, Rall M. Designing a scenario as a simulated clinical experience. In: Clinical simulation. Elsevier; 2008. p. 541–50. Available from: https://linkinghub.elsevier.com/retrieve/pii/B9780123725318500960.
16. Impact Health. California Simulation Alliance simulation scenario template. Oakland: HealthImpact; 2016. Available from: https://healthimpact.org/wp-content/uploads/2010/04/CSA-Scenario-Template-4-2011.pdf.
17. Society for Academic Emergency Medicine (SAEM). Special interest group simulation scenario template. Simulation Academy. 2008. p. 1–6. Available from: https://simulation.unc.edu/files/2015/02/SAEM_blank_template.doc. Cited 2022 Jul 11.
18. Simon R, Raemer D, Rudolph J. Debriefing Assessment for Simulation in Healthcare© – Student Version, Short Form. Center for Medical Simulation. 2010. Available from: https://harvardmedsim.org/wp-content/uploads/2016/10/DASH_SV_Short_2010.pdf.
19. Rutherford-Hemming T. Determining content validity and reporting a content validity index for simulation scenarios. Nurs Educ Perspect. 2015;36(6):389–93. Available from: http://journals.lww.com/00024776-201511000-00008.
20. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. A critical review of simulation-based medical education research: 2003–2009. Med Educ. 2010;44(1):50–63. Available from: https://onlinelibrary.wiley.com/doi/10.1111/j.1365-2923.2009.03547.x.
21. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Revisiting 'A critical review of simulation-based medical education research: 2003–2009.' Med Educ. 2016;50(10):986–91. Available from: https://onlinelibrary.wiley.com/doi/10.1111/medu.12795.
22. Hernandez J, Frallicciardi A, Nadir N-A, Gothard MD, Ahmed RA. Development of a Simulation Scenario Evaluation Tool (SSET): modified Delphi study. BMJ Simul Technol Enhanc Learn. 2020;6(6):344–50. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8936988/.
23. Brett-Fleegler M, Rudolph J, Eppich W, Monuteaux M, Fleegler E, Cheng A, et al. Debriefing assessment for simulation in healthcare. Simul Healthc. 2012;7(5):288–94. Available from: https://journals.lww.com/01266021-201210000-00004.
24. Fink-Hafner D, Dagen T, Doušak M, Novak M, Hafner-Fink M. Delphi method. Adv Methodol Stat. 2019;16(2):1–19. Available from: https://mz.mf.uni-lj.si/article/view/184.
25. Benson J, Clark F. A guide for instrument development and validation. Am J Occup Ther. 1982;36(12):789–800. Available from: https://research.aota.org/ajot/article/36/12/789/634/A-Guide-for-Instrument-Development-and-Validation.
26. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67. Available from: https://onlinelibrary.wiley.com/doi/10.1002/nur.20199.
27. Onwuegbuzie AJ, Dickinson WB, Leech NL, Zoran AG. A qualitative framework for collecting and analyzing data in focus group research. Int J Qual Methods. 2009;8(3):1–21. Available from: http://journals.sagepub.com/doi/10.1177/160940690900800301.
28. Arthur C, Levett-Jones T, Kable A. Quality indicators for the design and implementation of simulation experiences: a Delphi study. Nurse Educ Today. 2013;33(11):1357–61. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0260691712002511.
29. Chatterjee D, Corral J. How to write well-defined learning objectives. J Educ Perioper Med. 2017;19(4):E610. Available from: http://www.ncbi.nlm.nih.gov/pubmed/29766034.
30. Lopreiato J. Healthcare simulation dictionary. Lioce L, editor. Agency for Healthcare Research and Quality; 2020. p. 33. Available from: https://www.ahrq.gov/sites/default/files/publications/files/sim-dictionary.pdf.
31. Frallicciardi A, Vora S, Bentley S, Nadir N, Cassara M, Hart D, et al. Development of an emergency medicine simulation fellowship consensus curriculum: initiative of the Society for Academic Emergency Medicine Simulation Academy. Acad Emerg Med. 2016;23(9):1054–60. Available from: https://onlinelibrary.wiley.com/doi/10.1111/acem.13019.
32. da Silva Garcia Nascimento J, Siqueira TV, de Oliveira JLG, Alves MG, da Silva Garcia Regino D, Dalri MCB. Development of clinical competence in nursing in simulation: the perspective of Bloom's taxonomy. Rev Bras Enferm. 2021;74(1):e20200135. Available from: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0034-71672021000100304&tlng=en.
33. de Melo BCP, Falbo AR, Muijtjens AMM, van der Vleuten CPM, van Merriënboer JJG. The use of instructional design guidelines to increase effectiveness of postpartum hemorrhage simulation training. Int J Gynecol Obstet. 2017;137(1):99–105. Available from: https://onlinelibrary.wiley.com/doi/10.1002/ijgo.12084.
34. Lasater K. Clinical judgment development: using simulation to create an assessment rubric. J Nurs Educ. 2007;46(11):496–503. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18019107.
35. Almujlli G, Alrabah R, Al-Ghosen A, Munshi F. Conducting virtual focus groups during the COVID-19 epidemic utilizing videoconferencing technology: a feasibility study. Cureus. 2022;14(3):e23540. Available from: https://www.cureus.com/articles/90885-conducting-virtual-focus-groups-during-the-covid-19-epidemic-utilizing-videoconferencing-technology-a-feasibility-study.
36. LinkedIn. 2020. Available from: https://www.linkedin.com/feed/.
37. Krueger RA. Analyzing and reporting focus group results, vol. 6. Thousand Oaks: Sage Publications; 1997.
38. Watkins MW. Exploratory factor analysis: a guide to best practice. J Black Psychol. 2018;44(3):219–46. Available from: http://journals.sagepub.com/doi/10.1177/0095798418771807.
39. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16(3):297–334. Available from: http://link.springer.com/10.1007/BF02310555.
40. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55. Available from: http://www.tandfonline.com/doi/abs/10.1080/10705519909540118.
41. Marsh HW, Hau K-T, Wen Z. In search of golden rules: comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Struct Equ Model. 2004;11(3):320–41. Available from: http://www.tandfonline.com/doi/abs/10.1207/s15328007sem1103_2.
42. Motola I, Devine LA, Chung HS, Sullivan JE, Issenberg SB. Simulation in healthcare education: a best evidence practical guide. AMEE Guide No. 82. Med Teach. 2013;35(10):e1511–30. Available from: http://www.tandfonline.com/doi/full/10.3109/0142159X.2013.818632.
43. Hewat S, Penman A, Davidson B, Baldac S, Howells S, Walters J, et al. A framework to support the development of quality simulation-based learning programmes in speech–language pathology. Int J Lang Commun Disord. 2020;55(2):287–300. Available from: https://onlinelibrary.wiley.com/doi/10.1111/1460-6984.12515.
44. Kaneko RMU, de Moraes Lopes MHB. Realistic health care simulation scenario: what is relevant for its design? Rev Esc Enferm USP. 2019;53:e03453. Available from: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0080-62342019000100602&tlng=en.
45. Munroe B, Buckley T, Curtis K, Morris R. Designing and implementing full immersion simulation as a research tool. Australas Emerg Nurs J. 2016;19(2):90–105. https://doi.org/10.1016/j.aenj.2016.01.001.


Acknowledgements

We want to acknowledge all the experts who gave their valuable input into the development and validation of the instrument. We want to thank our colleagues: Dr. Usamah Alzoraigi, Mr. Essam Abdulaziz Turkistani, Mr. Abdulaziz Faraj Aloraidy, Dr. Dania Al-Jaroudi, Dr. Madonna Yehia, Dr. Shadi Almoziny, Ms. Salwa Almansouri, Ms. Ohud Alotaibi, Dr. Muna Aljahany, Ms. Kareemah Alenezi, Ms. Sarah Alotaibi, Ms. Charmaine Co, Dr. Mohammad Zaher, Dr. Ameera Cluntun, Dr. Faten Alradini, Dr. Paul Phrampus, Dr. Syed Jamil, Dr. Tagwa Omer, Ms. Amal Alghamdi, Dr. Waleed Alharbi, and Dr. Manal Alhalwani for their valuable input.

Funding

This research project was funded by the Deanship of Scientific Research, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding After Publication, grant No (44- PRFA-P- 134).

Author information

Authors and Affiliations

Authors

Contributions

GM contributed to this research by writing the study protocol and design, conducting the research, analyzing the data, and writing and editing the manuscript. RA contributed by conducting the research and writing the manuscript. FM contributed by reviewing the study protocol and the manuscript, conducting the study, and editing the manuscript. AA contributed by writing the study protocol and design, conducting the research, analyzing the data, and writing and editing the manuscript. BO contributed by performing the validity analysis, analyzing the data, and writing and editing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Gadah Mujlli.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Princess Nourah University Institutional Review Board (IRB); the IRB log number was 19–0105. All methods and procedures carried out in this study were in accordance with relevant guidelines and regulations. Informed consent was obtained from all subjects who participated in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Appendix A. Simulation Scenarios Quality Instrument (SSQI) – Version 1.

Additional file 2:

Appendix B. Content validity report of the simulation scenario quality instrument (SSQI).

Additional file 3:

Appendix C. The relevance ratings on the item scale by five experts.

Additional file 4:

Appendix D. Factor matrix of SSQI items and Cronbach alpha score if the item was deleted.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Mujlli, G., Al-Ghosen, A., Alrabah, R. et al. Development and validation of Simulation Scenario Quality Instrument (SSQI). BMC Med Educ 23, 972 (2023). https://doi.org/10.1186/s12909-023-04935-5
