
Evaluation of the simulation based training quality assurance tool (SBT-QA10) as a measure of learners’ perceptions during the action phase of simulation



Abstract

Background

In an earlier interview-based study, the authors identified that learners experience one or more of eight explicit perceptual responses during the active phase of simulation-based training (SBT): a sense of belonging to instructor and group, of being under surveillance, of having autonomy and responsibility for patient management, of realism, of understanding the scenario in context, of conscious mental effort, of control of attention, and of engagement with task. These were adapted into a ten-item questionnaire, the Simulation Based Training Quality Assurance Tool (SBT-QA10), to allow monitoring of modifiable factors that may affect learners’ experiences. This study assessed the construct validity evidence for the interpretation of results obtained with the SBT-QA10.

Materials and methods

Recently graduated doctors and nurses participating in an SBT course on the deteriorating patient completed the SBT-QA10 immediately after their participation in the scenarios. The primary outcome measure was the internal consistency of the questionnaire items and their correlation with learners’ satisfaction scores. A secondary outcome measure compared the impact of allocation to active versus observer roles.


Results

A total of 349 questionnaires were returned by 96 course learners. The median of the total score for the ten perception items (TPS) was 39 (out of 50), with no significant difference between the scenarios. We identified fair, positive correlations between nine of the ten items and the SBT-QA10-TPS, the exception being “mental effort”. Compared with observers, active learners reported significantly more positive perceptions related to belonging to the team and interaction with the instructor, their sense of acting independently, and being focused. The questionnaire items were poorly correlated with the two measures of global satisfaction.


Conclusions

Except for the mental effort item, the SBT-QA10-TPS measures learners’ experiences during the active phase of simulation scenarios that are associated with a positive learning experience. The tool may have utility for learners, instructors, and course providers by informing subsequent debriefing and reflection upon practice for learners and faculty. The relationship between these perceptions and commonly used measures of satisfaction remains poorly understood, raising questions about the value of the latter.



Background

The features of simulation-based training (SBT) that contribute to effective learning have been studied extensively. [1,2,3,4,5,6,7,8,9] SBT typically features three phases: pre-, in-, and post-action. The pre-action phase includes the introduction to the simulator and the scenario briefing, when learners are prepared for the in-action phase, where they actively interact with a simulator and a team to manage a clinical task. Facilitated debriefing conversations are most often conducted in the post-action (post-scenario) phase. All three phases of SBT are assumed to contribute to learning. [2,3,4,5,6,7, 10, 11] Learners’ real-time experiences during the in-action phase of simulation, and their impact on satisfaction or learning outcomes, have been investigated in several studies. [12,13,14] Simulation learners’ physical, psychological, and social perceptions act as fiction and/or reality cues in the in-action experience during scenarios. [15] Recently, we identified that learners experience one or more of the following eight explicit perceptual responses during the in-action phase of simulation, expressed in phrases evoking emotions, social relationships with others, and/or their cognitive understanding of the situation: [16] (1) belonging to instructor and group, (2) being under surveillance, (3) having autonomy and responsibility for patient management, (4) a sense of realism, (5) understanding of the scenario in context, (6) conscious mental effort, (7) control of attention, and (8) engagement with task.

These perceptions appeared to be consistently associated with learner-reported satisfaction with the activity. As an additional observation, learners appeared to associate these perceptions with modifiable aspects of instructional design and instructor behavior, which were termed ‘enablers’, and with factors predating the scenario, including prior experience and expectations of SBT. The authors concluded that it would be valuable to monitor these perceptions during courses: to intervene during a course if learners report negative perceptions that may impact upon their experience, to provide feedback to instructors, and to quality-assure instructional design. [16] Subsequently, we converted these perceptions into a ten-item questionnaire: the Simulation Based Training Quality Assurance Tool (SBT-QA10).

Our present study primarily aimed to assess the SBT-QA10’s construct validity in appraising learners’ perceptions during the action phase of SBT, for the purpose of informing the debrief and/or other aspects of the learning activity, and with the broader purpose of informing the design and delivery of future simulation programs.



Materials and methods

The study was approved by the local institutional review board (AU14E0935) and conducted as a single-center, prospective, single-cohort questionnaire study at the Sydney Clinical Skills and Simulation Centre (SCSSC), within the Northern Sydney Local Health District (NSLHD), in Sydney, Australia. Learners completed the SBT-QA10 at the end of scenarios in a standardized SBT course.

Considering that the intended application of the tool was relatively limited, namely personal reflection and program-level quality assurance, we adopted classical notions of criterion validity and construct validity after referring to Messick’s argument-based validity framework, [17] where these correspond closely to Messick’s dimension of relationships with other variables. Consequently, construct validity evidence was assessed by measuring the internal consistency of the questionnaire items, their correlation with satisfaction scores, and the comparative impact of assigning active versus observer roles on learners’ perceptions. [18]

We hypothesized that the SBT-QA10’s questionnaire items are internally consistent and that positive perceptions will be associated with positive overall satisfaction scores.

As further confirmation of construct validity, we also sought to appraise the tool against a modifiable determinant of learners’ experience. Emerging evidence suggests that one factor impacting the learning experience is the learner’s assigned role as either active participant or observer in the action phase of SBT. [18,19,20] The learners’ experience when in observer roles is relevant to the instructional design and facilitation of simulation programs. Consequently, we measured the comparative impact of assigning active versus observer roles on learners’ perceptions as a secondary measure of construct validity.


Learners were newly graduated nurses or doctors within their first 12 months of postgraduate work who were enrolled in an SBT-based course, entitled Detection, Evaluation, Treatment, Escalation and Communication in Teams (DETECT), to fulfil mandatory training requirements. The course addressed management of the deteriorating patient on a hospital ward and was similar in format to one used in previous studies. [16, 21] The study was performed in accordance with the Declaration of Helsinki and approved by the appropriate ethics committee. All learners received written and oral information regarding the study and voluntarily consented to having their data used for study purposes. Learners could withdraw their consent at any time.

Questionnaire development

The SBT-QA10 questionnaire tool was developed iteratively with the team members by adapting the definitions of the eight perceptions provided in the original qualitative study into action statements, and by incorporating questionnaire items addressing learner satisfaction that were used in a previously published evaluation of the DETECT course. [16, 21] In the original interview study, the learners described their perceptions in phrasing that suggested two overarching themes: psychosocial emotional responses and cognitive understanding of the situation. Belonging, for example, was a perception that was commonly expressed with predominantly psychosocial phrasing. In contrast, perceptions of conscious mental effort, task focus, and control of attention were predominantly concerned with the learners’ cognition. Other perceptions, including surveillance, responsibility, realism, and contextual understanding, showed a greater degree of inter-individual variation in that they had a psychosocial impact on some learners and an impact on the cognitive processes of others. [16, 21] We intentionally avoided classifying the questionnaire items into themes to enable further exploration through statistical factor analysis.

We developed two additional questionnaire items because two of the original perceptions each comprised two separate but related factors: the perception ‘belonging’ represented both the relationships formed within the participant group and those between learners and the instructor (Q1-2), [16] and the perception ‘responsibility’ represented both the amount of autonomy and independence available to the participant and the level of support available from the faculty member (instructor) (Q4-5). [16] In developing the SBT-QA10, we tried to avoid survey fatigue by alternating positive and negative statements in the questionnaire (discouraging “straight-lining” of answers). [22, 23] The final questionnaire contained ten perception items, of which seven were positively phrased and three were negatively phrased.

Two measures of satisfaction were also included in the final questionnaire. The phrases “I felt comfortable learning this way” and “I now feel more confident managing the clinical case” were derived, without changes in wording, from the previously cited study, where they were shown to discriminate between the modifiable factors of technology format (conventional face-to-face versus videoconference-enabled remote facilitation) and instructor experience with SBT. [21] A five-point Likert scale was used to score the level of agreement with each questionnaire item.

The readability of the questionnaire was also tested as part of an iterative process and was piloted by members of a teaching faculty within the SCSSC prior to deployment. [24, 25]

DETECT course

This course is part of a larger program of emergency response team training and has been described in detail in previous publications. [16, 21] DETECT courses have low complexity scenarios designed to rehearse identifying, communicating, and treating patient deterioration on a hospital ward. The scenarios are designed to be of equal difficulty and matched to the learners’ level of clinical experience and workplace roles. The course is run in a pause-and-discuss format where learners in groups of six to eight engage in one or two phases of action that are interspersed with targeted reflective debriefing conversations.

Following an introductory lecture, learners rotate through three simulation scenarios that employ either patient simulators or simulated patients. To accommodate larger group sizes, a fourth scenario may be conducted in the format of a table-top case scenario, whereby facilitators use a verbal description and visual props to present the scenario. [26]

During each scenario round, the groups are divided into learners undertaking active and observer roles. The learners self-select the roles, with instructors ensuring that all learners experience both roles across the scenario rounds. Learners with active roles are briefed that they would take part in the action phase of the scenario and debriefing conversations. Learners with observer roles are briefed that they would observe the action phase from within the room and contribute their perspective to the debriefing conversations during the pauses.

The course was slightly modified to enable the questionnaires to be completed. This included a preliminary briefing about the research project, distribution of the questionnaire forms, and collection of completed consent forms allowing analysis of the data. Immediately after the in-action phase, and before the post-scenario debriefing, the learners (both active participants and observers) were asked to fill in the questionnaire. Questionnaire completion was expected to require five minutes. As we did not intend to modify the DETECT course during the study period, the results were not analyzed by the authors until the study period was over.


Each DETECT course is delivered by three to four instructors. Each scenario has one instructor, who delivers that scenario three to four times per course as learners rotate between scenarios. Prior to instructing on the course, every instructor completes the course-specific instructor accreditation program and achieves the course standard for competency in scenario delivery and constructive, student enquiry-focused debriefing. [21] The authors were not involved in teaching on or recruiting for the included courses, to avoid bias.


As DETECT is mandatory training for the organization, it was important that the course remained unaffected by the study; therefore, no explicit randomization was performed. Learners were divided into equal-sized groups by the course leader during the course introduction, without any knowledge of the learners’ demographic attributes apart from gender, profession, and estimated age. No group of learners had identical rotations.

Outcome measures

Primary outcomes

Construct validity was assessed by correlating the scores for the individual SBT-QA10 items with the scores for the other perceptions, with the total score of the ten items, and with the scores for the two individual measures of satisfaction.

Secondary outcomes

Learners’ responses to the SBT-QA10 questionnaire were also compared according to their assigned roles as ‘active responders’ or ‘observers’, in line with our hypothesis that the strength of the perceptions would differ between the two groups. For instance, compared with responses provided by learners in active roles, we predicted that learners in observer roles would report lower scores for realism and mental workload. We also compared questionnaire responses between the second and first scenario undertaken in a given role. Here, we hypothesized that scores for some perceptions, such as being observed, mental workload, and feeling uncomfortable, might change, reflecting greater familiarity with the teaching environment and methods, an effect observed in a previous study. [13]


Data handling

Questionnaire results were collated in Microsoft Excel (version 16.23). Responses for the negatively phrased questionnaire items, and for the items predicted to have a negative impact on learners’ satisfaction, were reversed so that a high score for each item could be assumed to predict a positive experience. Questionnaires with missing responses for any of the twelve questionnaire items were treated as missing data and excluded from analysis.
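The reverse-coding rule described above can be sketched in a few lines of Python (the study used Excel; which item indices are negatively phrased is an assumption here for illustration, as the paper does not list them):

```python
# Reverse-code negatively phrased items on a 5-point Likert scale so that
# a high score always indicates a positive experience, then sum to a TPS.
# NOTE: which items are negatively phrased (here items 2, 7, 9) is an
# assumption for illustration; the paper states three of the ten were.

NEGATIVE_ITEMS = {2, 7, 9}  # 1-based indices of negatively phrased items (assumed)

def reverse_code(score: int) -> int:
    """Reverse a 5-point Likert response: 1<->5, 2<->4, 3 stays 3."""
    return 6 - score

def total_perception_score(responses: dict) -> int:
    """Sum ten item scores (max 50) after reverse-coding negative items."""
    return sum(
        reverse_code(s) if item in NEGATIVE_ITEMS else s
        for item, s in responses.items()
    )

# A complete response of "agree" (4) to every item:
responses = {i: 4 for i in range(1, 11)}
print(total_perception_score(responses))  # 7 items at 4, 3 reversed to 2 -> 34
```

With this convention, a questionnaire answered uniformly at the positive end of the scale yields the maximum score only on the positively phrased items, which is why the reversal must precede the summation.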

Statistical tests

Data were analyzed using IBM SPSS (version 21). Two-group comparisons of independent and dependent variables used the non-parametric Kruskal-Wallis test for ordinal or categorical data, or where interval data were not normally distributed. Correlations employed the two-tailed Kendall’s tau rank-order correlation test for non-parametric data. A value of p < 0.05 was accepted as significant for all tests.
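As an illustration of how Kendall’s tau summarizes agreement between two ordinal ratings, a minimal sketch of the simpler tau-a variant (no tie correction; statistical packages such as SPSS report the tie-corrected tau-b, which matters for 5-point Likert data) might look like:

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.
    A simplified sketch; the tie-corrected tau-b additionally adjusts
    the denominator for tied ranks."""
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
        # s == 0: tied pair, counted in neither (hence tau-a)
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Nearly monotone ordinal data with one tied pair:
print(kendall_tau_a([1, 2, 3, 4, 5], [2, 3, 4, 4, 5]))  # -> 0.9
```

A tau near +1 indicates that learners who score one item highly tend to score the other highly as well, which is the sense in which the nine retained items were reported to correlate fairly with the TPS.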

An exploratory principal component factor analysis (EPCFA) was conducted to further validate the questionnaire items and explore underlying themes. Generally accepted criteria for EPCFA were applied, including an eigenvalue > 1, a factor loading > 0.4, a Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy > 0.7, and Bartlett’s test of sphericity reaching significance at < 0.05. [27]

The eligible population for these groups averages 200 per annum. Based on previous studies, we assumed that 50 learners would provide adequate power (0.8) to detect a 10% difference between the groups at a two-sided significance level of p < 0.05. [8, 28]


Results

Study design, setting and learners

A total of 96 learners, from 103 eligible participants who attended the DETECT course on one of three dates between November 2018 and January 2019, agreed to participate in the study (Table 1). In total, 349 questionnaires were returned, of which 39 were removed because one or more items had been left unanswered. All 96 learners participated in three scenarios in which the action phase was conducted using a patient simulator or simulated patient, and 61 learners participated in a fourth scenario.

Table 1 Baseline Characteristics (n = 96)

SBT-QA10 questionnaire

The median scores and interquartile ranges (IQR), including the Total Perception Score (SBT-QA10-TPS), were calculated for the individual questionnaire items and the two items for comfort and confidence, within each round of scenarios and across all scenarios.

The median (IQR) SBT-QA10-TPS across all scenarios was 39 (5), with no significant differences observed between the scenarios (Kruskal-Wallis test, p = 0.96). The medians (IQR) for scenarios 1, 2, 3, and 4 were 39 (5), 40 (6), 40 (5), and 39 (5), respectively. The distributions of the TPSs were negatively skewed for all scenarios, indicating that most of the scores lay in the upper half of the scale.

The ten individual SBT-QA10 perception items were correlated with one another and with the SBT-QA10-TPS to determine the extent to which they contributed to positive perceptions. We identified fair correlations between nine of the ten items and the SBT-QA10-TPS, the exception being “mental effort”. [29] After inverting the scores for negatively phrased items, nine items were positively correlated, as we had predicted, with “mental effort” showing a poor and negative correlation. These findings support the validity of using the above-mentioned nine items.

In contrast, we identified poor correlations between the ten items and comfort or confidence [30] (Table 2).

Table 2 Correlations between individual perceptions scores, the satisfaction scores (Comfortable and Confident) and the total perception score (SBT-QA10-TPS).

When compared across scenarios, eight items demonstrated no significant differences, whereas two items, surveillance and mental effort, displayed significant trends: learners reported increasing ranked scores from round 1 to 4 for not feeling intimidated by being observed (i.e., they felt increasingly less intimidated), and decreasing ranked scores for the statement that the scenario did not require mental effort (i.e., it required increasingly more mental effort as the day progressed). These results only weakly support the use of the tool (data not shown).

An exploratory factor analysis identified four factors, which together accounted for 62% of the variance (Table 3). Factor 1 included three items (I felt part of the team; The faculty interacted well; I felt supported). Factor 2 contained three of the SBT-QA10 items (I acted independently; I understood the purpose of the scenario; I was focussed) together with the two global satisfaction items. Factor 3 contained two items related to perceptions of surveillance and realism, and the fourth factor contained two perceptions related to mental effort and distraction. Reliability analysis for each factor identified Cronbach’s alphas of 0.814, 0.667, 0.854, and 0.130, respectively.
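Cronbach’s alpha, used above for the per-factor reliability analysis, can be computed from the item variances and the variance of the total score. A minimal sketch with made-up data (not the study data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item,
    one entry per respondent):
        alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    where k is the number of items."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three perfectly correlated items (illustrative data) -> alpha is approximately 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

An alpha near 1 (as for Factors 1 and 3) means the items in a factor move together across respondents, whereas a value like the 0.130 reported for Factor 4 indicates that its two items scarcely covary at all.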

Table 3 Exploratory factor analysis of the questionnaire items

Learners’ responses to the SBT-QA10 questionnaires were analyzed according to their roles as ‘active responders’ or ‘observers’. We hypothesized that learners in the active group would report more positive scores; for instance, active learners were expected to score higher on realism and focus, feeling part of the team, interacting with the instructor, acting independently, and being observed, and to have a higher TPS (Table 4). Significant differences were found: active learners reported more positive perceptions related to belonging to the team and interaction with the instructor, their sense of acting independently, and being focused. Active learners also reported higher mental effort than observers.

Table 4 The Individual Perceptions Scores Presented for Active and Observer Roles for all scenarios (n = 253)

For completeness, we also analyzed the influence of gender, profession, and previous exposure to SBT (data not shown). Only learners identifying as “Female” or “Male” were included in these statistical calculations; learners identifying as “Other” (n = 2) were omitted because the numbers were too small. No significant gender differences were observed in the total SBT-QA10-TPS. Female learners reported lower scores for surveillance, suggesting that they found it more intimidating to be observed, but were more comfortable and confident learning by SBT compared with their male colleagues. No significant differences in the total SBT-QA10-TPS were observed by profession. According to the SBT-QA10 responses, nurses felt significantly more part of a team, more observed, but also more supported than doctors. Compared with nurses, doctors reported feeling less confident about managing a similar clinical case in the future. Regarding the learners’ prior simulation experience, novices felt that the scenarios were more unrealistic and felt challenged to follow them, but were nevertheless more confident about managing a similar clinical case in the future compared with learners with prior experience of SBT.


Discussion

This paper presents the evaluation of the SBT-QA10 quality assurance tool. Based on the interview study from which the items were derived, [16] we predicted that the ten questionnaire items would provide more specific and useful information for scenario design and debriefing facilitation than overall satisfaction scores. Learner-reported perceptions could then point to potentially modifiable influencing factors, such as instructor skills and facilitation behaviors, [16] and support more nuanced management of learners according to their active as opposed to observer roles.

Examining learners’ perceptions of SBT is not uncommon and has been described in previous publications. [15, 31,32,33,34,35,36,37] Our study builds on this knowledge by focusing on perceptions that stem from the active phase of the simulation. Apart from “mental effort”, which we recommend be removed, all the items in the SBT-QA10 showed fair correlation, suggesting that they all measure elements of a participant’s experience in the active phase of simulation scenarios. Factor analysis supported these findings and the conclusion from the earlier interview study that learners’ perceptions include both psychosocial and cognitive themes. We propose that Factor 1 relates to the theme of social connection and support (F1-S), Factor 2 to orientation to task (F2-Or), Factor 3 to realism (F3-R), and Factor 4 to mental workload (F4-MWL). To improve the utility of the tool, we recommend that each questionnaire item be denoted by its factor theme, as nominated above. Users should decide whether or not to convert negatively worded statements to positive statements; we believe alternating positive and negative statements strengthens the tool’s validity by reducing opportunities for responders to take cognitive shortcuts. While we believe that the best occasion for administering the questionnaire is immediately following the conclusion of the action phase of the scenario, we similarly feel that users are best placed to consider the important contextual factors that will determine where the tool best fits into their courses.

The SBT-QA10 enables learners’ perceptions to be deconstructed and, to an extent, quantified. This information provides instructors with practical feedback that can subsequently be acted upon to improve their facilitation skills, including pre-scenario briefing, in-scenario debriefing, and/or rescue strategies. Instructional designers could potentially use the tool to guide elements such as scenario design, the use of remote facilitation technologies, and/or the separation of active versus observer learners. Course learners may also use the tool to reflect upon their own experience during the debrief and to guide their engagement in future SBT. The tool also has potential utility in research.

Conscious mental effort was one of the eight perceptions in the original interview study, [16] where it was defined by the authors as “low awareness of effort required to interpret a situation” and thought of as impacting cognition, and thereby the intrinsic cognitive workload. In this study, the item for low mental effort did not correlate with a positive experience, suggesting that it should be omitted. It is well established that cognitive workload impacts learning, and that sources of cognitive workload that are extrinsic and distract from the learning task should be minimized. [38] Omitting this item from the SBT-QA10 does not preclude monitoring of a participant’s cognitive workload, since it is measured indirectly by other items, including the ability to act independently, feeling supported, scenario realism, understanding the purpose of the scenario, and feeling distracted. Other established self-reported measures of cognitive workload are in common use, including the Cognitive Appraisal Index [8] and the single-item Paas scale. [39] While we could potentially add these to our monitoring inventory, our findings underscore the difficulty of measuring and interpreting cognitive workload in context.

An unexpected finding of the study was the poor correlation between the ten perceptions and the two measures of satisfaction. As previously mentioned, these satisfaction items were reproduced from an earlier questionnaire-based study involving the DETECT course. [16] There, the measure of comfort was a statistically significant discriminator of face-to-face versus remote facilitation formats, and the measure of confidence was a significant discriminator of instructor experience. Support for these measures was further enhanced in the subsequent interview study from which the perceptions were derived. For example, learners frequently volunteered phrasing describing comfort or enjoyment, such as “I felt comfortable/uncomfortable” and “I enjoyed it/did not enjoy it”. Learners also volunteered, or responded to probing about, the effectiveness of the activity in an affirmative manner, using phrasing such as “I learned a lot” or “It was beneficial”. These findings call us to reflect on the meaning of satisfaction scores in program evaluation and reinforce the limitations of relying upon reductionist, quantitative methods to evaluate the impact of learning and its acceptance by learners.

The significant differences found between allocated roles in the SBT-QA10-TPS provide further support for the use of the tool and are consistent with findings reported elsewhere that learners’ experiences of simulation are affected differently by these roles. [18, 19, 40] In this respect, the tool could help simulation providers ensure that their instructional methods are optimized for both active and observer roles in their training programs. For example, being an active observer who is focused on specific learning outcomes and knows what to look for has been associated with improved learning in that role. [18, 19, 40] Effective facilitation of observers requires faculty to spend time considering the pre-briefing requirements of the role, including setting expectations for active engagement when observing and directly including, and specifically asking for, the observers’ perspective in the discussion or debrief. [18] Applying these principles signals the value the facilitator places upon the observer role, thereby increasing learners’ sense of feeling valued and of valuing that role. We actively applied these principles, as we were aware that some learning outcomes may be better for learners actively engaged in scenarios. [20] Despite this, learners in this study reported higher scores for feeling part of the team and interacting with faculty when in an active role. These differences were relatively small, which may in part reflect a partial closing of an otherwise larger gap between the roles; nevertheless, room for further improvement by faculty appears to remain. Other differences represent opportunities by which the observer role may be shown to add value. For instance, as expected, observers scored mental effort lower than active learners. We feel that the reduced requirement for observers to attend to tasks gives them the space to look at the bigger picture and to notice details and nuances. [41]

This study uses a relatively narrow interpretation of construct validity, considering the multiple dimensions with which contemporary validity frameworks have been elaborated in the literature. [24, 38,39,40] We applied relationships with other variables from Messick’s argument-based validity framework for its direct relevance to the intended use of the SBT-QA10 as a quality assurance tool, and avoided other measures of validity evidence, as we were confident that they could not be applied. [17, 42]

We intentionally did not introduce additional measures of construct validity in this study because of concerns that major changes to the course format would impact negatively on the course’s relevance to, and suitability for, the learners. We note opportunities to further validate the tool through additional measures of construct validity, such as altering the scenarios as a means of loading them towards different perceptions. Similarly, opportunities exist to measure concurrent validity by applying the tool to a course involving highly complex scenarios, and to test the generalizability of the SBT-QA10 by investigating learners’ perceptions on other courses.

We were unable to randomize subjects into groups to minimize between-group bias, as course logistics prevented this. However, by using a single-sample, repeated-measures methodology, we felt that randomization would have had minimal effect on the results, since the learners were exposed to different scenarios, in different roles, with colleagues of differing clinical and SBT experience. Nevertheless, the internal consistency of the majority of questionnaire items supports the conclusion that the SBT-QA10 can be used to measure perceptions that impact upon learners’ experiences during DETECT, and these findings are strengthened by the significant differences in responses between observer and active roles, which are in concordance with findings in other settings. [20]

We believe the tool has potential utility in further research and could serve as inspiration for others.


Conclusions

The SBT-QA10 enables learners to convey the different perceptions they experience during simulation and prompts simulation educators and providers to remain aware of the different perceptions learners may have in response to the same event. With the exception of mental effort, high scores on the other questionnaire items reflect a positive experience for learners during the action phase of SBT and may also influence their experience in subsequent learning activities and the transfer of learning to daily clinical life. The inconsistent correlation of the SBT-QA10 items with measures of comfort and confidence raises questions about the determinants of the latter and/or their utility as measures of satisfaction. Further research is needed to demonstrate the validity and applicability of the tool in other simulation-based training settings.

In conclusion, we believe the SBT-QA10 has potential utility for course learners, who may use the tool to reflect upon their own experience and to guide their engagement in future SBT. The tool may provide instructors with practical feedback that can be acted upon to improve their facilitation skills, including pre-scenario briefing, in-scenario debriefing, and/or rescue strategies. Finally, instructional designers could use the tool to guide elements such as scenario design, the use of technologies, and the separation of active and observer learners.

We believe the SBT-QA10 could be used as a simulation "thermometer", i.e., an intervention tool: Do we understand what we are exposing the learners to? How do they perceive the scenarios? Do they understand the purpose, content, and context of the scenario? Does simulation "always" work? Sometimes it does not, and why is that? The learners' perceptions matter and should be afforded attention [15, 31,32,33,34,35,36,37].

Availability of data and materials

The data supporting the conclusions of this article are included within the article. The corresponding author may be contacted with requests for data sharing.



Abbreviations

DETECT: Detection, Evaluation, Treatment, Escalation and Communication in Teams

IQR: Inter Quartile Range

n: Number

NA: Not Answered

SBT: Simulation-Based Training

SBT-QA10: Simulation-Based Training Quality Assurance Tool (including 10 items)

TPS: Total Perception Score

Q1, Q2: Questionnaire item 1, Questionnaire item 2


References

  1. Rudolph JW, Raemer DB, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9:339–49.

  2. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc. 2007;2(3):183–93.

  3. Dieckmann P, Friis SM, Lippert A, Østergaard D. Goals, success factors, and barriers for simulation-based learning: a qualitative interview study in health care. Simul Gaming. 2012;43(5):627–47.

  4. Cheng A, Nadkarni VM, Mancini M, Hunt EA, Sinz E, Merchant R, et al. Resuscitation education science: educational strategies to improve outcomes from cardiac arrest. Circulation. 2018;138(6):e82–e122.

  5. Eppich W, Cheng A. Promoting excellence and reflective learning in simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc. 2015;10(2):106–15.

  6. Jaye P, Thomas L, Reedy G. 'The Diamond': a structure for simulation debrief. Clin Teach. 2015;12(3):171–5.

  7. Kolbe M, Weiss M, Grote G, Knauth A, Dambach M, Spahn DR, et al. TeamGAINS: a tool for structured debriefings for simulation-based team trainings. BMJ Qual Saf. 2013;22:541–53.

  8. Harvey A, Nathens AB, Bandiera G, Leblanc VR. Threat and challenge: cognitive appraisal and stress responses in simulated trauma resuscitations. Med Educ. 2010;44(6):587–94.

  9. Sørensen JL, Thellesen L, Strandbygaard J, Svendsen KD, Christensen KB, Johansen M, et al. Development of knowledge tests for multi-disciplinary emergency training: a review and an example. Acta Anaesthesiol Scand. 2015;59(1):123–33.

  10. Kihlgren P, Spanager L, Dieckmann P. Investigating novice doctors' reflections in debriefings after simulation scenarios. Med Teach. 2014;37(5):1–7.

  11. Husebø SE, O'Regan S, Nestel D. Reflective practice and its role in simulation. Clin Simul Nurs. 2015;11(8):368–75.

  12. Cordeau MA. The lived experience of clinical simulation of novice nursing students. Int J Hum Caring. 2010;14(2):8–14.

  13. McNiesh SG. Cultural norms of clinical simulation in undergraduate nursing education. Glob Qual Nurs Res. 2015;2:1–10.

  14. Walton J, Chute E, Ball L. Negotiating the role of the professional nurse: the pedagogy of simulation: a grounded theory study. J Prof Nurs. 2011;27(5):299–310.

  15. Dieckmann P, Manser T, Wehner T, Rall M. Reality and fiction cues in medical patient simulation: an interview study with anesthesiologists. J Cogn Eng Decis Mak. 2007;1(2):148–68.

  16. Christensen MD, Oestergaard D, Dieckmann P, Watterson L. Learners' perceptions during simulation-based training: an interview study comparing remote versus locally facilitated simulation-based training. Simul Healthc. 2018;13(5):306–15.

  17. Cook DA, Hatala R. Validation of educational assessments: a primer for simulation and beyond. Adv Simul. 2016;1(1):31.

  18. O'Regan S, Molloy E, Watterson L, Nestel D. Observer roles that optimise learning in healthcare simulation education: a systematic review. Adv Simul. 2016;1(1):4.

  19. Rogers T, Andler C, O'Brien B, van Schaik S. Self-reported emotions in simulation-based learning. Simul Healthc. 2019;14(3):140–5.

  20. Blanié A, Gorse S, Roulleau P, Figueiredo S, Benhamou D. Impact of learners' role (active participant-observer or observer only) on learning outcomes during high-fidelity simulation sessions in anaesthesia: a single center, prospective and randomised study. Anaesth Crit Care Pain Med. 2018;37(5):417–22.

  21. Christensen MD, Rieger K, Tan S, Dieckmann P, Østergaard D, Watterson LM. Remotely versus locally facilitated simulation-based training in management of the deteriorating patient by newly graduated health professionals. Simul Healthc. 2015;10(6):352–9.

  22. Krosnick J. Response strategies for coping with the cognitive demands of attitude measures in surveys. Appl Cogn Psychol. 1991;5(3):213–36.

  23. Krosnick J. Survey research. Annu Rev Psychol. 1999;50:537–67.

  24. Artino AR Jr, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36(6):463–74.

  25. Rickards G, Magee C, Artino AR. You can't fix by analysis what you've spoiled by design: developing survey instruments and collecting validity evidence. J Grad Med Educ. 2012;4(4):407–10.

  26. Crichton M, Flin R. Training for emergency management: tactical decision games. J Hazard Mater. 2001;88(2–3):255–66.

  27. Tabachnick BG, Fidell LS. Using multivariate statistics. Boston: Pearson Education; 2013.

  28. Kester-Greene N, Filipowska C, Heipel H, Dashi G, Piquette D. Learner reflections on a postgraduate emergency medicine simulation curriculum: a qualitative exploration based on focus group interviews. CJEM. 2021;23:374–82.

  29. Chan YH. Biostatistics 104: correlation analysis. Singap Med J. 2003;44(12):614–9.

  30. Akoglu H. User's guide to correlation coefficients. Turk J Emerg Med. 2018;18(3):91–3.

  31. Vermeulen J, Beeckman K, Turcksin R, van Winkel L, Gucciardo L, Laubach M, et al. The experiences of last-year student midwives with high-fidelity perinatal simulation training: a qualitative descriptive study. Women Birth. 2017;30(3):253–61.

  32. Suksudaj N, Lekkas D, Kaidonis J, Townsend GC, Winning TA. Features of an effective operative dentistry learning environment: students' perceptions and relationship with performance. Eur J Dent Educ. 2015;19(1):53–62.

  33. Singh D, Kojima T, Gurnaney H, Deutsch E. Do fellows and faculty share the same perception of simulation fidelity? A pilot study. Simul Healthc. 2020;15(4):266–70.

  34. Austin JP, Baskerville M, Bumsted T, Haedinger L, Nonas S, Pohoata E, et al. Development and evaluation of a simulation-based transition to clerkship course. Perspect Med Educ. 2020;9(6):379–84.

  35. Jowsey T, Petersen L, Mysko C, Cooper-Ioelu P, Herbst P, Webster CS, et al. Performativity, identity formation and professionalism: ethnographic research to explore student experiences of clinical simulation training. PLoS ONE. 2020;15:1–16.

  36. Costello M, Prelack K, Faller J, Huddleston J, Adly S, Doolin J. Student experiences of interprofessional simulation: findings from a qualitative study. J Interprof Care. 2018;32(1):95–7.

  37. Walsh CM, Garg A, Ng SL, Goyal F, Grover SC. Residents' perceptions of simulation as a clinical learning approach. Can Med Educ J. 2017;8(1):e76–87.

  38. ten Cate O, Sewell JL, Young JQ, van Gog T, O'Sullivan PS, Maggio LA, et al. Cognitive load theory for training health professionals in the workplace: a BEME review of studies among diverse professions: BEME Guide No. 53. Med Teach. 2019;41(3):256–70.

  39. Paas FGWC. Training strategies for attaining transfer of problem-solving skill in statistics: a cognitive-load approach. J Educ Psychol. 1992;84(4):429–34.

  40. Bong CL, Lee S, Ng ASB, Allen JC, Lim EHL, Vidyarthi A, et al. The effects of active (hot-seat) versus observer roles during simulation-based training on stress levels and non-technical performance: a randomized trial. Adv Simul. 2017;2:7.

  41. Rantatalo O, Sjöberg D, Karp S. Supporting roles in live simulations: how observers and confederates can facilitate learning. J Voc Educ Train. 2018;71(3):482–99.

  42. Blanié A, Amorim MA, Meffert A, Perrot C, Dondelli L, Benhamou D. Assessing validity evidence for a serious game dedicated to patient clinical deterioration and communication. Adv Simul. 2020;5:4.


Acknowledgements

We thank Kathryn Rieger, SCSSC, for in-depth discussion and help with managing the DETECT course alongside the SBT-QA10 research project. We thank the learners and facilitators of the course for the opportunity to analyze the data.


Funding

Open access funding provided by Royal Danish Library.

Author information

Authors and Affiliations



KE drafted the study design, the Simulation-Based Training Quality Assurance Tool (SBT-QA10), and the study protocol; analyzed and interpreted the data; and was the primary contributor to the manuscript. SO helped design the study, helped draft the manuscript, and contributed to the background literature. DØ helped draft the manuscript and contributed to the background literature. PD helped draft the manuscript, contributed to the background literature, and provided feedback on the initial drafts of the text. LW refined the study design and study protocol, helped draft and refine the manuscript including the data analyses and interpretation of the data, and contributed to the background literature. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kim Ekelund.

Ethics declarations

Ethics approval and consent to participate

The study was performed in accordance with the Declaration of Helsinki and approved by the appropriate ethics committee: the local institutional review board, RNSHD HREC Executive Committee (AU14E0935), including ethical and scientific approval (reference RESP/18/316), Research Office, Level 13, Kolling Building, Royal North Shore Hospital, St Leonards NSW 2065, Sydney, Australia. It was conducted as a single-center, prospective, single-cohort study. The participants received and signed a Participant Information Sheet/Consent Form and were informed that they could withdraw their consent at any time.

Consent for publication

In the Participant Information Sheet/Consent Form, the learners were informed that the results of the research project would be published and/or presented in a variety of forums and that any information would be presented in such a way that the learners would be de-identified.

Competing interests

Dieckmann holds a professorship with the University of Stavanger in Norway. This position is paid for by an unconditional grant from the Laerdal Foundation to the university. Dieckmann leads the EuSim group, a network of simulation enthusiasts and centres that provides faculty development programmes on an international basis. The remaining authors have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Ekelund, K., O’Regan, S., Dieckmann, P. et al. Evaluation of the simulation based training quality assurance tool (SBT-QA10) as a measure of learners’ perceptions during the action phase of simulation. BMC Med Educ 23, 290 (2023).
