
The effect of using desktop VR to practice preoperative handovers with the ISBAR approach: a randomized controlled trial



The aim was to investigate whether second-year undergraduate nursing students practicing the Identification-Situation-Background-Assessment-Recommendation (ISBAR) communication approach in a desktop virtual reality (VR) application had a non-inferior learning outcome compared with the traditional paper-based method when sorting patient information correctly based on the ISBAR structure.


A non-inferiority, parallel-group, assessor-blinded randomized controlled trial, conducted in simulation sessions as part of preparation for clinical placements in March and April 2022. After a 20-minute introductory session, the participants were randomized to self-practice the ISBAR approach for 45 minutes in groups of three using either an interactive desktop VR application (intervention) or traditional paper-based (TP) simulation. The primary outcome was the proportion of nursing students who sorted all 11 statements of patient information into the correct ISBAR order within a time limit of 5 minutes. The predefined, one-sided non-inferiority limit was 13 percentage points in favor of traditional paper-based simulation.


Of 210 eligible students, 175 (83%) participated and were randomly allocated to the VR (N = 87) or TP (N = 88) group. Practicing in the desktop VR application (36% had everything correct) was non-inferior to the traditional paper-based method (22% had everything correct), with a difference of 14.2 percentage points (95% CI 0.7 to 27.1) in favor of VR. The VR group repeated the simulation 0.6 more times on average (95% CI 0.5 to 0.7). Twenty percent more of the students in the VR group (95% CI 6.9 to 31.6) reported that they liked how they practiced. All other outcomes, including the System Usability Scale, indicated non-inferiority or were in favor of VR.


Self-practicing the ISBAR approach in desktop VR was non-inferior to the traditional paper-based method and, in fact, gave a superior learning outcome.

Trial registration number

ISRCTN62680352 registered 30/05/2023.



Handover of patients from one healthcare professional or organization to another is a situation in which patient safety can be threatened [1]. Handovers require sharing patient information, coordinating care, and transferring accountability and authority to the next team [2]. Structured handovers reduce patient complications, medication errors, and adverse patient events [3], whereas poor handover skills are related to misunderstandings between healthcare providers and can lead to severe consequences for patient safety [2].

When a patient undergoes surgery, a structured handover is an essential skill for healthcare workers [4,5,6]. Although electronic surgical checklists and digital tools to support preoperative handovers are implemented increasingly [7], previous research has demonstrated that these tools do not always improve communication and collaboration [8]. Utilization of the Identification-Situation-Background-Assessment-Recommendation (ISBAR) approach has been recognized internationally and widely adopted as a handover tool to enhance patient safety [9, 10]. ISBAR is used in clinical practice [7] and has been implemented in training and education [11].

Nursing education faces challenges related to resources, e.g., time, instructors, and available simulation locations for practicing skills such as the ISBAR approach [12]. Furthermore, there is insufficient time for practice at clinical sites during student ward placements due to a decrease in the number and length of hospitalizations of surgery patients [13]. One possible solution to help overcome some of these challenges in the educational setting is desktop virtual reality (VR) [14, 15].

VR utilizes 3D computer technology to construct an interactive virtual world, allowing users to engage with a simulated environment [16]. The level of immersion experienced by users may differ based on the hardware and software employed, which has led to suggestions for how best to define VR applications according to the level of immersion [17]. Other types of applications, termed desktop, screen-, or computer-based VR, have been classified as non-immersive compared with VR solutions that use a head-mounted display [18]. In this publication, the term desktop VR is used. Desktop VR implies that individuals use a computer's keyboard and mouse to observe and interact with a virtual environment displayed on the computer screen [19]. In multiplayer desktop VR versions, users can interact with each other through avatar representations, sound, and movement on the screen [18, 20].

Desktop VR has been used in situations such as computer-based simulation [21], practicing surgical skills [22], and health care education [23] for enhanced learning. However, a significant literature gap exists regarding rigorous studies with large sample sizes investigating the learning effect of using VR in nursing education [24, 25]. One study has been identified that explored the potential benefits of nurses using desktop VR to learn handover [26]. This was a randomized controlled trial that found non-inferiority in communication performance using desktop VR for training when compared with live simulations. No studies have been found on desktop VR's effect on learning the ISBAR approach in a preoperative handover situation with undergraduate nursing students [27].

Therefore, the aim was to investigate whether second-year nursing students self-practicing the ISBAR approach during handovers in a preoperative setting in a desktop VR application experienced a non-inferior learning outcome compared with self-practicing the traditional paper-based (TP) method to sort patient information.


Study design

A non-inferiority, parallel-group, assessor-blinded randomized controlled trial (RCT) was conducted at three education sites. The non-inferiority approach was chosen because desktop VR simulation is done virtually and thus may have some disadvantages compared with real-life skill practice [23, 28]. The study took place in March and April 2022, and was approved by the Education sector's Service Provider (SIKT, Reference No. 305866) and the head of the pertinent study programs. No changes were made to protocols after the study commenced. The study was registered 30/05/2023 with trial number ISRCTN62680352 in the ISRCTN registry [29].


The study was conducted as part of simulation sessions that prepared second-year undergraduate nursing students for clinical placement in medical-surgical settings. It took place in nursing programs at a university in Southern Norway (two sites) and a university in Western Norway (one site). In the fall semester of 2020, there were 175, 153, and 145 students enrolled at the three sites, respectively. However, only about half of these students were eligible, as only those undergoing clinical placements at somatic hospitals during that period could be included, in accordance with the curriculum and learning outcomes.

At all the universities, the students had been taught preoperative nursing care for surgical patients, communication between health care providers, and the ISBAR approach before the research study was launched.

The simulation set-up at each site comprised one lecture room with 12 computers with headsets for virtual desktop simulation and a room for paper-based simulation (one large room or smaller group rooms). Four instructors were used to facilitate the sessions and collect data for the study.

Usability and pilot study

A usability study of the desktop VR application used in the intervention describes details regarding its development [30]. In short, nine second-year undergraduate nursing students participated in that study and found the application usable overall, giving it an excellent usability score. Some technological and comprehension issues were identified, and a revised version was used in the present study.

A pilot study was conducted in February 2022 with 15 third-year undergraduate nursing students at two of the sites to try out the planned RCT activities. The pilot study’s results indicated that the planned RCT activities worked well, but it was found that the primary outcome’s difficulty level was too low. It was estimated that 20% of the participants in both groups would get everything correct on the primary outcomes [31, 32], which were used as the basis for the sample size calculation. However, in the pilot study, 80% of participants scored correctly on the primary outcome. The difficulty level was increased, and a revised test was piloted on five nursing educators, two nurses and two third-year undergraduate nursing students, all with moderate knowledge of ISBAR. In the revised test, 20% of the participants scored everything correctly, and this difficulty level was used for the present study.


The inclusion criteria were second-year undergraduate nursing students enrolled in the nursing study program at the participating universities who had no or limited experience in supervised clinical practice in somatic hospitals. Third-year undergraduate nursing students with substantial experience in supervised clinical practice, indicating a level of competence already surpassing the specific learning outcomes targeted in this intervention, were excluded.


General information about the simulation session, including that the students would be asked about participating in this study, was presented verbally during a lecture and presented in the digital learning management system for the study program. Specific information about time and place, in addition to repetition of general information, was provided in the study program schedule (at two of the sites) or sent by email (at one of the sites).

Information about the study, including voluntary study participation, was repeated at the start of the simulation session. The students were told that participation allowed the researchers to collect and use their identified data from the simulation session. Consent was provided by pressing “send” on the first questionnaire.

Randomization and allocation

Randomization had to account for the practical organization, in which students participated at different times in batches of nine, 12, or 15; therefore, separate computer-generated randomization lists were made for each batch using the Microsoft Excel RAND function. Using these lists, stickers with identification (ID) numbers and allocation codes were printed and then put in separate containers for each batch.
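The batch randomization can be sketched in code. This is an illustrative reconstruction, not the authors' actual spreadsheet procedure; the function name, ID format, and 1:1 allocation ratio within each batch are assumptions.

```python
import random

def make_allocation_list(batch_size, seed=None):
    """Generate a randomization list for one batch of students,
    mimicking the computer-generated lists used to print ID stickers.
    Allocation is assumed 1:1 between VR and TP within each batch."""
    rng = random.Random(seed)
    arms = ["VR"] * (batch_size // 2) + ["TP"] * (batch_size - batch_size // 2)
    rng.shuffle(arms)  # random order, analogous to sorting by Excel's RAND()
    return [(f"ID{i + 1:02d}", arm) for i, arm in enumerate(arms)]
```

Note that odd batch sizes (e.g., nine or 15) necessarily leave a one-person imbalance between the arms within that batch.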

To allocate students into the intervention and control groups, students in the same batch received a random ID sticker from the container. Depending on the site, one ID sticker was taken out of the container and given to each student upon entering the lecture room (one site), or the stickers were given to the students after they were seated in the lecture room (two sites). In the first case, the order in which the students arrived at the room could not be influenced and was random; in the second case, the ID stickers were drawn from the container to ensure random order. The students wore the ID stickers visibly to allow for inspection and ensure that they participated according to allocation. The students were informed that they would be divided into two different groups that would self-practice using the ISBAR approach after the introduction, when the participants were escorted to their simulation sites based on the allocation codes on their ID stickers. The allocation on each ID sticker was checked again when students entered their designated sites. No errors were reported.


Both the intervention and control groups participated in a 20-minute introduction session that comprised information about the simulation’s practicalities and the possibility of participating in this study, answering a questionnaire, and watching a nine-minute video that explains the ISBAR approach [33]. The video was made for this study and included general information about the ISBAR approach and why, when, and how to use it. Pre-training was unnecessary and was not integrated into the schedule [20].

The simulation started after the introduction and lasted for 50 minutes. The students were informed that they should resolve any questions they had on their own, as it was a self-training situation. An instructor was present who was given a manual on what to do, including the main directive that they should only help students solve major technical problems and otherwise let the students arrive at solutions themselves.

During the simulation, the participants were divided into groups of three because the desktop VR application used in the study was designed for three participants. Previous studies had reported no difference in performance between groups of three, four, or five participants [34]. Furthermore, dividing participants into smaller groups helped reduce any potential periods of inactivity during the simulation.

Patient case

The patient case used in the simulation was the same for both groups (Table 1). The case was developed through an iterative process involving the research team and a group of seven clinicians and teachers, comprising a surgeon, anesthetist, emergency department nurse, surgery ward nurse, and university lecturers. The research team chose a preoperative setting because nurses play a critical role in giving and receiving patient information during handover before surgery [35]. It was decided to use a patient case in which the patient required acute gallbladder surgery because this is a common condition that typically involves similar procedures performed preoperatively. To involve three participant types and two handovers, it was decided to include nurses working on different shifts (night, day, or nurse anesthetist).

Table 1 The information about the patient case given to the students in both groupsa

Desktop VR application

The intervention group practiced using a desktop VR simulation called the Preoperative ISBAR Desktop VR Application, which was developed specifically for nursing students to practice the ISBAR approach during handover in an acute preoperative setting. The desktop VR application was created as part of a larger VR research project in healthcare education called VirSam (Virtual Collaboration) [36]. The details of its development are described below, in Supplement 1, and in a previous publication [30].

As the tasks involved a substantial amount of written text, including instructions and patient information, and relatively little interaction with the virtual environment, a desktop VR application was chosen. The academic content was developed by the research group in collaboration with a panel of seven healthcare professionals and educators. The technical solution was developed by the research group with the assistance of a hired programmer using the Unity development platform. Based on experience from earlier application development, onboarding is important in self-practice applications [37]; thus, the application was designed with integrated introductions to the use of desktop VR. Emphasis was placed on ensuring alignment between the learning outcome, learning activity, and assessment [38, 39], and on ensuring that the application's activities and available self-guidance covered learning tasks, supporting information, procedural information, and part-task practice [40]. A visualization of the application with its various activities is presented in a science talk [41]. Table 2 provides a summary of the steps that the participants went through in the application. Further details on VR feature design, including descriptions and classifications based on pedagogic and game elements, can be found in Supplementary file 1 [39, 40, 42, 43].

Table 2 Description of the different activities in the Preoperative ISBAR Desktop VR Application

Traditional paper-based group

The participants in the traditional paper-based group met in-person and were placed around a table in groups of three. Due to uneven numbers, two groups comprised four students. They were given printed papers with the same explanation and tasks––including an explanation of the ISBAR approach and a list of suggestions for correct sorting (Supplementary file 2)––as the VR group (Table 2, Supplementary file 1).

Differences between the groups

The main difference between the groups was that the desktop VR group practiced in a virtual environment. Furthermore, in VR, the participants were represented by avatars, with their names displayed above the avatars' heads, and instructions were delivered through animations featuring voiceovers and pop-up windows. Feedback was provided by allowing participants to compare their results with suggestions for correct sorting, by highlighting the first statement in each player's handover, and through debriefing sessions. Another mechanism unique to desktop VR practice was the automatic guidance between activities, with an allocated time limit, indicating progress through the practice sessions. In the VR solution, repetition was promoted through time limits and by encouraging participants to practice again after the session ended via a click button to start over.

Data collection

At the beginning of the introduction, the participants completed a baseline characteristics questionnaire online. The outcome data were collected immediately after the simulation training through an online questionnaire and a written test, both with a time limit of 5 min. The ISBAR categories were not visible, i.e., the students had to remember the order and meaning.

During the data collection process, one staff member was present to provide instructions to the participants. They did not interact with the students during the data collection process and were instructed only to answer “do as you think best” in response to any questions from the students.


Written test and scoring rules

The written test (Supplementary file 3) was used for the primary outcome and some of the secondary outcomes, as described below. All the outcomes based on the written test were scored independently by the first author and a research assistant. The assessors were presented with the set of paper responses arranged randomly in the order of submission, and the scorers were blinded to the group allocation. They provided the same score for 95% of the participants. For the remaining 5%, two members of the research group, who were also blinded, scored and discussed the results together with the first author until a consensus was reached.

The primary and some of the secondary outcomes concerned sorting patient information within correct ISBAR categories. A score of “Everything correct” was assigned if the patient information was sorted into the correct ISBAR category, independent of the order of the patient information within the category. Furthermore, some of the patient information could be sorted correctly within two of the ISBAR categories (S and A).
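The scoring rule just described, correct category placement regardless of order within the category, with some statements acceptable in either S or A, can be expressed as a small predicate. This is a hypothetical sketch; the statement IDs and answer key below are not the study's actual test items.

```python
def everything_correct(answers, key):
    """Score one participant's sorting task.

    answers: dict mapping statement id -> the ISBAR letter chosen
    key:     dict mapping statement id -> set of acceptable letters
             (statements valid under both S and A map to {"S", "A"})
    Returns True only if every statement is in an acceptable category.
    """
    return all(answers.get(s) in accepted for s, accepted in key.items())
```

A missing or misplaced statement fails the whole test, matching the all-or-nothing "Everything correct" score.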

Participant characteristics

Participant characteristics included sex, age, mother tongue (Norwegian or other), previous experience working in health care, previous experience working in a surgical ward, previous experience practicing using the ISBAR approach, and previous experience playing multiplayer PC games.

Implementation of the intervention

Technical and other problems were registered by asking the instructors who were present if any such issues were experienced.

Primary outcome

The primary outcome was the proportion of nursing students who sorted all 11 statements of patient information into the correct ISBAR order within a time limit of five minutes on the written test (Supplementary file 3). The statements with patient information were presented in random order, numbered and provided on paper. The students were instructed to “write the number on the patient information in the correct order and write the letter where the information belongs”. This outcome variable was based on earlier research [31, 32] and was tested during the pilot study.

Secondary outcomes

  • The proportion that placed the correct patient information within each of the ISBAR categories: This outcome reports the results for each ISBAR category and provides additional information on the primary outcome by identifying the category that was best understood, as determined by the highest proportion of correct patient information placements. The outcome variable was based on prior research [31, 32] and tested during the pilot study.

  • The proportion that arranged the ISBAR words correctly: This outcome came from the online questionnaire. The students were presented with the five words that comprise ISBAR, sorted in the following order “Recommendation-Background-Identification-Situation-Assessment.” They were instructed; “Sort in correct order.” A similar outcome was used in earlier research [31, 32] and tested during the pilot study.

  • The proportion that sorted five statements of patient information (one for each ISBAR category) correctly based on ISBAR: This outcome was from the online questionnaire. The students were presented with the patient information sorted in the following order: “AIRBS” and asked to “sort the patient information correctly based on what you have learned today.” This outcome was made for this study and tested during the pilot study.

  • Students’ experiences with the self-perceived learning outcome on five questions: This outcome came from the online questionnaire:” To which degree did you think: 1. the video about ISBAR gave you enough knowledge before you started to practice; 2. you had enough time to practice; 3. the practice method was likable; 4. the teaching activity (introduction and practice) were a good way to learn the ISBAR approach; and 5. you are confident in conducting communication in the ISBAR approach.” Five answer options were provided: 1 (completely disagree); 2 (disagree); 3 (neither disagree/agree); 4 (agree); or 5 (completely agree). The proportion answering agree/completely agree is reported. These outcomes were used in earlier research [31, 32] and tested during the pilot study.

  • The proportion of complete runs of the practice: This outcome came from the online questionnaire. The students were asked to type the number of complete runs of the practice. A similar outcome was used in earlier research [31, 32] and tested during the pilot study.

  • The simulation method's perceived usability: This outcome came from the online questionnaire and was measured using the System Usability Scale (SUS) [44]. The SUS has 10 items, each with five answer options ranging from 1 (strongly disagree) to 5 (strongly agree). The score is created by adding up the item scores and converting the sum to a 0 to 100 scale, which can be translated into a curved grading scale from A-F [45]. The SUS is viewed as a reliable test of educational technology usability [46], and the validated Norwegian version was used [47].
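The SUS conversion in the last bullet follows the standard published scoring procedure: odd-numbered (positively worded) items contribute response minus 1, even-numbered items contribute 5 minus response, and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from the ten
    Likert responses (each 1-5), in standard SUS item order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5  # 40 raw points scaled to 0-100
```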

Sample size calculation

A non-inferiority limit of 13 percentage points was chosen for the sample size calculation based on other studies on clinical observation [31, 32, 48, 49]. Using this limit, a power of 80% (beta = 0.2), and a significance level (alpha) of 0.05, the sample size calculation demonstrated that 118 participants were needed in each group (Sealed Envelope Ltd., 2012), totaling 236 participants. For practical reasons, the maximum number of students available was 210.
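The figure of 118 per group is reproducible with the standard normal-approximation formula for a non-inferiority comparison of two proportions, assuming both groups truly have the 20% "everything correct" rate estimated in the pilot. This is a sketch, not the Sealed Envelope calculator itself:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p, margin, alpha=0.05, power=0.80):
    """Per-group sample size for a non-inferiority test of two
    proportions (normal approximation), with both true proportions
    assumed equal to p and a one-sided significance level alpha."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided alpha
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * 2 * p * (1 - p) / margin ** 2)

print(n_per_group(0.20, 0.13))  # → 118, matching the reported calculation
```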


Statistical analysis
The participant characteristics are presented descriptively. Independent-sample proportion tests were used for categorical data, and independent-samples t-tests were used for continuous data. The absolute difference is presented. The one-sided p-value with confidence intervals (CIs) for non-inferiority on the primary outcome is reported. Non-inferiority was declared if the lower limit of the one-sided 95% CI of the absolute difference on the primary outcome did not exceed 13 percentage points in favor of the control group. To present the analysis in the conventional manner, the results from a two-sided test with CIs are also reported. Because none of the outcomes had more than two missing responses, all available data were used in the analyses. All analyses were performed using IBM Statistical Package for the Social Sciences (SPSS) version 28.0.0 (IBM Corp).
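For intuition, the two-sided difference-in-proportions analysis can be approximated with a simple Wald interval; SPSS may use a slightly different interval method, so the bounds will not match the paper's CIs exactly, and the counts below are illustrative rather than the study's actual data.

```python
from statistics import NormalDist

def prop_diff_ci(x1, n1, x2, n2, conf=0.95):
    """Wald confidence interval for the difference in two proportions
    (e.g., VR minus TP), given successes x and group sizes n."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return diff, diff - z * se, diff + z * se

# Non-inferiority holds if the lower CI bound stays above -0.13
diff, lo, hi = prop_diff_ci(31, 86, 19, 87)  # illustrative counts
print(lo > -0.13)
```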


Recruitment and baseline characteristics

Altogether, 210 (78, 68, and 64 from each site) second-year undergraduate nursing students were eligible to participate in the study (Fig. 1). No exclusions were made, as only second-year undergraduate students attended. Ultimately, 35 did not show up for the study, so 175 participants were randomized: 87 to a desktop VR simulation group and 88 to a traditional paper-based (TP) group. One student left before the written test in the control group, and one did not return the written test in the intervention group.

Fig. 1 The flow of participants. Abbreviations: VR = desktop virtual reality; TP = traditional paper-based simulation

The participants’ characteristics are presented in Table 3. The sample included 142 females (81.1%), and most participants were 20–24 years old. Nearly all had previously been taught the ISBAR approach, 82% reported having practiced the ISBAR approach, and 43% reported having played multiplayer PC games.

Table 3 Participant characteristics

The groups’ characteristics were similar, but those in the VR group were somewhat younger, and a larger proportion had played multiplayer PC games earlier (Table 3).

Implementation of intervention

The implementation of both groups was executed without major technical or practical problems. The desktop VR program had to be restarted for two of the 29 desktop VR groups because the participants could not talk to each other.


For the primary outcome, the group self-practicing with the desktop VR application (36% had everything correct) was non-inferior to the traditional paper-based group (22% had everything correct), with a difference of 14.2 percentage points (one-sided 95% CI 2.9 to 14.2) (Fig. 2, Table 4). Furthermore, the desktop VR application was superior to traditional paper-based simulation in providing a better learning outcome (difference 14.2 percentage points, two-sided 95% CI 0.7 to 27.1) (Table 4).

Fig. 2 The difference between the VR and TP groups on sorting patient information based on ISBAR. Legends: If the horizontal one-sided 95% confidence interval (CI) had crossed or been to the left of the vertical non-inferiority limit, desktop virtual reality (VR) would not have been non-inferior. Abbreviations: VR = desktop virtual reality; TP = traditional paper-based simulation

Table 4 Primary outcome and secondary outcomes. Numbers (%) of participants for each group and difference in percentage points with a two-sided 95% confidence interval (95% CI) between the groups

For the secondary outcomes, the desktop VR groups had an average of 1.8 complete runs of the practice (distribution in Table 5), compared with 1.2 runs in the TP group (mean difference 0.6, two-sided 95% CI 0.5 to 0.7, P-value < 0.001).

Table 5 The number of completed runs (briefing-rehearsal-debriefing)

The outcomes placing the correct patient information within its correct ISBAR category were similar in the two groups, except for the category assessment (a difference of 19 percentage points in favor of VR, two-sided 95% CI 4.3 to 32.6). The other outcomes on arranging the ISBAR words and pieces of patient information correctly were similar in the two groups.

The outcomes from the students' experiences with self-perceived learning indicated that the desktop VR group performed either non-inferior to or better than the TP group (Table 6). The VR group participants reported that they liked this type of practice better (difference: 20 percentage points). For the perceived usability of the simulation method, the VR group provided an SUS mean score of 78.6, which was non-inferior to the TP group's mean of 76.3. Both groups received a Grade C on the Bangor and Kortum [47] grading scale.

Table 6 Secondary outcomes on the students’ experiences with self-perceived learning outcomes and perceived usability of simulation methods. Numbers (%) of participants for each group and difference in percentage points with a two-sided 95% confidence interval (95% CI) between groups


The Preoperative ISBAR Desktop VR Application gave a superior learning outcome for sorting patient information correctly based on the ISBAR approach used for handovers in a preoperative setting, compared with traditional paper-based simulation. Most of the other outcomes indicated that desktop VR was non-inferior, but those practicing with desktop VR liked the practice better and practiced more.

More likeable, yet better learning outcome

It was somewhat surprising that desktop VR was found to be superior to traditional practice. The study was designed as a non-inferiority study, as VR can entail disadvantages due to technical and comprehension issues [30, 50], along with a lack of face-to-face communication when practicing in desktop VR [51]. Furthermore, one review of randomized controlled trials investigating desktop virtual simulation compared with traditional learning found no clear differences when measuring learning outcomes [15], and another review found that virtual simulation provided a non-inferior outcome on teamwork attitudes when learning interprofessional team communication [26]. This study's findings were thus not in line with expectations or with the reviews' findings. More studies eliciting a superior outcome from desktop VR are required before the reviews' conclusions can be challenged.

Although desktop VR has elsewhere shown the same learning outcome as traditional simulation, participants in this study and others [23, 52] reported VR as being more likable. However, even though this study found that the participants' preferred simulation method (desktop VR) resulted in a better learning outcome, this does not seem to be the general rule. Previous systematic reviews on e-learning that investigated objective learning outcomes and satisfaction found a negative association between these two factors [53, 54], i.e., higher satisfaction is associated with lower learning outcomes. In an RCT, it was found that students who participated in an active learning approach self-reported lower learning outcomes than those in a passive learning approach [55]. However, when objective measures of learning were assessed, students in the active learning group demonstrated higher learning outcomes than their peers in the passive learning group. This indicates that student satisfaction with learning and self-reported learning are not accurate indicators of objective learning outcomes.

Potential mechanisms behind the findings

Aside from the possibility of a chance finding, we suggest five possible mechanisms to explain the superior effect and likability of desktop VR found in this study.

The first is automated individual feedback. A VR application, like the one in this study, can be programmed to provide instant feedback. Feedback on performance is crucial to learning and can be enhanced by timely, specific, and learner-targeted feedback [56]. Drawing on the theoretical perspective of deliberate practice, feedback can function as a stimulus to continuing practicing [57], thereby promoting learning. Several studies have found feedback to be a mechanism for learning through technological learning activities [58] and game-based learning [59,60,61,62].

The second mechanism is that in a virtual environment, players are represented by avatars, which can create a sense of anonymity that can increase enjoyment of the experience [63]. Furthermore, learners in a traditional face-to-face learning environment have reported that they may feel self-conscious about speaking up in front of others, fearing judgment or criticism [64]. Based on Chen and Kent [65], one reason can be that the anonymity provided through avatars can create a sense of security that can shield learners from feeling embarrassed or singled out when making mistakes. Another aspect is that avatars can create a more neutral learning environment by reducing the impact from physical attributes, e.g., sex [66] and ethnicity [67], to help prevent unconscious biases.

The third suggested mechanism relates to how information is presented during the simulation. The use of visual instructions as a learning tool has been investigated in several studies, which found that both the visual appearance of educational content in VR [68] and the display of supplementary information during practice [69] can benefit learning.

The fourth mechanism is automatic guidance supporting progression during practice. Automatic guidance in VR can exert both positive and negative effects on learning, depending on the context and the type of guidance provided [70]. For example, excessive automatic guidance can lead to a phenomenon known as the “guidance paradox” [70], in which learners become overly reliant on guidance and fail to develop the skills and knowledge needed to perform tasks independently. However, the effect observed in this study suggests that the positive aspects of helping learners navigate the simulation can outweigh the negative aspects when automatic guidance is used well.

The fifth and final mechanism that we suggest is repetition. A notable finding in this study and others [71] is that those practicing in VR repeated the simulation more often during the same practice session. Repetitive simulation practice has been found to enhance learning outcomes [72, 73].

Strengths and limitations

This study’s main strengths were the randomized controlled trial design, the relatively high number of students, and the blinded assessment of the primary outcome. A limitation was that the students could not be blinded, given the nature of the intervention, although recent findings suggest that blinding is less important than previously thought [74]. Furthermore, the study evaluated only one desktop VR application, which may limit the findings’ generalizability to other VR applications. Finally, the learning outcome was measured immediately after practice, so the intervention’s long-term impact was not assessed.


This study was designed to investigate whether nursing students self-practicing the ISBAR approach in desktop VR achieved a non-inferior learning outcome compared with traditional paper-based self-practice, which was confirmed. Beyond non-inferiority, desktop VR provided a superior learning outcome. Furthermore, the students preferred using desktop VR and repeated the simulation more often within the given time limit. This interactive desktop VR application can be recommended as a practical and engaging way for second-year undergraduate nursing students to self-practice the ISBAR approach.
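To make the non-inferiority reasoning concrete, the sketch below recomputes an approximate risk difference with a Wald 95% confidence interval and checks it against the predefined margin. The counts (31/87 and 19/88) are assumptions reconstructed from the reported percentages (36% VR, 22% TP), and the trial may have used a different interval method, so this only roughly reproduces the published difference of 14.2 percentage points (95% CI 0.7 to 27.1).

```python
from math import sqrt

def risk_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% confidence interval for a difference in proportions p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts reconstructed from the reported percentages:
# ~36% of 87 VR students and ~22% of 88 TP students sorted all 11 statements correctly.
diff, lo, hi = risk_diff_ci(31, 87, 19, 88)

margin = -0.13  # predefined one-sided non-inferiority limit: 13 pp in favor of TP
print(f"difference = {diff:.1%}, 95% CI ({lo:.1%} to {hi:.1%})")
print("non-inferior (CI lower bound above -13 pp):", lo > margin)
print("superior (CI lower bound above 0):", lo > 0)
```

Because the lower confidence bound lies above the -13 percentage-point margin, non-inferiority is established; because it also lies above zero, the result additionally supports superiority of VR.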

Availability of data and materials

The datasets used during the current study are available from the corresponding author on reasonable request. They are also available from the Service Provider for the Education Sector (SIKT, reference 305866) repository, where the persistent web link can also be found.



American Society of Anesthesiologists


Body Mass Index


Identification, Situation, Background, Assessment, and Recommendation


National Early Warning Score


System Usability Scale


Traditional paper-based


Virtual reality


  1. Burke JR, Downey C, Almoudaris AM. Failure to rescue deteriorating patients: a systematic review of root causes and improvement strategies. J Patient Saf. 2022;18(1):e140–55.

  2. Rosenthal JL, Doiron R, Haynes SC, Daniels B, Li S-TT. The effectiveness of standardized handoff tool interventions during inter-and intra-facility care transitions on patient-related outcomes: a systematic review. Am J of Med Qual. 2018;33(2):193–206.

  3. Bukoh MX, Siah CJR. A systematic review on the structured handover interventions between nurses in improving patient safety outcomes. J Nurs Manag. 2020;28(3):744–55.

  4. Vineet A, Farnan J. Patient handoffs. UpToDate; 2022, Cited 2023 Nov 23. Internet. Available from

  5. Müller M, Jürgens J, Redaèlli M, Klingberg K, Hautz WE, Stock S. Impact of the communication and patient hand-off tool SBAR on patient safety: a systematic review. BMJ Open. 2018;8(8):e022202.

  6. Leonardsen A-C, Klavestad Moen E, Karlsøen G, Hovland T. A quantitative study on personnel's experiences with patient handovers between the operating room and the postoperative anesthesia care unit before and after the implementation of a structured communication tool. Nurs Rep. 2019;9(1):1–5.

  7. Davis J, Riesenberg LA, Mardis M, Donnelly J, Benningfield B, Youngstrom M, et al. Evaluating outcomes of electronic tools supporting physician shift-to-shift handoffs: a systematic review. J Grad Med Educ. 2015;7(2):174–80.

  8. Raman J, Leveson N, Samost AL, Dobrilovic N, Oldham M, Dekker S, et al. When a checklist is not enough: how to improve them and what else is needed. J Thoracic Cardiovasc Surg. 2016;152(2):585–92.

  9. Bressan V, Mio M, Palese A. Nursing handovers and patient safety: findings from an umbrella review. J Adv Nurs. 2020;76(4):927–38.

  10. Shahid S, Thomas S. Situation, background, assessment, recommendation (SBAR) communication tool for handoff in health care––a narrative review. Safety in Health. 2018;4(1):1–9.

  11. Gordon M, Hill E, Stojan JN, Daniel M. Educational interventions to improve handover in health care: an updated systematic review. Acad Med. 2018;93(8):1234.

  12. Wilbeck J, Cross L, Weaver A, Kennedy BB. Utilization of phone simulations to assess competency within nursing education. Nurse Educ. 2022;47(5):278–82.

  13. Nilsson U, Gruen R, Myles P. Postoperative recovery: the importance of the team. Anesthesia. 2020;75:e158–64.

  14. Meri-Yilan S. A constructivist desktop virtual reality-based approach to learning in a higher education institution. In: Hershey PA, editor. Emerging technologies in virtual learning environments. IGI Global; 2019. p. 258–83.

  15. Shorey S, Ng ED. The use of virtual reality simulation among nursing students and registered nurses: a systematic review. Nurse Educ Today. 2020;98:104662.

  16. Girvan C. What is a virtual world? Definition and classification. Educ Technol Res Dev. 2018;66(5):1087–100.

  17. Kardong-Edgreen SS, Farra SL, Alinier G, Yong HM. A call to unify definitions of virtual reality. Clin Simul Nurs. 2019;31:28–34.

  18. Lee EA-L, Wong KW. Learning with desktop virtual reality: low spatial ability learners are more positively affected. Comput Educ. 2014;79:49–58.

  19. Bauce K, Kaylor MB, Staysniak G, Etcher L. Use of theory to guide integration of virtual reality technology in nursing education: a scoping study. J Prof Nurs. 2023;44:1–7.

  20. Makransky G, Petersen GB. Investigating the process of learning with desktop virtual reality: a structural equation modeling approach. Comput Educ. 2019;134:15–30.

  21. Lioce L, Lopreiato J, Downing D, Chang T, Robertson J, Anderson M, et al. Healthcare simulation dictionary. Rockville, MD: Agency for Healthcare Research and Quality; 2020, Cited 2023 Nov 23. Internet, Available from:

  22. Perez-Gutierrez B, Uribe-Quevedo A, Vega-Medina L, Salgado JS, Jaimes N, Perez O. Immersive and non-immersive VR percutaneous coronary intervention simulation for acute myocardial infarction. SeGAH IEEE; 2020. 2020.

  23. Jacobs C, Foote G, Joiner R, Williams M. A narrative review of immersive technology enhanced learning in healthcare education. Int Med Educ. 2022;1(2):43–72.

  24. Chen F-Q, Leng Y-F, Ge J-F, Wang D-W, Li C, Chen B, et al. Effectiveness of virtual reality in nursing education: meta-analysis. J Med Internet Res. 2020;22(9):e18290.

  25. Plotzky C, Lindwedel U, Sorber M, Loessl B, König P, Kunze C, et al. Virtual reality simulations in nurse education: a systematic mapping review. Nurse Educ Today. 2021;101:104868.

  26. Liaw SY, Ooi SW, Rusli KDB, Lau TC, San Tam WW, Chua WL. Nurse-physician communication team training in virtual reality versus live simulations: randomized controlled trial on team communication and teamwork attitudes. J Med Internet Res. 2020;22(4):e17279.

  27. Bracq M-S, Michinov E, Jannin P. Virtual reality simulation in nontechnical skills training for healthcare professionals: a systematic review. Simul Healthc. 2019;14(3):188–94.

  28. Piaggio G, Elbourne DR, Pocock SJ, Evans SJ, Altman DG, CONSORT Group ft. Reporting of non-inferiority and equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA. 2012;308(24):2594–604.

  29. Andreasen EM, Høigaard R, Berg H, Steinsbekk A, Haraldstad K. Effect of desktop virtual reality preoperative handover communication. Internet. ISRCTN; 2023, Cited 2023 Nov 23. Available from:

  30. Andreasen EM, Høigaard R, Berg H, Steinsbekk A, Haraldstad K. Usability evaluation of the preoperative ISBAR (identification, situation, background, assessment, and recommendation) desktop virtual reality application: qualitative observational study. JMIR Hum Factors. 2022;9(4):e40400.

  31. Berg H, Steinsbekk A. The effect of self-practicing systematic clinical observations in a multiplayer, immersive, interactive virtual reality application versus physical equipment: a randomized controlled trial. Adv Health Sci Educ. 2021;26(2):667–82.

  32. Berg H, Steinsbekk A. Is individual practice in an immersive and interactive virtual reality application non-inferior to practicing with traditional equipment in learning systematic clinical observation? A randomized controlled trial. BMC Med Educ. 2020;20(1):10.

  33. Andreasen EM, Haraldstad K, Høigaard R, Berg H, Steinsbekk A. ISBAR. University of Agder; 2022. Internet. Cited 2023 Nov 23. Available from:

  34. Rezmer J, Begaz T, Treat R, Tews M. Impact of group size on the effectiveness of a resuscitation simulation curriculum for medical students. Teach Learn Med. 2011;23(3):251–5.

  35. Andreasen EM, Slettebø Å, Opsal A. Learning activities in bachelor nursing education to learn pre- and postoperative nursing care—a scoping review. Int J Educ Res. 2022;115:102033.

  36. Steinsbekk A. Virtuell samhandling. NTNU; 2023.

  37. Berg H, Prasolova-Førland E, Steinsbekk A. Developing a virtual reality (VR) application for practicing the ABCDE approach for systematic clinical observation. BMC Medical Education. 2023;23(1):639.

  38. INACSL Standards Committee. INACSL standards of best practice: SimulationSM simulation glossary. Clin Simul Nurs. 2016;12:39–47.

  39. Biggs J, Tang C. Teaching for quality learning at university. Maidenhead: McGraw-Hill Education; 2022.

  40. Frerejean J, van Merriënboer JJ, Kirschner PA, Roex A, Aertgeerts B, Marcellis M. Designing instruction for complex learning: 4C/ID in higher education. Eur J Educ. 2019;54(4):513–24.

  41. Andreasen EM, Høigaard R, Berg H, Steinsbekk A, Haraldstad K. The perceived usability of desktop VR for practicing preoperative handovers using ISBAR among nursing students. Science Talks. 2023;5:100140.

  42. Arnab S, Lim T, Carvalho MB, Bellotti F, De Freitas S, Louchart S, et al. Mapping learning and game mechanics for serious games analysis. Br J Educ Technol. 2015;46(2):391–411.

  43. Lapum JL, Verkuyl M, Hughes M, Romaniuk D, McCulloch T, Mastrilli P. Self-debriefing in virtual simulation. Nurse Educ. 2019;44(6):E6–8.

  44. Brooke J. SUS: a “quick and dirty” usability. Usability Evaluation in Industry. 1996;189(3)

  45. Sauro J. A practical guide to the system usability scale: background, benchmarks & best practices. Denver: Colorado: Measuring Usability LLC; 2011.

  46. Vlachogianni P, Tselios N. Perceived usability evaluation of educational technology using the system usability scale (SUS): a systematic review. J Res Technol Educ. 2021:1–18.

  47. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3):114–23.

  48. Curran V, Fleet L, White S, Bessell C, Deshpandey A, Drover A, et al. A randomized controlled study of manikin simulator fidelity on neonatal resuscitation program learning outcomes. Adv Health Sci Educ. 2015;20(1):205–18.

  49. Mpotos N, De Wever B, Cleymans N, Raemaekers J, Loeys T, Herregods L, et al. Repetitive sessions of formative self-testing to refresh CPR skills: a randomized non-inferiority trial. Resuscitation. 2014;85(9):1282–6.

  50. Kavanagh S, Luxton-Reilly A, Wuensche B, Plimmer B. A systematic review of virtual reality in education. Themes Sci Technol. 2017;10(2):85–119.

  51. Burgess A, van Diggele C, Roberts C, Mellis C. Teaching clinical handover with ISBAR. BMC Med Educ. 2020;20(2):1–8.

  52. Zhao J, Xu X, Jiang H, Ding Y. The effectiveness of virtual reality-based technology on anatomy teaching: a meta-analysis of randomized controlled studies. BMC Med Educ. 2020;20(1):1–10.

  53. Ebner C, Gegenfurtner A. Learning and satisfaction in webinar, online, and face-to-face instruction: a meta-analysis. Front Educ. 2019;4:92.

  54. Van Alten DC, Phielix C, Janssen J, Kester L. Effects of flipping the classroom on learning outcomes and satisfaction: a meta-analysis. Educ Res Rev. 2019;28:100281.

  55. Deslauriers L, McCarty LS, Miller K, Callaghan K, Kestin G. Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. PNAS. 2019;116(39):19251–7.

  56. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112.

  57. Ericsson KA. Acquisition and maintenance of medical expertise: a perspective from the expert-performance approach with deliberate practice. Acad Med. 2015;90(11):1471–86.

  58. Castro R. Blended learning in higher education: trends and capabilities. Educ Inf Technol. 2019;24(4):2523–46.

  59. Ke F. Designing and integrating purposeful learning in game play: a systematic review. Educ Technol Res Dev. 2016;64:219–44.

  60. Havola S, Koivisto J-M, Mäkinen H, Haavisto E. Game elements and instruments for assessing nursing students’ experiences in learning clinical reasoning by using simulation games: an integrative review. Clin Simul Nurs. 2020;46:1–14.

  61. Koivisto J-M, Haavisto E, Niemi H, Haho P, Nylund S, Multisilta J. Design principles for simulation games for learning clinical reasoning: a design-based research approach. Nurse Educ Today. 2018;60:114–20.

  62. Koivisto J-M, Niemi H, Multisilta J, Eriksson E. Nursing students’ experiential learning processes using an online 3D simulation game. Educ Inf Technol. 2017;22:383–98.

  63. Han E, Miller MR, DeVeaux C, Jun H, Nowak KL, Hancock JT, et al. People, places, and time: a large-scale, longitudinal study of transformed avatars and environmental context in group interaction in the metaverse. J Comput-Mediat Comm. 2023;28(2):zmac031.

  64. Yaniafari RP, Rihardini AA. Face-to-face or online speaking practice: a comparison of students’ foreign language classroom anxiety level. J Eng Educ Linguist Stud. 2021;8(1):49–67.

  65. Chen JC, Kent S. Task engagement, learner motivation and avatar identities of struggling English language learners in the 3D virtual world. System. 2020;88:102168.

  66. Chang F, Luo M, Walton G, Aguilar L, Bailenson J. Stereotype threat in virtual learning environments: effects of avatar gender and sexist behavior on women's math learning outcomes. Cyberpsychol Behav Soc Netw. 2019;22(10):634–40.

  67. Zipp SA, Craig SD. The impact of a user’s biases on interactions with virtual humans and learning during virtual emergency management training. Educ Technol Res Dev. 2019;67:1385–404.

  68. Huang K-T, Ball C, Francis J, Ratan R, Boumis J, Fordham J. Augmented versus virtual reality in education: an exploratory study examining science knowledge retention when using augmented reality/virtual reality mobile applications. Cyberpsychol Behav Soc Netw. 2019;22(2):105–10.

  69. Berki B. Desktop VR as a virtual workspace: a cognitive aspect. Acta Polytech Hungarica. 2019;16(2):219–31.

  70. Kalyuga S, Chandler P, Sweller J. Incorporating learner experience into the design of multimedia instruction. J Educ Psychol. 2000;92(1):126.

  71. Berg H, Båtnes R, Steinsbekk A. Changes in performance during repeated in-situ simulations with different cases. BMJ Simul Technol Enhanc Learn. 2020: bmjstel-2019-000527;

  72. Liaw SY, Carpio GAC, Lau Y, Tan SC, Lim WS, Goh PS. Multiuser virtual worlds in healthcare education: a systematic review. Nurse Educ Today. 2018;65:136–49.

  73. Creutzfeldt J, Hedman L, Medin C, Heinrichs WL, Felländer-Tsai L. Exploring virtual worlds for scenario-based repeated team training of cardiopulmonary resuscitation in medical students. J Med Internet Res. 2010;12(3):e1426.

  74. Moustgaard H, Clayton GL, Jones HE, Boutron I, Jørgensen L, Laursen DR, et al. Impact of blinding on estimated treatment effects in randomized clinical trials: meta-epidemiological study. BMJ-Brit Med J. 2020;368:l6802.


The authors would like to thank the second-year undergraduate nursing students for their participation in this study. We also would like to thank the staff at the universities who helped with the intervention and data collection. Thanks also to computer programmer Håvard Snarby for helping us create the application. We are also grateful to Ellinor Leistad for helping us score the data.


The University of Agder funded the study for EMA’s doctorate, and the Norwegian University of Science and Technology funded the application with financial support from the Research Council of Norway (260370).

Author information

Authors and Affiliations



All the authors helped design the study. EMA and HB collected the data. EMA, HB, and AS analyzed and interpreted the data. All the authors helped write the manuscript and read and approved the final version.

Corresponding author

Correspondence to Eva Mari Andreasen.

Ethics declarations

Ethics approval and consent to participate

The study protocol was approved by the Service Provider for the Education Sector (SIKT, reference 305866). Study permission was obtained from the head of the nursing study program at the Department of Health and Nursing Sciences at the University of Agder, the Faculty Ethics Committee at the University of Agder, and the head of the nursing study program at the Department of Health Sciences, Norwegian University of Science and Technology. The participants were informed both in writing and orally of their rights, the study’s purpose, and the requirement to provide consent to participate. The participants were given the opportunity to ask questions and seek clarification before providing their consent. Informed consent was obtained from all participating students. All methods were carried out in accordance with relevant guidelines and regulations to ensure ethics and data security.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Presentation of the Preoperative ISBAR Desktop VR Application with the desktop virtual reality feature description and classification according to pedagogic- and game elements.

Additional file 2.

ISBAR practice – sorting and role play.

Additional file 3.

Individual final assignment and scoring rules.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Andreasen, E.M., Berg, H., Steinsbekk, A. et al. The effect of using desktop VR to practice preoperative handovers with the ISBAR approach: a randomized controlled trial. BMC Med Educ 23, 983 (2023).
