Teaching Medical Students Rapid Ultrasound for Shock and Hypotension (RUSH): learning outcomes and clinical performance in a proof-of-concept study

Abstract

Background

Point-of-care ultrasound (POCUS) is a critical diagnostic tool in various medical settings, yet its instruction in medical education is inconsistent. The Rapid Ultrasound for Shock and Hypotension (RUSH) protocol is a comprehensive diagnostic tool, but its complexity poses challenges for teaching and learning. This study evaluates the effectiveness of a single-day training in RUSH for medical students by assessing their performance in clinical scenarios.

Methods

In this prospective single-center observational proof-of-concept study, 16 medical students from Saarland University Medical Center underwent a single-day training in RUSH, followed by evaluations in clinical settings and on a high-fidelity simulator. Performance was assessed using a standardized scoring tool and time to complete the RUSH exam. Knowledge gain was measured with pre- and post-training written exams, and diagnostic performance was evaluated with an objective structured clinical examination (OSCE).

Results

Students demonstrated high performance on RUSH exam views across patients (median performance: 85–87%), and their scanning times improved, although the improvement was not statistically significant. They performed better on simulators than on live patients. Written exam scores improved significantly post-training, suggesting a gain in theoretical knowledge. However, more than a third of students could not complete the RUSH exam within five minutes on live patients.

Conclusions

Single-day RUSH training improved medical students’ theoretical knowledge and simulator performance, but translating these skills to clinical settings proved challenging. The findings suggest that while short-term training can be beneficial, it may not suffice for clinical proficiency. This study underscores the need for structured and possibly longitudinal training programs to ensure skill retention and clinical applicability.

Introduction

Since its introduction to the clinical armamentarium in the early 2000s, point-of-care ultrasound (POCUS) has matured into an essential diagnostic tool in perioperative, emergency, and intensive care settings [1,2,3,4]. Despite its broad clinical applicability, formal ultrasound instruction in medical education has been scarce and highly variable [5,6,7,8]. For example, only half of US medical schools have implemented POCUS training in their curricula [9]. Therefore, structured training in POCUS across medical education is needed.

POCUS is a quick, non-invasive, bedside exam that is useful in the rapid assessment of critically ill patients [10, 11]. Several algorithms were developed to standardize ultrasound examinations and to create a common language to communicate diagnostic signs. One such example is the Focused Assessment with Sonography for Trauma (FAST) exam, used to evaluate pericardial, intraperitoneal, and pleural spaces for free fluid [12].

While FAST is easy to perform, it cannot diagnose several non-traumatic causes of hemodynamic instability that do not necessarily result in free fluid in the peritoneal cavity. Rapid Ultrasound for Shock and Hypotension (RUSH) is a separate diagnostic algorithm that includes additional sonographic views to assess a greater number of clinically relevant causes of hemodynamic instability than FAST [13, 14]. RUSH has demonstrated good diagnostic accuracy [15] but is associated with lower teaching success than FAST, most likely due to its increased complexity (9–11 views versus 4–5) [16]. Therefore, there is a further need to assess the learning outcomes of teaching RUSH to medical students.

Ultrasound teaching courses often use simulation; in practice, however, patients are often more difficult to scan, with common conditions such as obesity impeding scanning performance [17]. The evaluation of learning outcomes should therefore assess scanning performance on real patients in a clinical setting.

This prospective observational study assessed the utility of a single-day training in RUSH with a focus on real-world scanning performance. We evaluated performance via both a novel performance score for RUSH and by measuring the time needed to perform the examination. Secondarily, we evaluated students’ gain in knowledge after course participation as well as their diagnostic performance with an objective structured clinical examination (OSCE) using an ultrasound simulator.

Methods

This study was approved by the responsible ethics committee (approval date: 24th January 2022, reference number: 03/22, Ethikkommission der Ärztekammer des Saarlandes, Saarbrücken, Germany). Written informed consent was obtained from each study subject. Our report adheres to the STROBE criteria and the study protocol is provided in supplementary file 1.

Study design

This was a prospective single-center observational proof-of-concept study to evaluate the clinical performance of medical students and teaching outcomes after a single-day training in ultrasound for medical emergencies according to the RUSH protocol (Fig. 1). Ultrasound instruction and subsequent evaluation took place over three days and were conducted with 16 medical students at the Saarland University Medical Center, Saarbrücken, Germany. All medical students were eligible to participate, except for students with extensive prior ultrasound experience. The final course structure was optimized based on feedback obtained during a preceding condensed course conducted with 11 final-year medical students from our department. The condensed course included all educational parts within one day but did not include practical and theoretical exams.

Fig. 1

Flow chart of the educational course structure. After one day of instruction, students were assessed in a variety of simulated and clinical environments over the following two course days.

Course structure

The overall course structure is visualized in Fig. 1. Participants received a pre-reading providing a detailed description of the RUSH exam [14] at least one week prior to the course. The team of academic tutors was composed of one anesthesiology resident and three board-certified anesthesiologists, all of whom had extensive training and clinical experience in perioperative and intensive care ultrasound diagnostics. At the beginning of the course, entry knowledge was assessed by a written exam (Supplementary File 2).

The first day of training included 4 h of theoretical lectures and practical demonstrations of the RUSH exam in the morning, followed by 4 h of practical scanning exercises. For the practical exercises, participants were divided into 4 groups of 4 students each, practicing the RUSH exam on each other for 3 h and on a high-fidelity ultrasound simulator (Simbionix Ultrasound Mentor, Surgical Science, Göteborg, Sweden) for 1 h. Every participant was scanned at least three times by every group member, resulting in at least 9 scans performed by each participant on another participant. The groups rotated every hour to a new tutor, with a 10-min break between rotations. All groups spent 1 h scanning the ultrasound simulation model. At least 2 scans were performed on the simulator: first with normal anatomy and then with a simulated pathology. Every participant had to complete at least one of the preinstalled simulated training scenarios for the detection of typical pathologies with the RUSH exam (e.g., pleural effusion, cardiac tamponade, abdominal bleeding). All scans were performed unblinded to the group, with each participant contributing to an open problem-based learning discussion whenever difficulties in displaying the required ultrasound views arose. Tutors provided oral as well as hands-on feedback to guide the participants. All participants were able to perform the RUSH exam on healthy subjects within 5 min at the end of course day 1.

On the second day, a short recapitulation of the RUSH exam (about 10 min) preceded the evaluation of practical scanning performance in clinical scenarios. Participants performed the RUSH exam on patients in the intensive care unit (ICU) or postoperative recovery room. Patients whose scanning areas were extensively covered by surgical dressings and those who declined participation were excluded; no patient-specific data were collected. Participants were encouraged to perform 3 RUSH exams, each on a different patient, within 5 min per exam. A maximum scanning time of 10 min was granted per scan, and performance was rated based on standardized criteria (Supplementary Files 3 and 4). After completion of the scan or after reaching the maximum scanning time, oral and hands-on feedback was provided to improve those ultrasound views that the participant found challenging. No feedback or advice was provided during the first scanning attempt on each patient.

On the third day, the practical performance of the participants was evaluated by an objective structured clinical examination (OSCE) on an ultrasound simulator (Supplementary File 5). Exit theoretical knowledge was evaluated through a repetition of the written entry exam (Supplementary File 2).

Outcomes

Participants’ (1) scanning performance score and (2) time needed to complete the examination in a clinical setting were co-primary outcomes in this analysis. A review of the literature during the planning phase of the study identified potential assessment tools for ultrasound skills (e.g., the Objective Structured Assessment of Ultrasound Skills (OSAUS) [18] and the Ultrasound Competency Assessment Tool (UCAT) [19]). However, these tools did not include a detailed rating of scanning performance and time and were thus deemed unsuitable for our study aims. We therefore developed a new performance score for the RUSH exam, similar to a previous scoring system used to evaluate simulation-based training of thoracentesis in medical students [20]. Although we did not formally validate our score, we performed test runs during the preceding course with final-year medical students to confirm its applicability.

Primary outcome 1: performance score

Each ultrasound examination was scored by supervising physicians based on a standardized protocol (Supplementary File 4). At the discretion of the evaluating physician, views that were impossible to obtain (e.g., due to dressings) were excluded from the maximum number of achievable points. Each ultrasound view included in the RUSH protocol was rated as “fully acquired” (2 points), “partially acquired” (1 point), “not acquired” (0 points), or “not possible”. Rating was guided by predefined objective criteria for each ultrasound view (Supplementary File 3). Only one examination was performed and scored per patient. Results were expressed as the percentage of achieved points out of all achievable points.
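To make the scoring rule concrete, the following minimal R sketch (an illustration, not the study’s published analysis code; the function name and example ratings are hypothetical) computes the percentage score while excluding “not possible” views from the denominator:

```r
# Hypothetical sketch of the scoring rule described above.
# Ratings per view: 2 = fully acquired, 1 = partially acquired,
# 0 = not acquired, NA = "not possible" (excluded from the denominator).
score_rush_exam <- function(view_ratings) {
  achievable <- 2 * sum(!is.na(view_ratings))  # 2 points per ratable view
  achieved <- sum(view_ratings, na.rm = TRUE)
  100 * achieved / achievable                  # percentage of achievable points
}

# Example: 9 views, one obscured by a dressing ("not possible")
score_rush_exam(c(2, 2, 1, 2, 0, 2, 2, NA, 1))  # returns 75
```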

Primary outcome 2: performance time

Performance time was defined as the time needed to complete all views of the RUSH exam, measured to the second. Timing was stopped as soon as the student had captured each of the 9 graded views or indicated that they had completed the exercise to the best of their ability.
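As an illustration of how such timings can be summarized (a sketch with hypothetical values, not data from the study):

```r
# Convert minute/second pairs to seconds and summarize (hypothetical times)
to_seconds <- function(m, s) 60 * m + s

scan_times <- c(to_seconds(6, 30), to_seconds(5, 16),
                to_seconds(7, 8), to_seconds(4, 31))
median(scan_times)                   # median performance time in seconds
quantile(scan_times, c(0.25, 0.75))  # interquartile range
```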

Secondary outcome 1: score in the practical exam (OSCE)

The obtainable OSCE score comprised points for scanning performance (rated as in primary outcome 1) and points for the student’s diagnostic and documentation skills during simulation. In addition to having their practical scanning performance rated on the high-fidelity ultrasound simulator, students were prompted to make a diagnosis for simulated case scenarios based on the views obtained during the RUSH exam. Students also wrote a medical report describing their findings. These reports were rated by evaluators as “good” (2 points), “moderate” (1 point), or “insufficient” (0 points). The result was expressed as a percentage of the maximum achievable points (Supplementary File 5).
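A minimal sketch of how such a composite percentage could be computed is shown below; the exact point structure is defined in Supplementary File 5, so the breakdown here (views plus a report rating, 2 points each) is an assumption for illustration only:

```r
# Illustrative OSCE composite score; the weighting is an assumption,
# not the exact scheme from Supplementary File 5.
osce_percentage <- function(view_ratings, report_rating) {
  max_points <- 2 * length(view_ratings) + 2  # 2 points per view + report
  100 * (sum(view_ratings) + report_rating) / max_points
}

osce_percentage(rep(2, 9), report_rating = 2)  # perfect performance -> 100
```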

Secondary outcome 2: scores in a written exam

A written exam was administered both before and after the course in a large auditorium, with each student individually logging into the institutional digital online examination platform (Moodle, Saarland University, Germany); time was restricted to 25 min. The exam consisted of 16 predominantly multiple-choice questions (Supplementary File 2). The results were expressed as a percentage of the maximum achievable points. The correct answers were neither communicated nor individually discussed during the course before the final exam, and the two exams were taken 5 days apart.

Statistical analysis

Data were collected with Excel 2019 (Microsoft, Redmond, USA). Statistical analyses were carried out with R (R Core Team, 2023) using the tidyverse package (Wickham et al. 2019). Data are presented as means (SD), medians (interquartile range), or frequencies (percentages) as appropriate. Performance-related scores are expressed as the percentage of achieved points out of the achievable total. We performed non-parametric pairwise comparisons with the Wilcoxon rank-sum or signed-rank test, adjusted for multiple comparisons. A two-sided p < 0.05 was considered statistically significant. Due to the descriptive nature of this study, no a priori sample size estimation was performed. The highest possible number of participants was included based on available teaching resources and considerations of suitable group sizes.
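For reference, a self-contained sketch of the paired non-parametric comparison described above (simulated scores, not study data; the multiple-comparison adjustment method shown is an assumption, as the text does not name one):

```r
library(tidyverse)

set.seed(42)
# Simulated paired exam scores for 16 students (illustration only)
scores <- tibble(
  student = 1:16,
  entry = pmin(100, round(rnorm(16, mean = 85, sd = 12))),
  final = pmin(100, entry + sample(0:15, 16, replace = TRUE))
)

# Paired Wilcoxon signed-rank test, as used for pre/post comparisons
wilcox.test(scores$final, scores$entry, paired = TRUE)

# Adjusting a set of pairwise p-values for multiple comparisons
# (Holm method assumed; p-values here are placeholders)
p.adjust(c(0.002, 0.013, 0.22), method = "holm")
```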

Results

The study participants’ characteristics are presented in Table 1.

Table 1 Study participants’ characteristics

Primary outcome 1: performance score

Participants performed equally well on each of the three patients and were able to obtain most of the RUSH exam views (median [interquartile range (IQR)] performance in patient A: 87 [83, 93]%; patient B: 87 [79, 94]%; and patient C: 85 [78, 89]%, p = 0.554; Fig. 2A).

Fig. 2

Scanning performance and time. Panel A: Participants performed equally well on all three patients and significantly better on simulators than on real patients. Panel B: Participants had similar scanning times on all patients but were significantly faster on the simulator. The dashed horizontal line indicates the time limit for clinical utility (as determined by the investigators).

Primary outcome 2: performance time

Participants completed the RUSH exam in a similar amount of time on each of the three live patients. Students took a median [IQR] of 6 min (m) 30 s (s) [5 m 16 s, 7 m 8 s] on patient A, 5 m 29 s [4 m 36 s, 6 m 18 s] on patient B, and 4 m 31 s [4 m 1 s, 5 m 18 s] on patient C. Between the first and third scan (patient A to C), median scanning time improved by 1 m 59 s, although this difference was not statistically significant (95% confidence interval (95%CI): 21 s, 2 m 37 s; p = 0.07; Fig. 2B).

Secondary outcome 1: score in the practical exam (OSCE)

Median [IQR] scores in the performance section (i.e., obtaining the images) and the diagnostic section (i.e., documentation and diagnosis) of the final OSCE exam were 100 [93, 100]% and 100 [98, 100]%, respectively. Participants performed significantly better on simulators than on live patients (p < 0.02 for all patients compared to the simulator; Fig. 2A). Participants were also significantly faster on the simulator than on their first and second, but not third, live patient (patient A versus simulator, p = 0.002; patient B versus simulator, p = 0.013; patient C versus simulator, p = 0.22; Fig. 2B).

Secondary outcome 2: scores in a written exam

Participants’ multiple-choice written exam scores improved after course participation (p = 0.001), with a median [IQR] of 88 [74, 94]% on the entry exam and 100 [94, 100]% on the final exam (Fig. 3A). Entry exam performance varied widely, ranging from 36 to 100%, but all students scored higher than 85% on the final exam (Fig. 3B). Students had a median [IQR] exam performance improvement of 12 [6, 19]% after course participation.

Fig. 3

Students’ multiple-choice exam performance scores pre- and post-intervention. Panel A: Student exam performance improved after instruction (p = 0.001, Wilcoxon signed-rank test). Panel B: Students improved by a median [IQR] difference of 12 [6, 19]%. The outlier (+64%) was a first-year student.

Discussion

The RUSH exam can be an invaluable tool in the diagnostic toolbox of physicians in perioperative, emergency, and intensive care settings [13,14,15, 21]. We evaluated a single-day training in RUSH with a focus on scanning performance under clinical conditions. Despite the brevity of the course, most students were able to capture more than 80% of the RUSH exam’s ultrasound views with sufficient quality on each of the three patients. In addition, two-thirds of the students succeeded in performing the RUSH exam within a clinically relevant time frame of 5 min.

While scanning performance did not improve with each patient scan, the time required to complete the RUSH exam decreased, although this improvement was not statistically significant. The study team agreed on a clinical utility threshold of 5 min for the RUSH exam. Though most students achieved this by their third patient, more than a third did not complete the exam within that time. Our course participants performed at least 14 RUSH scans prior to the final exam: 9 on healthy subjects, 2 on a simulator, and 3 on patients. To reach adequate scanning and diagnostic skills, a much higher number of repetitions (50–75) may be needed to master ultrasound examinations used in emergency medicine [22]. However, for less complex ultrasound examinations, learning plateaus have been observed as early as after 10 to 15 repetitions [23]. Although the RUSH exam requires views from several different body areas (i.e., cardiac, abdominal, and lung ultrasound), it only includes a subset of the most important ultrasound views for each body region, which suggests that a lower number of repetitions is needed to reach a learning plateau, at least in comparison to more detailed examinations. Though our course served as a sufficient primer for the RUSH exam in medical students, many more scans are likely required for mastery.

Medical students were slower and performed worse on real patients than on the simulator. While all but one student completed the exam in under 5 min on the simulator, a third needed longer on real patients. This reveals current limitations of ultrasound simulators in reproducing real-world clinical conditions. However, while ultrasound simulators do not replace real-world experience, they are useful tools to demonstrate principles of ultrasound examinations and to build early learners’ ultrasound skills before they approach real patients [24,25,26,27,28,29,30]. Training to mastery on the simulator was found to lower the number of repetitions needed to master abdominal ultrasound examinations on real patients in 25 first-year residents randomized to simulation-based or conventional clinical training [25]. Simulation-based training thus helps to shorten the steep initial part of a trainee’s learning curve while lowering the burden on patients resulting from initial scanning attempts by ultrasound novices.

The ability of simulator models to discriminate between levels of scanning proficiency may vary substantially. A recent report showed that a simulation-based assessment of scanning skills discriminated reasonably well between the performance of novices and experts in abdominal ultrasound [31]. In contrast, most participants in our course scored 100% on the simulator. The high median performance of students on our simulator suggests that the model is too easy to scan and fails to produce enough spread in scores to discriminate between skill levels. Given the significant deviation between performance under clinical and simulated conditions, our results suggest that ultrasound scanning skills should be evaluated on live patients, or at least on human models, to obtain meaningful estimates of scanning performance.

Participation in our course led to a significant gain in theoretical knowledge, as assessed by a written exam before and after the course, an important tool for quality assurance of the course’s educational value. The fact that some but not all students scored 100% in both entry and final exams suggests reasonable discrimination among students’ performance levels. Scores on the written exam increased by a relative median difference of 14%, which is lower than in a similar report on a course in RUSH and eFAST [16]. In the study by Cevik et al., scores in written exams more than doubled after a course in RUSH in final-year medical students (RUSH: 166%, eFAST: 114% relative increase in performance) [16]. Also in contrast to our findings, only 47% of students in that study passed the exam for RUSH, compared to 79% passing the exam for eFAST [16]. The better performance on the written exams in our study could be explained by the fact that we provided a pre-reading script, leading to higher baseline knowledge. In addition, our course included 4 h (versus 1 h) of theoretical lectures, 4 h (versus 2 h) of practical scanning training, simulator training, and clinical hands-on scanning sessions [16]. The increase in scores after participation in our course suggests that the amount of educational content and hands-on experience was sufficient to achieve a reasonable gain in theoretical knowledge.

The core educational components of our course, such as lectures and hands-on scanning time on participants and the simulator, were conducted in a single day. The reasonable improvement in learning outcomes and clinical performance suggests that the course length was sufficient as a primer for most students. Similarly, Cevik et al. reported a high success rate of 84% in passing an OSCE for eFAST after a 1-h theoretical lecture and 2 h of practical scanning in 54 final-year medical students [32]. In contrast, Boniface et al. demonstrated that a longitudinal curriculum for internal medicine residents resulted in considerably better skill retention for ultrasound procedures compared to a single-day workshop, suggesting that ongoing education may be more effective for long-term competency [33]. Future studies could explore longitudinal follow-up to evaluate long-term skill retention or include a group exposed to longitudinal refresher courses. Taken together, short-term learning outcomes appear to be reasonably good after a single-day structured training in the RUSH exam, but long-term skill retention remains unclear and could benefit from longitudinal inclusion of refresher courses in academic curricula.

Limitations

This study evaluated students’ performance under clinical conditions, which directly mirrors the real-world scenarios in which the RUSH exam would be employed; however, several limitations merit attention. First, all medical students were from a single institution, possibly limiting the external validity of the results; however, we did enroll students from different semesters of clinical training. Second, a single-day training session may not be sufficient to produce lasting competency in the RUSH exam, and retention of skills over time may be more crucial for clinical applicability; a blend of longitudinal training with periodic reinforcement may emerge as a superior strategy. Third, the small study population and the fact that students had to actively apply for the course, making them a highly motivated sub-population of medical students, severely limit the generalizability of our results. Finally, the scoring metric we used to evaluate performance was not previously validated.

Conclusion

Medical students showed satisfactory performance in both theoretical and practical evaluations following a one-day RUSH training session. However, a notable disparity emerged between simulated and actual scanning environments, with worse performance on real patients. This study enhances our understanding of the effectiveness of brief RUSH training for medical students, especially its applicability in real clinical scenarios. However, critical questions persist regarding long-term skill retention and the adaptability of the training program across diverse educational and clinical settings. Future investigations should prioritize addressing these gaps, potentially through multi-institutional or longitudinal studies, to offer a more comprehensive assessment of the impact of RUSH exam training in medical education.

Data availability

Data and the instructional slide deck (5 presentations: introduction, basics, shock, RUSH, ultrasound pathology quiz) are available from the corresponding author upon reasonable request. The R code of the statistical analysis can be found in the GitHub online repository of WMP: https://github.com/pattwm16/rushpro-us.

References

  1. Díaz-Gómez JL, Mayo PH, Koenig SJ. Point-of-Care Ultrasonography. N Engl J Med. 2021;385:1593–602.

  2. Whitson MR, Mayo PH. Ultrasonography in the emergency department. Crit Care. 2016;20:1–8.

  3. Ramsingh D, Bronshteyn YS, Haskins S, Zimmerman J. Perioperative point-of-care ultrasound: from concept to application. Anesthesiology. 2020:908–16.

  4. Campbell SJ, Bechara R, Islam S. Point-of-care Ultrasound in the Intensive Care Unit. Clin Chest Med. 2018;39:79–97.

  5. Wolf R, Geuthel N, Gnatzy F, Rotzoll D. Undergraduate ultrasound education at german-speaking medical faculties: a survey. GMS J Med Educ. 2019;36:1–23.

  6. Bahner DP, Goldman E, Way D, Royall NA, Liu YT. The state of ultrasound education in U.S. Medical schools: results of a national survey. Acad Med. 2014;89:1681–6.

  7. Feilchenfeld Z, Dornan T, Whitehead C, Kuper A. Ultrasound in undergraduate medical education: a systematic and critical review. Med Educ. 2017;51:366–78.

  8. Rajamani A, Shetty K, Parmar J, Huang S, Ng J, Gunawan S, et al. Longitudinal competence programs for Basic Point-of-care ultrasound in critical care: a systematic review. Chest. 2020;158:1079–89.

  9. Russell FM, Zakeri B, Herbert A, Ferre RM, Leiser A, Wallach PM. The state of point-of-care Ultrasound Training in Undergraduate Medical Education: findings from a National Survey. Acad Med. 2022;97:723–7.

  10. Shokoohi H, Boniface KS, Pourmand A, Liu YT, Davison DL, Hawkins KD, et al. Bedside ultrasound reduces diagnostic uncertainty and guides resuscitation in patients with undifferentiated hypotension. Crit Care Med. 2015;43:2562–9.

  11. Pontet J, Yic C, Díaz-Gómez JL, Rodriguez P, Sviridenko I, Méndez D et al. Impact of an ultrasound-driven diagnostic protocol at early intensive-care stay: a randomized-controlled trial. Ultrasound J. 2019;11.

  12. Pace J, Arntfield R. Focused assessment with sonography in trauma: a review of concepts and considerations for anesthesiology. Can J Anesth. 2018;65:360–70.

  13. Seif D, Perera P, Mailhot T, Riley D, Mandavia D. Bedside ultrasound in resuscitation and the rapid ultrasound in shock protocol. Crit Care Res Pract. 2012;2012.

  14. Weingart SD, Duque D, Nelson B. The RUSH exam: Rapid Ultrasound for Shock and Hypotension. EMCrit Project. 2008. https://emcrit.org/rush-exam.

  15. Bagheri-Hariri S, Yekesadat M, Farahmand S, Arbab M, Sedaghat M, Shahlafar N, et al. The impact of using RUSH protocol for diagnosing the type of unknown shock in the emergency department. Emerg Radiol. 2015;22:517–20.

  16. Cevik AA, Cakal ED, Abu-Zidan F. Point-of-care Ultrasound Training during an Emergency Medicine Clerkship: a prospective study. Cureus. 2019;11.

  17. Brahee DD, Ogedegbe C, Hassler C, Nyirenda T, Hazelwood V, Morchel H, et al. Body mass index and abdominal ultrasound image quality: a pilot survey of sonographers. J Diagn Med Sonography. 2013;29:66–72.

  18. Todsen T, Tolsgaard MG, Olsen BH, Henriksen BM, Hillingsø JG, Konge L, et al. Reliable and valid assessment of point-of-care ultrasonography. Ann Surg. 2015;261:309–15.

  19. Bell C, Hall AK, Wagner N, Rang L, Newbigging J, McKaigney C. The Ultrasound Competency Assessment Tool (UCAT): development and evaluation of a Novel competency-based Assessment Tool for Point-of-care Ultrasound. AEM Educ Train. 2021;5:1–12.

  20. Jiang G, Chen H, Wang S, Zhou Q, Li X, Chen K et al. Learning curves and long-term outcome of simulation-based thoracentesis training for medical students. BMC Med Educ. 2011;11.

  21. Perera P, Mailhot T, Riley D, Mandavia D. The RUSH exam: Rapid Ultrasound in SHock in the evaluation of the critically lll. Emerg Med Clin North Am. 2010;28:29–56.

  22. Blehar DJ, Barton B, Gaspari RJ. Learning curves in emergency ultrasound education. Acad Emerg Med. 2015;22:574–82.

  23. Breunig M, Hanson A, Huckabee M. Learning curves for point-of-care ultrasound image acquisition for novice learners in a longitudinal curriculum. Ultrasound J. 2023;15:1–8.

  24. Lewiss RE, Hoffmann B, Beaulieu Y, Phelan MB. Point-of-care Ultrasound Education. J Ultrasound Med. 2014;33:27–32.

  25. Østergaard ML, Rue Nielsen K, Albrecht-Beste E, Kjær Ersbøll A, Konge L, Bachmann Nielsen M. Simulator training improves ultrasound scanning performance on patients: a randomized controlled trial. Eur Radiol. 2019;29:3210–8.

  26. Østergaard ML, Ewertsen C, Konge L, Albrecht-Beste E, Bachmann Nielsen M. Simulation-based abdominal ultrasound training– a systematic review. Ultraschall Der Medizin-European J Ultrasound. 2016;37:253–61.

  27. Stefanidis D, Scerbo MW, Montero PN, Acker CE, Smith WD. Simulator training to automaticity leads to improved skill transfer compared with traditional proficiency-based training: a randomized controlled trial. Ann Surg. 2012;255:30–7.

  28. Jensen JK, Dyre L, Jørgensen ME, Andreasen LA, Tolsgaard MG. Collecting Validity evidence for Simulation-Based Assessment of Point‐of‐Care Ultrasound skills. J Ultrasound Med. 2017;36:2475–83.

  29. Taksøe-Vester C, et al. Simulation-based ultrasound training in obstetrics and gynecology: a systematic review and meta-analysis. J Ultrasound. 2020;42:e42–54.

  30. Simon R, Petrisor C, Bodolea C, Golea A, Gomes SH, Antal O, et al. Efficiency of Simulation-based learning using an ABC POCUS Protocol on a high-Fidelity Simulator. Diagnostics. 2024;14:173.

  31. Teslak KE, Post JH, Tolsgaard MG, Rasmussen S, Purup MM, Friis ML. Simulation-based assessment of upper abdominal ultrasound skills. BMC Med Educ. 2024;24:1–7.

  32. Cevik AA, Noureldin A, El Zubeir M, Abu-Zidan FM. Assessment of EFAST training for final year medical students in emergency medicine clerkship. Turk J Emerg Med. 2018;18:100–4.

  33. Boniface MP, Helgeson SA, Cowdell JC, Simon LV, Hiroto BT, Werlang ME, et al. A longitudinal curriculum in Point-Of-Care Ultrasonography improves medical knowledge and psychomotor skills among Internal Medicine residents. Adv Med Educ Pract. 2019;10:935–42.


Acknowledgements

We are grateful for the continuous efforts of the educational staff of the Medical Faculty of Saarland University in supporting academic educators by improving the teaching and learning environment and providing access to the ultrasound simulator and ultrasound machines. This educational project emerged from the medical faculty’s course for academic teaching, “Teach the Teacher”. We acknowledge Dr. Scott Weingart and colleagues for developing the RUSH exam (https://emcrit.org/rush-exam/).

Funding

The conduct of this study was financed solely from institutional and/or departmental funds. Lukas M. Müller-Wirtz acknowledges support by the German Research Foundation (DFG) within the Walter-Benjamin-Fellowship program (reference no.: MU 4688-1-1). Article Processing Charges (APC) were funded via the Projekt DEAL agreement (https://deal-konsortium.de/en).

Open Access funding enabled and organized by Projekt DEAL.

Author information

Contributions

LMMW and DC wrote the study protocol, applied for ethical approval, and were the driving forces in conducting the study. WMP and SO helped with the statistical analysis, interpretation of the results, and manuscript writing. AB, AM, TV, and UB helped with writing the study protocol, conducting the study, and interpreting the results. TV supervised the project and allocated appropriate departmental resources to enable the conduct of the educational project. All authors critically reviewed and revised the final version of the manuscript.

Corresponding author

Correspondence to Lukas Martin Müller-Wirtz.

Ethics declarations

Ethics approval

This study was approved by the responsible ethics committee (approval date: 24th January 2022, reference number: 03/22, Ethikkommission der Ärztekammer des Saarlandes, Saarbrücken, Germany).

Consent to participate

Written informed consent was obtained from each study subject.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Müller-Wirtz, L.M., Patterson, W.M., Ott, S. et al. Teaching Medical Students Rapid Ultrasound for Shock and Hypotension (RUSH): learning outcomes and clinical performance in a proof-of-concept study. BMC Med Educ 24, 360 (2024). https://doi.org/10.1186/s12909-024-05331-3
