Open Access

Participation in EHR based simulation improves recognition of patient safety issues

  • Laurel S Stephenson1,
  • Adriel Gorsuch1,
  • William R Hersh2,
  • Vishnu Mohan2 and
  • Jeffrey A Gold1Email author
BMC Medical Education 2014, 14:224

https://doi.org/10.1186/1472-6920-14-224

Received: 22 April 2014

Accepted: 18 September 2014

Published: 21 October 2014

Abstract

Background

Electronic health records (EHRs) are becoming increasingly integrated into the clinical environment. With their rapid proliferation, a number of studies have documented an increase in adverse patient safety events attributable to the EHR user interface. In response, greater attention has been placed on novel educational activities that incorporate use of the EHR. The ICU presents particular challenges to EHR integration given the vast amount of data recorded each day, all of which must be interpreted to deliver safe and effective care. We previously used a novel EHR-based simulation exercise to demonstrate that everyday users fail to recognize a majority of patient safety issues in the ICU. We now sought to determine whether participation in the simulation improves recognition of these issues.

Methods

Two ICU cases were created in our EHR simulation environment. Each case contained 14 safety issues, which differed in content but shared common themes. Residents were given 10 minutes to review a case followed by a presentation of management changes. Participants were given an immediate debriefing regarding missed issues and strategies for data gathering in the EHR. Repeated testing was performed in a cohort of subjects with the other case at least 1 week later.

Results

A total of 116 subjects were enrolled, with 25 subjects undergoing repeat testing. There was no difference between cases in recognition of patient safety issues (39.5% vs. 39.4%). Baseline performance for subjects who participated in repeat testing was no different from that of the cohort as a whole. For both cases, recognition of safety issues was significantly higher among repeat participants than among first-time participants. Further, individual performance improved from 39.9% to 63.6% (p = 0.0002), a result independent of the order in which the cases were employed. The degree of improvement was inversely related to baseline performance. Repeat participants also demonstrated higher rates of recognition of changes in vital signs, misdosing of antibiotics and oversedation compared to first-time participants.

Conclusion

Participation in EHR simulation improves EHR use and identification of patient safety issues.

Background

The use of electronic health records (EHRs) continues to grow in the US. The reasons are myriad, and include financial incentives related to the American Recovery and Reinvestment Act as well as a body of literature suggesting benefits of EHR use such as improved safety, improved efficiency and increased adherence to guideline-based care [1–3]. The passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 has promoted the adoption of EHRs, and incentivization of EHR meaningful use has greatly expanded the role of EHRs in acute care hospitals [4]. From 2008 to 2012, use of any EHR in US hospitals increased from 9.4% to 44.4%, and the percentage of hospitals adopting a comprehensive EHR (defined as a system that includes electronic patient demographics, computerized provider order entry, results management and decision support) rose from 1.6% to 16.9% over the same period [5].

As EHRs become increasingly prevalent in healthcare, we continue to see a number of unintended consequences associated with their use. In 2005, Han et al. described the implementation of a computerized provider order entry (CPOE) system in a pediatric ICU; during the post-implementation study period they noted an increase in mortality associated with a change in ICU workflows [6]. There are a multitude of reasons for reports such as these, many of which were recently described in an Institute of Medicine report on EHR safety. Specifically, the authors detail the impact of poor EHR implementation, inadequate training and education, and the role of the EHR user interface in inducing cognitive errors in medical decision making [7]. This last component is closely related to the contextual nature of the data generated, entered into, and viewed in the EHR. These issues are magnified in the ICU, where an individual patient generates more than 1400 data items per 24-hour period (excluding clinical notes, medication orders and details of medication administration), which could explain why many of the reports of difficulty with EHR implementation have come from the ICU environment [6, 8, 9].

With this increasing awareness of the potential negative impact of poor implementation and/or use of EHRs, greater attention has been paid to integrating the EHR into medical education and to defining the competencies associated with proficient use [10–12]. Unfortunately, most providers perceive existing EHR education activities to be inadequate. In one study of 9 basic EHR competencies, medical interns were unable to complete the tasks without assistance between 7% and 37% of the time, depending on the competency [11]. In another study, the authors noted that at least 3–5 days of training are required for physician satisfaction with the EHR, and that usability continued to improve even after a full week of training [13]. Given the time constraints placed on individual practitioners, it is unreasonable to expect that this amount (3–5 days) of training can be universally implemented. Further, most EHR training programs are generic and often not specifically tailored to individual workflows. Together, these issues suggest that other approaches are necessary for successful EHR implementation and training.

Simulation is increasingly used as a modality in physician training, in large part because it poses minimal risk to patients, allows standardization of the training environment and permits tailoring of clinical situations to the needs of learners. The most established role for simulation in medicine has been to improve user proficiency with highly complex medical devices and procedures such as ultrasound, angiography and laparoscopic surgery, with multiple studies documenting transfer of skills from the simulation suite to the clinical environment [14–16]. A growing number of studies demonstrate that simulation-based educational activities can also improve subjects' ability to manage medical emergencies and improve diagnostic (cognitive) performance when subjects subsequently encounter similar patients in real life [17–19]. In toto, it appears that a curriculum that includes simulation training improves subjects' readiness for acute care in the inpatient setting [20]. Because of these benefits, and the complexity of the EHR, many groups including the IOM and the National Institute of Standards and Technology (NIST) recommend the use of simulation to aid in EHR education [21, 22]. There are few previous reports of the use of simulation as an adjunct to initial EHR training. In one study, investigators successfully created a realistic simulated ICU environment, including use of an EHR, to test decision-making variability in patient triage [23]. However, in most simulation-centered studies the EHR was utilized as a tool rather than as the focus of the simulation exercise itself. The few other studies centered exclusively on EHR simulation have not been set in the ICU, nor have they tested physicians' ability to recognize and process information (as opposed to order entry) [24, 25].

We recently published our preliminary experience with an EHR-based simulation exercise in which high-fidelity, data-rich cases were created specifically to test whether participants could identify patient safety issues and/or changes in clinical status. We demonstrated that the average user identified only a fraction of the issues present within the case, and that the degree of error recognition was independent of training level [26]. In this report, we describe the creation of an additional case with similar performance characteristics and its subsequent use in our simulations with immediate debriefing. We demonstrate that this approach is a viable training method that produces a significant transfer-of-learning effect.

Methods

The study was approved by the Oregon Health and Science University Institutional Review Board. The study was deemed minimal risk and formal informed consent was not required; however, all participants were provided with an information sheet about our research protocol.

We have previously published the creation of a simulated patient environment within our EHR, our method of case creation and the detailed protocol of the simulation exercise for evaluating use of the EHR in the ICU [26]. Based on the initial results, we created a second simulated medical ICU (MICU) patient with a different clinical scenario, different trends in vital signs and laboratory values, and new patient safety issues. The two cases were similar in the number of patient safety issues/action items to be identified and in the types of EHR skills required for completion, including data finding and assessment of trends. In both cases we attempted to make the EHR data as robust as possible, with hourly vital signs, intake and output reports, laboratory data, and resident, attending, nursing and respiratory therapist notes. Prior to using the second case in our simulation testing, we had several trainees run through it to ensure comparability with the first case. A complete list of the errors included in each case, along with the type of each error, is provided in Additional file 1: Table S1.

Subjects participated in the EHR simulation while on service in the MICU. Testing occurred once per week as previously described [26]. Subjects were given a written signout and were then allowed ten minutes to review the EHR. Participants then presented the case to a member of the study team and were graded on the number of patient safety issues identified. After the exercise, every participant underwent an immediate, standardized debriefing session on missed action items and received suggestions for improving their EHR use. Beginning with the laboratory data, participants were shown the important trends in renal function and blood counts, along with a tutorial on the available graphing functions. From there, the medication administration report was reviewed, with discussion of appropriate medication dosing and of finding therapeutic drug monitoring assessments. This was followed by a review of vital signs, beginning with the most commonly used screen and continuing with two other screens that display the same information in different contexts. Participants were shown customization options and graphing functions within the vital signs pages, as well as specific information found only in these screens. Next, participants reviewed ventilator data and discussed lung-protective, low tidal volume ventilation and how to assess the appropriateness of an individual patient's ventilator settings. Volume status and intake/output reports were then viewed, and specific issues surrounding volume status in ARDS were discussed. Finally, participants were given time to ask questions, re-review any functions of the EHR, and discuss any concerns regarding participation in the simulation exercise. Participants were tested initially with either Case #1 or Case #2, with cases rotating on a weekly basis to ensure an equal distribution of participants between cases.

In order to test the effect of participation in the EHR simulation on performance, a cohort of 25 subjects was enrolled twice, based on their presence in the ICU on the day of testing. When subjects underwent repeat testing, we ensured that they were tested with a different case from their initial test. Note that one participant initially served as a beta-test subject for the development of Case #1; as such, only that subject's performance on repeat testing is included.

We used an unpaired t-test to compare performance among first-time test takers for each case. To evaluate the effect of repeat testing for individual participants, we used a paired analysis. A chi-square test was used to assess performance on individual metrics for each case. All data were analyzed with GraphPad Prism (GraphPad Software, La Jolla, CA).
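The three comparisons described above can be sketched as follows. This is an illustrative example only, not the authors' analysis (which was performed in GraphPad Prism): the score arrays are hypothetical placeholders, and SciPy stands in for Prism.

```python
# Sketch of the statistical tests described in the Methods,
# with HYPOTHETICAL per-subject recognition scores (fractions of
# safety issues identified), not the study data.
from scipy import stats

# Unpaired t-test: first-time scores on Case #1 vs. Case #2
case1_first = [0.36, 0.43, 0.39, 0.41, 0.38]   # hypothetical values
case2_first = [0.40, 0.37, 0.42, 0.38, 0.40]
t_unpaired, p_unpaired = stats.ttest_ind(case1_first, case2_first)

# Paired t-test: each repeat subject's first vs. second attempt
first_attempt  = [0.35, 0.42, 0.38, 0.44, 0.40]
second_attempt = [0.60, 0.66, 0.58, 0.70, 0.63]
t_paired, p_paired = stats.ttest_rel(first_attempt, second_attempt)

# Chi-square on a 2x2 table: recognition of one safety issue
# (recognized vs. missed) among first-time vs. repeat participants
table = [[21, 79],   # first-time: recognized, missed
         [48, 52]]   # repeat:     recognized, missed
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
```

Note that `scipy.stats.chi2_contingency` applies the Yates continuity correction by default for 2×2 tables; whether Prism did the same here is not stated in the text.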

Results

We enrolled 116 subjects in our study, all of whom rotated through the medical ICU at our institution. Among first-time participants in the simulation, we enrolled 55 interns, 35 residents and 26 fellows. Of these, 71 subjects were tested with Case #1 and 45 subjects were tested with Case #2. Note that data on the first 39 subjects to participate in the simulation using Case #1 were previously published [26].

For first-time participants, there was no difference in the percent of patient safety issues recognized between Case #1 (39.5%) and Case #2 (39.4%), demonstrating the relative equivalence in difficulty of the two cases (Figure 1). For individual safety issues within both cases, there was an equivalent distribution of error recognition for each action item, with the majority (85%) of items within the two cases recognized between 20% and 80% of the time by participants (not shown). This not only demonstrates the comparable difficulty of the two cases, but also indicates that the inconsistency in recognition of issues within a case depends on the participant rather than on the case.

Twenty-five subjects underwent testing with both cases. At the time of repeat testing, the cohort comprised 18 residents (4 of whom were interns) and 7 fellows. For those who participated in repeat testing, initial performance on Case #1 and Case #2 was no different from that of subjects who participated in the simulation only once (Figure 2). Of these participants, 16 were initially tested with Case #1 followed by Case #2, and 8 were first tested with Case #2 followed by Case #1. The interval between testing sessions was greater than 4 weeks for 20 of the repeat participants (indicating a different ICU rotation), with a maximum interval of less than 1 year. Five participants underwent repeat testing after less than 4 weeks, with a minimum interval of 1 week between sessions for all subjects; four of these five underwent repeat testing one to two weeks after initial participation.

For subjects repeating the simulation, scores on repeat performance were greater than those of first-time-only participants (62% vs. 39%) (Figure 2). This persisted irrespective of level of training (62% vs. 30% (interns), 42% (residents) and 49% (fellows)). When we analyzed the data by case, subjects who participated initially with Case #2 and then viewed Case #1 correctly identified 64.3% of action items in Case #1 (their second case viewed), as opposed to 39.5% among first-time participants, a 38.5% relative improvement (p = 0.0002). We observed similar results with Case #2: repeat participants identified 68.6% of action items compared to 39.4% among first-time viewers, a 42.6% relative improvement (p = 0.0002) (Figure 3B & C).

When we controlled for individual baseline performance, we observed a similar overall effect. For the entire cohort, individual subjects improved their overall rate of recognition of patient safety issues when re-tested with a different case (39.9% vs. 63.4%, p < 0.0001, Figure 4A). This effect remained regardless of whether the subject underwent simulation first with Case #1 followed by Case #2 (41.5% vs. 62.8%, p = 0.0003) or with Case #2 followed by Case #1 (37.5% vs. 64.3%, p = 0.001) (Figures 4B and C). When we looked for predictors of the magnitude of improvement, there was no difference in relative improvement between residents and fellows, or between testing sessions occurring <4 weeks apart versus >4 weeks apart (data not shown). There was a strong inverse correlation between baseline performance and magnitude of improvement, with poor performers showing the greatest relative improvement (Figure 5).

While the content of the two cases differed, three core themes were common to both. First was recognition of inappropriate medication dosing based on renal function. Combining data from both case progression orders (Case #1 followed by Case #2, and vice versa), only 21% of first-time participants recognized the inappropriate dosing of antibiotics, compared to 48% of repeat test-takers (p < 0.006) (Figure 6A). Second was the ability to recognize significant changes in hemodynamics over a period of >24 hours. Only 53% of first-time test-takers recognized these trends, which increased to 78% among repeat test-takers (p = 0.01) (Figure 6B). Third was recognition of oversedation based on the Motor Activity Assessment Scale (MAAS). Again, repeat participants had a higher rate of recognition than first-time participants (47% vs. 22%; p < 0.009) (Figure 6C).
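The relative-improvement figures quoted above are consistent with computing improvement relative to the repeat score, i.e. (repeat − first) / repeat. This formula is an inference from the reported numbers, not one the authors state explicitly; the following minimal check reproduces the reported values to within rounding.

```python
# Check that the reported relative improvements follow from the
# reported recognition rates, ASSUMING improvement is measured
# relative to the repeat score: (repeat - first) / repeat.
def relative_improvement(first, repeat):
    """Fractional improvement of the repeat score, relative to itself."""
    return (repeat - first) / repeat

# Case #1: 39.5% (first-time) vs. 64.3% (repeat); reported as 38.5%
imp1 = relative_improvement(39.5, 64.3)   # ~0.386

# Case #2: 39.4% (first-time) vs. 68.6% (repeat); reported as 42.6%
imp2 = relative_improvement(39.4, 68.6)   # ~0.426
```

The Case #1 value works out to about 38.6% rather than the reported 38.5%, a discrepancy most plausibly due to rounding of the underlying rates.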
Figure 1

Case 1 and Case 2 have equal performance characteristics among first time test takers. % of errors recognized for first time participants for Case #1 (N = 71) and Case #2 (N = 49).

Figure 2

Subjects participating in repeat testing have similar baseline performance. Twenty-five subjects participated in repeat testing. Their baseline performance in the simulation was no different from that of subjects who did not undergo repeat testing (N = 91).

Figure 3

Repeat test takers perform better than first time test takers for each individual case. Panel A. Performance of first-time participants for Case #1 (N = 71) and repeat participants (N = 8) who were initially trained on Case #2. Panel B. Performance of first-time participants for Case #2 (N = 45) and repeat participants (N = 17) who were initially trained on Case #1.

Figure 4

Individual performance improves with participation in simulation. Panel A. Initial and repeat performance for all individuals (N = 25). Panel B. Initial and repeat performance for subjects who started with Case #1 (N = 17). Panel C. Initial and repeat performance for subjects who started with Case #2 (N = 8).

Figure 5

Relative improvement in performance correlates inversely with baseline performance. Correlation between relative improvement in simulation and baseline performance (R = -0.69; p = 0.002).

Figure 6

Repeat test takers perform better in identification of specific safety issues. Panel A. Recognition of misdosing of antibiotics among first time and repeat participants. Panel B. Recognition of new hypotension among first time and repeat participants. Panel C. Recognition of inappropriate MAAS score.

Discussion

We created a novel EHR-based simulation exercise to evaluate the use of the EHR by medical trainees at our institution. Our prior study suggested that residents and fellows do not adequately recognize clinical trends and other actionable data within the EHR that are key to optimizing patient safety [26]. We have now expanded upon those findings by creating a second case calibrated to the first with respect to rates of recognition of patient safety issues among first-time test takers. As with the first case, there is a random distribution of recognition across the individual action items contained within Case #2. The creation of this second case allowed repeat testing of prior participants, thereby testing the effectiveness of the simulation as a learning activity.

The most important finding of our study is that identification of patient safety issues improved with repeated simulation. This effect is most likely due to participation in the simulation itself, as opposed to increased use of the system in the period between tests, as repeat participants outperformed all first-time-only participants irrespective of level of training. Further, we found an improvement in recognition even after controlling for the case employed, both overall and for the specific patient safety issues built into our simulation cases. Paired analysis indicated that the improvement in recognition of patient safety issues was independent of which simulated case was viewed first. While this finding is consistent with other studies showing improved performance following simulation-based training [27–29], prior research has focused on traditional simulation scenarios such as cardiopulmonary arrest training or obstetric deliveries [27, 28]. Our study is, to our knowledge, the first to confirm this finding using the EHR in a high-fidelity case simulation setting.

In our study, repeat testing sessions were at least a week apart, and most were more than a month apart, suggesting that participation in the simulation has a lasting learning effect. This is particularly emphasized by the debriefing process we employed, which fulfilled the structural elements of debriefing as outlined by Lederman [30]. Traditional models of debriefing focus on identifying the impact of the activity, clarifying concepts, emotions and empathy, and engaging in systematic reflection and analysis [31–33]. In our study we specifically focused the debrief on optimizing subjects' ability to recognize critical data within the EHR and on strategies for data finding and visualization. This short-duration process appears to have a lasting beneficial effect. More importantly, the degree of improvement in the simulation was inversely related to baseline performance, irrespective of level of training. This suggests that the exercise provides its greatest benefit to those who have the greatest difficulty using the EHR for effective clinical decision making.

To maximize the potential of this exercise to reduce medical errors in the ICU, we incorporated 3 core areas into both cases: recognition of dangerous trends in hemodynamics, medication misdosing, and recognition of oversedation. Multiple studies suggest that failure to recognize problems in these 3 areas is associated with missed clinical deterioration (including cardiac arrest), an increased rate of medical errors (e.g. medication errors, missed diagnoses), increased time on the ventilator, and increased ICU length of stay or mortality [34–36]. For all 3 domains, repeat test takers consistently outperformed first-time test takers. This has significant implications for both learning and patient safety. It also emphasizes the ability of this type of educational activity to address multiple competencies simultaneously, including Medical Knowledge, Systems-Based Practice and Practice-Based Learning as defined by the Accreditation Council for Graduate Medical Education (ACGME) [37]. Further studies will be required to determine whether these exercises have any impact on rates of actual errors within the ICU or other care environments.

The ability to document a significant learning effect with our EHR-based simulation opens the possibility of incorporating the EHR into other high-fidelity simulation activities. Our data from first-time participants highlight the important role EHR data acquisition plays in clinical decision making. It will therefore be essential to incorporate the EHR routinely into other complex simulation scenarios in order to truly understand its role in this context. This becomes even more important for team-based training. It is well established not only that different professions utilize the EHR differently, but also that the EHR can have independent effects on interprofessional communication [38]. Our use of high-fidelity cases containing relevant information for all members of the interprofessional team will allow further expansion and testing of these learning activities.

Our study has several limitations. Most importantly, we are unable to determine whether the improvement in performance is due to improved utilization of the EHR or whether training with the EHR improved cognitive processing of information already viewed. While the overall net improvement in performance is perhaps the most relevant endpoint, especially in relation to patient safety, identifying the contribution of EHR skill acquisition versus improved cognition will be essential for refinement of the activity. Our ability to create multiple standardized cases will allow us to incorporate more objective measures of EHR usability (such as eye tracking) into future simulations; this is the subject of ongoing studies.

Further, while the overall rate of recognition of patient safety issues improved from the first simulation to the second, recognition did not approach 100% with repeat simulation. One likely factor is that each subject may require a different amount of training to achieve optimal recognition rates, which may in turn depend on their level of clinical proficiency as well as their EHR navigation and use skills. This may be further confounded by subjects' exposure to different EHRs before using the EHR at our institution. Even if we are able to control for baseline EHR exposure, we do not yet know how beneficial additional simulation exposure will be for improving recognition of patient safety issues. We are creating additional cases to allow continued participation in multiple simulation scenarios, in order to assess whether there is a saturation point beyond which further participation yields no additional benefit. It is possible that we will be unable to further improve performance with simulation training alone, especially if a persistent inability to recognize patient safety issues, or a plateau in the detection rate, reflects issues related to the EHR user interface rather than clinician training.

Another limitation is the fidelity of the activity with regard to provider workflow. For the simulation, subjects utilized an EHR interface identical to the one they use clinically (including their unique customizations) in order to closely mimic their real-world EHR experience. Additionally, we conducted our simulations in the ICU at a dedicated workstation to approximate the working environment of our participants. However, the simulations were conducted in isolation rather than as part of a larger workflow, such as pre-rounding on a given resident's full complement of 5 or 6 ICU patients. Participants may also have exhibited some degree of fatigue in the afternoons, when our simulations were conducted, since they would already have participated in a full day of rounds, a factor which may affect their clinical information processing and decision making. Indeed, there appears to be an association between resident fatigue and both clinical decision making and medication errors in the ICU [39]. We did not maintain a record of the ICU census on each day of testing, which limits our ability to assess the correlation between error detection rates and subject workload on the day of simulation. Finally, we acknowledge that while performing simulations in situ might better recapitulate real-life use of the system, it may also have increased the likelihood of distractions and thus affected subjects' ability to identify errors.

Many of our residents use pre-populated (auto-templated) electronic note templates as their rounding tool, which we did not make available in the simulation. As a result, residents and fellows hand-wrote the data to be presented. This difference in pre-rounding workflow may have affected their cognitive processing either positively or negatively, as the generation of a rounding artifact can affect information processing [40, 41]. In addition, during patient care rounds the resident does not present data in isolation; rather, other members of the inpatient team, such as nursing staff, attending physicians and pharmacists, contribute to the daily plan. We do not know how many of the errors or action items missed by a subject would have been subsequently recognized by other members of the interprofessional rounding team. This will be assessed in a future study involving interdisciplinary teams in our simulation exercise.

Conclusion

In conclusion, we have created a novel simulation environment for evaluating use of the EHR in a clinical setting and have found that clinicians are relatively poor at recognizing patient safety issues and trends within the EHR. We demonstrated overall improvement in identification of patient safety issues with repeat high-fidelity, clinical case-based simulation; however, identification rates improved primarily in specific areas, such as the evaluation of longitudinal trends and medication errors, and remained poor in others. We are continuing to evaluate how residents and fellows use the EHR to evaluate data and which strategies may be most effective in improving patient safety in the ICU.

Declarations

Acknowledgements

Funded By: AHRQ R18HS021367 and NIH U24OC000015.

Authors’ Affiliations

(1)
Department of Pulmonary and Critical Care Medicine, Oregon Health and Science University
(2)
Department of Medical Informatics & Clinical Epidemiology, Oregon Health and Science University

References

  1. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG: Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006, 144 (10): 742-752. 10.7326/0003-4819-144-10-200605160-00125.View ArticleGoogle Scholar
  2. Goldzweig CL, Towfigh A, Maglione M, Shekelle PG: Costs and benefits of health information technology: new trends from the literature. Health Aff (Millwood). 2009, 28 (2): w282-w293. 10.1377/hlthaff.28.2.w282.
  3. Buntin MB, Burke MF, Hoaglin MC, Blumenthal D: The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff (Millwood). 2011, 30 (3): 464-471. 10.1377/hlthaff.2011.0178.
  4. Blumenthal D: Launching HITECH. N Engl J Med. 2010, 362 (5): 382-385. 10.1056/NEJMp0912825.
  5. Charles D, King J, Vaishali P, Furukawa MF: Adoption of electronic health record systems among U.S. non-federal acute care hospitals: 2008–2012. ONC Data Brief. 2013, 9: 1-9.
  6. Han YY, Carcillo JA, Venkataraman ST, Clark RS, Watson RS, Nguyen TC, Bayir H, Orr RA: Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005, 116 (6): 1506-1512. 10.1542/peds.2005-1287.
  7. Kohn LT, Corrigan JM, Donaldson MS: To Err is Human: Building a Safer Health System. 1999, Washington, D.C.: National Academy Press
  8. Manor-Shulman O, Beyene J, Frndova H, Parshuram CS: Quantifying the volume of documented clinical information in critical illness. J Crit Care. 2008, 23 (2): 245-250. 10.1016/j.jcrc.2007.06.003.
  9. Ali NA, Mekhjian HS, Kuehn PL, Bentley TD, Kumar R, Ferketich AK, Hoffmann SP: Specificity of computerized physician order entry has a significant effect on the efficiency of workflow for critically ill patients. Crit Care Med. 2005, 33 (1): 110-114. 10.1097/01.CCM.0000150266.58668.F9.
  10. Milano CE, Hardman JA, Plesiu A, Rdesinski RE, Biagioli FE: Simulated electronic health record (Sim-EHR) curriculum: teaching EHR skills and use of the EHR for disease management and prevention. Acad Med. 2014, 89 (3): 399-403. 10.1097/ACM.0000000000000149.
  11. Nuovo J, Hutchinson D, Balsbaugh T, Keenan C: Establishing electronic health record competency testing for first-year residents. J Grad Med Educ. 2013, 5 (4): 658-661. 10.4300/JGME-D-13-00013.1.
  12. Hersh WR, Gorman PN, Biagioli FE, Mohan V, Gold JA, Mejicano GC: Beyond information retrieval and electronic health record use: competencies in clinical informatics for medical education. Adv Med Educ Pract. 2014, 5: 205-212.
  13. Underwood WS, Brookstone AJ, Barr MS: The Correlation of Training Duration with EHR Usability and Satisfaction: Implications for Meaningful Use. AmericanEHR Partners. 2011, 1-32. http://www.americanehr.com/blog/2011/10/new-report-from-americanehr-correlating-ehr-training-with-usabilitysatisfaction-and-meaningful-use/.
  14. Ziv A, Wolpe PR, Small SD, Glick S: Simulation-based medical education: an ethical imperative. Acad Med. 2003, 78 (8): 783-788. 10.1097/00001888-200308000-00006.
  15. Seymour NE, Gallagher AG, Roman SA, O’Brien MK, Bansal VK, Andersen DK, Satava RM: Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg. 2002, 236 (4): 458-463, discussion 463-464. 10.1097/00000658-200210000-00008.
  16. Chaer RA, Derubertis BG, Lin SC, Bush HL, Karwowski JK, Birk D, Morrissey NJ, Faries PL, McKinsey JF, Kent KC: Simulation improves resident performance in catheter-based intervention: results of a randomized, controlled study. Ann Surg. 2006, 244 (3): 343-352.
  17. Ruesseler M, Weinlich M, Muller MP, Byhahn C, Marzi I, Walcher F: Simulation training improves ability to manage medical emergencies. Emerg Med J. 2010, 27 (10): 734-738. 10.1136/emj.2009.074518.
  18. Fraser K, Wright B, Girard L, Tworek J, Paget M, Welikovich L, McLaughlin K: Simulation training improves diagnostic performance on a real patient with similar clinical findings. Chest. 2011, 139 (2): 376-381. 10.1378/chest.10-1107.
  19. Singer BD, Corbridge TC, Schroedl CJ, Wilcox JE, Cohen ER, McGaghie WC, Wayne DB: First-year residents outperform third-year residents after simulation-based education in critical care medicine. Simul Healthc. 2013, 8 (2): 67-71. 10.1097/SIH.0b013e31827744f2.
  20. Antonoff MB, Shelstad RC, Schmitz C, Chipman J, D’Cunha J: A novel critical skills curriculum for surgical interns incorporating simulation training improves readiness for acute inpatient care. J Surg Educ. 2009, 66 (5): 248-254. 10.1016/j.jsurg.2009.09.002.
  21. Warden GL, Bagian JP: Health IT and Patient Safety: Building Safer Systems for Better Care. 2011, Washington, D.C.: Institute of Medicine, National Academies Press
  22. Schumacher RS, Patterson ES, North RN, Zhang J, Lowry SZ, Quinn MT, Ramaiah R: Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records. 2011, Washington, D.C.: National Institute of Standards and Technology, US Department of Commerce, 1-108.
  23. Barnato AE, Hsu HE, Bryce CL, Lave JR, Emlet LL, Angus DC, Arnold RM: Using simulation to isolate physician variation in intensive care unit admission decision making for critically ill elders with end-stage cancer: a pilot feasibility study. Crit Care Med. 2008, 36 (12): 3156-3163. 10.1097/CCM.0b013e31818f40d2.
  24. Muma RD, Niebuhr BR: Simulated patients in an electronic patient record. Acad Med. 1997, 72 (1): 72.
  25. Kushniruk AW, Borycki EM, Anderson JG, Anderson MM: Preventing technology-induced errors in healthcare: the role of simulation. Stud Health Technol Inform. 2009, 143: 273-276.
  26. March CA, Steiger D, Scholl G, Mohan V, Hersh WR, Gold JA: Use of simulation to assess electronic health record safety in the intensive care unit: a pilot study. BMJ Open. 2013, 3 (4): 1-10.
  27. Andreatta P, Saxton E, Thompson M, Annich G: Simulation-based mock codes significantly correlate with improved pediatric patient cardiopulmonary arrest survival rates. Pediatr Crit Care Med. 2011, 12 (1): 33-38. 10.1097/PCC.0b013e3181e89270.
  28. Draycott T, Sibanda T, Owen L, Akande V, Winter C, Reading S, Whitelaw A: Does training in obstetric emergencies improve neonatal outcome? BJOG. 2006, 113 (2): 177-182. 10.1111/j.1471-0528.2006.00800.x.
  29. Frengley RW, Weller JM, Torrie J, Dzendrowskyj P, Yee B, Paul AM, Shulruf B, Henderson KM: The effect of a simulation-based training intervention on the performance of established critical care unit teams. Crit Care Med. 2011, 39 (12): 2605-2611.
  30. Lederman LC: Debriefing: toward a systematic assessment of theory and practice. Simul Gaming. 1992, 23 (2): 145-160. 10.1177/1046878192232003.
  31. Thatcher DC, Robinson MJ: An Introduction to Games and Simulations in Education. 1985, Hants: Solent Simulations
  32. Petranek C: A maturation in experiential learning: principles of simulation and gaming. Simul Gaming. 1994, 25 (4): 513-523. 10.1177/1046878194254008.
  33. Lederman LC: Differences that make a difference: intercultural communication, simulation, and the debriefing process in diverse interaction. Annual Conference of the International Simulation and Gaming Association. 1991, Kyoto, Japan
  34. Kause J, Smith G, Prytherch D, Parr M, Flabouris A, Hillman K: A comparison of antecedents to cardiac arrests, deaths and emergency intensive care admissions in Australia and New Zealand, and the United Kingdom – the ACADEMIA study. Resuscitation. 2004, 62 (3): 275-282. 10.1016/j.resuscitation.2004.05.016.
  35. Camire E, Moyen E, Stelfox HT: Medication errors in critical care: risk factors, prevention and disclosure. CMAJ. 2009, 180 (9): 936-943. 10.1503/cmaj.080869.
  36. Weinert CR, Calvin AD: Epidemiology of sedation and sedation adequacy for mechanically ventilated patients in a medical and surgical intensive care unit. Crit Care Med. 2007, 35 (2): 393-401. 10.1097/01.CCM.0000254339.18639.1D.
  37. Heard JK, Allen RM, Clardy J: Assessing the needs of residency program directors to meet the ACGME general competencies. Acad Med. 2002, 77 (7): 750.
  38. Collins SA, Bakken S, Vawdrey DK, Coiera E, Currie L: Model development for EHR interdisciplinary information exchange of ICU common goals. Int J Med Inform. 2010, 80 (8): e141-e149.
  39. Landrigan CP, Rothschild JM, Cronin JW, Kaushal R, Burdick E, Katz JT, Lilly CM, Stone PH, Lockley SW, Bates DW, Czeisler CA: Effect of reducing interns’ work hours on serious medical errors in intensive care units. N Engl J Med. 2004, 351 (18): 1838-1848. 10.1056/NEJMoa041406.
  40. Patel VL, Arocha JF, Kaufman DR: A primer on aspects of cognition for medical informatics. J Am Med Inform Assoc. 2001, 8 (4): 324-343. 10.1136/jamia.2001.0080324.
  41. Weir CR, Hurdle JF, Felgar MA, Hoffman JM, Roth B, Nebeker JR: Direct text entry in electronic progress notes. An evaluation of input errors. Methods Inf Med. 2003, 42 (1): 61-67.
Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/14/224/prepub

Copyright

© Stephenson et al.; licensee BioMed Central Ltd. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
