
Exploring patterns of error in acute care using framework analysis

Abstract

Background
Junior doctors are often the first responders to deteriorating patients in hospital. In the high-stakes and time-pressured context of acute care, the propensity for error is high. This study aimed to identify the main subject areas in which junior doctors’ acute care errors occur, and cross-reference the errors with Reason’s Generic Error Modelling System (GEMS). GEMS categorises errors according to the underlying cognitive processes, and thus provides insight into the causative factors. The overall aim of this study was to identify patterns in junior doctors’ acute care errors in order to enhance understanding and guide the development of educational strategies.

Methods
This observational study utilised simulated acute care scenarios involving junior doctors dealing with a range of emergencies. Scenarios and the subsequent debriefs were video-recorded. Framework analysis was used to categorise the errors according to eight inductively-developed key subject areas. Subsequently, a multi-dimensional analysis was performed which cross-referenced the key subject areas with an earlier categorisation of the same errors using GEMS. The numbers of errors in each category were used to identify patterns of error.

Results
Eight key subject areas were identified: hospital systems, prioritisation, treatment, ethical principles, procedural skills, communication, situation awareness and infection control. There was a predominance of rule-based mistakes in relation to the key subject areas of hospital systems, prioritisation, treatment and ethical principles. In contrast, procedural skills, communication and situation awareness were more closely associated with skill-based slips and lapses. Knowledge-based mistakes were less frequent but occurred in relation to hospital systems and procedural skills.

Conclusions
In order to improve the management of acutely unwell patients by junior doctors, medical educators must understand the causes of common errors. Adequate knowledge alone does not ensure prompt and appropriate management and referral. The teaching of acute care skills may be enhanced by encouraging medical educators to consider the range of potential error types, and their relationships to particular tasks and subjects. Rule-based mistakes may be amenable to simulation-based training, whereas skill-based slips and lapses may be reduced using strategies designed to raise awareness of the interplay between emotion, cognition and behaviour.


Background
Junior doctors are often the initial responders to patients who become acutely unwell in hospital. It is, however, an area in which junior doctors feel consistently poorly prepared for practice [1]. Previous work has shown that the behaviour of junior doctors in acute care contexts is influenced by a range of interconnected factors [2] and the propensity for error is high [3]. Improved understanding of the errors made by junior doctors within such contexts is therefore pivotal to developing effective educational interventions and improving patient outcomes. The overall aim of this study was to identify patterns of error in acute care, which may subsequently be used to guide the development of targeted educational strategies.

This observational study aimed to build on previous work which examined the validity of Reason’s generic error modelling system (GEMS) in categorising errors made by junior doctors in acute care contexts [4]. As junior doctors rarely work in isolation, the original framework was amplified to include two novel error types which are specific to the team-based nature of acute care provision. The original version of GEMS, along with the additional error categorisations proposed in the aforementioned study are defined and illustrated in Table 1.

Table 1 Definitions, descriptions and examples of the error types described in the amplified version of GEMS [4,5]

Whilst the classification of errors occurring in acute care contexts according to the amplified version of GEMS is of academic interest, it is of limited value in developing educational strategies aimed at reducing error. In order to identify educationally-useful patterns within the data, the GEMS classifications need to be cross-referenced with the knowledge, skills and behaviours that are most applicable to the management of acutely unwell patients. The identification of specific patterns of error may facilitate research-informed curriculum design and the development of tailored educational strategies. It seems likely, for example, that the reduction of knowledge-based mistakes necessitates different educational techniques to the reduction of skill-based slips and lapses.

In order for cross-referencing to be undertaken, an appropriate framework or taxonomy encapsulating the subject areas relevant to the assessment and management of acutely unwell patients was required. Several such taxonomies have been developed and utilised in previous studies. Whilst there are clear benefits to the application of a validated framework, the reasons that pre-existing taxonomies were deemed unsuitable are summarised in Table 2.

Table 2 Categorisation, descriptions and limitations of pre-existing taxonomies and frameworks relevant to acute care


This study aimed to answer the following questions:

  1. What are the main subject areas in which junior doctors’ acute care errors occur?

  2. How do the errors made in each subject area relate to the types of error as classified by the amplified GEMS framework?


Methods
Ethical approval for this work was waived by the South East Scotland Research Ethics Committee. Written consent for audio and video data collection and publication of anonymised results was obtained from all participants.


This study used the data obtained from the simulated acute care scenarios described in a previous study [4]. Due to the practical and ethical implications of observing junior doctors treating acutely unwell patients on the wards, high-fidelity simulation was used to observe junior doctors’ behaviours. Eight simulated scenarios were designed by VRT and two consultant anaesthetist colleagues. After piloting with 16 junior doctors, feedback was sought and the scenarios were refined. The four scenarios considered most reproducible and realistic were used for this study; postoperative haemorrhage, severe sepsis, respiratory distress and hypoglycaemic coma. Each scenario was used with equal frequency.

A full-body adult mannequin simulator (Emergency Care Simulator, Medical Education Technologies, Inc., Sarasota, Florida) was utilised, and was accompanied by the equipment, drugs and paperwork used on the wards where the junior doctors worked. The patient’s voice and physiology (as shown on the bedside monitor and in the mannequin’s respiratory rate) were manipulated from the control room. A telephone present in the simulation room connected directly to the control room. A member of staff unknown to the participants played the role of a ward nurse and provided accurate information when requested but did not actively prevent errors.

Thirty-eight junior doctors (representing recent graduates of seven different UK medical schools) were recruited on a volunteer basis. They were briefed regarding room layout, nurse capabilities and mannequin features and limitations. They then participated in a total of 18 simulated scenarios in groups of two or three and were asked to treat the patient (mannequin) as they would do on the ward. A facilitated debrief focusing on the cognitive aspects of decision-making occurred immediately after each scenario. The debriefs were conducted by one of three trained senior clinicians (VRT and two consultant anaesthetists). Each debrief involved playback of video from the scenario and encouraged articulation of the cognitive processes. Debriefs were audio recorded and field notes were taken by either VRT or SES.

Evidence from the video-recorded scenarios, audio-recorded debriefs and field notes were used to list all of the errors made in each scenario. These errors were classified according to the amplified version of GEMS, as described previously [4].

Inductive development of key subject areas

The first research question was addressed by using the principles of ‘framework analysis’ to inductively develop a thematic framework consisting of key subject areas [22]. Originally developed within the field of applied social policy research, ‘framework analysis’ is an analytical process which facilitates systematic analysis of qualitative data whilst promoting the generation of “actionable outcomes” [22]. During the preliminary stage of this work, VRT and SES noted brief descriptions of each of the errors that were identified. Using a combination of these descriptions and the intention-related evidence derived from either the video-recorded scenario or audio-recorded debrief, VRT and SES inductively developed a preliminary thematic framework. As expected, the first version of the framework drew heavily on previous related work [2], and other “a priori issues” [22]. VRT and SES then independently applied the early version of the framework to the list of errors obtained from the first four scenarios, allowing the developing framework to be influenced by emergent issues and analytical themes arising from the recurrence of particular error types. VRT and SES then discussed their independent analyses and compared, contrasted and negotiated categories of errors until agreement on a final indexing system was reached.

Application of thematic framework

Once finalised, the thematic framework was systematically applied to the entire dataset of junior doctor errors (i.e. the full list of errors obtained from the initial analysis of all 18 scenarios). Working together, VRT and SES discussed the error descriptions from the video-recorded scenarios in conjunction with the additional evidence derived from the scenarios (such as direct quotes, body language and other non-verbal clues) and debriefs (including direct quotes and other paralinguistic clues such as laughter), until agreement on categorisation was reached. The use of Excel (Microsoft Office 2007) for the indexing of errors facilitated inter-scenario and intra-scenario comparison of errors so that patterns within the dataset as a whole could be identified and explored.

Pattern identification

In order to address the second research question, a multidimensional analysis involving both the amplified GEMS classifications [4] and the inductively-developed subject areas was undertaken. In keeping with the principles of framework analysis, a distilled summary of each error was entered into a chart to promote abstraction and synthesis [22]. Throughout the analysis, each error remained referenced with a specific numerical code so that the source scenario could be traced and contextual validity continually checked. The errors within an individual subject area were then compared and contrasted, and patterns within the data were sought.

Patterns were identified by counting the number of errors that occurred in relation to each subject area and GEMS classification. The use of numbers in qualitative research is a controversial issue. Most qualitative researchers who reject the use of numerical data articulate their objections with reference to the philosophical underpinning of their work. Maxwell (2010) states, “Primarily, this is because they have believed that numerical data are incompatible with a constructivist stance for research, as such data imply the existence of a single “objective” reality that can be measured and statistically analysed to reach generalisable conclusions” [23]. However, several prominent qualitative researchers have supported the inclusion of numbers in qualitative research practices and reports for many years [24,25], and the discipline of medical education is beginning to embrace the concept [26]. This study was undertaken on the premise that the use of numbers alone does not define the difference between constructivist and positivist research paradigms. The incorporation of numerical data in this work helps to reveal patterns, provide precision and promote clarity. Numbers have, however, been used only in ways that recognise their limitations, preserve the richness of the dataset and do justice to the complexity of the phenomena being studied.
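As a minimal illustration, the counting step described above amounts to a cross-tabulation of each error's key subject area against its GEMS classification (the structure underlying Table 3). The sketch below is not the authors' actual workflow (the study used Excel), and the error records, scenario codes and labels are hypothetical:

```python
# Hypothetical sketch of the pattern-identification step: tally errors in
# each (key subject area, GEMS classification) cell. All records are
# invented examples, not data from the study.
from collections import Counter

# (scenario code, key subject area, GEMS classification) per error; the
# scenario code lets the source context be traced and re-checked.
errors = [
    ("S01", "hospital systems", "rule-based mistake"),
    ("S01", "procedural skills", "skill-based slip"),
    ("S02", "prioritisation", "rule-based mistake"),
    ("S02", "communication", "skill-based lapse"),
    ("S03", "hospital systems", "rule-based mistake"),
]

# Count errors per cell so that patterns within and across subject areas
# can be compared.
pattern_counts = Counter((subject, gems) for _, subject, gems in errors)

for (subject, gems), n in sorted(pattern_counts.items()):
    print(f"{subject} / {gems}: {n}")
```

In this toy dataset, "hospital systems / rule-based mistake" occurs twice, illustrating how a predominant error type within a subject area becomes visible once the two categorisations are crossed.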

Results
Eight key subject areas formed the final version of the thematic framework: hospital systems, infection control, prioritisation, procedural skills, situation awareness, treatment, communication and ethical principles in practice. The number of errors relating to each subject area, sub-classified using the amplified version of GEMS, is displayed in Table 3. The purpose of Table 3 is to allow comparison of the different error types within, as opposed to between, the various subject areas. It is the patterns within the data, as opposed to the actual numerical values, that are of interest. Table 4 illustrates specific examples of errors relating to each of the key subject areas and details the associated GEMS classifications.

Table 3 A multidimensional analysis of errors categorised according to both the amplified version of GEMS and the inductively-developed key subject areas
Table 4 Specific examples of errors relating to seven of the key subject areas

Summary of error patterns

In relation to hospital systems, there was a predominance of rule-based mistakes, with many errors related to attempts to obtain senior assistance. These errors often involved a misunderstanding of the purpose of certain procedures or protocols, and frequently involved the application of a ‘bad’ rule. As shown in Table 3, the same pattern was observed in relation to errors of prioritisation. Rule-based mistakes commonly involved junior doctors deciding to undertake investigations, such as an electrocardiogram, prior to assessing the patient’s airway patency. In relation to procedural skills, most errors were skill-based slips or lapses, commonly involving failure to remove the tourniquet from the patient’s arm following intravenous cannula insertion. The predominance of slips and lapses is likely to be attributable, at least in part, to the psychomotor aspects of procedural skills. Many of the errors that could be attributed to a lack of situation awareness were skill-based slips or lapses stemming from interruptions during the initial clinical examination. There were also a large number of compound errors originating from the misunderstandings of others, as well as from a junior doctor’s own misperception of information.

Treatment errors were commonly rule-based mistakes related to type or flow rate of intravenous fluid resuscitation or antibiotic choice. In contrast, communication-related errors were mainly skill-based slips and lapses that involved mishearing or misinterpreting verbal information provided by the nurse helper, or misinterpreting what was said in a telephone conversation. In relation to ethical principles in practice, rule-based mistakes most commonly occurred when the capacity of the patient to refuse life-saving treatment was impaired due to critical illness, but potentially life-saving treatment was withheld or even withdrawn as a result of overarching concern for patient autonomy. There were insufficient data to elucidate the causes of error in relation to infection control.

Discussion
This study has built on previous work by using the amplified version of GEMS, in combination with iteratively-derived key subject areas, to explore and classify the types of errors made by junior doctors in acute care contexts. The results provide a springboard for the deeper consideration of specific errors types, their origins within medical training and potential educational strategies aimed at reduction of error and improvement of patient outcomes.

The finding that prioritisation was a key subject area in which rule-based mistakes were commonly made echoes previous work concluding that prioritisation is a key component of a junior doctor’s role which is usually learned ‘on the job’, making doctors in their early days feel unprepared [27,28]. A focus group study of junior doctors’ behaviour in acute care contexts has previously described the difficulties that newly qualified doctors face when attempting to transfer knowledge into practice [2], particularly in relation to applying a structured approach to patient assessment [2]. In acute care, popular assessment structures (such as ABCDE: airway, breathing, circulation, disability, exposure) and standardised protocols can make prioritisation of tasks easier. However, a high level of familiarity with such structures is required to recall and utilise them in times of acute stress. Primary medical training programmes could tackle this issue by facilitating the repeated rehearsal of basic patient assessments in a variety of contexts, to emphasise the transferability of such assessment structures. This learning is amenable to simulation training, whereby students can experiment with changing priorities whilst observing and subsequently discussing the clinical consequences. However, care must be taken in the planning and execution of such training to replicate the complexities and pressures of the environment in which clinical decisions will ultimately be made. The decontextualised rehearsal of basic assessment structures in simulation training may actually hinder educational development and, if trained in this way, junior doctors are likely to continue to have difficulty utilising such knowledge in the stressful and hierarchical world of clinical practice [29,30].

In contrast to prioritisation errors, procedural skills were strikingly vulnerable to slips and lapses. It is likely that the prevalence of slips and lapses in relation to procedural skills is, at least in part, influenced by the stressful nature of acute care [2]. Elevated stress levels have been shown to impede performance in a multitude of cognitive processes required in acute care contexts including those that involve divided attention, working memory, retrieval of information from memory, and decision making [3]. Furthermore, the results of this study demonstrate that the undertaking of a procedural skill within an acute care scenario predisposes to the common tendency for attention to become so focussed on one aspect of a situation that other important cues go unnoticed [3,31]. It is therefore important that junior doctors are aware of the interplay between emotion, cognition and behaviour, and the roles of such factors in errors and adverse events. Emotional skills training, particularly with reference to dynamic, high-stakes situations, might help to facilitate this learning. Such training should acknowledge the influence of stress and provide strategies to reduce its impact. Another approach to tackling the problem of slips and lapses whilst performing procedures would be to utilise educational techniques that involve distraction. Techniques can be developed which specifically aim to enhance performance of basic procedures safely and effectively whilst deploying attention elsewhere. Based on automaticity theory [32], the gradual additions of distraction or time–pressure to the rehearsal of practical procedures are useful strategies that are beginning to be explored within the field of healthcare education [33]. It therefore seems likely that carefully designed simulation-based training has the potential to expose and address multiple error types, including those related to both prioritisation and procedural skills.

Strengths and limitations

This study used high-fidelity simulation to explore patterns of error in acute care. Observation through video is an under-utilised research method [34] that has the advantage of capturing linguistic, paralinguistic and non-verbal communication. The inductive development of a novel framework has the advantage of maintaining the richness of the dataset, and the use of framework analysis has facilitated the generation of actionable outcomes. However, the study has several important limitations. Particular difficulties were associated with identifying all of the errors contained within the scenarios, and the identification process was undoubtedly influenced by the ideas, beliefs and clinical experience of the researchers. Some of the errors categorised into one key subject area could arguably be classified into another if slightly different definitions had been adopted. It is also likely that, given the complexity and somewhat subjective nature of the analysis, alternative researchers would have coded some errors differently. Furthermore, the lack of sufficient evidence to attribute 53 of the errors to a single cause necessitated their exclusion from the multidimensional analysis, as detailed in Table 3.

A major limitation of all studies employing simulation is that behaviour in simulated environments may not mimic behaviour in everyday clinical practice. In the context of this work, this seems particularly likely in relation to certain key subject areas, such as infection control, where the absence of a real sense of infection risk may have influenced decisions about whether to wear gloves for infection-prone procedures. These limitations were minimised by the use of high-fidelity simulation involving fake blood and genuine wound dressings, but could not be entirely eliminated. Infection control errors were rarely explored during debriefing and consequently there was usually insufficient evidence to confidently attribute each infection control error to one of a number of possible explanations. It is interesting to consider whether tutor suspicion of ‘simulator artefact’ was the explanation for this lack of emphasis during debriefing. The error pattern within this key subject area has therefore not been established using this method. In addition, the presence of a nurse helper who always provided information that was accurate and relevant may not reflect the clinical workplace. It is likely that, despite their best intentions, nurses and other professionals may, at times, actually contribute to error generation, particularly compound and submission errors.

Future work

The patterns of error identified in this study could be used to explore some specific educational strategies (as discussed above) designed to reduce error in acute care. The impact of such strategies on the subsequent behaviour of junior doctors needs to be carefully examined, perhaps using simulated environments. Furthermore, analyses such as the one detailed here could be used to provide information on the shortfalls of individual primary medical degree programs, and the impact of curricular changes. Similar methods could also be used to delineate the types of error most prevalent in other contexts or professional groups, in the hope that tailored education innovations will be more effective at reducing error than generic teaching.

Conclusions
For the initial assessment and management of acutely unwell patients by junior doctors to be improved, it is important that medical educators understand the causes and patterns of common errors. Adequate knowledge alone does not ensure prompt and appropriate management and referral. Acute care skills education may be enhanced by encouraging medical educators to consider the range of potential error types, and their relationships to particular tasks and subjects. In conjunction with process review and system redesign, it is hoped that novel teaching strategies may be developed and implemented, enhancing the performance of junior doctors and the safety of acutely unwell patients.


Abbreviations
GEMS: Generic error modelling system


Rule-based mistake


Knowledge-based mistake

References
1. Tallentire VR, Smith SE, Skinner J, Cameron HS. The preparedness of UK graduates in acute care: a systematic literature review. Postgrad Med J. 2012;88:365–71.

2. Tallentire VR, Smith SE, Skinner J, Cameron HS. Understanding the behaviour of newly qualified doctors in acute care contexts. Med Educ. 2011;45(10):995–1005.

3. LeBlanc VR. The effects of acute stress on performance: implications for health professions education. Acad Med. 2009;84:S25–33.

4. Tallentire VR, Smith SE, Skinner J, Cameron HS. Exploring error in team-based acute care scenarios: an observational study from the United Kingdom. Acad Med. 2012;87(6):792–8.

5. Reason J. Human Error. Cambridge: Cambridge University Press; 1990.

6. Klampfer B, Flin R, Helmreich RL, Häusler R, Sexton B, Fletcher G, et al. Enhancing performance in high risk environments: recommendations for the use of behavioural markers. In: Group interaction in high risk environments. Swissair Training Centre; 2001.

7. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Rating non-technical skills: developing a behavioural marker system for use in anaesthesia. Cogn Technol Work. 2004;6:165–71.

8. Flin R, Maran N. Identifying and training non-technical skills for teams in acute medicine. Qual Saf Health Care. 2004;13 Suppl 1:i80–4.

9. Yule S, Flin R, Paterson-Brown S, Maran N, Rowley D. Development of a rating system for surgeons’ non-technical skills. Med Educ. 2006;40:1098–104.

10. Mitchell L, Flin R, Yule S, Mitchell J, Coutts K, Youngson G. Evaluation of the Scrub Practitioners’ List of Intraoperative Non-Technical Skills (SPLINTS) system. Int J Nurs Stud. 2012;49:201–11.

11. Mishra A, Catchpole K, McCulloch P. The Oxford NOTECHS System: reliability and validity of a tool for measuring teamwork behaviour in the operating theatre. Qual Saf Health Care. 2009;18(2):104–8.

12. Smith G, Poplett N. Knowledge of aspects of acute care in trainee doctors. Postgrad Med J. 2002;78:335–8.

13. Smith CM, Perkins GD, Bullock I, Bion JF. Undergraduate training in the care of the acutely ill patient: a literature review. Intensive Care Med. 2007;33:901–7.

14. Scavone B, Sproviero M, McCarthy R, Wong C, Sullivan J, Siddall V, et al. Development of an objective scoring system for measurement of resident performance on the human patient simulator. Anesthesiology. 2006;105(2):260–6.

15. Murray D, Boulet J, Ziv A, Woodhouse J, Kras J, McAllister J. An acute care skills evaluation for graduating medical students: a pilot study using clinical simulation. Med Educ. 2002;36(9):833–41.

16. Paskins Z, Kirkcaldy J, Allen M, Macdougall C, Fraser I, Peile E. Design, validation and dissemination of an undergraduate assessment tool using SimMan® in simulated medical emergencies. Med Teach. 2010;32(1):e12–7.

17. Boulet JR, Murray D, Kras J, Woodhouse J, McAllister J, Ziv A. Reliability and validity of a simulation-based acute care skills assessment for medical students and residents. Anesthesiology. 2003;99(6):1270–80.

18. Donoghue A, Nishisaki A, Sutton R, Hales R, Boulet J. Reliability and validity of a scoring instrument for clinical performance during pediatric advanced life support simulation scenarios. Resuscitation. 2010;81(3):331–6.

19. Marsh HW, Chanal JP, Sarrazin PG. Self-belief does make a difference: a reciprocal effects model of the causal ordering of physical self-concept and gymnastics performance. J Sports Sci. 2006;24(1):101–11.

20. Perkins GD, Barrett H, Bullock I, Gabbott DA, Nolan JP, Mitchell S, et al. The Acute Care Undergraduate TEaching (ACUTE) Initiative: consensus development of core competencies in acute care for undergraduates in the United Kingdom. Intensive Care Med. 2005;31(12):1627–33.

21. Tallentire VR, Smith SE, Wylde K, Cameron HS. Are medical graduates ready to face the challenges of Foundation training? Postgrad Med J. 2011;87(1031):590–5.

22. Ritchie J, Spencer L. Qualitative data analysis for applied policy research. In: Bryman A, Burgess RG, editors. Analyzing Qualitative Data. London and New York: Routledge; 1994. p. 173–94.

23. Maxwell J. Using numbers in qualitative research. Qual Inq. 2010;16(6):475–82.

24. Becker H. Field work evidence. In: Becker H, editor. Sociological Work: Method and Substance. New Brunswick, NJ: Transaction Books; 1970. p. 39–62.

25. Hammersley M. Reconstructing the qualitative-quantitative divide. In: Hammersley M, editor. What’s Wrong With Ethnography? Methodological Explorations. London: Routledge; 1992. p. 159–73.

26. Rees C, Monrouxe L. Medical students learning intimate examinations without valid consent: a multicentre study. Med Educ. 2011;45(3):261–72.

27. Illing J, Morrow G, Kergon C, Burford B, Spencer J, Peile E, et al. How prepared are medical graduates to begin practice? A comparison of three diverse UK medical schools. GMC Education Committee; 2008.

28. Lempp H, Cochrane M, Seabrook M, Rees J. Impact of educational preparation on medical students in transition from final year to PRHO year: a qualitative evaluation of final-year training following the introduction of a new year 5 curriculum in a London medical school. Med Teach. 2004;26(3):276–8.

29. Kneebone R. Simulation and transformational change: the paradox of expertise. Acad Med. 2009;84(7):954–7.

30. Issenberg SB, McGaghie WC, Petrusa ER, Gordon DL, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27(1):10–28.

31. Flin R, O’Connor P, Crichton M. Safety at the Sharp End. Farnham: Ashgate; 2008.

32. Ashby GF, Ennis JM, Spiering BJ. A neurobiological theory of automaticity in perceptual categorization. Psychol Rev. 2007;114(3):632–56.

33. Smith SE, Tallentire VR, Wood SM, Cameron HS. The Distracted Intravenous Access (DIVA) test. Clin Teach. 2012;9(5):320–4.

34. Rees C. Identities as performances: encouraging visual methodologies in medical education research. Med Educ. 2010;44(1):5–7.


Acknowledgements
The authors wish to thank Dr. Jeremy Morton, Dr. Halia O’Shea, Mr. Stephen Hartley, Dr. Edward Mellanby, and Mr. Chris Winter for their expertise in scenario design, implementation, and debriefing. Thanks also to the late Professor Henry Walton for his insightful comments in the early stages of this work. This study was supported by a grant from the Clinical Skills Managed Educational Network (Research and Development grant reference 003), but the funder had no involvement in study design, data collection or analysis, writing of the report or the decision to submit for publication.

Author information



Corresponding author

Correspondence to Victoria R Tallentire.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

VRT designed the study, collected and analysed the data and drafted the manuscript. SES assisted in the design, data collection and analysis, and extensively revised the manuscript. JS helped to finalise the research aims, advised on study design, discussed the results and critically revised the manuscript. HSC advised on all stages of study design, discussed the methods, advised on the presentation of results and critically revised the manuscript. All four authors approved the final manuscript for publication.

Authors’ information

VRT, MBChB, MRCP, MD, is an Honorary Fellow in Medical Education in the Centre for Medical Education, University of Edinburgh, UK and an Acute Medicine Specialty Trainee in Melbourne, Australia.

SES, MBChB, MRCGP, is an Honorary Fellow in Medical Education in the Centre for Medical Education, University of Edinburgh, UK and a General Practitioner in Lothian, UK.

JS, MBChB, FRCS, is the Director of Clinical Skills, Centre for Medical Education, University of Edinburgh and a Consultant in Emergency Medicine, NHS Lothian, UK.

HSC, MBChB, MRCP, is the Director of the Centre for Medical Education, University of Edinburgh, UK.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.



Cite this article

Tallentire, V.R., Smith, S.E., Skinner, J. et al. Exploring patterns of error in acute care using framework analysis. BMC Med Educ 15, 3 (2015).


Keywords
  • Error
  • Junior doctors
  • Acute care
  • Emergencies
  • Framework analysis