
Perception of the usability and implementation of a metacognitive mnemonic to check cognitive errors in clinical setting

Abstract

Background

Establishing a diagnosis is a complex, iterative process involving patient data gathering, integration and interpretation. Premature closure is a fallacious cognitive tendency of closing the diagnostic process before sufficient data have been gathered. A proposed strategy to minimize premature closure is the use of a checklist to trigger metacognition (the process of monitoring one’s own thinking). A number of studies have suggested the effectiveness of this strategy in classroom settings. This qualitative study examined the perception of usability of a metacognitive mnemonic checklist called TWED checklist (where the letter “T = Threat”, “W = What if I am wrong? What else?”, “E = Evidence” and “D = Dispositional influence”) in a real clinical setting.

Method

Two categories of participants, i.e., medical doctors (n = 11) and final year medical students (Group 1, n = 5; Group 2, n = 10) participated in four separate focus group discussions. Nielsen’s 5 dimensions of usability (i.e. learnability, effectiveness, memorability, errors, and satisfaction) and Pentland’s narrative network were adapted as the framework to study the usability and the implementation of the checklist in a real clinical setting respectively.

Results

Both categories (medical doctors and medical students) of participants found that the TWED checklist was easy to learn and effective in promoting metacognition. For medical student participants, items “T” and “W” were believed to be the two most useful aspects of the checklist, whereas for the doctor participants, it was item “D”. Regarding its implementation, item “T” was applied iteratively, items “W” and “E” were applied when the outcomes did not turn out as expected, and item “D” was applied infrequently. The one checkpoint where all four items were applied was after the initial history taking and physical examination had been performed to generate the initial clinical impression.

Conclusion

A metacognitive checklist aimed at checking cognitive errors may be a useful tool that can be implemented in the real clinical setting.


Background

An overarching concern a doctor faces in generating differential diagnoses is whether enough data have been gathered to make a diagnosis. Failing to gather sufficient data may lead to fallacious cognitive tendencies such as premature closure, which may, in turn, lead to diagnostic errors [1]. Premature closure refers to the tendency to close the diagnosis-generation process before a diagnosis has been fully verified [2].

Differential diagnosis generation is typically believed to be a complex, iterative process involving two inter-related steps [3, 4]. The first step (hypothesis generation) is a predominantly fast, non-analytical step achieved through a pattern-recognition process. It works by matching the patient’s presenting data with the doctor’s mental disease models (also called ‘illness scripts’ or ‘schemas’) [3, 4]. In dual-process theory, this step is known as a Type 1 thinking process [5, 6]. The second step, known as hypothesis evaluation, is a predominantly slower, analytical process of evaluating the competing diagnoses prompted by the mental representations formed in the first step. In dual-process theory, this is known as a Type 2 thinking process [5, 6]. These two processes interact with one another (e.g., the analytical evaluation of competing diagnoses alters the mental representation of the patient data and vice versa) until a most probable diagnosis is reached [5, 6]. The failure to gather sufficient data may result in premature closure. For example, if one misses the history of sympathomimetic substance abuse such as cocaine in a profusely diaphoretic patient with chest discomfort and elevated body temperature, one might miss considering cocaine-induced myocardial ischemia as a potential diagnosis [7].
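To make this iterative interplay concrete, the following minimal Python sketch is a toy model written for illustration only (it is not part of the study): illness scripts are reduced to sets of findings, Type 1 generation is a set-overlap match, Type 2 evaluation is a simple fit score, and the loop continues for as long as further data gathering yields new findings. All diagnoses, findings, and scoring choices are invented placeholders.

```python
# Toy model of the two-step, iterative diagnostic process (illustration
# only; not the study's method). Illness scripts are sets of findings.

ILLNESS_SCRIPTS = {
    "myocardial ischemia": {
        "chest discomfort", "diaphoresis", "elevated temperature"},
    "cocaine-induced myocardial ischemia": {
        "chest discomfort", "diaphoresis", "elevated temperature",
        "cocaine use"},
}

def generate_hypotheses(findings):
    """Type 1: fast pattern matching of patient data against scripts."""
    return [dx for dx, script in ILLNESS_SCRIPTS.items() if script & findings]

def evaluate(dx, findings):
    """Type 2: analytical fit of one candidate against the data
    (here, simply the Jaccard overlap between script and findings)."""
    script = ILLNESS_SCRIPTS[dx]
    return len(script & findings) / len(script | findings)

def diagnose(findings, gather_more):
    """Alternate generation and evaluation; evaluation may prompt further
    data gathering, which in turn reshapes the hypothesis set. Ending
    this loop before enough data have been gathered is, in essence,
    premature closure."""
    while True:
        best = max(generate_hypotheses(findings),
                   key=lambda dx: evaluate(dx, findings))
        extra = gather_more(best, findings)
        if not extra:
            return best
        findings |= extra

initial = {"chest discomfort", "diaphoresis", "elevated temperature"}
# Premature closure: no further history is taken.
print(diagnose(set(initial), lambda dx, f: set()))
# Asking about substance use flips the leading diagnosis.
print(diagnose(set(initial), lambda dx, f: {"cocaine use"} - f))
```

In this toy example, stopping after the first pass returns the plausible but incomplete diagnosis; only the additional history of cocaine use changes the best-fitting script, mirroring the example above.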

A number of strategies have been proposed to minimize cognitive errors like premature closure [8, 9]. As premature closure is believed to be more prevalent in Type 1 thinking processes than in Type 2 thinking processes [1, 10], a proposed strategy to mitigate premature closure is metacognitive monitoring, i.e., a process of critical self-reflection by cognitively slowing down and monitoring one’s own thinking [11,12,13]. In this regard, a metacognitive checklist functions as an activation trigger in facilitating the “slow thinking” process of reflecting on the plausibility of one’s diagnostic workup [14,15,16,17].

For example, in a study on the effectiveness of a checklist meant to induce reflection on diagnostic reasoning after making an initial diagnosis, it was found that metacognitive monitoring results in more accurate diagnoses, particularly for complex or unusual clinical cases [14]. As another example, in a study conducted with a group of final year medical students using a mnemonic checklist called the TWED checklist (where the letter “T = Threat”, “W = What if I am wrong? What else?”, “E = Evidence,” and “D = Dispositional influence”), it was shown that this checklist was effective in helping the task performer generate more relevant differential diagnoses [18, 19]. Nonetheless, from a non-exhaustive focused search by author KSC using Web of Science, MEDLINE and Google Scholar, most studies on the use of non-mnemonic checklists to facilitate metacognitive monitoring [14, 15, 17,18,19,20,21] have been conducted using paper-based assessments in classroom settings. The present study will investigate the use of a metacognitive checklist in a real clinical setting and consider its usability and implementation in work routines.

According to Nielsen (1996), it is often simplistic to assume that the usability of a tool is unidimensional [22]. Instead, he proposed a multi-dimensional metric originally meant to assess the usability of a tool in a human-computer interface context [22]. The five dimensions of Nielsen’s usability metric are: (1) learnability of the tool – how easy is it for users to accomplish the intended task the first time they use it? (2) effectiveness – once users have learned the design, how quickly and effectively can they perform the intended tasks using the tool? (3) memorability – when users return to the tool after a period of not using it, how easily can they reestablish proficiency? How easy is it for them to remember the characteristics of the tool? (4) errors/pitfalls – what are the errors, pitfalls or flaws in using the tool? (5) satisfaction or pleasantness – how satisfying or pleasant is the tool to use [22]?

Besides its usability, how well a checklist can be implemented in a work routine should also be considered. In this regard, a narrative network can be a helpful framework in mapping out the potential new patterns of actions that can be implemented in a routine work process [23]. An example described by Pentland and Feldman (2007) is the mapping of the various permutations of implementing information and communication technology (ICT) in airline ticket purchase [23].

Although previous studies [18, 19] have suggested the usefulness of the TWED checklist in reducing the risk of cognitive errors, these studies were conducted in classroom settings. Making clinical decisions in a classroom setting falls short of the ecological validity of a busy clinical ward: in a classroom, one can focus fully on the case scenarios without the “multi-sensorial” distractions of a busy ward (e.g., the sights, sounds, smells, competing demands and chaotic work nature of an emergency department). This study intends to fill that gap in the literature by addressing two research questions: (1) how useful is a checklist that facilitates metacognitive regulation of differential diagnosis generation in such a real clinical environment? and (2) how can users implement such a checklist in their daily clinical routines? To answer the first question, the five dimensions of Nielsen’s (1996) usability metric (i.e., learnability, memorability, effectiveness, satisfaction, and potential pitfalls or limitations) [22] were used as the conceptual framework; to explore the second question on the various ways a metacognitive checklist can be implemented in the complex process of differential diagnosis generation, the narrative network [23] was adopted.

Method

To capture the perceptions of how doctors and medical students interacted with the TWED checklist in actual clinical settings, the two research questions mentioned above were addressed qualitatively by means of focus group discussions [24].

Participants

Two categories of participants were purposively sampled in this study. The first category consisted of medical doctors with at least 3 years’ clinical experience from the emergency and trauma department of Sarawak General Hospital, Malaysia. Eleven doctors (4 male, 7 female) aged 28 to 32 years were recruited. Medical doctors with less than 3 years of clinical experience, as well as doctors still undergoing supervised internship, were excluded. One participant who initially agreed to participate subsequently withdrew due to work commitments. Responses from this group of participants were coded to answer both the first and second research questions.

The second category of participants consisted of final year medical students from the faculty of medicine and health sciences of Universiti Malaysia Sarawak (UNIMAS). The medical degree program of UNIMAS is a 5-year medical program where all final year medical students must have successfully completed and met the passing criteria for year-4 study before they can be promoted to their final year of study. As these students are not yet licensed medical practitioners, responses from this group of participants were coded to answer the first research question only. Thus, questions related to the implementation of the checklist in the clinical setting (i.e., the second research question) were not raised in this group. Two separate focus groups were organized from this category: medical student group 1 consisted of 5 students (2 male and 3 female) aged between 24 and 25 years old, and the medical student group 2 consisted of 10 students (1 male and 9 female) aged between 24 and 25 years old.

A total of four focus group discussions were conducted (one session each for medical student groups 1 and 2, and two sessions for the doctor group). For both medical student groups, only one session was held per group due to logistical challenges, as the students had a tight schedule completing their various clinical rotations. Participation was voluntary, and no monetary compensation was involved in the recruitment of the participants.

Materials

A prior explanation of diagnostic errors, premature closure and other types of cognitive errors in clinical decision-making, as well as of the application of the metacognitive checklist (i.e., the TWED checklist), was given to participants from both the medical student and doctor groups three months before the focus group discussions.

The TWED mnemonic checklist is a 4-item checklist purported to minimize the tendency toward cognitive errors such as premature closure [25]. The first item, represented by the letter “T = Threat”, prompts reflection on the question “Is there any life- or limb-threatening condition I need to rule out in this patient?” The second item, “W = What else?”, prompts reflection on the questions “What if I am wrong? What else?” The third item, “E = Evidence”, prompts reflection on the sufficiency of the evidence or data to support or refute a particular diagnosis (“Do I have enough evidence to support or exclude this diagnosis?”), whereas the fourth item, “D = Dispositional influences”, deals with hidden emotional or environmental dispositional factors (e.g., doctor’s fatigue, a busy emergency ward, etc.) that may influence the quality of diagnostic decisions. Hence, reflecting on the first two items (“T = Threat”, “W = What else?”) may trigger more patient data collection, whereas reflecting on the third item (“E = Evidence”) may trigger the evaluation of how well a diagnosis under consideration is supported by the various pieces of patient data. The fourth item (“D = Dispositional influences”) acts as an overarching self-reflective mechanism to guard against premature closure due to extrinsic influences [25].
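Restated as a simple data structure, the checklist is a mapping from each mnemonic letter to its reflective prompt. The sketch below is an illustrative rendering of the descriptions above, not a tool produced or distributed by the study; the prompt wordings are paraphrases.

```python
# The TWED checklist restated as a mapping from mnemonic letter to its
# label and reflective prompt (illustrative paraphrase of the text above).

TWED = {
    "T": ("Threat",
          "Is there any life- or limb-threatening condition I need to "
          "rule out in this patient?"),
    "W": ("What else?",
          "What if I am wrong? What else could it be?"),
    "E": ("Evidence",
          "Do I have enough evidence to support or exclude this diagnosis?"),
    "D": ("Dispositional influences",
          "Are emotional or environmental factors (e.g., fatigue, a busy "
          "ward) influencing the quality of this decision?"),
}

def run_checklist(working_diagnosis):
    """Walk through the four reflective prompts for a working diagnosis."""
    print(f"Working diagnosis: {working_diagnosis}")
    for letter, (label, prompt) in TWED.items():
        print(f"  {letter} ({label}): {prompt}")

run_checklist("community-acquired pneumonia")  # hypothetical example
```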

During the focus group discussions, open-ended questions on the five dimensions of usability (i.e., learnability, effectiveness, memorability, errors, and satisfaction) [22] were based on the following scheme:

  1. How easy was it for you to learn how to use this checklist in your clinical encounters with patients (“learnability”)? Explain your response.

  2. How effective do you think this checklist is in preventing diagnostic errors or near-miss diagnoses (“effectiveness”)? Do you have any specific examples to share? Explain your response.

  3. How easy was it for you to remember the items or components of this checklist, especially after a period of not using it (“memorability”)? Explain your response.

  4. Have you encountered any flaws, errors, limitations or pitfalls while using this checklist (“pitfalls/limitations”)? Explain your response.

  5. How satisfied were you with using this checklist in clinical decision making (“satisfaction”)? Explain your response.

Regarding the narrative network used to answer the second research question, the narrative, i.e., the steps of how differential diagnosis generation would routinely occur (“pre-implementation”), was first constructed. This was accomplished through group discussion by participants in the doctor group, and differences of opinion were resolved by working towards a general consensus during the focus group discussions. In the unlikely event that consensus could not be reached, one of the authors (KSC) would make a final decision after taking all opinions into consideration. The various permutations of how the components of the TWED checklist were implemented into the routines of differential diagnosis generation were then discussed in order to construct the “post-implementation” narrative.

Procedure

Explanations of the different types of diagnostic errors, the classification of cognitive errors in clinical decision-making, and the application of the TWED checklist were given by one of the authors (KSC) to all participants three months prior to the commencement of the focus group discussions. Participants were given a digital copy of the TWED checklist, delivered to their smartphones and electronic devices. For the next three months, they were told to use the checklist as often as they wanted and in any manner they deemed suitable. At the end of the three months, the focus group discussions were held.

Participants were told that their discussions would be audio-recorded and transcribed anonymously (i.e., their identities would not be revealed and that they would be identified in such manner as Student 1, Student 2, etc. for the medical student groups or as Doctor 1, Doctor 2, etc. for the doctor group). They were also assured of the confidentiality of their responses.

As mentioned, the focus group discussions for the doctor group were longer (two hours versus 45 minutes to one hour for the medical student focus group discussions). This was because, after the discussion of the first research question on the usability of the checklist, participants in this group were also asked how they implemented the checklist in their clinical practice. All focus group discussions were conducted in classrooms with participants seated facing each other in a circular arrangement.

One of the participants in each group was selected as the moderator of the group prior to the first session of the focus group discussions. The moderator was first briefed and given the interview scheme on the usability of the checklist as well as the framework for the narrative network. One of the authors (KSC) acted as the note-taker, recording the participants’ responses, observing the dynamics of the group interactions, and ensuring that the discussions stayed focused.

Transcription of the audio recordings was performed by one of the authors (KSC). The transcripts were then sent back to the participants for member-checking. This author (KSC) and an independent medical doctor (who was not a co-researcher of this study) from Sarawak General Hospital then performed the open coding deductively, using the thematic content analysis method, through iterative readings and labelling of keywords and phrases from the transcripts [26]. After the initial open coding, a second, axial coding was performed by re-analyzing the open codes to look for patterns and relationships among them. Discrepancies between the codings of the two coders were resolved through face-to-face and online discussions. NVivo version 12.0 for Mac was used to aid the coding process. Approval from the grants and research ethics committee of UNIMAS was obtained prior to commencement of the study (F05/SoTL/1477/2016).
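As a rough illustration of this two-pass analysis (the actual work was done in NVivo), the sketch below attaches open codes to transcript excerpts and then, in an axial pass, groups those codes under Nielsen’s usability dimensions. The excerpts, code names, and groupings here are invented for illustration and are not taken from the study’s codebook.

```python
# Illustration only (the study used NVivo 12): open coding labels
# transcript excerpts with keywords; axial coding then relates those
# open codes to higher-level themes (here, Nielsen's dimensions).
from collections import defaultdict

# Open coding pass: excerpt -> open code (all entries invented).
open_coded = [
    ("T and W are easier for us to remember", "item relevance"),
    ("only four items, not complicated to learn", "simplicity"),
    ("can become too time-consuming", "time cost"),
    ("the environment can influence our judgment", "workplace factors"),
]

# Axial coding pass: open code -> usability dimension (assumed grouping).
AXIAL = {
    "item relevance": "memorability",
    "simplicity": "learnability",
    "time cost": "errors/pitfalls",
    "workplace factors": "effectiveness",
}

themes = defaultdict(list)
for excerpt, code in open_coded:
    themes[AXIAL[code]].append((code, excerpt))

for dimension, grouped in themes.items():
    print(dimension, "->", grouped)
```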

Results

The perception of checklist usefulness

To answer the first research question on the perceived usability of the checklist in a clinical setting, the themes identified, along with illustrative comments and anecdotes and how these themes relate to the TWED checklist, are compiled and tabulated in detail in Table 1. With regard to the dimension of effectiveness, the checklist was perceived to be effective in promoting metacognition. This finding is probably best encapsulated in the quote below:

Table 1 Key themes identified (the first phrase in brackets refers to the group the participant belonged to; the second refers to the participant’s anonymized identity)

“This tool will at least help me not to simply discharge the patient, reducing my risk of misdiagnosing and mismanaging the patient.” (Doctor 10).

However, it seems that not all four items in the TWED checklist are equally memorable. Participants from both categories commented that the memorability of a checklist depends not just on its organization and mnemonic structure but also on its relevance and familiarity. Participants from both categories could remember to apply items “T” (threat) and “W” (what else), as these two items are important and directly relevant to their clinical practice. However, most participants in the medical student groups could not remember items “E” (evidence) and “D” (dispositional influence), which they perceived as neither relevant nor important. This is because these medical students see patients for the purpose of learning, not for the purpose of actually treating or managing them. This is typified in the following two quotes:

“I think for the first two items (i.e., the items “T” and “W”), they are easier for us to remember because these two items are relevant to us in our clinical encounters as medical students, because we apply them everyday. The other components or items are more difficult to remember.” (Student 2, Group 1).

“I can only remember one item out of the four; and that is the item T which stands for life-threatening conditions, because that is the most important thing for us, ruling out emergency conditions. For example, when I encountered a case of antepartum hemorrhage, one of the first things I must think of is abruptio placenta because that is an emergency.” (Student 4, Group 2).

Some participants in the medical student groups also commented that items “W” and “E” are inter-related, as the consideration of one of these two items may trigger the consideration of the other. For example,

“In my opinion, I think after we consider ‘E = evidence’, we should go back to ‘W’, i.e., whether we are wrong or not? Or whether the evidence support my diagnoses or not? And what else it could be?” (Student 5, Group 1).

“…I think that there are some overlaps between item no. 2 (“W”) with item no. 3 (“E”). Because, by the fact that I can say I might be wrong means that I have evidence to show it to be so…” (Student 1, Group 1).

On the other hand, a number of participants in the doctor group commented that items “T”, “W” and “E” are items they are already familiar with and have been practicing unconsciously on a daily basis. Hence, they felt that they did not need a checklist for these three items. In contrast, unlike the medical students, these doctors felt that item “D = Disposition” represents an often-neglected but relevant group of factors, and that they needed to pay more attention to how these factors influence the quality of their diagnostic decisions. This is illustrated by the quote below:

“…I think the components “T”, “W”, “E” are something which we are already practicing on the daily basis but the “D = Disposition” component of the tool is something we need to pay particular attention too. The “environment” we are working in can influence our judgment.” (Doctor 9).

Third, in terms of learnability, the participants believed that as the checklist has only four items, it is not complicated to learn. To enhance the learning of this checklist, participants from both categories agreed that it should be introduced as early as possible in medical school, preferably before a medical student begins his or her clinical rotations. For example, one of the participants in the doctor group commented:

“Yeah, this tool should be taught earlier… as early as possible, maybe while in medical school because once they already out working, the junior doctors would have already developed their own ways of approaching patients, and it might be difficult to introduce new tools for them” (Doctor 10).

Fourth, in terms of the limitations or pitfalls of the checklist, participants from both categories believed that the effectiveness of the checklist is hampered by a lack of prior medical knowledge. In other words, although this checklist may be effective in promoting metacognition, it is not a “magic bullet” for resolving premature closure. For it to work, doctors and medical students need to have adequate prior knowledge. Another limitation highlighted by participants from the doctor group was that applying the checklist to every case can be time-consuming; hence, it may slow down the entire working process. This is illustrated by the quote below:

“To me the limitation of this tool is that when we keep thinking too much on the patient, we may then be worrying too much about the case, spending too much time thinking of what could the errors be, etc. I mean, being a bit skeptical, applying critical thinking is good, but sometimes, this can become too time-consuming especially when we are too skeptical, which in turn, delays our management, leading to stress and frustration…. The challenge for us then is to know when and for which case do we need to apply the checklist, and which ones we do not.” (Doctor 8).

And lastly, regarding the satisfaction or pleasantness of using the checklist, although some participants believed that its mnemonic structure and simplicity make the checklist pleasant to use, others commented that, because the checklist exposed their own shortcomings (i.e., inadequate prior knowledge) in generating pertinent differential diagnoses, it was not so pleasant to use.

Implementation of the checklist

To answer the second research question, a narrative network was constructed. In this narrative network, the narrative of the diagnostic process prior to implementation of the TWED checklist (i.e., the pre-implementation narrative) was mapped with the specific items of the checklist in order to generate the so-called post-implementation narrative (see Fig. 1). From the post-implementation narrative, it is apparent that the items in the checklist are not applied in the same manner or with the same frequency. Item “T = Threat” was applied iteratively at almost every cognitive checkpoint where decisions are to be made, for example, after taking a history and performing a physical examination, after obtaining further data from more history-taking and re-examining the patient, when reconsidering differential diagnoses based on data from investigations, or when the patient is not improving as he or she should. On the other hand, the participants believed that items “W = What if I am wrong? What else?” and “E = Evidence” should only be applied when the outcome did not turn out as expected, for instance, when the patient is not improving. Finally, “D = Dispositional influences” is the item applied infrequently. It is akin to a “cognitive brake” in the fast lane of clinical decision making, ensuring that one does not allow emotional influences or the ambient atmosphere of the workplace environment to cloud or distort one’s judgment. The one cognitive checkpoint where all four items converge is after the initial history taking and physical examination have been performed to generate the initial clinical impression (Step C in the pre-implementation narrative, as shown in Fig. 1).

Fig. 1 Narrative network showing how the specific items of the TWED checklist can be implemented in the diagnostic process
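Read as a rule set, this post-implementation narrative assigns different checklist items to different cognitive checkpoints. The sketch below paraphrases that pattern in code; the checkpoint names are invented labels for the steps described above, not the study’s notation.

```python
# Illustrative paraphrase of the post-implementation narrative (Fig. 1);
# checkpoint names are invented labels, not the study's notation.

def items_to_apply(checkpoint, outcome_as_expected=True):
    """Return the TWED items applied at a given cognitive checkpoint."""
    if checkpoint == "initial clinical impression":
        # The one checkpoint where all four items converge (Step C).
        return ["T", "W", "E", "D"]
    items = ["T"]  # "Threat" is revisited at almost every checkpoint.
    if not outcome_as_expected:
        # "What else?" and "Evidence" are triggered by unexpected
        # outcomes, e.g., a patient who is not improving as expected.
        items += ["W", "E"]
    # "D" acts as an occasional "cognitive brake" and is applied
    # infrequently, so it is omitted from the routine path here.
    return items

print(items_to_apply("initial clinical impression"))                # T, W, E, D
print(items_to_apply("review of investigations"))                   # T only
print(items_to_apply("patient review", outcome_as_expected=False))  # T, W, E
```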

Discussion

With regard to the first research question, this qualitative study suggests that the TWED checklist is a useful tool for both medical students and doctors. As commented by a participant in the student group, because the checklist has only four items, it is not complicated to learn and should not interfere with the application of other checklists that students are often taught in medical school (for example, the “VINDICATED” checklist for the etiologies of differential diagnoses, where V = vascular, I = inflammatory, N = neoplastic, D = drugs, I = infective, C = congenital, A = autoimmune, T = trauma, E = endocrine and metabolic, and D = degenerative). The checklist also appears to be effective in promoting metacognition. However, it can be time-consuming, particularly when one first learns how to apply it. In addition, while some participants opined that the mnemonic makes the checklist pleasant to use, others felt that it was not, because the checklist is akin to a mirror reflecting the inadequacy of their prior knowledge.

Different groups of participants found different aspects of the checklist useful. For the medical students, items “T = Threat” and “W = What else?” were the most useful; these two items awakened their awareness of the importance of mitigating cognitive errors such as premature closure.

As for the doctors, they claimed that only item “D = Dispositional influences” was something new to them. It helped them realize the importance of emotional and environmental factors in diagnostic decisions. They further claimed that items “T = Threat” and “W = What else?” are practices they had already been carrying out unconsciously. Excessive and redundant use of checklists could overburden doctors, complicate tasks, and reduce efficiency [27]. This is because insisting on the use of external tools like a checklist when clinicians are already competent may paradoxically tax their working memory, distracting them from the fluent execution of their Type 2 thinking processes [28]. The external tool might trigger the need for users to reconcile the information contained in the checklist with what they have already internalized [28, 29]. This is further compounded by the fact that working memory (the active processing memory) is limited in the number of items that can be held at any one time [30, 31]. This potentially counter-productive effect is known as the expertise-reversal effect [28].

On the basis of these findings, while checklist training for novices like medical students should first emphasize learning items “T” and “W”, the same cannot be said for training doctors; training for doctors is best focused on the importance of item “D” only.

With regard to the second research question, the implementation of the TWED checklist in the routine diagnostic process seems simple enough, even for the medical students. But just as in learning any new task, integrating this checklist into the daily clinical routine takes time and effort. Repetitive practice could allow the tasks embedded in the TWED checklist to be relegated from Type 2 thinking processes to the more efficient, automatized Type 1 thinking processes [10].

Some participants from both categories pointed out that the effectiveness of the checklist can be hampered by a lack of prior medical knowledge. The importance of having sufficient knowledge to recognize these cognitive errors and to rectify them with the correct responses has also been alluded to in a number of studies [32,33,34]. Furthermore, although, as mentioned earlier, cognitive errors are believed to be more prevalent in a predominantly Type 1 thinking mode [1, 10], these errors can also occur in a predominantly Type 2 process. As such, merely asking students and doctors to slow down and reflect on their own thinking using a checklist may not be helpful, particularly when they are already slowing down to be more systematic and analytical in their thinking (Type 2 thinking) [35]. In other words, implementing the TWED checklist should go in tandem with increasing the knowledge of the participants [36].

In this study, both Nielsen’s five dimensions of usability (i.e., learnability, effectiveness, memorability, errors, and satisfaction) and Pentland’s narrative network were found to be useful frameworks for evaluating the usability of, and exploring ways of implementing, a psychometric tool in a clinical setting.

A number of limitations of this study warrant mentioning. First, qualitative data were gathered in only one session for each of the two medical student focus groups, and in two sessions for the doctor group. Hence, data saturation was probably not reached for the medical student groups. Second, data were collected at only a single point in time, three months after using the checklist. Had data been collected at several points in time (e.g., also after 6 and 9 months), a change in trend might have been observed. Allowing a longer period of time for the participants to apply the checklist would afford them the opportunity to master the skill of checking cognitive errors. Ultimately, as the aim of this checklist is to facilitate metacognition to reduce diagnostic errors by minimizing the risk of committing cognitive errors [1], future work could include a quantitative study on the effectiveness of the checklist in reducing the incidence rate of diagnostic errors. However, this could only be done once the doctors have been given sufficient time to master the skill.

Overall, despite its limitations, the results of this study suggest that the TWED checklist is useful in facilitating metacognitive regulation of the diagnostic process in a clinical setting. The results imply that this checklist may serve as a trigger to switch from a predominantly automatized decision-making mode to a more analytical thinking mode. As aptly stated by Moulton et al. (2007), a good, well-calibrated decision maker is not one who is merely able to automatize and speed up the diagnostic process for cases with typical clinical presentations, but one who has mastered the act and the art of “slowing down when one should slow down” [37]. To state it another way, the hallmark of a good decision maker is not just the ability to fit presenting problems to known illness scripts or clinical solutions but the ability to apply novel or creative solutions to ill-defined or unusual clinical presentations [37]. In Gollwitzer’s conceptual framework [38], the TWED checklist can be said to function as an implementation intention strategy that bridges the gap between the intention of minimizing cognitive errors and the execution of that intended goal.

Conclusion

Although different groups of participants may use it in slightly different ways (depending on their level of experience), this qualitative study showed that a metacognitive checklist aimed at checking cognitive errors is perceived as likely to be useful in a real clinical setting.

References

  1. Balogh EP, Miller BT, Ball JR, editors. Improving diagnosis in health care. Washington, DC: The National Academies Press; 2015. Available at: https://www.nap.edu/catalog/21794/improving-diagnosis-in-health-care.

  2. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775–80.

  3. Charlin B, Boshuizen HP, Custers EJ, Feltovich PJ. Scripts and clinical reasoning. Med Educ. 2007;41(12):1178–84.

  4. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ. 2005;39(1):98–106.

  5. Evans JS, Stanovich KE. Dual-process theories of higher cognition: advancing the debate. Perspect Psychol Sci. 2013;8(3):223–41.

  6. Durning SJ, Dong T, Artino AR, van der Vleuten C, Holmboe E, Schuwirth L. Dual processing theory and experts' reasoning: exploring thinking on national multiple-choice questions. Perspect Med Educ. 2015;4(4):168–75.

  7. Mittleman MA, Mintzer D, Maclure M, Tofler GH, Sherwood JB, Muller JE. Triggering of myocardial infarction by cocaine. Circulation. 1999;99(21):2737–41.

  8. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 2: impediments to and strategies for change. BMJ Qual Saf. 2013;22(Suppl 2):ii65–72.

  9. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535–57.

  10. Croskerry P, Petrie DA, Reilly JB, et al. Deciding about fast and slow decisions. Acad Med. 2014;89(2):197–200.

  11. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract. 2009;14(Suppl 1):37–49.

  12. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9(11):1184–204.

  13. Coderre S, Wright B, McLaughlin K. To think is good: querying an initial hypothesis reduces diagnostic error in medical students. Acad Med. 2010;85(7):1125–9.

  14. Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ. 2008;42(5):468–75.

  15. Graber ML, Sorensen AV, Biswas J, et al. Developing checklists to prevent diagnostic error in emergency room settings. Diagnosis. 2014;1(3):223–31.

  16. Gawande A. The checklist manifesto: how to get things right. New York: Picador; 2010.

  17. Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med. 2011;86(3):307–13.

  18. Chew KS, Durning SJ, van Merrienboer JJ. Teaching metacognition in clinical decision-making using a novel mnemonic checklist: an exploratory study. Singap Med J. 2016;57(12):694–700.

  19. Chew KS, van Merrienboer JJG, Durning SJ. Investing in the use of a checklist during differential diagnoses consideration: what's the trade-off? BMC Med Educ. 2017;17(1):234.

  20. Graber ML. Educational strategies to reduce diagnostic error: can you teach this stuff? Adv Health Sci Educ Theory Pract. 2009;14(Suppl 1):63–9.

  21. Arzy S, Brezis M, Khoury S, Simon SR, Ben-Hur T. Misleading one detail: a preventable mode of diagnostic error? J Eval Clin Pract. 2009;15(5):804–6.

  22. Nielsen J. Usability metrics: tracking interface improvements. IEEE Softw. 1996;13(6).

  23. Pentland BT, Feldman MS. Narrative networks: patterns of technology and organization. Organ Sci. 2007;18(5):781–95.

  24. Carey MA, Smith MW. Capturing the group effect in focus groups: a special concern in analysis. Qual Health Res. 1994;4(1):123–7.

  25. Chew KS, van Merrienboer JJG, Durning SJ. A portable mnemonic to facilitate checking for cognitive errors. BMC Res Notes. 2016;9(1):445.

  26. Liamputtong P. Qualitative research methods. 3rd ed. South Melbourne: Oxford University Press; 2009.

  27. Winters BD, Gurses AP, Lehmann H, Sexton JB, Rampersad CJ, Pronovost PJ. Clinical review: checklists - translating evidence into practice. Crit Care. 2009;13(6):210.

  28. Kalyuga S. Expertise reversal effect and its implications for learner-tailored instruction. Educ Psychol Rev. 2007;19(4):509–39.

  29. Mayer RE. Models for understanding. Rev Educ Res. 1989;59(1):43–64.

  30. van Merrienboer JJ, Sweller J. Cognitive load theory in health professional education: design principles and strategies. Med Educ. 2010;44(1):85–93.

  31. Sweller J, van Merrienboer JJG, Paas FGWC. Cognitive architecture and instructional design. Educ Psychol Rev. 1998;10(3):251–96.

  32. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Acad Med. 2017;92(1):23–30.

  33. Monteiro SD, Sherbino J, Patel A, Mazzetti I, Norman GR, Howey E. Reflecting on diagnostic errors: taking a second look is not enough. J Gen Intern Med. 2015;30(9):1270–4.

  34. Zwaan L, de Bruijne M, Wagner C, Thijs A, Smits M, van der Wal G, et al. Patient record review of the incidence, consequences, and causes of diagnostic adverse events. Arch Intern Med. 2010;170(12):1015–21.

  35. Sherbino J, Dore KL, Wood TJ, Young ME, Gaissmaier W, Kreuger S, et al. The relationship between response time and diagnostic accuracy. Acad Med. 2012;87(6):785–91.

  36. McLaughlin K, Eva KW, Norman GR. Reexamining our bias against heuristics. Adv Health Sci Educ. 2014;19(3):457–64.

  37. Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: a new model of expert judgment. Acad Med. 2007;82(10 Suppl):S109–16.

  38. Gollwitzer PM. Implementation intentions: strong effects of simple plans. Am Psychol. 1999;54(7):493–503.


Acknowledgements

The authors would like to thank Dr. Shazrina Ahmad Razali for her help as an independent reader and evaluator in the coding of the transcripts.

Funding

A Scholarship of Teaching and Learning (SoTL) grant was obtained from UNIMAS (F05/SoTL/1477/2016) to conduct this study.

Availability of data and materials

The transcripts of the focus group discussions from which the findings of this study were derived are attached as supplementary materials, labelled Additional file 1: Transcript 1, Additional file 2: Transcript 2, and Additional file 3: Transcript 3.

Author information

Contributions

KSC was responsible for the acquisition of the qualitative data as well as the initial drafting of the manuscript. All authors (KSC, JJGvM and SJD) were responsible for the conception of the project, analysis of the data, and revisions of the manuscript, as well as contributing to its intellectual content. All authors approved the final version of the manuscript and are accountable for all aspects of the work in relation to its accuracy and integrity.

Corresponding author

Correspondence to Keng Sheng Chew.

Ethics declarations

Ethics approval and consent to participate

Written approval from the institutional research ethics committee of Universiti Malaysia Sarawak (UNIMAS) was obtained prior to commencement of the study (F05/SoTL/1477/2016). All participants in this study consented to participate voluntarily. Before the beginning of the focus group discussions, participants were told that their discussions would be audio-recorded and transcribed anonymously. They were also assured of the confidentiality of their responses. No monetary compensation was involved in the recruitment of the participants.

Consent for publication

Permission was also obtained from the participants to publish their data anonymously, without revealing their names or identities; they were told that they would be identified as Student 1, Student 2, etc. for the medical student groups, or as Doctor 1, Doctor 2, etc. for the doctor group.

Competing interests

The authors declare that they have no competing interests in this study.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional Files

Additional file 1:

Transcript 1 Focus Group Discussion (Group 1 final year medical students) (DOCX 18 kb)

Additional file 2:

Transcript 2 Focus Group Discussion (Group 2 final year medical students) (DOCX 16 kb)

Additional file 3:

Transcript 3 Focus Group Discussion (Doctors Group) (DOCX 24 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


Cite this article

Chew, K.S., van Merrienboer, J.J.G. & Durning, S.J. Perception of the usability and implementation of a metacognitive mnemonic to check cognitive errors in clinical setting. BMC Med Educ 19, 18 (2019). https://doi.org/10.1186/s12909-018-1451-4
