
The development and validation of the clinicians’ awareness towards cognitive errors (CATChES) in clinical decision making questionnaire tool

Abstract

Background

Despite their impact on diagnostic accuracy, there is a paucity of literature on questionnaire tools to assess clinicians’ awareness of cognitive errors. A validation study was conducted to develop a questionnaire tool to evaluate the Clinicians’ Awareness Towards Cognitive Errors (CATChES) in clinical decision making.

Methods

The questionnaire is divided into two parts: Part A evaluates clinicians’ awareness of cognitive errors in clinical decision making, while Part B evaluates their perception of specific cognitive errors. Content validation was first determined for both parts, followed by construct validation for Part A. Construct validation for Part B was not determined because its responses were set in a dichotomous format.

Results

For content validation, all items in both Part A and Part B were rated as “excellent” in terms of their relevance to clinical settings. For construct validation of Part A using exploratory factor analysis (EFA), a two-factor model accounting for 60% of the total variance was determined. Two items were deleted, and the EFA was then repeated, showing that all remaining factor loadings were above the cut-off value of 0.5. The Cronbach’s alpha values for both factors were above 0.6.

Conclusion

The CATChES questionnaire is a valid tool for evaluating clinicians’ awareness of cognitive errors in clinical decision making.


Background

According to the Institute of Medicine’s report “Improving Diagnosis in Health Care” (2015), diagnostic error is defined as “the failure to (a) establish an accurate and timely explanation of the patient’s health problem(s) or (b) communicate that explanation to the patient” [1]. Graber et al. [2] identified three broad categories of diagnostic errors: no-fault errors, system-related errors and cognitive errors [1, 2]. No-fault errors are caused by external factors outside the control of the clinician or the health care system, such as atypical disease presentations or misleading information provided by patients. System-related errors are due to technical or organizational barriers such as weaknesses in communication and care coordination, inefficient processes and faulty equipment. Cognitive errors (also known as cognitive biases) are due to flaws in the clinician’s critical thinking [3, 4]. Cognitive errors are deviations from rationality and, if left unchecked, may derail clinicians into making diagnostic errors [5]. Although they may believe otherwise, studies have shown that clinicians are in fact just as prone to committing cognitive errors as anyone else [6, 7].

Campbell et al. [8] have classified the common clinically important cognitive errors into six categories. These are (1) “errors due to over-attachment to a particular diagnosis (examples of cognitive biases in this class include anchoring and confirmation bias)”, (2) “errors due to failure to consider alternative diagnoses (for example, search satisficing)”, (3) “errors due to inheriting someone else’s thinking (for example, diagnostic momentum and framing effect)”, (4) “errors in prevalence perception or estimation (for example, availability bias, gambler’s fallacy and posterior probability error)”, (5) “errors involving patient characteristics or presentation context (for example, cognitive biases: fundamental attribution error, gender bias)”, and (6) “errors that are associated with the doctor’s affect or personality (for example, visceral bias and sunk cost fallacy)” [8].

In a survey on diagnostic errors by MacDonald et al. [9] involving 6400 clinicians, the top three reasons cited for diagnostic errors were all related to cognitive errors. Seventy-five percent of these clinicians cited atypical patient presentation (which misled doctors into considering other diagnoses), 50% cited failure to consider other diagnoses, and 40% cited failure to gather an adequate history from patients [9].

Nonetheless, as important as these cognitive errors are, it is not known how many clinicians are aware of them. As pointed out by Prochaska et al. [10] in their Transtheoretical Model of Change, the first step towards behavioral change is contemplation. In the context of cognitive errors in clinical decision making, this is the stage where a clinician becomes acutely aware of the negative impact of cognitive errors on diagnostic accuracy, as well as of the factors that increase his or her vulnerability to committing such biases. Once in the contemplation stage, a clinician would likely see the necessity of initiating steps towards the intended behavioral change (in this case, minimizing the risk of committing cognitive errors when making clinical decisions). This next step is known as the preparation stage [10].

On the other hand, a person who is unaware of the problem sees no reason to take any action to change. This prior stage is known as the pre-contemplation stage [10]. A tool is therefore necessary to facilitate the transition from the stage of pre-contemplation to the stage of contemplation.

Despite the impact of cognitive errors on diagnostic accuracy, there is a paucity of literature on questionnaire tools to assess clinicians’ awareness of cognitive errors. This paper describes the development and validation of a questionnaire designed to evaluate the Clinicians’ Awareness Towards Cognitive Errors (CATChES) in clinical decision making. The purpose of this tool is to help create awareness among clinicians who are in the pre-contemplation stage, with the hope of moving them to the contemplation stage of the Transtheoretical Model of Change. This tool can also be used as pre-intervention material to supplement educational resources for teaching cognitive errors in clinical medicine (such as this resource in MedEdPORTAL [11]).

Methods

Participants

For content validation, based on the recommendation by Lynn [12], ten experts, all emergency physicians from Universiti Sains Malaysia, were invited to take part in the content validation; nine of them consented.

For construct validation, emergency physicians and emergency residents with a minimum of four years’ working experience in Hospital Universiti Sains Malaysia were identified as the participants. Using the rule of thumb of a minimum of five participants per item, at least 30 participants were needed. Clinicians who were not residents pursuing a postgraduate degree in emergency medicine, or who had less than four years of working experience in the emergency department, were excluded. The authors invited 35 emergency residents to participate in the construct validation, and 31 of them responded. All nine emergency physicians who participated in the content validation also took part in the construct validation. Hence, a total of 40 participants were recruited for the construct validation process.

Materials

The questionnaire tool in this study is divided into two parts. The first part (Part A) aims to evaluate clinicians’ awareness of cognitive errors in clinical decision making, while the second part (Part B) aims to evaluate clinicians’ perception of specific categories of cognitive errors in the clinical setting.

A preliminary version of the questionnaire was first developed by two authors (KS and AH) and checked by the third author (YC). For the development of Part A of this questionnaire, the Transtheoretical Model of Change [10] was used as the theoretical framework, and six items were generated in this preliminary version. The theoretical basis for each of the items is given in Table 1. For Part B, the classification of cognitive errors used by Campbell et al. [8] was used to generate the preliminary list of categories of cognitive errors. Each category of cognitive errors was defined as an item, giving a total of six items.

Table 1 Preliminary list of items to evaluate the attitude of clinicians toward cognitive errors in clinical decision making

Procedure

This was a cross-sectional study conducted among clinicians (emergency physicians for content validation; emergency physicians and emergency residents for construct validation) from Hospital Universiti Sains Malaysia (HUSM). Convenience sampling was applied in recruiting the participants. Ethical approval was obtained from the Human Research Ethics Committee of Universiti Sains Malaysia before the study commenced.

The content validity of the questionnaire (both Part A and Part B) was first determined by a panel of experts consisting of emergency physicians in HUSM. The experts were briefed by one of the authors (AH) on how to rate the relevance of the items on a four-point Likert scale ranging from “1 = not relevant at all” to “4 = highly relevant”. They were told to respond anonymously and that they were free to withdraw from the study at any time. The response sheets were handed to the experts to complete on their own and were collected by the same author (AH) the following day. A document containing a glossary of terms was handed out and read out to the experts before they started the questionnaire.

After the content validation process, the construct validation of the questionnaire was determined. For the construct validation of Part A, participants were first briefed on how to respond to the items on a five-point Likert scale ranging from “1 = strongly disagree” to “5 = strongly agree”. Participants were told to respond anonymously and that they were free to opt out at any time. A separate document containing a glossary of terms was handed out and read to the participants before they started the questionnaire. All participants responded individually in one sitting.

Since the purpose of Part B is to identify clinicians’ perception of specific categories of cognitive errors in the clinical setting, it was set in a dichotomous format (i.e., whether each category is relevant or not relevant) rather than an ordinal format. As such, construct validation for this part was not determined. The sequence of content and construct validation is illustrated in Fig. 1.

Fig. 1 The sequence of content validation followed by construct validation

Statistical analyses

Exploratory factor analysis (EFA) was used to determine the construct validity of Part A of the questionnaire, with principal axis factoring as the extraction method. An initial run of the factor analysis was performed to determine the number of factors to be extracted, using an eigenvalue greater than 1 as the cut-off. A scree plot was also examined to further verify the number of factors for extraction. Repeated runs of the factor analysis were then performed to determine the factor loadings of the items and to identify problematic items that might need to be removed. A cut-off of 0.5 for factor loadings was used as the criterion for deciding whether an item should be removed [13], and a value of 0.25 was set as the cut-off for communality (extraction) [14]. Promax oblique rotation was used. The internal consistency reliability of the items was determined by analyzing Cronbach’s alpha coefficients. Cronbach’s alpha refers to the degree to which participants’ responses are consistent across the items within a construct [15]; a cut-off of Cronbach’s alpha >0.6 was set in this study as the criterion for a good degree of internal consistency [15]. SPSS version 22.0 for Mac was used for data analysis.
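The same workflow can be reproduced outside SPSS. Below is a minimal sketch in Python, assuming the Part A responses are stored in a CSV file of five-point Likert scores (one column per item, one row per participant) and using the open-source factor_analyzer package; the file name, the column layout and the use of that package’s “principal” extraction as a stand-in for SPSS’s principal axis factoring are assumptions, not part of the original study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

# Hypothetical file: one row per participant, one column per Part A item (scores 1-5)
df = pd.read_csv("part_a_responses.csv")

# Sampling adequacy and sphericity checks reported in the Results section
chi_square, p_value = calculate_bartlett_sphericity(df)
_, kmo_total = calculate_kmo(df)
print(f"Bartlett chi-square = {chi_square:.2f} (p = {p_value:.3f}), KMO = {kmo_total:.2f}")

# Initial unrotated run: count eigenvalues > 1 to fix the number of factors
fa_initial = FactorAnalyzer(rotation=None, method="principal")
fa_initial.fit(df)
eigenvalues, _ = fa_initial.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Re-run with the fixed factor count and promax (oblique) rotation
fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
communalities = pd.Series(fa.get_communalities(), index=df.columns)

# Flag items failing the cut-offs used in this study (loading < 0.5 or communality < 0.25)
weak = (loadings.abs().max(axis=1) < 0.5) | (communalities < 0.25)
print("Candidates for removal:", list(loadings.index[weak]))
```

In the study itself, flagged items were removed one at a time and the analysis re-run until all remaining items met both cut-offs.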

To evaluate the content validity of item relevance, the content validity index (CVI) and the modified kappa (κ) were used. The item-level CVI (I-CVI) is defined as the proportion of judges who rate the item 3 or 4 on the four-point Likert scale (1 = not relevant at all, 2 = somewhat relevant, 3 = quite relevant, 4 = highly relevant) [12]. A CVI value of 0.85 or above is considered valid [12]. The modified kappa (κ) was computed to account for the possibility of chance agreement in the CVI [16].
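As a concrete illustration of these formulas, the sketch below computes the I-CVI and the modified kappa for a single item, with the chance-agreement probability taken from Polit et al. [16]. The nine expert ratings shown are hypothetical, not data from this study.

```python
from math import comb

def item_cvi_and_kappa(ratings):
    """ratings: one 1-4 relevance score per expert for a single item."""
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)   # experts rating the item 3 or 4
    i_cvi = a / n                           # item-level content validity index
    pc = comb(n, a) * 0.5 ** n              # probability of chance agreement
    kappa = (i_cvi - pc) / (1 - pc)         # modified kappa, adjusting I-CVI for chance
    return i_cvi, kappa

# Hypothetical example: nine experts, eight of whom rate the item as relevant (3 or 4)
i_cvi, kappa = item_cvi_and_kappa([4, 4, 3, 4, 3, 4, 4, 3, 2])
print(f"I-CVI = {i_cvi:.2f}, modified kappa = {kappa:.2f}")
```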

Results

With regard to content validity, the CVI values showed that all items were valid in terms of their relevance to clinical settings. In terms of their modified kappa (κ) values, all items were rated as “excellent” with respect to the validity of their relevance. The CVI results for Part A and Part B are given in Tables 2 and 3 respectively.

Table 2 Content Validity of Item Relevance for Part A of Questionnaire
Table 3 Content Validity of Item Relevance for Part B of Questionnaire

With regard to the construct validation of Part A using EFA, the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.74, demonstrating a moderate degree of common variance shared among the items. Bartlett’s test of sphericity was statistically significant (chi-square = 43.93, p < 0.05), indicating that there are correlations among the items in the correlation matrix. The initial eigenvalues indicated that the first two factors (both with eigenvalues >1) explain 60% of the total variance (42% and 18% respectively). Furthermore, two factors lay above the point of inflexion of the scree plot (Fig. 2). The number of factors was therefore fixed at two for the re-run of the analysis.

Fig. 2 Scree plot showing the two factors above the point of inflexion

After two re-runs of the analysis, two of the six items were removed as they did not meet the minimum cut-offs of factor loading >0.5 and communality >0.25. In particular, item no. 3 (“Authority gradient discourage critical thinking and thus increase the vulnerability to commit cognitive errors”) was identified as problematic, with factor loadings of only 0.14 and 0.20 on the two factors and a communality (extraction) value of only 0.084. Item no. 4 (“Something, rather than nothing, can be done to minimize the risk of falling into these errors”) was also identified as problematic, with a factor loading of 0.481 on Factor 2 and a communality (extraction) value of 0.145.

After removal of the two items, the re-run of the principal axis analysis on the remaining four items showed that they explain 75% of the variance with two factors extracted. All items in this analysis had factor loadings >0.5 and communalities (extraction) >0.25. The pattern matrix of the factor loadings is presented in Table 4. Item no. 1 (“Cognitive errors in general have important impact towards clinical decision making in emergency medicine”) and item no. 2 (“Being aware of cognitive errors help me to be more careful in my clinical decisions”) load on Factor 2, whereas item no. 5 (“The understanding of cognitive errors and its impact on clinical decision making and patient safety should be made a component in emergency medicine curriculum in postgraduate training”) and item no. 6 (“The understanding of cognitive errors and its impact on clinical decision making and patient safety should be taught at undergraduate level”) load on Factor 1. Hence, Factor 1 was labeled “educational interventions to reduce the risk of cognitive errors” and Factor 2 was labeled “impact of cognitive errors in clinical decision making”.

Table 4 Pattern Matrix of the Factor Loadings

The two factors in Part A yielded Cronbach’s alpha values of 0.676 and 0.635 respectively, and no further improvement in these values could be achieved by deleting any of the items. The Cronbach’s alpha for Part B is 0.657; similarly, no further improvement could be achieved by deleting any of the items in this part. In the finalized version of the CATChES questionnaire (Table 5), the order of the factors in Part A is logically reversed, with Factor 2 placed before Factor 1.

Table 5 The Final Version of the CATChES Questionnaire
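For readers who wish to replicate the reliability analysis, the following is a minimal sketch of Cronbach’s alpha and the “alpha if item deleted” check described above, assuming the item responses are held in a pandas DataFrame with one column per item; the function names are illustrative only. The item-deletion check is meaningful only for scales with three or more items (such as the six items of Part B).

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha recomputed with each item dropped in turn (requires at least three items)."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col))
                      for col in items.columns})
```

If every value returned by the item-deletion check is lower than the full-scale alpha, no single-item deletion improves internal consistency, which is the pattern reported for both parts of the questionnaire.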

Discussion

Based on the content validity evaluation of Parts A and B of the questionnaire, all items were retained as they showed excellent content validity in terms of their relevance to clinical settings.

From the EFA, Part A of the questionnaire is constructed with two factors, i.e., the “impact of cognitive errors in clinical decision making” and the “educational interventions to reduce the risk of cognitive errors”, each consisting of two items. Referring back to the Transtheoretical Model of Change by Prochaska et al. [10], Factor 2 (“impact of cognitive errors in clinical decision making”) reflects the contemplation stage of the model, whereas Factor 1 (“educational interventions to reduce the risk of cognitive errors”) reflects the preparation stage.

Furthermore, the EFA showed that two items had to be removed. For item no. 4 (“Something, rather than nothing, can be done to minimize the risk of falling into these biases”), the phrase ‘something, rather than nothing’ is rather ambiguous, and this may have led the participants to reject it as a valid item. Re-phrasing it more directly may bring greater clarity; for example, it could be rephrased as ‘Specific de-biasing strategies are effective in minimizing the risk of committing cognitive biases’.

For item no. 3 (“Authority gradient discourages critical thinking and thus increase the vulnerability to commit cognitive errors”), its rejection could be due to the statement being overly generalized, particularly in an Asian culture. An authority gradient is defined as the gradient that may exist between two individuals’ professional status, experience, or expertise, which contributes to a gap in exchanging information or communicating concerns [17]. In our study, the participants perhaps did not think that an authority gradient is always detrimental. In an environment where a healthy level of authority gradient is respected, a senior, experienced clinician can train a junior clinician to develop better clinical decision-making skills.

In terms of the internal consistency of Part A, a moderate degree of internal consistency was noted, with Cronbach’s alpha values above 0.6 for both factors. The internal consistency could be improved by adding more items within the factors. Therefore, future research should consider including more items relevant to Factor 1 (educational interventions to reduce the risk of cognitive errors) and Factor 2 (impact of cognitive errors in clinical decision making), or items that generate additional factors to move up to the next stage of “taking action” along the ladder of the Transtheoretical Model of Change [10].

There are a number of limitations in this validation study. First, for Part A, confirmatory factor analysis (CFA) was not performed on a separate sample to confirm the constructs developed from the EFA results. Second, face validation was not performed to determine the questionnaire’s comprehensibility and readability; for example, as mentioned, the phrase ‘something, rather than nothing’ in item no. 4 is rather vague. Third, more items should be included to improve the internal consistency of the constructs. Future development of this project will include rewording and rephrasing the items as well as adding more relevant items based on the Transtheoretical Model of Change [10]. A larger sample should also be recruited to replicate the present EFA results, and a CFA should be added to the analysis, in order to devise the second edition of this questionnaire.

Conclusion

Despite its limitations, the content and construct validation suggest that the CATChES questionnaire is a useful tool for evaluating clinicians’ awareness of cognitive errors in clinical decision making. Such awareness may, in turn, motivate clinicians to take measures to minimize the risk of committing these errors.

References

  1. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington, DC: The National Academies Press; 2015. Available at http://nas.edu/improvingdiagnosis. Accessed 25 Nov 2015.
  2. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493–9.
  3. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008;121(5 Suppl):S2–23.
  4. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775–80.
  5. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf. 2013;22 Suppl 2:ii58–64.
  6. Bornstein BH, Emler AC. Rationality in medical decision making: a review of the literature on doctors’ decision-making biases. J Eval Clin Pract. 2001;7(2):97–107.
  7. Klein JG. Five pitfalls in decisions about diagnosis and prescribing. BMJ. 2005;330(7494):781–3.
  8. Campbell SG, Croskerry P, Bond WF. Profiles in patient safety: a “perfect storm” in the emergency department. Acad Emerg Med. 2007;14(8):743–9.
  9. MacDonald OW. Physician perspectives on preventing diagnostic errors. QuantiaMD; 2011. Available at https://www.quantiamd.com/q-qcp/QuantiaMD_PreventingDiagnosticErrors_Whitepaper_1.pdf. Accessed 23 Oct 2015.
  10. Prochaska JO, DiClemente CC, Norcross JC. In search of how people change: applications to addictive behaviors. Am Psychol. 1992;47(9):1102–14.
  11. Chew KS, van Merrienboer J, Durning S. Teaching cognitive biases in clinical decision making: a case-based discussion. MedEdPORTAL Publications. 2015;11:10138. http://dx.doi.org/10.15766/mep_2374-8265.10138
  12. Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35(6):382–5.
  13. Hair JF Jr, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. Upper Saddle River: Prentice Hall; 2010.
  14. Kline RB. Principles and practice of structural equation modeling. 3rd ed. New York: Guilford Press; 2011.
  15. Cho E, Kim S. Cronbach’s coefficient alpha: well known but poorly understood. Organ Res Methods. 2015;18(2):207–30.
  16. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67.
  17. Cosby KS, Croskerry P. Profiles in patient safety: authority gradients in medical error. Acad Emerg Med. 2004;11(12):1341–5.


Acknowledgements

Not applicable.

Funding

Not applicable.

Availability of data and materials

All data related to the content validation are published in Tables 2 and 3. The dataset for construct validity cannot be shared in order to preserve participants’ anonymity.

Authors’ contributions

Author KS was responsible for the conception of the project, acquisition and analysis of the data, initial drafting and subsequent revisions of the manuscript, and contributing to the intellectual content of the manuscript. YC was responsible for the analysis of the data, critically revising the manuscript, and making substantial contributions to its intellectual content. AH was responsible for the initial conception of the study design, the acquisition and analysis of the data, and revising the manuscript critically for important intellectual content. All authors approved the final version of the manuscript and are accountable for all aspects of the work in relation to its accuracy and integrity.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Ethical clearance was obtained from the institutional research ethics committee of Universiti Sains Malaysia. All participants consented to anonymous, voluntary participation in this study and to the use of its results in any subsequent publications.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information


Corresponding author

Correspondence to Keng Sheng Chew.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Chew, K.S., Kueh, Y.C. & Abdul Aziz, A. The development and validation of the clinicians’ awareness towards cognitive errors (CATChES) in clinical decision making questionnaire tool. BMC Med Educ 17, 58 (2017). https://doi.org/10.1186/s12909-017-0897-0

