Research article · Open access

Preclinical curriculum of prospective case-based teaching with faculty- and student-blinded approach

Abstract

Background

Case-based teaching with real patient cases provides the benefit of simulating real-world cognition. However, while clinical practice involves a prospective approach to cases, preclinical instruction typically involves full disclosure of case content to faculty, introducing hindsight bias into faculty teaching in medical curricula.

Methods

During 2015–2018, we piloted an optional medical school curriculum involving 6–7 one-hour sessions over a 3-month period each year. New groups enrolled each year from the first- and second-year classes. A facilitator provided a blinded physician discussant and blinded students with case information during, and not in advance of, each session, allowing prospective case-based discussions. Cases were based on real patients treated in the Department of Medicine. Clinical material was presented in the chronologic sequence encountered by the treating physicians. Content covered a median of 5 patient visits per case (range: 2–10) spanning multiple months. A 14-item survey addressing components of the reporter-interpreter-manager-educator (RIME) scheme was developed and used to compare self-reported clinical skills between course participants and non-participant controls during the 2016 course iteration.

Results

This elective curriculum at Stanford School of Medicine involved 170 preclinical students (22.7% of 750 eligible). During the 2016 course iteration, a quasi-experimental study compared self-reported clinical skills between 29 course participants (response rate: 29/49 [59.2%]) and 35 non-participant controls (response rate: 35/132 [26.5%]); students self-assessed clinical skills via the RIME-based survey developed for the course. Two-sample t-tests compared the change in pre- and post-course skills between course participants and non-participants. Of 15 Department of Medicine faculty members invited as discussants, 12 (80%) consented to participate. Compared with controls, first-year participants self-assessed significantly greater improvement in understanding how clinicians reason through cases step-by-step to arrive at diagnoses (P = 0.049), work through cases in longitudinal settings (P = 0.049), and share information with patients (P = 0.047). Compared with controls, second-year participants self-assessed significantly greater improvement (P = 0.040) in understanding how clinicians reason through cases step-by-step to arrive at diagnoses.

Conclusions

Prospective case-based discussions with blinding of faculty and students to clinical content circumvent hindsight bias and may impart real-world cognitive skills, as determined by student self-report.


Background

In the medical school curriculum, preclinical medical education has shifted over the past few decades from a focus on traditional discipline-specific teaching (e.g., physiology, anatomy, pharmacology) towards case-based learning (CBL) [1, 2]. Case-based learning in the preclinical curriculum provides a central benefit of imparting clinical problem-solving skills at an early stage of medical training. Broader research on cognitive load theory in adults has suggested that learning is most effective when individuals are asked to apply knowledge and skills to real-life scenarios [3, 4]. Accordingly, both thought leaders and researchers in medical education have advocated for the use of real-world clinical cases in CBL [5,6,7,8,9]. Using authentic patient cases in CBL allows curricular case-based discussions to more closely emulate the complexity of real-world clinical cognition, including exposing students to ambiguous or conflicting clinical data and to unexpected factors that impact clinical care (e.g., personality differences between patients and providers, loss to follow-up) [5, 6]. Furthermore, multiple studies have demonstrated that the use of cases in curricula is effective in increasing the basic science and clinical knowledge of medical students and enhancing their skills in applying that knowledge to patient cases [10,11,12].

One of the discrepancies between cognitive skills applied in real-world clinical practice and those illustrated through the contemporary CBL format is the presence of hindsight bias in the latter, especially pertaining to instruction by faculty. Hindsight bias occurs when those who know the correct diagnosis overestimate the likelihood that they would have been able to determine the diagnosis had they been asked to do so in a prospective setting [13,14,15,16,17,18,19]. Although the specific logistics of CBL vary between institutions, one common feature of CBL reported in the literature that also applies to our institution’s preclinical curriculum is the full disclosure of case information to the faculty who facilitate CBL sessions [20,21,22]. As groups of students work through cases, faculty typically facilitate the discussion, guiding students to discuss particular topics and redirecting them if they appear to be heading towards an incorrect diagnosis [20,21,22,23,24]. By granting faculty the benefit of knowing the full content of the case, the traditional CBL format introduces hindsight bias, which risks shielding preclinical students from the errors and uncertainties that can occur when deciding between competing diagnostic possibilities in the setting of ambiguous clinical data [13,14,15,16, 25]. In the clinical setting, the nature of patient care compels both attendings and trainees to approach cases prospectively, without the benefit of knowing in advance how the case will unfold. During clinical rotations, medical students benefit from this real-world setting as they learn from their team’s approach to clinical scenarios in the absence of hindsight bias. By granting preclinical instructors a retrospective view of cases, CBL creates a discrepancy between preclinical and clinical education and – more notably – between preclinical education and real-world clinical practice.

One of the earlier case-based didactic methods used before the current format of CBL was the clinicopathologic conference (CPC) [26,27,28,29,30]. Massachusetts General Hospital introduced CPCs in 1910 with the objective of replacing topic-based lectures with case-based discussions [26]. Although other institutions later implemented their own variations of CPCs, the original intended format involved physician discussants working through cases they had not seen in advance, in front of a large audience of attendings and trainees; case information was provided to discussants in real time during CPCs by presenters with access to the cases. Through the use of blinded discussants, CPCs allowed the audience to learn from the clinical cognition of the discussant in the absence of hindsight bias. Clinicopathologic conferences have become less utilized over time, with stated reasons including the pressure on the discussant to arrive at the correct diagnosis, an excessive focus on the final diagnosis and disease process, and the neglect of other factors pertinent to the case, including psychological and social factors [29]. This decline of CPCs has led to the loss, in medical education, of the benefit of prospective clinical cognition by a blinded faculty discussant.

At our institution, we piloted an optional curriculum from 2015 to 2018 that aimed to expose preclinical students to prospective real-world clinical reasoning by integrating components of both more contemporary CBL methods and the less commonly used CPC format. During course sessions, an unblinded facilitator provided a physician discussant and a group of students with case information during and not in advance of course sessions, allowing both the discussant and students to be blinded to the cases and to reason through clinical material in the absence of hindsight bias. While traditional case-based teaching methods involve small group sessions (ranging from 1 to 30 students, and on average involving 2 to 15 students per group) [2] and traditional CPC sessions involve larger audiences of both trainees and attendings [26], we piloted this course with intermediate-sized groups of 45 to 59 students per year, including both first- and second-year students. All curricular cases were based on real patients treated in the outpatient setting at the Department of Medicine at our institution, and the facilitator presented case information in the chronologic sequence encountered by the treating physician. This format allowed the case-based discussion to closely emulate a real-life clinical scenario. Furthermore, we integrated topics in not only interpretation (e.g., deliberating a differential diagnosis) but also management, patient education, and the impact of socioeconomic factors on patient care (when relevant) into each case discussion. We also conducted a prospective quasi-experimental study to compare self-reported clinical skills between course participants and non-participants.

Methods

Setting and participants

The Primary Care Presentations course was offered to both first- and second-year medical students at our institution, Stanford University School of Medicine. From 2015 to 2018, out of 750 preclinical students eligible to participate in the course, 170 (22.7%) participated, including 99 first-year and 71 second-year medical students. New groups of students enrolled each year from the cohorts of first- and second-year medical students. We conducted a prospective quasi-experimental study comparing self-reported clinical skills between participants of the 2016 course iteration and a control convenience sample of medical student respondents of the same academic years at the same institution who did not participate in the curriculum. The course participants and the control students completed the same required medical school curricula throughout both preclinical years, with the reported elective course being the only difference in curricular experience between the two groups.

Curriculum design

The course was offered over one quarter (a 3-month period) each year and consisted of 27 one-hour sessions in total from 2015 to 2018, with 6–7 sessions held during each annual quarter. Case presentations developed for course sessions were based on patients treated in the Department of Medicine at Stanford Health Care. Cases were chosen that would allow not only interpretation of reported history, physical exam, and diagnostic data but also discussion of management steps and patient education opportunities. Clinical information was retrieved from electronic medical records (EMR) and compiled into a presentation format (using Microsoft PowerPoint) by one author (WC) and edited by two other authors (SW and LO). The case material was organized in the chronologic sequence encountered by the treating primary care physician (PCP), as recommended in the literature on case-based teaching [5]. Clinical content included information 1) available in the EMR prior to the first visit, 2) gathered over that initial consultation, and 3) from subsequent visits to the PCP and other health care professionals. Content for each visit comprised relevant data available in the EMR, including history, physical exam findings, lab values, other diagnostic results, the treatment plan, and patient education steps pursued by the treating clinicians. Patient identifiers such as names, medical record numbers, and dates of birth were not provided in presentations.

Examples of diagnoses covered by the course cases included the following: alcoholic cirrhosis with hepatic encephalopathy; polycystic kidney disease with secondary hypertension; complications of type 2 diabetes mellitus in the setting of medication noncompliance; and cardiovascular disease after Hodgkin’s lymphoma treatment. Presentations included content over a median of 5 patient visits (range: 2–10) spanning multiple months to incorporate instruction in longitudinal patient care. Many cases involved discussions of managing chronic conditions such as diabetes, heart disease, hypertension, chronic kidney disease, and long-term complications of cancer treatment. Longitudinal management topics included medication regimen changes for diabetes, post-splenectomy vaccinations, managing neuropathic pain, and others. Longitudinal patient education topics included weight loss counseling and addressing medication nonadherence.

Faculty were recruited from the Department of Medicine of Stanford School of Medicine to participate as discussants in the course sessions. Out of 15 attendings (ranging from chief residents to professors of medicine) invited to participate as discussants, 12 (80%) accepted the invitations; of the three who declined, two cited scheduling conflicts and one cited reluctance to participate as a blinded discussant. During each course session, a prepared physician facilitator (WC or LO) presented a case to a discussant and to student participants; both the discussant and students were blinded to the case before the session, including the eventual diagnosis. The facilitator prompted the discussant to explain his/her interpretation of each set of information while moving sequentially through the case (see Fig. 1), role-modeling the spontaneous “think aloud” clinical cognition of the discussants [31, 32]. To ensure student participation and engagement throughout each presentation, the facilitator asked students prepared prompting questions, and discussants were instructed in advance to consistently engage students with questions while working through the cases.

Fig. 1 Faculty- and student-blinded prospective case-based teaching method

The case presentation format allowed the case to unfold according to how the patient had presented in the actual clinical setting while students learned from the blinded, unrehearsed discussant’s clinical cognition. Clinical skills that discussants were prompted to demonstrate included 1) interpretation of history, physical exam findings, and diagnostic results, 2) formulation and iterative reevaluation of problem lists and differential diagnoses, 3) recommendations for short- and long-term therapeutics and management, and 4) opportunities for patient education (see examples of case topics in Table 1). Depending on the case, the course facilitator also prompted discussants to describe how to approach social and psychological factors that could be pertinent to the care of that patient, such as providing motivational interviewing for weight loss or counseling patients with a fear of going to the Emergency Department. Furthermore, when relevant, the facilitator and discussant acknowledged ambiguities or uncertainties in interpreting certain diagnostic data, providing a more realistic illustration of the diagnostic process and placing less emphasis on simply reaching the correct diagnosis. After the discussant had explained his/her treatment recommendations and potential opportunities for patient education and information sharing, the facilitator discussed the actual treatments (and educational topics if documented in clinical notes) that had been pursued by the patient’s treating physician.

Table 1 Examples of cases used in curriculum with topics in interpretation, short- and long-term management, and patient education

Assessment methods

The course curriculum was evaluated in two ways: 1) a quasi-experimental pre- and post-intervention study comparing self-reported clinical skills of course participants and a control sample of non-participants and 2) evaluation of the course by student course participants.

Survey of self-reported clinical skills

We developed a 14-item survey administered electronically to assess students’ self-reported clinical skills in the following domains: reporter (2 items), interpreter (4 items), manager (3 items), educator (1 item), and other skills (4 items) [33]. The reporter-interpreter-manager-educator (RIME) scheme was used to structure the survey since clinical students at our institution are evaluated on clerkship performance by a RIME-based assessment method [34]. A 5-point Likert scale was used for all items, with 1 indicating “Strong Disagreement” and 5 indicating “Strong Agreement” with the statement. The Stanford School of Medicine Institutional Review Board determined that this study did not meet the federal definition of human subject research and certified this study as exempt from review.

All participants of the 2016 course sessions were invited to complete the survey within 2 weeks before the first session and within 2 weeks after the last session. During the same time frames, the survey was also administered to all first- and second-year non-participants of the course with respondents serving as the control group. Surveys were submitted anonymously by course participants and the control group. Students were blinded to the comparison being made between the two groups to avoid biasing the results.

Student evaluation of curriculum

Student participants submitted anonymous evaluations of the curriculum from 2015 to 2018. Students rated the “overall quality of course” on a 5-point Likert-type scale (1 = Poor to 5 = Excellent). Students were also given the option to anonymously submit qualitative written comments regarding skills or knowledge that the course provided them and suggestions for improvement.

Statistical analysis

For the survey on self-reported clinical skills, students’ post-intervention survey responses were matched to their pre-intervention responses, and the change in scores (hereafter referred to as “score change”) was calculated as the post-intervention minus the pre-intervention score for each of the 14 items for each respondent, with a positive score change indicating an improvement in self-reported clinical skills after the course. Two-sample Student’s t-tests were used to compare the score change for each item between course participants and the control group, with significance set at P < 0.05. To determine the magnitude of differences between the groups, associated effect sizes were calculated using Cohen’s d, with thresholds of 0.2, 0.5, and 0.8 defining small, medium, and large effect sizes, respectively [35]. Statistical analyses were performed using SAS, version 9.4 (SAS Institute, Inc., Cary, NC).
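As a minimal sketch, the per-item analysis described above can be reproduced in a few lines of Python (the data below are hypothetical, and the study itself used SAS; obtaining the P value additionally requires a t-distribution lookup, e.g. via scipy.stats):

```python
from statistics import mean, stdev

def score_change(pre, post):
    # post-minus-pre change per respondent for one survey item
    return [b - a for a, b in zip(pre, post)]

def pooled_sd(x, y):
    # pooled standard deviation across two independent samples
    nx, ny = len(x), len(y)
    return (((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
            / (nx + ny - 2)) ** 0.5

def cohens_d(x, y):
    # Cohen's d: difference in means standardized by the pooled SD
    return (mean(x) - mean(y)) / pooled_sd(x, y)

def two_sample_t(x, y):
    # equal-variance two-sample (Student's) t statistic
    return (mean(x) - mean(y)) / (pooled_sd(x, y)
                                  * (1 / len(x) + 1 / len(y)) ** 0.5)

# hypothetical Likert responses (1-5) for one item, matched pre/post
intervention = score_change(pre=[3, 3, 4, 2, 3, 3, 4],
                            post=[4, 5, 5, 2, 5, 4, 5])
control = score_change(pre=[3, 4, 3, 3, 2, 4, 3, 3, 4, 3, 3],
                       post=[3, 4, 4, 2, 2, 5, 3, 3, 4, 4, 3])
print(f"d = {cohens_d(intervention, control):.2f}, "
      f"t = {two_sample_t(intervention, control):.2f}")
```

Against the 0.2/0.5/0.8 thresholds cited above, a d near 1.0 (as reported for the significant first-year items) corresponds to a large effect.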

Results

Self-reported clinical skills

Out of the 49 course participants in 2016, both pre- and post-intervention surveys were completed by 29 (59.2%) participants, including 7 first-year and 22 second-year students, for academic year-specific response rates of 30.4% (7/23) among first-year students and 84.6% (22/26) among second-year students (see Fig. 2). The 29 course participants who responded to both pre- and post-intervention surveys are hereafter referred to as the “intervention group.”

Fig. 2 Consort diagram of quasi-experimental study comparing change in self-reported clinical skills between course participants and a control convenience sample of non-participants

The survey was also sent to all course non-participants, with respondents serving as the convenience sample control group. Out of the 132 non-participants who were eligible for the course, 35 (26.5%) responded to both pre- and post-intervention surveys, including 11 first-year and 24 second-year students, for academic year-specific response rates of 16.4% (11/67) among first-year students and 36.9% (24/65) among second-year students. The 35 non-participants who responded to both pre- and post-intervention surveys are hereafter referred to as the “control group.”

Among first-year respondents, the intervention group (n = 7) had a more positive average score change compared with the control group (n = 11) for 12 out of 14 items across all survey domains (see Fig. 3), including reporting (1 out of 2 items), interpretation (4/4), management (3/3), patient education (1/1), and other skills (3/4). The intervention group reported a significantly more positive score change in the following three items:

  • I understand how clinicians work through patient cases on a step-by-step level to arrive at a diagnosis (P = 0.049, d = 1.04)

  • I know how clinicians work through patient cases on a step-by-step level in a longitudinal primary care setting (P = 0.049, d = 1.02)

  • I understand how to share information with my patients (P = 0.047, d = 1.08)

Fig. 3 Comparison of mean score change (post-course score minus pre-course score) between participants and non-participants among first-year medical students. * for P < 0.05 and d > 1.0

Among second-year respondents (see Fig. 4), the intervention group (n = 22) had a more positive score change compared with the control group (n = 24) for 7 out of 14 items across all domains, including reporting (1 out of 2 items), interpretation (2/4), management (1/3), patient education (1/1), and other skills (2/4). The intervention group reported a significantly more positive score change (P = 0.040, d = 0.66) in the following item: I understand how clinicians work through patient cases on a step-by-step level to arrive at a diagnosis.

Fig. 4 Comparison of mean score change (post-course score minus pre-course score) between participants and non-participants among second-year medical students. * for P < 0.05

Course evaluations

Anonymous course evaluations were submitted by 62 (36.5%) out of 170 participants (2015: n = 16 [response rate: 35.6%]; 2016: n = 16 [32.7%]; 2017: n = 20 [40.8%]; 2018: n = 10 [37.0%]). The mean (standard deviation) ratings for “overall quality of course” were 4.69 (0.46) in 2015; 4.50 (0.52) in 2016; 4.50 (0.60) in 2017; and 4.80 (0.40) in 2018. Since qualitative assessments were submitted as mostly brief comments by only 2 to 7 students each year from 2015 to 2018, a thematic analysis of the comments was not done. Instead, all comments submitted by students have been included in Table 2.

Table 2 Qualitative assessment of course submitted by student participants (n = 19)

Discussion

Case-based learning (CBL) is a widespread pedagogical method in medical school curricula. Case-based teaching methods have been demonstrated to improve the satisfaction of health professional students with their clinical education [12, 36, 37]. Furthermore, integrating cases into curricula has been shown to enhance basic science or clinical knowledge as evaluated by self-assessment surveys [12, 38,39,40] or objective measures of skills [10,11,12]. Since adult learning theory suggests that learning is most effective when individuals are asked to apply newly acquired knowledge to real-life scenarios [3, 4], thought leaders in medical education have encouraged the use of real clinical cases in CBL, allowing curricular teaching to emulate authentic clinical cognition [5, 6]. However, a major deficit of the current format of case-based teaching in the medical school curriculum is the presence of hindsight bias in the setting of full disclosure of case information to faculty instructors. This feature of case-based instruction creates a discrepancy between the clinical reasoning that students are exposed to in the preclinical curriculum and the prospective clinical cognition inherent in real-world medical care. To avoid hindsight bias in preclinical instruction, we developed a curriculum that implemented blinding of both faculty discussants and students during case-based discussions.

Earlier research on the cognitive processes of clinicians was largely based on analyses of transcripts of physicians thinking aloud while working through real clinical cases, either through simulated patient encounters [41] or through a format similar to our course, with clinical content provided to physician subjects by researchers in the chronologic sequence encountered by the treating physician [25]. These studies provided insight into the nuanced, iterative clinical reasoning skills that clinicians use to generate and evaluate diagnostic hypotheses and to deliberate between management options [25, 41,42,43,44]. Findings from these studies guided pioneers in medical education in proposing how to teach clinical problem-solving skills [25, 43, 44]. Our curricular format of physician instructors working prospectively through cases without advance knowledge of the cases similarly allows students to benefit from the “think aloud” thought process of clinicians, exposing them to potentially more nuanced clinical reasoning methods. By prompting faculty to discuss their reasoning prospectively while being presented with case information in the sequence originally encountered by the treating physicians, the course role modeled for students the step-by-step process of working through cases to arrive at a diagnosis, leading to significantly greater confidence among course participants than among non-participants in working through cases in a step-by-step manner.

The predominant preclinical pedagogical format of using curricular cases known to instructors in advance of teaching sessions is limited by the phenomenon of hindsight bias [13, 18]. This bias can occur when instructors are aware of the final diagnosis and overestimate the likelihood that they would have arrived at that diagnosis had they been asked to predict it beforehand. Previous studies have demonstrated hindsight bias among physicians. For example, Arkes et al. [17] divided 75 practicing physicians into five groups and provided each group with the same case history. The “foresight group” was given four possible diagnoses and asked to estimate the probability of each of the diagnoses. The four “hindsight groups” were given the same four options, but each was told in advance that one of those four diagnoses was correct. Compared with the foresight physicians, the hindsight groups assigned significantly greater probability estimates to the diagnoses that they were told were correct.

Another study also demonstrated hindsight bias among 160 physician audience members at four case conferences [18]. Physicians at those conferences were divided randomly into a foresight and a hindsight group, with only the latter being informed of the correct diagnosis. After both groups were presented with the clinical information, they were instructed to rank the likelihood of five diagnostic possibilities, with the hindsight group asked to rank the diagnoses the way they would have had they not already been informed of the final diagnosis. Compared with the foresight group, significantly more members of the hindsight group ranked the correct diagnosis first (50% vs 30%). The results of those studies suggest that clinical instructors aware of diagnoses in advance of teaching sessions may overestimate the likelihood that they would have reached those same conclusions in real time.

In traditional case-based learning sessions, hindsight bias may skew instruction and facilitation of discussions by faculty, reducing student exposure to the inherent uncertainty that accompanies clinical reasoning when physicians decide between competing diagnostic possibilities in the actual clinical setting. Uncertainty in clinical decision making is an infrequently studied source of distress for medical students and physicians [45,46,47,48]. Causes of medical uncertainty include technical sources, pertaining to uncertainty about medical information; personal sources, relating to obscurity of patients’ wishes; and conceptual sources, pertaining to the ambiguity of applying guidelines or past experiences to the care of current patients [15]. Past studies have demonstrated a high prevalence of intolerance for uncertainty among medical students, with greater intolerance for uncertainty associated with an aversion to fields such as primary care and psychiatry [47, 48]. Contrary to what might be expected, the level of intolerance for ambiguity has been shown not to vary over the 4 years of medical school [48]. Exposure to cases involving ambiguous clinical information is currently not well integrated into preclinical medical education; the focus on retrospective case-based discussions in the curriculum may lead to overconfidence among medical students in making clinical decisions and less tolerance for navigating diagnostic uncertainty [15, 17, 19]. Our course model of blinded clinicians role modeling their clinical reasoning while working through cases prospectively aimed to increase student exposure to the process of navigating ambiguous clinical presentations. These prospective case-based discussions allowed a more authentic illustration of problem-solving methods in the absence of hindsight bias, in a manner more similar to actual clinical cognition.

In addition to the blinding of faculty discussants to the case content, another important distinction between our curriculum and the traditional CBL format is the number of students involved; while CBL as described in the literature usually involves groups of 2 to 15 students per group (ranging up to 30 students) [2], we piloted our curriculum with a group of 40–50 students per year. Compared with CBL, our faculty discussant- and student-blinded teaching format allocated more time to role-modeled clinical reasoning by the discussant than to discussion by students. To prevent the sessions from mirroring a lecture format and to promote discussion by students, the course facilitator actively engaged students during all case-based discussions, and discussants were also instructed in advance to involve students in the discussion. Future attempts can be made to adapt the blinded-faculty format of this course to CBL sessions, with an unblinded facilitator providing case information sequentially in “chunks” or through an iterative process to both a blinded discussant and small blinded groups of students; the facilitator’s role would focus on providing the case information, with the discussant’s role being to guide and moderate the case-based discussions. This format of two-way blinding of the faculty instructor and students would allow a prospective approach to cases without the risk of hindsight bias impacting the discussion.

In previous decades, many medical schools or training programs used clinicopathologic conferences as a common didactic session [26,27,28,29,30]. The original intended format of clinicopathologic conferences (CPCs) had a similar pedagogical format as our course, with blinded discussants working through cases provided to them by unblinded presenters [26, 27]. A 1994 study on the status of CPCs at academic medical centers found that 80% of surveyed internal medicine residency programs held those conferences, with a primary objective of teaching clinical problem-solving skills and providing a more detailed discussion on a specific topic in internal medicine [26].

The few formal reports that exist describing CPCs suggest that these didactic sessions have become less utilized over time due to excessive focus on the final diagnosis and disease process and the neglect to consider complexities such as social factors pertinent to the cases [29]. Our course implemented a blinded discussant format while imparting not only diagnostic skills (e.g., deliberating a differential diagnosis) but also topics in short- and long-term management, patient education, and socioeconomic determinants of health. By illustrating the treatments and patient education topics recommended by both the discussant and the actual treating physician, the course demonstrated for students the varying possible approaches to the same clinical problems.

Another strength of our course was its emphasis on aspects of longitudinal patient care, such as chronic care management, which are often neglected in case-based learning and other preclinical curricula. Both first- and second-year course participants reported a greater score change than the control group in understanding how clinicians work through cases step-by-step in a longitudinal primary care setting, with the difference reaching statistical significance for first-year participants. Clinical and preclinical education typically provides more instruction in managing acute conditions, with less time dedicated to longitudinal care and chronic illness management. Given the growing population of patients with chronic conditions [49, 50] and studies suggesting physician dissatisfaction with their training in chronic illness care [51], medical educators have called for more instruction in longitudinal medical care and chronic disease management [51,52,53,54]. Through cases covering a median of 5 patient visits spanning multiple months, our course included nuanced discussions of managing conditions such as diabetes, hypertension, chronic kidney disease, and complications of cancer treatment. Increasing preclinical instruction in chronic illness management via case-based discussions is feasible and gives medical students early exposure to this critical component of medical care.
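The score-change comparison described above (and in the Methods) can be sketched as follows. This is an illustrative reconstruction using simulated Likert-scale ratings, not the study's actual dataset or analysis code: each student rates a RIME-based skill item before and after the course, and the pre-to-post change is compared between participants and controls with a two-sample t-test.

```python
# Illustrative sketch of the difference-in-change analysis: all numbers
# below are simulated, not study data. Group sizes mirror the reported
# samples (29 participants, 35 controls).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 5-point Likert ratings (pre and post) for one survey item.
participants_pre = rng.integers(1, 4, size=29)   # values in 1..3
participants_post = participants_pre + rng.integers(0, 3, size=29)
controls_pre = rng.integers(1, 4, size=35)
controls_post = controls_pre + rng.integers(0, 2, size=35)

# Per-student change scores (post minus pre).
delta_participants = participants_post - participants_pre
delta_controls = controls_post - controls_pre

# Two-sample t-test on the change scores: did participants' self-assessed
# skill improve more than controls'?
t_stat, p_value = stats.ttest_ind(delta_participants, delta_controls)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```

Comparing change scores rather than raw post-course ratings adjusts each student for their own baseline, which is why the study reports differences in pre-to-post improvement between groups.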

Limitations

A limitation of the blinded-discussant format of our curriculum is the potential pressure on discussants to arrive at the correct diagnosis. However, of the three faculty members (out of 15 invited) who declined to participate as discussants, only one cited the blinded-discussant format as the reason for declining. This pedagogical method would need to be implemented at other institutions to evaluate the feasibility of recruiting faculty for this teaching format. The quasi-experimental design used to evaluate this curriculum also has several limitations, including the use of a non-validated assessment survey. Because clinical medical students at our institution are evaluated on reporter-interpreter-manager-educator (RIME)-based competencies [34], we used the same scheme to evaluate our course. Although RIME-based tools have been developed and studied for evaluating medical students, to the best of our knowledge no similar RIME-based instrument exists for self-reported assessment of clinical skills. We therefore developed our survey items from the literature on the RIME scheme [33, 55] and from our institution's RIME-centered evaluation method for clinical rotations [34]. Other limitations of our study design include the low survey response rate and possible selection bias, since both the intervention and control groups participated voluntarily and survey respondents may have differed from non-respondents. The results might have differed with randomized samples of students from our institution. Furthermore, we were unable to evaluate whether our curricular method led to long-term improvements in clinical skills among course participants lasting beyond completion of medical school. Future studies evaluating this curricular format should compare objective measures of clinical skills between course participants and non-participants.

Further studies with larger sample sizes and a randomized design are needed to evaluate the effect of a faculty- and student-blinded pedagogical model on students' cognitive skills. This teaching model should also be attempted with smaller groups of students to evaluate whether faculty blinding can be applied to traditional small-group case-based learning sessions.

Conclusions

Case-based teaching continues to grow as an instrumental pedagogical model in preclinical education, with the objective of imparting real-world clinical cognitive skills. We developed an elective curriculum that promoted prospective case-based discussions and avoided hindsight bias by having a blinded discussant and a blinded group of students work through real case information in the chronologic sequence encountered by the treating physician. We piloted this course with intermediate-sized groups of students, who reported significant improvements in self-assessed understanding of step-by-step clinical reasoning. Future efforts can apply this two-way blinding of faculty and students to smaller CBL sessions to further evaluate the feasibility and outcomes of prospective case-based teaching.

Abbreviations

CBL: Case-based learning

CPC: Clinicopathologic conference

EMR: Electronic medical records

PCP: Primary care physician

RIME: Reporter-interpreter-manager-educator

References

  1. Hartling L, Spooner C, Tjosvold L, Oswald A. Problem-based learning in pre-clinical medical education: 22 years of outcome research. Med Teach. 2010;32(1):28–35.
  2. Thistlethwaite JE, Davies D, Ekeocha S, et al. The effectiveness of case-based learning in health professional education. A BEME systematic review: BEME guide no. 23. Med Teach. 2012;34(6):421–44.
  3. van Gog T, Ericsson KA, Rikers RMJP, Paas F. Instructional design for advanced learners: establishing connections between the theoretical frameworks of cognitive load and deliberate practice. Educ Technol Res Dev. 2005;53(3):73–81.
  4. Van Merriënboer JJG, Sweller J. Cognitive load theory and complex learning: recent developments and future directions. Educ Psychol Rev. 2005;17(2):147–77.
  5. Kassirer JP. Teaching clinical reasoning: case-based and coached. Acad Med. 2010;85(7):1118–24.
  6. Allchin D. Problem- and case-based learning in science: an introduction to distinctions, values, and outcomes. CBE Life Sci Educ. 2013;12(3):364–72.
  7. Williams SM. Putting case-based instruction into context: examples from legal and medical education. J Learn Sci. 1992;2(4):367–427.
  8. Dammers J, Spencer J, Thomas M. Using real patients in problem-based learning: students’ comments on the value of using real, as opposed to paper cases, in a problem-based learning module in general practice. Med Educ. 2001;35(1):27–34.
  9. Williams B. Case based learning - a review of the literature: is there scope for this educational paradigm in prehospital education? Emerg Med J. 2005;22(8):577–81.
  10. Beech DJ, Domer FR. Utility of the case-method approach for the integration of clinical and basic science in surgical education. J Cancer Educ. 2002;17(3):161–4. https://doi.org/10.1080/08858190209528825.
  11. Drakeford PA, Davis AM, Van Asperen PP. Evaluation of a paediatric asthma education package for health professionals. J Paediatr Child Health. 2007;43(5):342–52. https://doi.org/10.1111/j.1440-1754.2007.01078.x.
  12. Jamkar A, Yemul V, Singh G. Integrated teaching programme with student-centred case-based learning. Med Educ. 2006;40(5):466–7. https://doi.org/10.1111/j.1365-2929.2006.02438.x.
  13. Fischhoff B. Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty. Qual Saf Health Care. 2003;12:304–12.
  14. Dawson NV. Physician judgement in clinical settings: methodological influences and cognitive performance. Clin Chem. 1993;39(7):1468–80.
  15. Hall KH. Reviewing intuitive decision-making and uncertainty: the implications for medical education. Med Educ. 2002;36(3):216–24.
  16. Henriksen K, Kaplan H. Hindsight bias, outcome knowledge and adaptive learning. Qual Saf Health Care. 2003;12(Suppl 2):ii46–50.
  17. Arkes HR, Wortmann RL, Saville PD, Harkness AR. Hindsight bias among physicians weighing the likelihood of diagnoses. J Appl Psychol. 1981;66(2):252–4.
  18. Dawson NV, Arkes HR, Siciliano C, Blinkhorn R, Lakshmanan M, Petrelli M. Hindsight bias: an impediment to accurate probability estimation in clinicopathologic conferences. Med Decis Making. 1988;8:259–64.
  19. Detmer DE, Fryback DG, Gassner K. Heuristics and biases in medical decision-making. J Med Educ. 1978;53:682.
  20. Srinivasan M, Wilkes M, Stevenson F, Nguyen T, Slavin S. Comparing problem-based learning with case-based learning: effects of a major curricular shift at two institutions. Acad Med. 2007;82(1):74–82.
  21. Barrows HS. A taxonomy of problem-based learning methods. Med Educ. 1986;20(6):481–6.
  22. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2(2):115–25.
  23. Maudsley G. Roles and responsibilities of the problem based learning tutor in the undergraduate medical curriculum. Br Med J. 1999;318:657–61.
  24. Savery JR. Overview of problem-based learning: definitions and distinctions. Interdiscip J Probl Learn. 2006;1(1):9–20.
  25. Kassirer JP. Clinical problem solving: a behavioral analysis. Ann Intern Med. 1978;89(2):245–55.
  26. Hassan S. About clinicopathological conference and its practice in the School of Medical Sciences, USM. Malaysian J Med Sci. 2006;13(2):7–10.
  27. Heudebert GR, McKinney WP. The status of the clinicopathologic conference in academic medical centers. J Gen Intern Med. 1999;14:60–2.
  28. Hajar R. The clinicopathologic conference. Heart Views. 2015;16(4):170–3.
  29. Harris NL, Scully RE. The clinicopathological conferences (CPCs). In: Louis D, Young R, editors. Keen minds to explore the dark continents of disease. Beverly, Massachusetts: Memoirs Unlimited, Inc; 2010. p. 349–62.
  30. Lipkin M. The CPC as an anachronism. N Engl J Med. 1979;301(20):1113–4.
  31. Lee JE, Ryan-Wenger N. The “think aloud” seminar for teaching clinical reasoning: a case study of a child with pharyngitis. J Pediatr Health Care. 1997;11(3):101–10.
  32. Banning M. The think aloud approach as an educational tool to develop and assess clinical reasoning in undergraduate students. Nurse Educ Today. 2008;28(1):8–14.
  33. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74(11):1203–7.
  34. Stuart E, Curet M, Trumbull R, et al. Stanford School of Medicine Clerkship Evaluation Tutorial: Individual Evaluators? MedEdPORTAL. 2006.
  35. Cohen J. A power primer. Psychol Bull. 1992;112(1):155–9.
  36. Bowe CM, Voss J, Thomas Aretz H. Case method teaching: an effective approach to integrate the basic and clinical sciences in the preclinical medical curriculum. Med Teach. 2009;31(9):834–41. https://doi.org/10.1080/01421590902922904.
  37. Critchley LAH, Kumta SM, Ware J, Wong JW. Web-based formative assessment case studies: role in a final year medicine two-week anaesthesia course. Anaesth Intensive Care. 2009;37(4):637–45.
  38. Demarco R, Hayward L, Lynch M. Nursing students’ experiences with and strategic approaches to case-based instruction: a replication and comparison study between two disciplines. J Nurs Educ. 2002;41(4):165–74.
  39. Lechner SK, Thomas GA, Bradshaw M, Lechner KM. Planning oral rehabilitation: case-based computer assisted learning in clinical dentistry. Br Dent J. 2001;191(3):152–6. https://doi.org/10.1038/sj.bdj.4801125a.
  40. Patterson JS. Increased student self-confidence in clinical reasoning skills associated with case-based learning (CBL). J Vet Med Educ. 2006;33(3):426–31. https://doi.org/10.3138/jvme.33.3.426.
  41. Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge, Massachusetts: Harvard University Press; 1978.
  42. Kuipers B, Kassirer JP. Knowledge acquisition by analysis of verbatim protocols. In: Kidd A, editor. Knowledge acquisition for expert systems: a practical handbook. New York, NY: Plenum Press; 1987. p. 363–85.
  43. Nendaz MR, Bordage G. Promoting diagnostic problem representation. Med Educ. 2002;36(8):760–6.
  44. Chang R, Bordage G, Connell K. The importance of early problem representation in case presentations. Acad Med. 1998;73(10 Suppl):S109–11.
  45. Fox RC. Training for uncertainty. In: The student-physician: introductory studies in the sociology of medical education. Cambridge, Massachusetts: Harvard University Press; 1957. p. 207–41.
  46. Rizzo JA. Physician uncertainty and the art of persuasion. Soc Sci Med. 1993;37(12):1451–9.
  47. Merrill JM, Camacho Z, Laux LF, Lorimor R, Thornby JI, Vallbona C. Uncertainties and ambiguities: measuring how medical students cope. Med Educ. 1994;28(4):316–22. https://doi.org/10.1111/j.1365-2923.1994.tb02719.x.
  48. Geller G, Faden RR, Levine DM. Tolerance for ambiguity among medical students: implications for their selection, training and practice. Soc Sci Med. 1990;31(5):619–24. https://doi.org/10.1016/0277-9536(90)90098-D.
  49. American Hospital Association. AHA hospital statistics. 8th ed. Chicago, Illinois: Health Forum LLC, American Hospital Association; 2018.
  50. Centers for Medicare & Medicaid Services. Chronic conditions among Medicare beneficiaries. Baltimore, Maryland; 2012.
  51. Darer JD, Hwang W, Pham HH, Bass EB, Anderson G. More training needed in chronic care: a survey of US physicians. Acad Med. 2004;79(6):541–8.
  52. Shi CR, Nambudiri VE. Time for an acute focus on chronic care in undergraduate medical education. Acad Med. 2018;93(6):835–8.
  53. Pershing S, Fuchs VR. Restructuring medical education to meet current and future health care needs. Acad Med. 2013;88(12):1798–801.
  54. Bogetz JF, Rassbach CE, Bereknyei S, Mendoza FS, Sanders LM, Braddock CH. Training health care professionals for 21st-century practice: a systematic review of educational interventions on chronic care. Acad Med. 2015;90(11):1561–72.
  55. DeWitt D, Carline J, Paauw D, Pangaro L. Pilot study of a ‘RIME’-based tool for giving feedback in a multi-specialty longitudinal clerkship. Med Educ. 2008;42(12):1205–9.


Acknowledgements

The authors would like to thank Abraham Verghese, MD, for his support of this course and the following faculty for participating as course discussants: Baldeep Singh, MD, Maja Artandi, MD, Kathleen Kenny, MD, Peter Pompei, MD, Steven Lin, MD, Korina DeBruyne, MD, John Edwin Atwood, MD, Kate Weaver, MD, Catherine Forest, MD, MPH, Meera Sheffrin, MD, Benjamin Laniakea, MD, and Tanya Gupta, MD.

Funding

This course was funded by the Stanford Department of Medicine and Stanford Program for Bedside Medicine.

Availability of data and materials

The dataset analyzed during the current study is available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Authors

Contributions

LO, WC, SW, and ST conceived the design of the course described in the manuscript and led the study. LO, WC, SW, and NJ acquired the data. SW analyzed the data, and LO, SBM, and WC contributed to quality assurance and interpretation of results. SW drafted the manuscript, and all authors (LO, WC, SW, SBM, ST, and NJ) revised the manuscript and approved the final version for publication.

Corresponding author

Correspondence to Lars Osterberg.

Ethics declarations

Ethics approval and consent to participate

The study was granted exemption from ethics review by the Stanford School of Medicine institutional review board.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Waliany, S., Caceres, W., Merrell, S.B. et al. Preclinical curriculum of prospective case-based teaching with faculty- and student-blinded approach. BMC Med Educ 19, 31 (2019). https://doi.org/10.1186/s12909-019-1453-x
