A novel bedside cardiopulmonary physical diagnosis curriculum for internal medicine postgraduate training

Background Physicians spend less time at the bedside in the modern hospital setting, which has contributed to a decline in physical diagnosis and, in particular, cardiopulmonary examination skills. This trend may be a source of diagnostic error and threatens to erode the patient-physician relationship. We created a new bedside cardiopulmonary physical diagnosis curriculum and assessed its effects on post-graduate year 1 (PGY-1; intern) attitudes, confidence and skill. Methods One hundred five internal medicine interns in a large U.S. internal medicine residency program participated in the Advancing Bedside Cardiopulmonary Examination Skills (ACE) curriculum while rotating on a general medicine inpatient service between 2015 and 2017. Teaching sessions included exam demonstrations using healthy volunteers and real patients, imaging didactics, computer learning/high-fidelity simulation, and bedside teaching with experienced clinicians. Primary outcomes were attitudes, confidence and skill in the cardiopulmonary physical exam as determined by a self-assessment survey and a validated online cardiovascular examination (CE). Results By mid-year, interns who had participated in ACE (ACE interns) more strongly agreed that they had received adequate training in the cardiopulmonary exam compared with non-ACE interns. ACE interns were more confident than non-ACE interns in performing a cardiac exam, assessing the jugular venous pressure, distinguishing 'a' from 'v' waves, and classifying systolic murmurs as crescendo-decrescendo or holosystolic. Only ACE interns had a significant improvement in score on the mid-year CE. Conclusions A comprehensive bedside cardiopulmonary physical diagnosis curriculum improved trainee attitudes, confidence and skill in the cardiopulmonary examination. These results provide an opportunity to re-examine the way physical examination is taught and assessed in residency training programs.


Background
Sir William Osler stated that "Medicine is learned by the bedside and not in the classroom." [1] However, this tenet is being challenged in the modern hospital. The time that residents spend in direct contact with patients has decreased from over 20% of their workday in the 1990s to less than 10% in recent years [2][3][4][5]. Many factors contribute to this shift away from the bedside, including the electronic health record (EHR), duty hour regulations, and operational pressures in academic medical centers [6][7][8][9][10][11][12]. It is not surprising that less time at the bedside has contributed to a measurable decline in physical exam skills [13][14][15][16][17], in part due to a decreased emphasis on physical diagnosis teaching and practice [7,15,18,19]. Alarmingly, some studies have shown that physical exam skills, particularly cardiovascular exam skills, peak during medical school and decline during residency and beyond [20][21][22]. Physical exam findings directly and immediately affect patient outcomes; a decline in exam skills could have adverse effects on patient care [23,24]. In addition to its enduring importance in patient care, the physical exam is a ritual which plays an integral role in developing a meaningful and therapeutic relationship with a patient [25,26]. This relationship is threatened by less time and lack of emphasis on the bedside encounter [10].
The usual approach to teaching the physical exam involves an introduction to basic techniques during the first two years of medical school, followed by more focused bedside experiences during clinical years. The United States Medical Licensing Examination Clinical Skills (USMLE-CS) examination requires medical students to examine standardized patients but does not directly assess their ability to correctly identify abnormalities on real patients [27]. There is no standardized curriculum or formal assessment of physical examination skills mandated by the Accreditation Council for Graduate Medical Education (ACGME) for US residency training programs. Many internal medicine residency programs do not have physical examination curricula, and instead rely on individual attendings to provide instruction in physical exam technique and interpretation, leading to wide variability in trainee experience.
Given the high prevalence of cardiopulmonary disease in United States hospitals [28], improving cardiopulmonary exam skills among trainees has the potential to meaningfully impact a large number of patients. Cardiovascular physical exam skills peak during medical school and decline thereafter in non-cardiologists [20][21][22], providing an important opportunity for an educational intervention. The Advancing Bedside Cardiopulmonary Examination Skills (ACE) curriculum was developed to improve the cardiopulmonary physical diagnosis skills of trainees in a large internal medicine training program.

Physical examination instruction for residents prior to ACE
Before the introduction of ACE, there was no formal physical diagnosis curriculum for internal medicine residents in our program. Residents are exposed to physical diagnosis teaching in several activities, most notably during teaching rounds, which occur with an attending physician every morning, and during weekly activities with assigned faculty members. However, this experience is limited by the preferences of the attending and the findings of patients who are admitted to the service. There was no standardized approach to ensure that all residents received instruction in the same techniques and were able to accurately elicit and interpret physical exam findings.

Curriculum development of ACE
ACE was designed using a formal curriculum development process for medical education [29]. The goals of ACE are to increase trainees' appreciation for the importance of time spent at the bedside and to improve trainees' use of the physical exam to diagnose cardiopulmonary disease. Table 1 lists the objectives of ACE; objectives 1-3 are the focus of the current manuscript. All interns were invited to participate in ACE, including 53 from July 2015-June 2016 and 52 from July 2016-June 2017. 81.1% were in the categorical program (i.e. the standard 3-year internal medicine program), with the remainder in preliminary, primary care or combined medicine-pediatrics programs.
Table 1 Objectives of the ACE curriculum. After participating in the ACE curriculum, learners will:
1. Demonstrate improved understanding of the relationship between cardiopulmonary physical exam findings and physiology by achieving a higher post-ACE score on a validated cardiovascular assessment.
2. Demonstrate improved accuracy in the detection of cardiopulmonary exam findings by achieving a higher score on a validated online assessment that is administered pre- and post-ACE.
3. Demonstrate increased confidence and an increased appreciation for the importance of the bedside physical examination in patient evaluation as measured by higher scores on a self-assessment survey administered pre- and post-ACE.
4. Illustrate proper cardiopulmonary exam techniques on a cardiac simulator, healthy volunteers and hospitalized patients while being observed by a faculty preceptor.
5. Demonstrate more cost-effective use of chest radiography and echocardiography as measured by reduced ordering of inpatient computerized tomography (CT) scans and echocardiograms.

Educational delivery methods
Interns participated in ACE as they rotated through a general medicine service at The Johns Hopkins Hospital in Baltimore, Maryland. The service is staffed by one hospitalist attending, two PGY-2s and four interns. All interns rotated on the service in either one four-week block or two separate two-week blocks. Table 2 outlines a typical two-week ACE schedule. Subsequent two-week blocks replaced the introductory sessions with additional bedside cardiac and pulmonary sessions, depending on the learners present. In addition, learners accessed optional online materials, including an interactive tutorial and cases from Blaufuss. Supplemental optional readings emphasized proper technique, the evidence behind maneuvers, and their relationship to physiology [30,31].

Survey instrument
The authors searched the literature and were unable to find an existing attitudinal instrument regarding the cardiopulmonary exam. A 14-item survey was designed to assess attitudes and confidence surrounding the cardiopulmonary exam. A team of content experts reviewed the final instrument to enhance content validity (BG, SD, EK, MC). Each item used a 5-point Likert scale to rate agreement or disagreement with a statement about the cardiopulmonary examination. The survey was administered pre- and post-ACE.

Cardiovascular skills assessment
The Blaufuss Cardiovascular Examination (CE) consists of 50 questions divided into four categories: physiology, auditory, visual and integration. Questions contain recordings of heart sounds as well as videos of the neck and precordium. Blaufuss developed the assessment by reviewing a 1993 published survey of internal medicine (IM) program directors and Accreditation Council for Graduate Medical Education requirements for IM and cardiology fellowship training. The assessment was reviewed and modified by six academic cardiologists. It has been delivered to over one thousand medical students, graduate trainees, and practicing physicians as a measure of cardiac exam skill. In general, performance on the assessment peaks during the third year of medical school relative to other medical school years, internal medicine residency and general practice. Only cardiology fellows and cardiology faculty outperform these groups on the assessment [20][21][22].

Statistical analysis
Survey responses were compared using Mann-Whitney rank sum tests and Kruskal-Wallis one-way analysis of variance on ranks using the Likert scale median response. Dunn's method was used for pairwise comparisons. CE data were analyzed using Mann-Whitney rank sum tests for intern and PGY-2 results and paired t-tests for intern mid-year CE assessments (where pre- and post-tests were available). Multilinear models stratified by exposure to ACE were run to compare overall test scores (post vs. pre) for each stratum while adjusting for intern year, designation, and total pre-score. Generalized estimating equations were used to account for the repeated measures in the data (pre- and post-test score for each observation). Multivariate linear regression was used to examine the effect of individual variables such as pre-test score, categorical versus other intern status, year of internship, gender, weeks participating in ACE, and weeks on ICU services on change in CE score at the midpoint of the year. A final adjusted model was fit including covariates that were significant at the alpha = 0.5 level in crude models. Hopkins residents' pre-ACE performance on the CE was also compared to data reported in the literature on internal medicine residents who took the CE assessment [20,22], using independent samples t-tests with pooled variances. A p-value less than 0.05 was considered statistically significant for all comparisons. Analyses were conducted using SigmaPlot (Systat Software, San Jose, CA) and SAS (SAS Institute, Inc., Cary, NC).
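The two main comparisons above can be sketched in code. The following is an illustrative example only, not the study's actual analysis: the data are invented, and the scipy functions stand in for the SigmaPlot/SAS procedures the authors used.

```python
# Illustrative sketch of the study's two main comparisons, using
# hypothetical data (NOT the study data) and scipy.stats.
from scipy import stats

# Hypothetical 5-point Likert responses from ACE and non-ACE interns.
ace_likert = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5]
non_ace_likert = [3, 4, 2, 3, 4, 3, 3, 2, 4, 3]

# Mann-Whitney rank sum test for independent-group Likert responses.
u_stat, u_p = stats.mannwhitneyu(ace_likert, non_ace_likert,
                                 alternative="two-sided")

# Hypothetical paired pre-/post-curriculum CE scores (percent correct)
# for the same ten interns.
pre = [52, 48, 60, 55, 45, 58, 50, 62, 47, 53]
post = [58, 55, 64, 60, 50, 63, 57, 66, 52, 59]

# Paired t-test for mid-year CE assessments where both scores exist.
t_stat, t_p = stats.ttest_rel(post, pre)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.4f}")
print(f"Paired t = {t_stat:.2f}, p = {t_p:.4f}")
```

The rank-based test is appropriate for the ordinal Likert items, while the paired t-test exploits the within-intern pre/post pairing of CE scores.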

Baseline assessment of interns and PGY-2s
All 53 interns from 2015-2016 (100%), all 52 interns from 2016-2017 (100%), and 29 PGY-2s from 2015-2016 (60.4%) who had not previously participated in ACE completed the survey. Results are shown in Table 3. Interns and PGY-2s "strongly agreed" that the cardiopulmonary exam is an important part of patient assessment and that improving exam skills is an important goal for the next year of training. Both groups "somewhat agreed" they had received adequate training in the cardiopulmonary exam. Compared to the interns, PGY-2s felt more confident in their ability to distinguish systolic from diastolic murmurs (p = 0.023) and to characterize a systolic murmur as holosystolic or crescendo-decrescendo (p = 0.01). PGY-2s were also more comfortable with the jugular venous pressure (JVP) examination (p < 0.001), and in distinguishing 'a' waves from 'v' waves (p < 0.001). Fifty-two interns from 2015-2016 (98%), 48 interns from 2016-2017 (92%) and 21 PGY-2s from 2015-2016 (44%) completed the cardiovascular examination (CE). Intern and PGY-2 scores were similar overall, and in all individual categories (see Fig. 1). There was no significant difference between intern and PGY-2 overall scores in our program compared to 451 internal medicine residents for whom scores were available in the literature (see Table 4) [20,22].

Mid-year assessment of PGY-1s
The mid-year self-assessment survey was completed by 38 interns from 2015-2016 (72.0%) and 19 interns from 2016-2017 (36.5%) (see Table 5). At mid-year, 36 (63.2%) had participated in ACE ("ACE interns"), and 21 (36.8%) had not ("non-ACE interns"). Compared to non-ACE interns, ACE interns more strongly agreed that they had received adequate training in the cardiopulmonary exam (p = 0.001). Non-ACE interns agreed less strongly with this statement at the midpoint compared to the beginning of the year (p = 0.001).
ACE interns were more confident in their ability to perform a cardiac exam (p = 0.039), assess the JVP (p < 0.001), distinguish 'a' waves from 'v' waves (p < 0.001), and classify systolic murmurs as holosystolic or crescendo-decrescendo (p = 0.022). ACE interns felt more confident in their ability to distinguish a pleural effusion from consolidation on exam (p = 0.048).
Thirty-eight interns from 2015-2016 (72.0%) and 36 interns from 2016-2017 (69%) completed the CE at the midpoint of the year. Seventy-one had completed the pre-year assessment and were included in paired analyses (see Table 6 for demographic data of mid-year participants). A total of 51 interns had rotated through ACE by the mid-point assessment, compared with 20 who had not yet rotated through ACE. Results are shown in Fig. 2. In a multilinear model using generalized estimating equations, ACE interns had post-ACE scores that were on average 4.98 points higher than their pre-ACE scores (p = 0.001). Holding all other factors constant, ACE interns in the second year of the curriculum had scores that were on average 2.88 points lower than interns who participated in the first year (p = 0.0077). ACE categorical interns had scores that were on average 5.95 points higher than non-categorical interns (p = 0.0098). ACE pre-score was a significant predictor of post-test score (p < 0.001). Non-ACE interns had post-test scores that were on average 5.4 points higher than their pre-year scores, but this difference was not significant. Non-ACE pre-score was a significant predictor of post-test score (see Table 7).
In an adjusted multivariate linear regression model, two variables significantly predicted overall change in score for all interns: pre-test score and categorical status. The difference between the pre- and post-test scores became smaller for every unit increase in pre-test score (p < 0.001). Categorical interns had a change in score that was on average 7.61 points greater than non-categorical interns (p = 0.0185). Participation in the ACE curriculum was not a significant predictor of change in score in this model (see Table 8).
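The shape of this adjusted regression can be sketched with ordinary least squares. The example below uses simulated data invented to mirror the reported pattern (interns with higher pre-test scores improve less; categorical interns improve more); the fitted coefficient values will not match the paper's.

```python
# Hypothetical sketch of the adjusted model: change in CE score
# regressed on pre-test score and categorical status. All data are
# simulated; coefficients only illustrate the direction of effects.
import numpy as np

rng = np.random.default_rng(0)
n = 60
pre = rng.uniform(40, 80, n)         # pre-test CE score (percent)
categorical = rng.integers(0, 2, n)  # 1 = categorical intern
# Simulated change in score: shrinks as pre-score rises ("less room
# to improve") and is larger for categorical interns, plus noise.
change = 25 - 0.3 * pre + 7.0 * categorical + rng.normal(0, 3, n)

# Design matrix with intercept; ordinary least squares fit.
X = np.column_stack([np.ones(n), pre, categorical])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
intercept, b_pre, b_cat = beta
print(f"pre-score coefficient:   {b_pre:.2f} (negative)")
print(f"categorical coefficient: {b_cat:.2f} (positive)")
```

A negative pre-score coefficient reproduces the observed ceiling pattern, and the positive categorical coefficient mirrors the greater improvement among categorical interns.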

Discussion
In the current academic medical center, many factors pull physicians, particularly trainees, away from the bedside. As physicians spend less time with patients, opportunities to practice and teach core skills such as the physical exam have declined [32]. This has contributed to a decline in skill among both trainees and practitioners, and contributes to diagnostic error [24]. While technological advances have dramatically improved our ability to diagnose disease, the physical exam still outperforms technology in a number of important instances [33]. In addition to its diagnostic importance, the physical exam is a vital component of the patient-physician relationship [10]. It helps to build rapport and trust, and if performed properly, can even contribute to a patient's overall sense of well-being. Spending time at the bedside of patients also provides meaning for a physician's work. The alarming rise in physician burnout in recent years may partly reflect this shift away from the bedside [34,35]. There is a growing international movement to bring physicians and trainees back to the bedside [36]. The present study is a direct outgrowth of that movement, and supports the hypothesis that a dedicated curriculum coupled with regular practice improves physical diagnosis skills.
Table 5 Mid-year Self-Assessment Survey Comparing ACE Interns to Non-ACE Interns
It was encouraging that both interns and PGY-2s strongly agreed that improving their physical examination skills was an important goal for the next year of their training. This suggests that the introduction of the ACE curriculum was a timely intervention for the residency program. Interestingly, interns and PGY-2s reported a similar level of confidence in their ability to perform a cardiac and pulmonary exam, but reported differences in confidence surrounding specific maneuvers such as auscultation of murmurs and the jugular venous pulse examination. The fact that PGY-2s in this study were more confident on certain exam skills than incoming interns but performed at the same level on the CE might reflect a lack of emphasis on physical diagnosis teaching and practice prior to the start of the ACE curriculum. This gap between confidence and competence is an important one to address for trainees, as it directly impacts the safety and quality of patient care [37,38].
Interns who participated in ACE had improved confidence in their physical exam skills. Interns who had not yet participated in ACE had diminished confidence compared to their peers but also compared to the start of internship. This might reflect a realization that non-ACE interns had not received adequate physical examination training to that point in their careers. The areas in which confidence differed between interns and PGY-2s, as well as between ACE and non-ACE interns, tended to be areas of the exam that are more technically challenging (e.g. distinguishing 'a' waves from 'v' waves). This might provide some insight into the design of future curricula that focus on more difficult exam maneuvers and techniques. The finding that ACE interns had a significant improvement in their mid-year CE scores provides evidence that the curriculum was effective in improving exam skills. The fact that ACE interns had a significant improvement in their mid-year CE scores compared to PGY-2s, who had completed intern year prior to ACE, further suggests that the ACE curriculum was more effective than the previous approach to physical exam teaching in our residency program. Since most training programs utilize the "traditional" approach to physical exam teaching, this could inform the development of similar curricula at other institutions.
In the stratified multilinear models with generalized estimating equations, only ACE interns had a significant increase in their post-test scores. However, in our regression model that combined both ACE and non-ACE interns, ACE participation did not significantly predict change in CE score. Since this was not a randomized trial, but a quasi-randomized study based on pre-determined intern schedules, other factors may have limited our ability to see a significant effect of the ACE curriculum in the final regression model. It is not surprising that categorical status predicted a higher change in score, since categorical interns spend more time on the internal medicine services during their intern year. It is also not surprising that pre-year score was a predictor of change in score. Lower performers at the outset had more room to improve compared to those interns with a higher pre-year score.
This study has several limitations. This was a single-center experience, so the results may not be applicable to other institutions. However, the fact that interns and residents from our program performed similarly to other internal medicine residents as reported in the literature before exposure to ACE suggests that our trainees were starting from a similar knowledge and skill level before introduction to the curriculum.
Our survey, while developed and reviewed by faculty who are experts in the physical exam, has only been used on a single population and lacks robust validity evidence. It was not possible to correlate individual responses with CE performance as the survey was anonymous.
We captured nearly 100% of the interns on the initial assessments, but only 60% and 44% of PGY-2s from 2015-2016 participated in the survey and CE, respectively. We also lost several interns at the mid-year assessments because clinical schedules made it difficult for some interns to participate. This may have left our final analyses underpowered to detect differences in overall scores, particularly for the non-ACE interns. Non-categorical residents were over-represented in the non-ACE population at the mid-year point, which may also have affected our results. Interns and teaching attendings rotate with one another on other services. It is likely that exam skills emphasized during the general medicine rotation where ACE is delivered were taught to non-ACE interns by faculty and peers. This spillover effect probably increased intern skills independent of ACE participation, and is a recognized benefit of the current residency training model. Since the ACE curriculum was not blinded to participants, it is possible that the survey results were biased by the expectation that participation in a dedicated physical exam curriculum would improve confidence and skill.
Familiarity with the Blaufuss software used during ACE may have allowed ACE interns to score higher on the mid-year assessment. Since the Blaufuss software was used only once per rotation, it seems unlikely that this played a major role in the final results. It is also possible that taking the CE a second time might have led to increased scores independent of participation in ACE. However, in previous studies using the CE, control groups did not have improvements in scores upon taking the test a second time [21].
It is possible that the improvements in ACE intern performance could have been due to other aspects of the general medicine teaching service that were not part of the ACE curriculum. Rounds on the ACE service were similar to rounds on other hospitalist-led services. Aside from the ACE curriculum, the teaching activities on that service were shared activities with other rotations such as daily noon conference and intern report. Hospitalist attendings rotated through the ACE service as well as the other hospitalist teaching services. The patient population on the ACE service was similar to other general medicine services. As a result, it seems more likely that any changes in ACE intern performance were due to the ACE curriculum.

Conclusions
Implementation of a bedside cardiopulmonary physical diagnosis curriculum improved the attitudes, confidence and skill of interns in the cardiopulmonary examination. The effectiveness of the ACE curriculum can inform further improvements in physical exam teaching and assessment, and lay the groundwork to examine the effect of these interventions on important metrics such as test utilization, cost of care, patient and provider satisfaction, diagnostic error, and clinical outcome.