
Relationship between students’ perceptions of the adequacy of M1 and M2 curricula and their performance on USMLE step 1 examination

Abstract

Background

Performance on the United States Medical Licensing Exam® (USMLE®) Step 1 examination (Step 1) is an important milestone for medical students. It is necessary for their graduation and for selection to interview through the National Resident Matching Program®. Success on the Step 1 examination requires content alignment and continuous evaluation and improvement of the preclinical curriculum. The purpose of this research was to examine the association between students’ perceptions of curricular deficits in core disciplines and organ systems and students’ performance in those disciplines and systems on the USMLE® Step 1 examination.

Methods

An anonymous survey with closed-ended and open-ended questions was sent to 174 medical students in the classes of 2018 (n = 77) and 2019 (n = 97) within 2–3 weeks of their taking the Step 1 examination. Students’ feedback and students’ performance on the Step 1 examination were organized by discipline and organ system to allow for more specific curriculum analyses. The closed-ended questions offered three options (yes, no, not sure) regarding students’ agreement that the M1 and M2 curricula adequately prepared them for the Step 1 examination. Students’ responses to the closed-ended questions were reviewed in conjunction with their Step 1 performance. The open-ended feedback was qualitatively analyzed for emergent themes and for agreement with the closed-ended responses in identifying shortcomings of the curriculum.

Results

The data show an apparent relationship between students’ evaluations and students’ performance on the Step 1 examination. A high percentage of student disagreement with the adequacy of the curriculum was reflected in lower performance on the corresponding portions of the Step 1 examination. Additionally, the themes that emerged from the qualitative analysis confirmed the areas of curricular deficiency.

Conclusion

The data collected in this study provide insight into the usefulness of students’ evaluations as a way of identifying curriculum deficits in preparing students for the Step 1 examination.


Background

Although different comprehensive evaluation models [1,2,3] are available for curriculum evaluation, medical schools are constantly looking for innovative ways to evaluate their curricula, especially methods that respond efficiently in generating recommendations and changes. For instance, Chang et al. [4] utilized medical school graduates’ ability to match into highly regarded residency programs as a measure of undergraduate medical education program quality. They also reported that key factors in students matching into “elite” residency programs were “clerkship grades and Step 1 scores” [4]. This finding concerning United States Medical Licensing Exam® (USMLE®) Step 1 (“Step 1”) scores is consistent with the fact that the Step 1 score is a top factor used in the National Resident Matching Program® to select applicants to interview [5], despite calls to reevaluate the role of Step 1 in residency selection [6]. To this end, this study aims to evaluate the usefulness of students’ evaluations as a way of efficiently assessing curriculum deficits in preparing students for the Step 1 examination.

Medical education has evolved since Flexner’s revolutionary 1910 report [7], and new recommendations for medical education have been written more recently [8]. In addition to these seminal documents, schools have been working to improve medical education curricula by integrating cultural competence [9], palliative education [10], and population health [11], and by using collaborative approaches to evaluate and transform the curriculum [12]. While medical school curricula are evolving to address changing needs, students must complete Step 1, which measures basic science knowledge [13]. Given the stated importance of the Step 1 examination and the content it covers, a primary goal of the first two years of the curriculum is preparing students for Step 1. That said, multiple data points are important in the development of curriculum, as expressed in the six-step approach to curriculum development [14], which has been used to address specific aspects of the curriculum [15,16,17]. Denney-Koelsch et al. [17] surveyed clerkship and course directors and used students’ evaluations of courses to determine whether topic areas were covered in the curriculum. Day et al. [18] reported curricular changes to improve a musculoskeletal curriculum following student feedback, while Baroffio et al. [19] reported students’ perceptions being used to change problem-based learning practices. However, detailed reports of curricular revision aimed at global (i.e., all content areas) Step 1 performance are limited.

One direct measure of global performance on Step 1 is provided in March of the year following examination (i.e., March 2018 for students taking Step 1 in calendar year 2017). Although relatively timely, this can be up to nine months after a cohort of students has tested, and curricular development for the upcoming academic year has typically been completed by this time at many schools. In addition, the information provided is limited. Currently, the annual report of student performance on Step 1 includes a performance summary, a score histogram, and a score plot. The score plot divides the performance of examinees taking Step 1 into three content areas: 1) physician task, 2) discipline, and 3) system. For an individual school, the score plot provides the mean performance of its test takers in each content area, and one standard deviation around the school’s mean, compared to the national mean in each content area. Although the content areas are provided in the feedback from the USMLE®, the association between the data reported by the USMLE® and students’ perceptions of preparedness has not been established. Feedback from individual learners is an important measure that should be considered during program-level evaluations [14]. Therefore, the aims of this study were to 1) determine students’ perceptions of preparedness for the Step 1 examination, and 2) determine whether there is an association between students’ perceptions of preparedness in various disciplines and organ systems and their performance as reported on the Step 1 score plot provided to schools.
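For illustration only, the screening logic behind such score-plot feedback can be sketched as follows: standardize the school mean in each content area against the national distribution and flag areas at or below a chosen cutoff. The field names and numbers below are invented, and the actual USMLE® reports are graphical; this is a hypothetical sketch, not the authors’ tooling.

```python
# Hypothetical sketch: flag low-performing Step 1 content areas from
# school vs. national summary statistics. All numbers are invented.

THRESHOLD_SD = -0.2  # cutoff used in this study

# content area -> (school mean, national mean, national SD), all hypothetical
content_areas = {
    "Pathology":      (224.0, 229.0, 20.0),
    "Pharmacology":   (221.0, 228.0, 21.0),
    "Cardiovascular": (230.0, 229.0, 19.0),
}

for area, (school_mean, national_mean, national_sd) in content_areas.items():
    deviation = (school_mean - national_mean) / national_sd  # in national SD units
    flag = "flag for review" if deviation <= THRESHOLD_SD else "ok"
    print(f"{area:14s} {deviation:+.2f} SD  {flag}")
```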

Methods

Participants

Second-year (M2) students from the classes of 2018 (n = 77) and 2019 (n = 97) were recruited to participate in the study within 2–3 weeks of their first attempt at the Step 1 examination. Ninety-nine students responded to the anonymous survey, a combined response rate of 57% (99/174). Participants were 55% female and 45% male. Their ages ranged from 21 to 34 years, with an average age of 23 years. Participants’ average undergraduate GPA was 3.65 on a 4-point scale, and their overall MCAT percentile rank was 70.

Data collection

An anonymous survey with closed-ended and open-ended questions was sent to 174 medical students within 2–3 weeks of their taking the Step 1 examination. Two closed-ended questions assessed students’ agreement that the M1 and M2 curricula adequately prepared them for the Step 1 examination. For each question, students selected yes, no, or not sure to report whether they felt the curriculum sufficiently covered the disciplines and organ systems within the M1 and M2 curricula. At the end of the survey, students were also asked to identify the specific content areas that were not sufficiently covered. First-time takers’ performance in disciplines and organ systems, as reported on the Score Plot provided by the National Board of Medical Examiners®, was used to identify the lowest-performing disciplines and organ systems.

Data analysis

Students’ feedback and first-time test takers’ performance on the Step 1 examination were organized into curricular disciplines and organ systems to allow for more specific analyses of the curriculum. Survey responses were analyzed to identify the disciplines and organ systems most often identified by students as insufficiently covered by the curriculum. These identified shortcomings were then compared to the lowest-performing disciplines and organ systems on the Step 1 examination (i.e., those performing at or below −0.2 standard deviations (SD) on our medical school’s score plot relative to the distribution for all US/Canadian schools). The open-ended feedback was qualitatively analyzed to identify emergent themes or patterns in students’ perceptions. A search was conducted for meaningful word repetition (e.g., pharmacology for discipline; cardiovascular for organ system) across all student responses. The number of agreements in identifying insufficient disciplines and organ systems was compiled and presented as percentages, indicating word frequencies relative to the total responses. The open-ended feedback was reviewed together with the closed-ended feedback and student performance for triangulation of the data.
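A minimal sketch of this word-repetition count appears below. The responses and keyword list are invented for illustration; the paper does not specify any software, and the tally may well have been done manually.

```python
# Hypothetical sketch of the word-frequency step: count how many
# open-ended responses mention each discipline/organ-system keyword,
# then express the counts as percentages of all respondents.
import re

responses = [  # invented free-text survey responses
    "More biochemistry and pharmacology would have helped.",
    "Biochemistry coverage felt thin; embryology too.",
    "Behavioral sciences and the cardiovascular system needed more depth.",
]

keywords = ["biochemistry", "pharmacology", "embryology", "cardiovascular"]

n = len(responses)
for kw in keywords:
    # a response counts once per keyword, however many times it repeats it
    hits = sum(1 for r in responses if re.search(kw, r, re.IGNORECASE))
    print(f"{kw:14s} {hits}/{n} responses ({100 * hits / n:.0f}%)")
```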

Results

Based on the 99 student responses, the top disciplines identified as not adequately covered by the curriculum were: biochemistry (71%), biostatistics and epidemiology (56%), aging (52%), embryology (49%), cell biology (47%), genetics (42%), and pharmacology (41%) (Fig. 1). The top organ systems identified as insufficiently covered were: pregnancy, childbirth and puerperium (31%), behavioral health (22%), cardiovascular (21%), and immune (18%) (Fig. 2).

Fig. 1 Students’ perceptions of the adequacy of the curriculum to sufficiently cover curricular disciplines, using closed-ended questions (i.e., a “yes, no, not sure” scale). Legend: Percent of students responding yes, no, or not sure to closed-ended questions on disciplines covered within the first two years of the undergraduate medical education curriculum. N = 99 students. Behavioral Sci = Behavioral Sciences; Biostat-Epidem = Biostatistics and Epidemiology

Fig. 2 Students’ perceptions of the adequacy of the curriculum to sufficiently cover organ systems, using closed-ended questions (i.e., a “yes, no, not sure” scale). Legend: Percent of students responding yes, no, or not sure to closed-ended questions on organ systems covered within the first two years of the undergraduate medical education curriculum. N = 99 students. Preg-Childbirth-Puerperium = Pregnancy-Childbirth-Puerperium

Areas identified as insufficiently covered by the M1 and M2 curricula based on students’ responses to the open-ended question were: biochemistry (59%), anatomy-embryology (35%), pharmacology (32%), behavioral sciences (28%), and physiology (21%). This qualitative analysis, with examples of students’ responses, is summarized in Table 1.

Table 1 Qualitative analyses of students’ shared perceptions regarding insufficient disciplines and organ systems

On the score plot, the Step 1 score distribution of our medical school relative to the distribution for all US/Canadian schools showed that behavioral sciences, biochemistry, genetics, gross anatomy and embryology, histology and cell biology, nutrition, pathology, pharmacology, and physiology were the lowest-performing disciplines, at −0.2 SD or lower. The immune, behavioral health & nervous systems/special senses, cardiovascular, respiratory, gastrointestinal, and endocrine systems were the lowest-performing organ systems, at −0.2 SD or lower.

Comparing the content areas of insufficiency identified by students’ responses on the closed-ended (yes, no, not sure) scale with their responses to the open-ended question showed comparable results (Table 2). Interestingly, these areas of insufficiency coincide with the lowest-performing disciplines and organ systems on the Step 1 examination (Table 2).

Table 2 Comparison of USMLE Step 1 disciplines scoring ≤ −0.20 standard deviations (SD) below the national mean with the areas of curricular insufficiency identified by students’ responses to the closed-ended “yes, no, not sure” scale and to the open-ended question

Discussion

In the present study, we evaluated, from the medical students’ perspective, the adequacy of our curriculum in covering the content tested on the USMLE® Step 1 examination, and we assessed whether perceived inadequacy was associated with students’ performance. After taking the Step 1 examination, medical students identified the disciplines and organ systems that they perceived were insufficiently covered by our curriculum. These identified content areas were also shown to be the low-performing disciplines and organ systems in the Step 1 results.

The USMLE® Step 1 examination is a major milestone in medical students’ progression through the medical school curriculum and in their success in being selected for interviews and matching into residencies [5]. Many medical schools have tried to develop evaluation models to predict students’ success on this major examination [20,21,22]. Most of these models relied on pre-matriculation data and internal measures of students’ academic performance, as well as students’ behavior and acquisition of learning strategies [23]. However, faculty effectiveness, learning environments, and medical school curricula can also affect students’ performance on the Step 1 examination. In addition, these models do not address methods used to evaluate the medical education curriculum. Our findings suggest that student performance should be one of several metrics used for curricular revision.

Curricular insufficiencies identified by students in the closed-ended questions are consistent with those identified in the open-ended question for the disciplines within our curriculum. Biostatistics-epidemiology, biochemistry, embryology, genetics, and pharmacology were seen as the major disciplines that were not sufficiently covered. These insufficiencies were reflected in the lower-scoring disciplines reported on the Step 1 score plot provided by the USMLE®; that is, students’ perceptions of insufficiency in our program corresponded with their Step 1 performance in those disciplines. Despite the notion that medical students heavily utilize external study resources while preparing for the Step 1 examination [24], our study suggests that perceived inadequacies within the medical school curriculum may be related to student performance. In addition, it was previously reported that in our curriculum, overall weighted basic science performance explains 43.56% (M1), 51.41% (M2), and 57.31% (M1 and M2) of the variation in Step 1 scores [23]. This confirms the importance of the basic science curriculum in preparing students for the Step 1 examination. Indeed, a national basic science curriculum has been proposed to help schools address the content needed for Step 1 [25].

Students’ perceptions of insufficiency in the organ systems were not as pronounced as their perceptions of insufficiency in the disciplines. However, with the exception of pregnancy, childbirth and puerperium (31%), the top insufficiently covered organ systems identified by the closed-ended question, namely behavioral health (22%), cardiovascular (21%), and immune (18%), coincide with students’ low performance in the organ systems reported on the Step 1 score plot. The difference in magnitude between students’ perceptions of insufficiency in disciplines versus organ systems may be due to the design of our curriculum. Currently, two modules in the first year (totaling half of the academic year) and all modules in the second year are organ-systems based. Specific disciplines such as biochemistry, genetics, gross anatomy and embryology, histology and cell biology, and physiology are taught in the first year and are embedded in the integrated modules. Pharmacology, microbiology, and pathology are taught throughout all of the organ-systems-based modules in the second year. This layout may give students the perception that a specific organ system receives more coverage while a specific discipline receives less. However, we also observed discrepancies between students’ perceptions of the adequacy of disciplines and their performance on the Step 1 examination. For example, pathology was among the disciplines in which students performed at least 0.2 SD below the national average, yet it was rated as sufficiently covered in the curriculum. Although there is no clear explanation for this discrepancy, the integrated nature of Step 1 questions, many of which involve disease processes, means they might not be recognized as pathology questions. These limitations warrant further research.

Content analysis to identify gaps in achieving the intended goals and objectives of a medical curriculum can be a tedious task. Alternatively, collecting student perceptions of the adequacy of the preclinical curriculum soon after students take the Step 1 examination may prove insightful. Although the collection of course- and module-level student evaluations is routine practice, evaluating students’ perceptions of the curriculum’s adequacy in preparing them for the Step 1 examination is a potentially valuable supplemental tool in the continuous effort to prepare medical students for their licensing examinations. The analyses from this study were shared with faculty during our annual curriculum retreat, and the information was perceived as useful by faculty in re-examining the selection of content and topics delivered in their modules. We continue to collect these data to evaluate the impact of the corresponding curricular changes on students’ Step 1 performance. This dynamic effort of engaging faculty to reflect on students’ perceptions is an excellent model of student-faculty partnership for ongoing curricular assessment. The idea of involving students in the development and evaluation of the curriculum has been introduced in medical education [26,27,28], and is believed to enhance engagement, motivation, and enthusiasm [26], and to help in making real-time improvements [28].

Collecting students’ perceptions of the adequacy of the M1 and M2 curricula to prepare them for the Step 1 examination provides a timely, practical assessment of the curriculum. Because medical schools receive the complete analysis of students’ Step 1 performance up to nine months after the examination dates, it is difficult to address curricular shortcomings promptly. Receiving students’ feedback within two weeks of their taking Step 1, however, is very helpful in making necessary changes to the curriculum for the upcoming academic year.

There are several limitations to this study. First, the research was conducted at a single medical school and relied on self-reported student perceptions. However, as we continue to collect the data annually, year-to-year comparison of curricular deficits in response to changes supports continuous quality improvement of our curriculum. Second, the study did not evaluate the statistical correlation between individual students’ performance and their perceptions, owing to the anonymous nature of the survey. Finally, student perceptions of curricula and student performance may be influenced by multiple factors (e.g., teaching methods, learning environments, and medical school curricula); however, these factors were not a focus of this survey.

Conclusion

The data collected in this study provide insight into the usefulness of students’ evaluations as a way of assessing curriculum deficits in preparing students for the Step 1 examination. The association between students’ perceptions of curriculum adequacy and students’ performance indicates that students’ evaluations are a worthy means of assessing curriculum efficacy and a valuable tool for improving the curriculum that prepares students for their Step 1 examination.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

Behavioral Sci: Behavioral Sciences
Biostat-Epidem: Biostatistics and Epidemiology
Preg-Childbirth-Puerperium: Pregnancy-Childbirth-Puerperium
SD: Standard deviation
Step 1: Step 1 examination
USMLE®: United States Medical Licensing Exam®

References

  1. Kirkpatrick D. Revisiting Kirkpatrick's four-level model. Train Dev. 1996;50(1):54–5.

  2. Frechtling J. Logic modeling methods in program evaluation. San Francisco: John Wiley & Sons; 2007.

  3. Stufflebeam D, Shinkfield A. Evaluation theory, models, & applications. San Francisco: Jossey-Bass/John Wiley & Sons; 2007.

  4. Chang LL, Nagler A, Rudd M, Grochowski CO, Buckley EG, Chudgar SM, Engle DL. Is it a match? A novel method of evaluating medical school success. Med Educ Online. 2018;23.

  5. National Resident Matching Program, Data Release and Research Committee. Results of the 2018 NRMP program director survey. Washington, DC: National Resident Matching Program; 2018. http://staging-nrmp.kinsta.com/wp-content/uploads/2018/07/NRMP-2018-Program-Director-Survey-for-WWW.pdf. Accessed 15 Aug 2018.

  6. Prober CG, Kolars JC, First LR, Melnick DE. A plea to reassess the role of United States medical licensing examination step 1 scores in residency selection. Acad Med. 2016;91:12–5.

  7. Flexner A. Medical education in the United States and Canada: a report to the Carnegie Foundation for the Advancement of Teaching. New York: The Carnegie Foundation for the Advancement of Teaching; 1910.

  8. Cooke M, Irby DM, O'Brien BC. Educating physicians: a call for reform of medical school and residency. John Wiley & Sons; 2010.

  9. Rios EV, Simpson JM. Curriculum enhancement in medical education: teaching cultural competence and women's health for a changing society. J Am Med Womens Assoc (1972). 1998;53(3 Suppl):114–20.

  10. Wood EB, Meekin SA, Fins JJ, Fleischman AR. Enhancing palliative care education in medical school curricula: implementation of the palliative education assessment tool. Acad Med. 2002;77(4):285–91.

  11. Kerkering KW, Novick LF. An enhancement strategy for integration of population health into medical school education: employing the framework developed by the healthy people curriculum task force. Acad Med. 2008;83(4):345–51.

  12. Fetterman DM, Deitz J, Gesundheit N. Empowerment evaluation: a collaborative approach to evaluating and transforming a medical school curriculum. Acad Med. 2010;85(5):813–20.

  13. USMLE Step 1 content description and general information. A joint program of the Federation of State Medical Boards of the United States, Inc., and the National Board of Medical Examiners®; 2019:1–11. https://www.usmle.org/pdfs/step-1/content_step1.pdf. Accessed 19 July 2019.

  14. Kern DE, Thomas PA, Howard DM, Bass EB. Curriculum development for medical education: a six-step approach. Baltimore: Johns Hopkins University Press; 1998.

  15. Symons AB, McGuigan D, Akl EA. A curriculum to teach medical students to care for people with disabilities: development and initial implementation. BMC Med Educ. 2009;9:78.

  16. Clark ML, Hutchison CR, Lockyer JM. Musculoskeletal education: a curriculum evaluation at one university. BMC Med Educ. 2010;10:93.

  17. Denney-Koelsch EM, Horowitz R, Quill T, Baldwin CD. An integrated, developmental four-year medical school curriculum in palliative care: a longitudinal content evaluation based on national competency standards. J Palliat Med. 2018. https://doi.org/10.1089/jpm.2017.0371.

  18. Day CS, Ahn CS, Yeh AC, Tabrizi S. Early assessment of a new integrated preclinical musculoskeletal curriculum at a medical school. Am J Orthop (Belle Mead NJ). 2011;40:14–8.

  19. Baroffio A, Vu NV, Gerbase MW. Evolutionary trends of problem-based learning practices throughout a two-year preclinical program: a comparison of students' and teachers' perceptions. Adv Health Sci Educ. 2013;18:673–85.

  20. Donnon T, Paolucci EO, Violato C. The predictive validity of the MCAT for medical school performance and medical board licensing examinations: a meta-analysis of the published research. Acad Med. 2007;82(1):100–6.

  21. Dunleavy DM, Kroopnick MH, Dowd KW, Searcy CA, Zhao X. The predictive validity of the MCAT exam in relation to academic performance through medical school: a national cohort study of 2001–2004 matriculants. Acad Med. 2013;88(5):666–71.

  22. Stegers-Jager KM, Themmen AP, Cohen-Schotanus J, Steyerberg EW. Predicting performance: relative importance of students' background and past performance. Med Educ. 2015;49(9):933–45.

  23. Khalil MK, Hawkins HG, Crespo LM, Buggy J. The design and development of prediction models for maximizing students' academic achievement. Med Sci Educ. 2018;28(1):111–7. https://doi.org/10.1007/s40670-017-0515-0.

  24. Burk-Rafel J, Santen SA, Purkiss J. Study behaviors and USMLE step 1 performance: implications of a student self-directed parallel curriculum. Acad Med. 2017;92(11S):S67–74. https://doi.org/10.1097/ACM.0000000000001916.

  25. Moynahan KF. The current use of United States medical licensing examination step 1 scores: holistic admissions and student well-being are in the balance. Acad Med. 2018;93(7):963–5. https://doi.org/10.1097/ACM.000000000000210.

  26. Bovill C, Cook-Sather A, Felten P. Students as co-creators of teaching approaches, course design, and curricula: implications for academic developers. Int J Acad Dev. 2011;16(2):133–45.

  27. Goldfarb S, Morrison G. Continuous curricular feedback: a formative evaluation approach to curricular improvement. Acad Med. 2014;89(2):264–9.

  28. Scott KW, Callahan DG, Chen JJ, Lynn MH, Cote DJ, Morenz A, Fisher J, Antoine VL, Lemoine ER, Bakshi SK, Stuart J. Fostering student–faculty partnerships for continuous curricular improvement in undergraduate medical education. Acad Med. 2019;94(7):996–1001.


Funding

The authors report no external funding source for this study.

Author information


Contributions

MK, KS and AG designed the study and the data collection instruments. MK and WW analyzed and interpreted the data and wrote the first draft. All authors read and approved the manuscript.

Corresponding author

Correspondence to Mohammed K. Khalil.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the USC Institutional Review Board (IRB), and all participants provided written consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Khalil, M.K., Wright, W.S., Spearman, K.A. et al. Relationship between students’ perceptions of the adequacy of M1 and M2 curricula and their performance on USMLE step 1 examination. BMC Med Educ 19, 358 (2019). https://doi.org/10.1186/s12909-019-1796-3
