Factors affecting residency rank-listing: A Maxdiff survey of graduating Canadian medical students
© Wang et al; licensee BioMed Central Ltd. 2011
Received: 13 March 2011
Accepted: 25 August 2011
Published: 25 August 2011
In Canada, graduating medical students consider many factors, including geographic, social, and academic, when ranking residency programs through the Canadian Residency Matching Service (CaRMS). The relative significance of these factors is poorly studied in Canada. It is also unknown how students differentiate between their top program choices. This survey study addresses the influence of various factors on applicant decision making.
Graduating medical students from all six Ontario medical schools were invited to participate in an online survey available for three weeks prior to the CaRMS match day in 2010. Maxdiff discrete choice scaling, multiple choice, and drop-list style questions were employed. The Maxdiff data was analyzed using a scaled simple count method. Data on how students distinguish between top programs was analyzed as percentages. Comparisons were made between male and female applicants as well as between family medicine and specialist applicants; statistical significance was determined by the Mann-Whitney test.
In total, 339 of 819 (41.4%) eligible students responded. The variety of clinical experiences and resident morale were weighted heavily in choosing a residency program, whereas financial incentives and parental leave attitudes had little influence. Major reasons that applicants selected their first choice program over their second choice included the distance to relatives and the desirability of the city. Both genders had similar priorities when selecting programs. Family medicine applicants rated the variety of clinical experiences as more important, whereas specialty applicants placed greater emphasis on academic factors.
Graduating medical students consider program characteristics such as the variety of clinical experiences and resident morale heavily in terms of overall priority. However, differentiation between their top two choice programs is often dependent on social/geographic factors. The results of this survey will contribute to a better understanding of the CaRMS decision making process for both junior medical students and residency program directors.
In Canada, the vast majority of graduating medical students secure their residency placement through the Canadian Residency Matching Service (CaRMS). After the students have undergone the application and subsequent interview process, a match is conducted using an algorithm that considers both student and program rank lists. Rank preferences of the former are given priority over those of the latter. Through a better understanding of this decision process, changes may be made to the benefit of both applicants and residency programs.
Many factors influence how students rank residency programs, including geographic, social, lifestyle, and academic factors [2–13]. However, the relative significance of these factors has not been well studied in Canada. Previous studies in this area were limited by small sample sizes and were usually restricted to a single specialty [3–8, 11]. Furthermore, it is largely unknown how students specifically differentiate between their top two program choices. In 2009, approximately 63% of applicants matched to their top choice program, while only 13% and 7.6% matched to their second and third choice programs respectively. Therefore, the characteristics that distinguish top choice from second choice programs have the greatest absolute impact on where an applicant is most likely to match. An individual program's ability to modify these distinguishing factors will vary, and this issue has not been previously explored.
A few existing studies have examined potential gender influences but these have been limited to subspecialty populations [2, 3, 5]. We hypothesize that in a general applicant pool, there will be minimal significant differences between male and female applicant priorities. There are also no studies to our knowledge that have looked at differences between family medicine and specialty applicants. Given differences in eventual patient care roles, it is theorized that specialty residents will, as a group, place greater emphasis on academic factors such as research and academic reputation.
All previous studies on this topic employed surveys using Likert-style rating scales [2–14]. While prevalent in survey research due to their ease of use, Likert-style rating scales possess many systematic biases that adversely affect statistical analysis and overall validity. Respondents often cluster their answers and rarely utilize the full spectrum of the scale; these tendencies are known as level and dispersion bias respectively. Level bias also contributes to a non-standardized distribution of "mean" answers; thus those who cluster their responses at the extreme ends of the scale will be disproportionately represented in the final numerical analysis. Lastly, the Likert scale is ordinal, meaning it conveys only order and not magnitude; however, in the medical literature it is often analyzed using t-tests or ANOVA as though it were an interval scale [2, 3, 9, 10]. This practice, while widespread, is statistically controversial [16–18].
This study employs the discrete-choice methodology known as "Maxdiff" or "best-worst" scaling to determine the relative importance of various factors. Maxdiff methodology is based on the theoretical framework of discrete choice mathematics developed by Nobel laureate Daniel McFadden [19–23]. This technique addresses most of the problems of rating scale surveys through inherent standardization of mean responses and utilization of the full spectrum of the final numerical scale. It also forces respondents to make choices and compromises, which ultimately leads to better differentiation between factors. The main drawback of Maxdiff methodology is that surveys are more time consuming to construct and analyze than traditional rating scales, but we felt its strengths justified its use.
The primary focus of our study is the overall relative importance of various factors students consider when deciding between residency programs. We will also investigate how students distinguish between their top two program choices, as well as the influence of demographics and career choices on applicant preferences. The goal of our study is to provide those involved in the post-graduate education process with a better understanding of applicant priorities. We hope to identify areas where residency programs may make improvements so that they can be more appealing to applicants.
Our study population included graduating medical students from all six Ontario medical schools who were participating in the CaRMS 2010 match cycle. Ontario was chosen because it is the province with the largest number of medical schools and students. We also hoped students would perceive a provincial-level study to be more relevant and thus be more eager to participate. Approval for the study was obtained from the Institutional Human Research Ethics Boards associated with the six schools (University of Ottawa, University of Toronto, University of Western Ontario, Queen's University, McMaster University, and Northern Ontario School of Medicine).
Beginning three weeks prior to the CaRMS match date (March 8, 2010), graduating medical students from all six Ontario medical schools were invited by email to participate in the study. At that time, students were in the process of ranking their program choices. Each student was provided an anonymous and unique response code for completing an online survey administered through the services of Survey Gizmo (Boulder, CO, USA). Informed consent was obtained electronically. An incentive was provided in the form of a single randomly drawn $200 cash prize. Completion of the survey was not necessary for an invitee to be eligible for the draw. One week prior to the survey completion date, a single set of reminder emails was sent to all participants to increase response rates. The survey ended on March 8, 2010, prior to the release of CaRMS results so that the match results would not bias responses.
Basic demographic data was collected, including questions about age, gender, marital status, and specialty choice. The bulk of our survey focused on 13 factors selected based on a literature review. We chose factors that were commonly cited as influential and that had minimal co-variance with other factors. Some studies have used terms such as "fit for program" or "geographic location", which are obviously rated highly but can be ambiguous in meaning [4, 5, 10]. We attempted to decrease ambiguity by avoiding generic terms such as "geographic location"; instead, we separated that factor into social aspects (friends/family) and city characteristics. We also excluded factors that were previously found to be unimportant, such as benefits package, moonlighting opportunities, and amount of interaction with medical students.
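The paper does not describe how the 13 factors were assembled into best-worst question sets, but a balanced arrangement in which each factor appears in exactly four subsets (consistent with the scoring range reported in the analysis) can be built from a cyclic difference set. The sketch below is purely illustrative: the four-item subset size and the cyclic construction are our assumptions, not the authors' actual instrument.

```python
from itertools import combinations

N_FACTORS = 13
BASE = [0, 1, 3, 9]  # a perfect difference set mod 13: every nonzero residue
                     # mod 13 arises exactly once as a pairwise difference

# Cyclic shifts of the base set give 13 question sets of four factors each
# (a hypothetical design; the study's real question sets are not published).
question_sets = [sorted((b + k) % N_FACTORS for b in BASE)
                 for k in range(N_FACTORS)]

# Balance check: each factor appears in exactly four sets, and every pair of
# factors is compared head-to-head in exactly one set.
appearances = [sum(f in qs for qs in question_sets) for f in range(N_FACTORS)]
pair_counts = {}
for qs in question_sets:
    for pair in combinations(qs, 2):
        pair_counts[pair] = pair_counts.get(pair, 0) + 1
```

A design with these balance properties ensures every factor gets equal exposure as a candidate "most important" or "least important" choice, which is what makes the simple count scores comparable across factors.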
The survey also inquired about the principal reason why respondents did not choose their second-ranked program, using drop-list style questioning. The same 13 factors were built into the drop-list but stated in a negative sense. For example, the second-ranked program was not chosen because "it had less variety of clinical experiences".
The survey was piloted with a subset of six volunteer final year medical students from a single school. Their suggestions helped us revise instructions and better clarify potentially confusing aspects of the survey. The scientific methodology and content was not significantly changed.
Only respondents who completed the demographics and Maxdiff sections in full were included in the analysis. Maxdiff data was analyzed using a scaled simple count method, which is easy to comprehend and has been previously validated [19, 21]. Each time a factor was chosen as "most important", its count was incremented by one. Conversely, when a factor was chosen as "least important", its count was decremented by one. This allowed each respondent to generate a "score" for each factor. Since each factor appears in exactly four subsets, the scores ranged from +4 to -4. These scores were averaged over the sample to obtain mean scores for each factor, demonstrating their relative importance overall. To examine subgroups, we separated the sample between males and females, as well as between family medicine and specialty applicants. Given the ordinal nature of the scale, statistical significance was determined using the Mann-Whitney test. P-values ≤0.05 were considered statistically significant. Analysis was performed using the software PAST (Oslo, Norway).
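The scaled simple count rule is straightforward to implement. The sketch below illustrates it on invented data: the respondent IDs, factor short forms, and responses are ours, not the study's. Each "most important" pick adds one to that factor's per-respondent score, each "least important" pick subtracts one, and the per-respondent scores are then averaged across the sample.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw responses: one record per Maxdiff task, recording which
# factor the respondent picked as most and as least important.
responses = [
    {"respondent": "r1", "best": "Variety",   "worst": "Finance"},
    {"respondent": "r1", "best": "Morale",    "worst": "Parental"},
    {"respondent": "r2", "best": "Variety",   "worst": "Parental"},
    {"respondent": "r2", "best": "Relatives", "worst": "Finance"},
]

# Simple count: +1 per "best" pick, -1 per "worst" pick. With each factor
# shown in four tasks, per-respondent scores fall in [-4, +4].
scores = defaultdict(lambda: defaultdict(int))
for r in responses:
    scores[r["respondent"]][r["best"]] += 1
    scores[r["respondent"]][r["worst"]] -= 1

# Mean score per factor across the sample gives its relative importance.
factors = {f for r in responses for f in (r["best"], r["worst"])}
mean_scores = {f: mean(scores[p][f] for p in scores) for f in factors}
```

For the subgroup comparisons, the per-respondent score distributions for a given factor would then be split by gender or specialty choice and compared with a Mann-Whitney U test, as the ordinal scale requires.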
Data was also analyzed to elucidate the top reasons why applicants did not choose their second ranked program. As an individual respondent could only choose one answer from the drop-list menu, the results were organized as percentages. The factors were subcategorized as "modifiable", "potentially modifiable", and "non-modifiable" based on the potential control a program may exert over that factor.
Table 1. Respondent characteristics in comparison to the total CaRMS pool (No. of respondents (%) vs. CaRMS applicants (%); values not reproduced here).
Table 2. Overall scaled Maxdiff survey scores for various factors (mean Maxdiff score ± standard error; values not reproduced here). Factors, with short forms in parentheses:
- Variety of clinical experiences (Variety)
- Resident morale (Morale)
- Distance to relatives (Relatives)
- City characteristics (City)
- Academic reputation (Academic)
- Spouse/partner's preferences (Spouse)
- Quality of faculty (Faculty)
- Interview experience (Interview)
- Intensity of work schedule (Schedule)
- Hospital facilities (Hospital)
- Research opportunities (Research)
- Financial incentives (Finance)
- Parental leave attitudes (Parental)
Table 3. Comparison of Maxdiff scaled scores by gender and by specialty choice (mean Maxdiff score ± SE; values not reproduced here).
Applicants choosing family medicine were compared with applicants to other specialties (Table 3). Numerous statistically significant differences were found. Family medicine applicants rated the variety of clinical experiences as more important and resident morale as less important by comparison. Specialty applicants were more concerned with academic reputation, the quality of the faculty, and research opportunities.
Table 4. Top reason the respondents' second-ranked program was not their first choice (% of respondents; values not reproduced here). Response options:
- Greater distance to relatives
- Less desirable city
- Less preferred by spouse/partner
- Fewer financial incentives
- Less variety of clinical experiences
- Less academically reputable
- Less impressive faculty
- Less impressive hospital facilities
- Less desirable work schedule
- Poorer resident morale
- Less impressive interview experience
- Fewer research opportunities
- Less suitable parental leave attitudes
Our survey has demonstrated that applicants highly value "variety of clinical experiences" but nonetheless often distinguish between top programs based on social/geographic factors. There were no major differences based on gender, but there were several differences between family medicine and specialty applicant priorities. Most of these results are novel in this area of research as well as in this population and shed some light on CaRMS applicant decision making.
From the results of the Maxdiff section (Table 2), the top three factors overall were the variety of clinical experiences, resident morale, and distance to relatives. Conversely, most lifestyle factors such as financial incentives, work schedule, and parental leave were not considered important. It would seem that applicants favour program quality and social factors over lifestyle factors. These general trends are consistent with other studies [3, 6, 10, 12].
There were very few differences between male and female applicants, except that males tended to put greater emphasis on academic reputation and research (Table 3). While females placed more emphasis on parental leave attitudes, both groups ranked that factor last. Thus, overall, male and female medical students have similar priorities when selecting a residency program, which is consistent with other studies [2, 3].
Family medicine and specialty applicants display several statistically significant differences (Table 3). Family physicians require a broad range of knowledge, so it is not surprising that family medicine applicants put greater emphasis on the variety of clinical experiences. The greater emphasis by specialty applicants on resident morale may reflect the fact that they spend considerably more time on-service and usually have longer residency programs. There is a disproportionate ratio of specialists to family physicians holding staff appointments at academic centers. Our results were consistent with this work environment preference, as specialty applicants on average placed greater emphasis on academic factors such as quality of faculty, research opportunities, and academic reputation. Specialty applicants also gave the interview experience a lower priority. It is plausible that applicants to more competitive specialties felt content to get into their chosen specialty regardless of location or program, leading to less emphasis on the interview experience.
Table 4 displays respondents' selections of the principal reason why they did not choose their second-ranked program, also known as the distinguishing factor. Given that most people match to their first choice program, the decision between the first- and second-ranked programs causes the greatest statistical drop in match probability. Interestingly, the frequency of the distinguishing factors follows a different order than the factors' overall priorities. For instance, while variety of clinical experiences was ranked first overall, it was the distinguishing factor only 9.33% of the time. Most applicants chose factors that were entirely outside the control of the program, such as social and geographical factors. This discrepancy suggests that while respondents highly valued factors such as variety of experiences and positive resident morale, many programs fulfilled their expectations in these areas. Thus, when it came down to a decision between their top choices, most chose the program that better suited their social and geographical situations. It may seem discouraging to program directors that they appear to have limited control over this final decision. Nonetheless, the high overall values placed on many controllable factors indicate that programs need to meet those criteria to be seriously considered. Some studies on resident burnout, work hours, and morale have found positive benefits from options such as hiring physician assistants and limiting resident work hours [26, 27]. In Canada, not all provincial regulatory bodies have set maximum duty hours, and programs do have some influence over the work schedules and hours of their residents. Clinical variety is a more difficult factor to modify, as some centers are simply limited by the patient volumes they see. However, innovations such as the use of simulation and the creation of dedicated "medical procedure rotations" have been demonstrated to increase resident confidence in scenarios they would otherwise seldom encounter [28, 29].
There are limitations to our study that affect the way the results may be interpreted. Our sample was drawn from the most populous province in Canada; thus geographic factors may become more of an issue when considering a national sample. Quantitative and qualitative differences in the application process, post-graduate training programs, and the health care system make international comparisons difficult. For instance, students in the United States would have a greater number of residency programs to choose from, including programs outside of their national matching service. This can certainly affect the factors that influence their decisions. Our response rate of 41% is another limitation, but this is comparable to other published student surveys on this topic of similar scale [2–5, 9–11]. Also, the demographic similarity of our sample to that of the CaRMS applicant pool indicates that the survey respondents were representative of the population of interest (Table 1). Lastly, our survey focused on 13 factors, while certainly other issues may influence applicants. However, as previously mentioned we chose a range of factors found to be of potential importance by previous literature and excluded those previously found to be of very low priority. We aimed to create a comprehensive questionnaire while at the same time avoiding low yield questions that may increase participant fatigue and drop-out.
The strengths of our study include its novel methodology as well as several unique findings. Maxdiff methodology leads to a standardized numerical scale with results that disperse across the full spectrum of the scale. This eliminates the systematic level and dispersion biases that affect the Likert-style rating scales used in almost all other studies. Maxdiff also forces respondents to make decisions between a set of factors, thus providing better differentiation between factors when compared to rating scales [10, 14]. To our knowledge, this is the only survey to date on this topic that asks how applicants distinguish between their top programs. Moreover, this study provides unique insights into the differing values of family medicine and specialty applicants.
This survey has contributed to a better understanding of which aspects of the selection process are emphasized by graduating medical students. Residency program quality issues, such as variety of clinical experiences and resident morale, are important considerations, but social and geographic characteristics tend to separate the top choice from the second choice. Male and female applicants have similar priorities in program selection. While family medicine applicants especially value clinical variety, specialty applicants emphasize academic factors such as research opportunities and program reputation. Lifestyle factors such as financial incentives or work schedule have little impact on applicants. Residency programs will hopefully make efforts to improve on the modifiable characteristics in order to better appeal to applicants.
The authors wish to thank Drs. Anthony Sanfilippo, Ian Johnson, and Francis Chan for their advice and support; Ms. Wilma Hopman for her scientific review of our protocol; Mr. Laurent Le for his translation services; as well as the administrative staff at all sites who helped to distribute and implement the study.
1. Canadian Residency Matching Service: Reports and Statistics. [http://www.carms.ca/]
2. Aagaard E, Julian K, Dedler J, Soloman I, Tillisch J, Perez-Stable E: Factors affecting medical students' selection of an internal medicine residency program. J Natl Med Assoc. 2005, 97(9):1264-1270.
3. Crace P, Nounou J, Engel A, Welling R: Attracting medical students to surgical residency programs. Am Surg. 2006, 72:485-490.
4. Davydow D, Bienvenu O, Lipsey J, Swartz K: Factors influencing the choice of a psychiatric residency program: a survey of applicants to the Johns Hopkins residency program in psychiatry. Acad Psychiatry. 2008, 32:143-146. doi:10.1176/appi.ap.32.2.143
5. DeSantis M, Marco C: Emergency medicine residency selection: factors influencing candidate decisions. Acad Emerg Med. 2005, 12(6):559-561. doi:10.1111/j.1553-2712.2005.tb00899.x
6. Flynn T, Gerrity M, Berkowitz L: What do applicants look for when selecting internal medicine residency programs? J Gen Intern Med. 1993, 8:249-254. doi:10.1007/BF02600091
7. Fonseca M, Pollock M, Majewski R, Tootla R, Murdoch-Kinch C: Factors influencing candidates' choice of a pediatric dental residency program. J Dent Educ. 2007, 71:1194-1202.
8. Khan K, Levstik M: Ranking in Canadian gastroenterology residency match: what do residents and program directors want? Can J Gastroenterol. 2010, 24:369-372.
9. Laskin D, Lesny R, Best A: The residents' viewpoint of the matching process, factors influencing their program selection, and satisfaction with the results. J Oral Maxillofac Surg. 2003, 61:228-233. doi:10.1053/joms.2003.50032
10. Nuthalapaty F, Jackson J, Owen J: The influence of quality-of-life, academic, and workplace factors on residency program selection. Acad Med. 2004, 79(5):417-425. doi:10.1097/00001888-200405000-00010
11. Pretorius ES, Hrung J: Factors that affect national resident matching program rankings of medical students applying for radiology residency. Acad Radiol. 2002, 9:75-81. doi:10.1016/S1076-6332(03)80298-2
12. Raymond M, Sokol R, Vontver L, Ginsburg K: Candid candidate comments: The relationship between residency program selection factors and match list placements from ranked applicants. Am J Obstet Gynecol. 2005, 193:1842-1847. doi:10.1016/j.ajog.2005.07.060
13. Sanfilippo JA, Sharkey PF, Parvizi J: Criteria used by medical students to rank orthopaedic surgery residency programs. Am J Orthop. 2006, 35:512-514.
14. Sawtooth Software: The MaxDiff/Web System Technical Paper. 2007. [http://www.sawtoothsoftware.com/download/techpap/maxdifftech.pdf]
15. Dawis R: Scale construction. J Counsel Psychol. 1987, 34:481-489.
16. Jamieson S: Likert scales: how to (ab)use them. Med Educ. 2004, 38:1217-1218. doi:10.1111/j.1365-2929.2004.02012.x
17. Knapp T: Treating ordinal scales as interval scales: an attempt to resolve the controversy. Nurs Res. 1990, 39:121-123.
18. Kuzon W, Urbanchek M, McCabe S: The seven deadly sins of statistical analysis. Ann Plast Surg. 1996, 37:265-272. doi:10.1097/00000637-199609000-00006
19. Auger P, Devinney T, Louviere J: Using best-worst scaling methodology to investigate consumer ethical beliefs across countries. J Bus Ethics. 2007, 70:299-326. doi:10.1007/s10551-006-9112-7
20. Coast J, Flynn T, Salisbury C, Louviere J, Peters T: Maximising responses to discrete choice experiments. Appl Health Econ Health Policy. 2006, 5:249-260. doi:10.2165/00148365-200605040-00006
21. Finn A, Louviere J: Determining the appropriate response to evidence of public concern: The case of food safety. J Public Policy Mark. 1992, 11:12-25.
22. Jaeger S, Jorgensen A, Aaslyng M, Bredie W: Best-worst scaling: An introduction and initial comparison with monadic rating for preference elicitation with food products. Food Qual Preference. 2008, 19:579-588. doi:10.1016/j.foodqual.2008.03.002
23. Marley A, Louviere J: Probabilistic models of best, worst, and best-worst choices. J Math Psychol. 2005, 49:464-480. doi:10.1016/j.jmp.2005.05.003
24. Prescott P, Mansson R: Robustness of diallel cross designs to the loss of one or more observations. Comput Stat Data Anal. 2004, 47:91-109. doi:10.1016/j.csda.2003.10.020
25. Barzansky B, Etzel S: Educational programs in US medical schools, 2004-2005. JAMA. 2005, 294(9):1068-1074. doi:10.1001/jama.294.9.1068
26. Victorino G, Organ C: Physician assistant influence on surgery residents. Arch Surg. 2003, 138:971-976. doi:10.1001/archsurg.138.9.971
27. Jagsi R, Shapiro J, Weissman J, Dorer D, Weinstein D: The educational impact of ACGME limits on resident and fellow duty hours: A pre-post survey study. Acad Med. 2006, 81(12):1059-1068. doi:10.1097/01.ACM.0000246685.96372.5e
28. Marshall R, Gorman P, Verne D, Culina-Gula S, Murray W, Haluck R, Krummel T: Practical training for postgraduate year 1 surgery residents. Am J Surg. 2000, 179(3):194-196. doi:10.1016/S0002-9610(00)00305-6
29. Smith C, Gordon C, Feller-Kopman D, Huang G, Weingart S, Davis R, Ernst A, Aronson M: Creation of an innovative inpatient medical procedure service and a method to evaluate house staff competency. J Gen Intern Med. 2004, 19:510-513. doi:10.1111/j.1525-1497.2004.30161.x
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/11/61/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.