Our study is the first to comprehensively survey internal medicine subspecialty fellow satisfaction across multiple programs and to compare perceptions between procedural and non-procedural fellows. Overall satisfaction with VA training is similar between procedural and non-procedural fellows in internal medicine, but differences exist at the item and domain levels.
The LPS is a validated survey instrument with robust psychometric properties and content and face validity that has been used to evaluate trainee satisfaction across a large, relatively uniform healthcare system. Previous studies have demonstrated its usefulness in comparing trainee perceptions across disciplines. Keitz et al found that the LPS functioned well in discriminating differences between different types of learners in the VA. Cannon et al extended the scope of the LPS by comparing satisfaction between medical students and residents. Analysis of differences in subspecialty fellow satisfaction provides valuable information to GME leaders for program development and extends the utility of the LPS for evaluating satisfaction in graduate medical education.
There were similarities between our findings and those of Keitz et al and Cannon et al with respect to overall satisfaction and domain satisfaction. Overall satisfaction was similar for fellows, different types of residents [1, 2], and medical students. With the exception of physical environment, procedural and non-procedural fellows had similar domain satisfaction, as was seen with both medical students and residents. Like medical students and residents [1, 2], fellows reported highest satisfaction with the domain of clinical faculty and preceptors. All three studies found that learning environment contributed highly to overall satisfaction, although in our study this was more so for procedural fellows. Unlike the previous studies utilizing the LPS, our study examined the personal experience domain and found that it was the domain most strongly associated with overall satisfaction for both procedural and non-procedural fellows, with both reporting similarly high levels of satisfaction with relationships with patients, appreciation of the respondent's work by patients, personal reward, and personal responsibility for patient care.
While domain satisfaction was similar across a wide spectrum of learners in all three studies, individual items within domains and their contributions to overall domain scores provided more distinction between different types of learners. For example, procedural fellows reported lower satisfaction with several items in the clinical faculty/preceptor domain, including accessibility/availability, timeliness of feedback, fairness of evaluation, and patient orientation. Keitz et al found similar results for surgical residents, who reported lower satisfaction with accessibility and availability of faculty and preceptors compared to less procedural residents. For subspecialty fellows, item differences were seen in equipment maintenance and on-call food, with procedural fellows expressing lower satisfaction. These results may reflect the differing needs of procedural fellows, who use diagnostic and therapeutic equipment more frequently and may be more likely to be on call overnight in the hospital.
Within the working environment, non-procedural fellows reported higher satisfaction with ancillary/support staff morale, laboratory services, and ancillary/support staff. In a previous study, differences were also found between medical students and residents, and among residents, with internal medicine residents least satisfied with these items. The relatively lower levels of satisfaction with these items among procedural fellows and internal medicine residents may reflect a higher intensity of interaction with these services and therefore a different level of expectation compared to other types of learners. Whereas past studies suggested that such differences in satisfaction may be related to differences in types of training programs, the current results showing differences between procedural and non-procedural fellows may reflect the degree to which the same parts of the VA healthcare system interact differently with the training goals and available training infrastructure of different program types.
Levels of satisfaction may reflect the attitudes and values that influenced physicians' choice of specialty training. A number of studies have evaluated predictors of subspecialty choice among residents and satisfaction with career choice among practicing physicians. In a study of factors associated with subspecialty choice among Canadian internal medicine residents, Horn et al found that residents pursuing non-procedural fellowships were more concerned with issues related to lifestyle, stress, work hours, leisure hours, and patient populations than those pursuing procedural fellowships. Other studies have found that lifestyle [4–7], mentorship, faculty influence [4, 5, 8], role models [3, 9, 10], resident clinical experience [3–5, 8], and a high sense of satisfaction among fellows are important factors in trainee selection of specialty training. Our study showed that personal experience (including lifestyle, stress, and fatigue) and clinical faculty/preceptors (including mentoring and role models) contributed most significantly to overall satisfaction for both types of trainees, suggesting that improvements in these areas could lead to higher learner satisfaction and possibly more successful recruitment. Furthermore, differences in satisfaction with career choice have been noted between primary care and specialty residents as well as between procedure- and non-procedure-based practicing physicians, suggesting that data on fellow satisfaction may provide useful information to residents in guiding career choices.
Quality in graduate medical education programs is complex and has many crucial components, such as curriculum, trainee competency, and faculty development. Learner satisfaction with training is another crucial component for individual training programs, hospital systems, and national organizations. With significant national focus on both changes in health care systems and regulatory requirements, particularly given forthcoming changes in the Accreditation Council for Graduate Medical Education's Common Program Requirements, careful analysis of trainees' satisfaction with educational and work environments is essential to improving quality. Byrne et al advocated for the use of a comprehensive survey tool to examine residents' satisfaction with training, and demonstrated the use of such a tool to effect improvements within affiliated hospitals from one GME program. The authors argued for the importance of a national, comprehensive survey tool to monitor trainee experience with the expressed goal of improving the training environment. In addition, monitoring the experience of trainees may provide advantages to individual programs in the form of longer accreditation cycles. Changes in trainee satisfaction should be monitored over time, and ideally both aggregate and facility-level data would be available to allow for analysis of variation between facilities.
Our study has several strengths. First, the LPS is a validated, comprehensive survey tool, providing key information regarding the work and learning environments in which the majority of physicians train. Second, the LPS targets learners within one large healthcare system, potentially limiting the degree to which differences between the healthcare systems in which trainees learn may affect the measured outcomes. Third, the LPS is designed to assess trainee perceptions across a full complement of subspecialty training programs, program training years, and academic years. Finally, the results of this study are likely representative of fellow perceptions throughout the VA, since the distribution of respondents across subspecialties was nearly identical to the national distribution of VA-funded positions for each subspecialty.
The study has several limitations. First, multiple comparisons were performed in this study, which may have inflated the risk of type I error. For this reason, we set the threshold for statistical significance at p ≤ 0.001. Second, because the data are collected anonymously, we were unable to evaluate changes in individual respondents over time, limiting the precision of our results. Third, we were unable to determine the total number of fellows within the system and made the survey available rather than distributing it to fellows directly. Consequently, the estimated response rate was relatively low, and this strategy could result in selection bias. However, as mentioned, the distribution of fellow respondents mirrors the distribution of VA funding for each specialty. In addition, a comparison of common questions on the LPS and the Association of American Medical Colleges (AAMC) medical student survey, which has a greater than 90% response rate, has demonstrated similar results, suggesting that the sampling method for the LPS may not be subject to major selection bias. Finally, we collected LPS data only for VA experiences, which may limit the applicability of the findings to other training settings. While most physicians train in a VA setting, further information about non-VA training settings would allow better understanding of how the VA experience compares to the other sites where fellows train.