
Medical undergraduates’ self-evaluation: before and after curriculum reform



In 2013, Taiwan launched a curriculum reform that shortened the 7-year undergraduate medical education program to 6 years. This study explored students’ evaluations of this reform and investigated graduates’ perceptions of the curriculum organization of the two academic training programs affected by it.


A cross-sectional survey was conducted from May 14 to June 12, 2019. The 315 graduates from both the 7-year and 6-year curriculum programs in the same medical school in Taipei were invited to participate in this study. In total, 197 completed questionnaires were received, representing a response rate of 62.5%. The results of the principal component analysis confirmed the validity of the constructs employed in this self-administered questionnaire.


The t-test results yielded two main findings. First, the graduates from the 6-year program had significantly lower scores for preparedness for the upcoming postgraduate-year residency training than did their 7-year program counterparts. Additionally, the male graduates had significantly higher scores in terms of perceptions regarding curriculum organization and preparedness for postgraduate-year residency training than the female graduates. The results of stepwise regression also indicated that the sex difference was significantly correlated with graduates’ readiness for their postgraduate-year residency training.


To avoid sex disparities in career development, a further investigation of female medical students’ learning environment and conditions is necessary. In addition to the cross-sectional study of students’ perceptions, further repeated measurements of the objective academic or clinical performance of graduates in clinical settings are desirable.



Background

According to Kirkpatrick’s model, the most direct evaluation of a training program is the participants’ feedback [1]. Therefore, medical education entities have relied on students’ evaluations to measure the quality and effectiveness of their educational practices and programs [2,3,4,5,6,7]. Lockwood et al. and Pugnaire et al. used questionnaires to survey the graduates of the Association of American Medical Colleges and discovered that students’ perceptions of their medical program were consistent and reliable [8, 9]. Schools have even been able to use students’ input on the classroom environment to predict their learning outcomes [10, 11]. With reference to questionnaires such as the Dundee Ready Educational Environment Measure and the Undergraduate Clinical Education Environment Measure, which assess interpersonal interactions and social factors within medical educational environments, we developed a questionnaire that focuses solely on students’ views of their previous academic learning and their upcoming training program [4, 12, 13]. Other Taiwanese medical educators, such as Chan et al., have also surveyed students’ satisfaction with, and confidence in, their medical education to improve the quality of training programs [14]. In survey results from three countries (the United States, Australia, and Taiwan), medical students exhibited similar satisfaction rates (70.7%–86.6%) with their training curricula. However, the self-confidence of Taiwanese students (55.9%) regarding participation in a residency program was markedly lower than that of American students (88.6%), which might indicate insufficiency in Taiwanese medical students’ clinical training [14].

Chan’s survey was conducted prior to the medical program reform in Taiwan. At that time, the medical schools in Taiwan offered a 7-year program leading to the awarding of a Doctor of Medicine (MD) degree in the direct entry system format. The 7-year curriculum included 2 years of premedical courses, 2.5 years of clinical courses, and 2.5 years of clerkship and internship training. Students were required to attend clinical courses in hospitals for a minimum of 3 days per week in years 5 and 6 of their training. Year-7 students participated in a full-time internship to receive placement training while performing clinical procedures and examinations on real patients under the supervision of senior staff [14, 15].

During the 2003 severe acute respiratory syndrome epidemic in Taiwan, many Year-7 medical students were assigned as first responders alongside postgraduate-year (PGY) residents to accommodate the urgent demands of the workforce. This experience revealed some curricular shortcomings of the medical training programs in Taiwan, leading to calls for reform in the field. In particular, the previous curriculum aimed at training medical specialists at the beginning of the postgraduate training year instead of providing sufficient clinical training in terms of general medicine [16]. An initial phase of reform was subsequently undertaken to focus on general medicine training in the postgraduate years [17].

In 2013, the 7-year undergraduate medical education program in Taiwan was shortened to 6 years to implement a complete 2-year PGY residency program following undergraduate medical training [15, 16]. Because of the rapid development of medical technology and changes in the medical environment, medical education reform is a major global concern [18]. Successful experiences of medical education reform in Western countries have been widely disseminated; however, they may not be directly applicable to Asian countries because of differences in social and cultural dispositions [19]. Taiwan’s curriculum reform adopted the concept of the foundation programme in the United Kingdom; it was officially launched in 2013 and immediately implemented in all medical schools (see Fig. 1) [20]. The initiation and process of medical education reform in Taiwan have been discussed previously [21]; notably, no difference was observed in national Objective Structured Clinical Examination scores between 6-year and 7-year curriculum graduates [22]. In 2019, the medical field welcomed the last graduates of the 7-year training program and the first graduates of the 6-year training program. In this study, we compared students’ feedback on the quality and effectiveness of each curriculum system to determine which system students perceived as better preparing them for postgraduate training.

Fig. 1 Development of Undergraduate Medical Education and Professional Training Program for 2000–2020 in Taiwan


Methods

Participants and procedures

A cross-sectional survey was conducted in the spring of 2019 among the 315 students who graduated from the two curriculum systems of the same medical school in Taipei. After providing signed informed consent, the participants completed a self-administered questionnaire during their learning feedback meetings before graduation.


To align with the general competency domains of the Accreditation Council for Graduate Medical Education (ACGME), which is widely adopted to frame medical education objectives in Taiwan, we embedded the following six domains in the questionnaire: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice [17, 23]. A 5-point Likert scale (1 = strongly disagree to 5 = strongly agree) was used for students to evaluate items pertaining to the first level of Kirkpatrick’s four-level training evaluation model. Data on the other levels were not available and were thus not included.

In addition to the demographic variables (sex, age, and year of graduation), the design and development of this questionnaire incorporated Kirkpatrick’s model and Ajzen’s theory. The first part of the questionnaire focused on graduates’ perceptions of curriculum organization; the second part drew on Ajzen’s concept of “perceived behavioral control” to investigate graduates’ readiness for clinical practice. Participants were asked to reflect on their learning status against each of the aforementioned six core competencies of the ACGME for physicians when responding to the questions [23].

Because the notion of “student satisfaction” can be regarded either as the outcome of a learning process or as a requirement that contributes to successful learning, we included three items in the questionnaire to distinguish the two: “I am provided with sufficient meaningful tasks to acquire ACGME core competencies,” “The training program helps develop my expertise in ACGME core competencies,” and “What I am required to learn is relevant to enhancing my core competencies” [4, 24]. The participants responded to these three questions in relation to each of the six ACGME competencies; therefore, this part of the questionnaire had 18 items.

The theory of planned behavior (TPB), proposed by Fishbein and Ajzen [25], has been used extensively and successfully to investigate the associations between perceived behavioral control and intentions, not only in health promotion [26, 27] but also in medical education [28,29,30]. The theory has also been applied systematically to examine and clarify the factors associated with attitude, perceived behavioral control, and intention during postgraduate medical training [31]. Forming the intention to reach an outcome is a core component of effective preparation, and it depends on both an individual’s desire to reach the goal and the feasibility of achieving it [23]. Goals are most likely to be established when the anticipated result is perceived as both desirable and feasible [32]. According to the TPB, feasibility relates to individuals’ perceptions of the difficulty of enacting an intended behavior, that is, perceived behavioral control [33]. To investigate students’ readiness for upcoming clinical practice, we employed two statements for each of the six ACGME core competencies (yielding 12 items in total) to assess respondents’ self-efficacy in completing future clinical training [34]. The two statements were as follows: “Based on the medical training I have received so far, I am confident in practices relating to” the listed core competencies (items 19–24), and “For my PGY residency training, I am not worried about practices relating to” the listed core competencies (items 25–30). Items were deliberately worded in positive tones because alternating positive and negative wordings has been reported to confuse respondents [35]. All items are summarized in Table 1.

Table 1 Questionnaire items

Statistical methods

Item analysis and factor analysis

The extreme group design for item analysis was first used to examine the validity and reliability of the questionnaire [36]. Next, a principal component analysis (PCA) of the responses was conducted, and a scree plot was used to determine the minimum number of factors accounting for a large proportion of the correlations between the responses. Internal consistency (Cronbach’s alpha) was evaluated for responses to the statements. A low alpha value can be caused by low correlations among pairs of items; hence, some items may be deleted to increase the coefficient [37]. In the development of research instruments, trivial items are commonly removed to improve the alpha value [38,39,40]. In this study, items with a corrected item-total correlation of > 0.5 were considered acceptable [37]; such values indicate that the items measure the same underlying concept. An exploratory factor analysis using PCA with varimax rotation was conducted to determine the factor structure among the items in the final questionnaire. To minimize ambiguity, items were retained in the final version only if their factor loading was > 0.5 on one component and no cross-factor loading of > 0.5 was noted on two or more components.
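The item-analysis criteria above (Cronbach’s alpha and the corrected item-total correlation cutoff of 0.5) can be sketched in a few lines of NumPy. The Likert responses below are synthetic and purely illustrative; they are not the study’s data.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an n_respondents x k_items matrix of Likert scores."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(X):
    """Correlation of each item with the sum of the remaining items."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

# Synthetic 5-point Likert responses: 200 respondents, 6 items driven by one trait
rng = np.random.default_rng(42)
trait = rng.normal(size=200)
X = np.clip(np.round(3 + trait[:, None] + rng.normal(scale=0.7, size=(200, 6))), 1, 5)

alpha = cronbach_alpha(X)
r_it = corrected_item_total(X)
keep = r_it > 0.5   # retention rule used in the study
```

Using the item-rest correlation (item versus the sum of the other items) rather than the raw item-total correlation avoids inflating the coefficient by correlating each item with a total that contains it.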

Data analysis

The descriptive results of categorical variables, such as respondents’ sex and clinical training system in medical school, are expressed as the number and percentage of each category. Continuous variables, such as age and perceptions of clinical training, are expressed as the mean ± standard deviation (SD). For univariate analysis, a two-sample t test was used to assess differences in mean perceptions of clinical training across the levels of categorical variables. The Pearson correlation coefficient was used to assess associations between continuous variables. Stepwise multiple regression analysis was used to identify predictors of medical students’ preparedness for PGY residency training; the independent variables were sex, age, clinical training system in medical school, and respondents’ perceptions of curriculum organization. p < 0.05 was considered significant. All statistical analyses were performed using SPSS version 20.0 (SPSS, Chicago, IL, USA).
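The univariate procedures described above (two-sample t test for categorical groupings, Pearson correlation for continuous variables) map directly onto SciPy. The cohort scores and scale totals below are invented for illustration only and do not reproduce the study’s results.

```python
import numpy as np
from scipy import stats

# Hypothetical Scale B totals for two graduate cohorts (illustration only)
seven_year = np.array([34, 36, 33, 35, 37, 32, 36, 35, 34, 33], dtype=float)
six_year   = np.array([30, 29, 32, 31, 28, 30, 33, 29, 31, 30], dtype=float)

# Two-sample t test comparing the cohort means
t_stat, p_val = stats.ttest_ind(seven_year, six_year)

# Pearson correlation between synthetic Scale A and Scale B totals
rng = np.random.default_rng(1)
scale_a = rng.normal(57, 7, size=197)
scale_b = 0.5 * scale_a + rng.normal(0, 6, size=197)
r, p_r = stats.pearsonr(scale_a, scale_b)
```

`stats.ttest_ind` assumes equal variances by default; passing `equal_var=False` would give Welch’s t test instead.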


Results

Descriptive information

The descriptive results are presented in Table 2. In total, 197 of the 315 graduates completed the survey (response rate: 62.5%). The respondents’ mean age was 25.08 years (SD = 1.58); 60.4% of them were men, and 54.8% had graduated from the new 6-year clinical training program.

Table 2 Descriptive information of respondents’ demographic data (N = 197)

Results of item analysis

Table 3 presents the results of the item analysis of the two investigated scales. The Cronbach’s alpha of Scale A (Perceptions Regarding Curriculum Organization) was 0.945, and all 18 statements had corrected item-total correlations of > 0.5; these items were retained for further PCA. One of the 12 statements in Scale B (Preparedness for PGY Residency Training), “Based on the medical training I have received so far, I am confident in practice on medical knowledge” (Item 20), had a corrected item-total correlation of 0.442 (< 0.5) and was thus deleted, which improved the Cronbach’s alpha from 0.912 to 0.913.

Table 3 Item analysis for the Perceptions Regarding Curriculum Organization and Preparedness for PGY Residency Training scales

Results of PCA

PCA with varimax rotation was conducted separately for both investigated scales. Table 4 presents the factor loadings for each item. For Scale A, three components satisfied the Kaiser criterion of an eigenvalue > 1 (7.972, 1.220, and 1.017) and together accounted for 68.06% of the variance; sampling adequacy was confirmed (Kaiser–Meyer–Olkin [KMO] = 0.906; Bartlett’s test of sphericity, p < 0.001). After varimax rotation, the eigenvalues (rotated sums of squared loadings) of the three components were 3.687, 3.598, and 2.924 (Cronbach’s alpha: 0.876, 0.902, and 0.851, respectively), and the components accounted for 24.580%, 23.989%, and 19.492% of the variance, respectively. These three components were labeled A1 “perceived sufficiency of medical training,” A2 “perceived usefulness of medical training,” and A3 “perceived appropriateness of the educational setting.” Three items (items 10, 13, and 2) were subsequently deleted because their cross-factor loadings were > 0.5 on two or more components.
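The retention and rotation steps reported here (eigenvalue > 1 retention followed by varimax rotation) can be illustrated on synthetic data with a clear two-component structure. The varimax routine below is the standard textbook algorithm, not the authors’ SPSS implementation, and the data are invented.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a p x k loading matrix (standard algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var_sum = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() < var_sum * (1 + tol):
            break
        var_sum = s.sum()
    return loadings @ rotation

# Synthetic responses with a clear two-component structure: 6 items, 2 latent factors
rng = np.random.default_rng(7)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([f1 + rng.normal(scale=0.4, size=n) for _ in range(3)] +
                    [f2 + rng.normal(scale=0.4, size=n) for _ in range(3)])

R = np.corrcoef(X, rowvar=False)              # 6 x 6 correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_keep = int((eigvals > 1).sum())             # Kaiser criterion: eigenvalue > 1
loadings = eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep])
rotated = varimax(loadings)
```

Because the rotation is orthogonal, each item’s communality (row sum of squared loadings) is unchanged; only the distribution of variance across components is simplified.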

Table 4 Factor loading for the contributing items in the questionnaire

For Scale B, the 11 remaining items were subjected to further PCA. The items loaded onto two components that together accounted for 70.54% of the variance (KMO = 0.876; Bartlett’s test of sphericity, p < 0.001). After varimax rotation, the eigenvalues of the two components were 3.962 and 3.092 (Cronbach’s alpha: 0.904 and 0.881, respectively), and the two components, B1 (“unworried about PGY residency training”) and B2 (“confidence in practice”), accounted for 39.617% and 30.920% of the variance, respectively. One item (Item 19) was deleted because its cross-factor loading was > 0.5 on both components.

Results of data analysis

Table 5 presents the results of univariate analyses using the t test for categorical variables (sex and clinical training system) and Pearson’s correlation coefficient for continuous variables. Male graduates had significantly higher scores on both Scale A (58.78 vs. 55.67, p = 0.010) and Scale B (33.52 vs. 30.43, p = 0.001). The graduates from the new 6-year clinical training system had a significantly lower score on Scale B (30.63 vs. 34.36, p < 0.001) but not on Scale A. Age was not significantly correlated with scores on either subscale. The respondents’ Scale A scores demonstrated a significant positive correlation with their Scale B scores (Pearson r = 0.490, p < 0.001).

Table 5 Univariate analysis of the scores on Scales A and B

Table 6 presents the results of stepwise multiple regressions of medical students’ preparedness for PGY residency training. In the stepwise regression model for graduates’ self-confidence (adjusted R2 = 0.469, p < 0.001), four factors were included: factor A1, “perceived sufficiency of medical training” (R2 = 0.411), was entered first, followed sequentially by factor A2, “perceived usefulness of medical training” (ΔR2 = 0.032), sex (ΔR2 = 0.021), and curricular setting (ΔR2 = 0.016). For graduates’ unworried state, two factors were included in the final stepwise regression model (adjusted R2 = 0.205, p < 0.001): “perceived sufficiency of medical training” (R2 = 0.157), followed by curricular setting (ΔR2 = 0.056).
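A stepwise regression of the forward-selection kind reported in Table 6 (entering, at each step, the predictor that adds the most explained variance) can be sketched as follows. The predictor names loosely mirror factors A1 and A2, but the data, coefficients, and entry threshold are hypothetical.

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: add the predictor with the largest R-squared gain."""
    selected, remaining, current_r2 = [], list(range(X.shape[1])), 0.0
    while remaining:
        gains = {j: r_squared(X[:, selected + [j]], y) - current_r2
                 for j in remaining}
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:
            break
        selected.append(best)
        remaining.remove(best)
        current_r2 += gains[best]
    return selected, current_r2

# Synthetic analogue: preparedness score driven mainly by two curriculum factors
rng = np.random.default_rng(0)
n = 197
sufficiency = rng.normal(size=n)   # hypothetical analogue of factor A1
usefulness = rng.normal(size=n)    # hypothetical analogue of factor A2
age = rng.normal(size=n)           # irrelevant predictor
X = np.column_stack([sufficiency, usefulness, age])
y = 2.0 * sufficiency + 0.6 * usefulness + rng.normal(scale=0.8, size=n)

selected, final_r2 = forward_stepwise(X, y)
```

SPSS’s stepwise procedure uses F-to-enter/F-to-remove tests rather than a fixed ΔR2 threshold, but the entry order it produces follows the same largest-gain-first logic shown here.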

Table 6 Stepwise regressions of medical students’ perceptions of preparedness for PGY residency training

Discussion and conclusion

Studies on medical students’ perceptions of their undergraduate education have focused on students’ evaluations of curriculum quality and their readiness for future clinical practice [5, 14, 41,42,43,44,45]. In the present study, we focused on these two indispensable domains to compare the effectiveness of a 7-year versus a 6-year training program. We investigated whether the curriculum reform resulted in distinct evaluations by students from the two academic training programs. The PCA confirmed the validity of our 25-item questionnaire.

Five items were excluded from the analysis. Two items were removed because participants could not distinguish between having confidence in medical knowledge (item 20) and having sufficient medical knowledge (item 2). The respondents also struggled to answer questions on the sufficiency of interpersonal communication training (item 10) and of patient care teaching (items 13 and 19), given their limited experience in interpersonal practice and primary patient care; these three items were therefore removed.

The Pearson correlation analysis results also indicated that both main constructs—the perceptions regarding curriculum organization and preparedness for PGY residency training—were moderately correlated with each other.

The t-test results revealed that our graduates from the 6-year program had significantly lower scores for preparedness for PGY residency training than their counterparts who graduated from the 7-year program. Because of the curriculum reform, the number of compulsory credits in the medical school where the survey was conducted was reduced from 219 to 199, divided among several clinical learning courses. According to the implementation guidelines for the clinical placement of medical students in the new medical curriculum, the daily working hours of medical clerks may not exceed 12 h [46]; this requirement was absent in the previous 7-year curriculum. Clerks in the 6-year program can have a maximum of three patients in their primary care at each rotated department, whereas clerks in the 7-year program could have up to 10 primary care patients. These protective measures for clinical placement are progressive in terms of social justice and enable clerks to appreciate every aspect of clinical learning. Our results indicated no significant difference in perceptions regarding curriculum organization between the students of the 6-year and 7-year programs; however, those in the 7-year program reported greater preparedness for residency training. This disparity may be explained by the revised Bloom’s taxonomy proposed by Anderson, which distinguishes four knowledge levels, namely factual, conceptual, procedural, and metacognitive knowledge, the last being the highest level [47]. Students of the 6-year program lacked the 1-year internship, which mostly involves “learning by doing” [48] in the workplace, and thus had a shorter clinical learning period; therefore, students in the 7-year program were able to develop greater confidence in their clinical competency [21].
Other potential factors behind the lower ratings from the 6-year cohort include the challenges associated with transitioning to a new curriculum and the teaching resources available; without longer-term follow-up data or in-depth qualitative interviews, these factors cannot be disentangled.

This study had some limitations. First, we did not include objective measures of academic performance, so we could not investigate whether the differences between the two curricula also resulted in performance disparities. Vokes et al. measured the rate of honor grades in clerkships at different medical schools in the United States to examine the utility of clerkship grades in evaluating orthopedic surgery residency applicants and found that no standardized method for grading medical students during clinical clerkships exists, resulting in a high degree of interinstitutional variability [49]; surgery clerkship grades are therefore unreliable for comparing orthopedic surgery residency applicants from different medical schools [49]. Similarly, medical educators in Taiwan currently lack the means to specifically identify the causes of differing perceptions or the areas needing improvement. Future studies should investigate whether the same situation applies to Taiwan.

Second, Newton et al. used factor analysis to explore nursing students’ perceptions of factors related to the clinical learning environment [43]; the results revealed that educational strategies should be developed to sustain a student-centered approach in clinical practice [50]. Therefore, a more comprehensive theoretical framework with comprehensive descriptive items that serves as the basis of the standardized measure of applicant evaluation might be helpful in the future.

Third, the results of the independent t test indicated that the male graduates had a significantly higher score on both scales than did the female graduates. The results of stepwise regression also revealed that sex difference significantly correlated with graduates’ readiness for PGY residency training. This might be due to a significant gap between real and perceived preparedness in terms of knowledge and skills among female students. A previous Canadian study indicated that female students’ self-assessment scores were significantly lower than the scores they received from their peers, whereas no significant difference was observed between self-assessment and peer assessment scores for male examinees [51]. American female medical students also reported more anxiety and less self-confidence in their abilities than their male counterparts [52]. Therefore, anxious emotions may also reduce the perceived self-confidence of female students [51, 53]. In another study, female physicians had significantly lower self-reported self-efficacy than their male counterparts [54], negatively affecting the willingness to take on leadership roles in hospitals [33]. Therefore, to avoid sex disparities in career development, female medical students’ learning environment and conditions merit further investigation.

Finally, our cross-sectional questionnaire survey results reflect only the subjective perceptions of medical undergraduates regarding the curriculum and their preparation for residency training before and after the medical reform. Further quantitative studies with repeated measurements of detailed survey questions or qualitative studies with open-ended interview questions would more comprehensively elucidate students’ perceptions. Because our study was conducted during the transition between the two curricula, the graduates of both undergraduate programs participated in PGY residency training simultaneously. Close monitoring through our ongoing follow-up study is necessary to assess graduates’ objective academic outcomes and clinical performance in the workplace.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available due to protection of participant confidentiality. To request the data, please contact the corresponding author.



Abbreviations

ACGME: Accreditation Council for Graduate Medical Education

PCA: Principal component analysis

SD: Standard deviation

TPB: Theory of planned behavior

  1. Kirkpatrick JD, Kirkpatrick WK. Kirkpatrick’s four levels of training evaluation: Association for Talent Development; 2016.

  2. Öhman E, Alinaghizadeh H, Kaila P, Hult H, Nilsson GH, Salminen H. Adaptation and validation of the instrument Clinical Learning Environment and Supervision for medical students in primary health care. BMC Med Educ. 2016;16(1):308.

    Article  Google Scholar 

  3. Soemantri D, Herrera C, Riquelme A. Measuring the educational environment in health professions studies: a systematic review. Med Teach. 2010;32(12):947–52.

    Article  Google Scholar 

  4. Strand P, Sjöborg K, Stalmeijer R, Wichmann-Hansen G, Jakobsson U, Edgren G. Development and psychometric evaluation of the undergraduate clinical education environment measure (UCEEM). Med Teach. 2013;35(12):1014–26.

    Article  Google Scholar 

  5. Roff S, McAleer S, Harden RM, Al-Qahtani M, Ahmed AU, Deza H, Groenen G, Primparyon P. Development and validation of the Dundee ready education environment measure (DREEM). Med Teach. 1997;19(4):295–9.

    Article  Google Scholar 

  6. Jakobsson U, Danielsen N, Edgren G. Psychometric evaluation of the Dundee ready educational environment measure: Swedish version. Med Teach. 2011;33(5):e267–74.

    Article  Google Scholar 

  7. Jalili M, Mirzazadeh A, Azarpira A. A survey of medical students’ perceptions of the quality of their medical education upon graduation. Annals Academy of Medicine Singapore. 2008;37(12):1012.

    Google Scholar 

  8. Lockwood JH, Sabharwal RK, Danoff D, Whitcomb ME. Quality improvement in medical students’ education: the AAMC medical school graduation questionnaire. Med Educ. 2004;38(3):234–6.

    Article  Google Scholar 

  9. Pugnaire MP, Purwono U, Zanetti ML, Carlin MM. Tracking the longitudinal stability of medical students’ perceptions using the AAMC graduation questionnaire and serial evaluation surveys. Acad Med. 2004;79(10):S32–5.

    Article  Google Scholar 

  10. Fraser BJ. Classroom environment instruments: Development, validity and applications. Learning Environ Res. 1998;1(1):7–34.

    Article  Google Scholar 

  11. Fraser BJ, Treagust DF, Dennis NC. Development of an instrument for assessing classroom psychosocial environment at universities and colleges. Stud High Educ. 1986;11(1):43–54.

    Article  Google Scholar 

  12. Salih KM, Idris ME, Elfaki OA, Osman NM, Nour SM, Elsidig HA, Toam RM, Elfakey WE. MBBS teaching program, according to DREEM in College of Medicine, University of Bahri, Khartoum, Sudan. Adv Med Educ Pract. 2018;9:617–22.

    Article  Google Scholar 

  13. Imran N, Khalid F, Haider, II, Jawaid M, Irfan M, Mahmood A, IjlalHaider M, Sami ud d. Student’s perceptions of educational environment across multiple undergraduate medical institutions in Pakistan using DREEM inventory. JPMA The Journal of the Pakistan Medical Association 2015, 65(1):24–28.

  14. Chan WP, Wu TY, Hsieh MS, Chou TY, Wong CS, Fang JT, Chang NC, Hong CY, Tzeng CR. Students’ view upon graduation: a survey of medical education in Taiwan. BMC Med Educ. 2012;12:127.

    Article  Google Scholar 

  15. Chou JY, Chiu CH, Lai E, Tsai D, Tzeng CR. Medical education in Taiwan. Med Teach. 2012;34(3):187–91.

    Article  Google Scholar 

  16. Chu TS, Weed HG, Yang PC. Recommendations for medical education in Taiwan. J Formos Med Assoc. 2009;108(11):830–3.

    Article  Google Scholar 

  17. Lo WL, Lin YG, Pan YJ, Wu YJ, Hsieh MC. Faculty development program for general medicine in Taiwan: Past, present, and future. Tzu Chi Medical Journal. 2014;26(2):64–7.

    Article  Google Scholar 

  18. Boelen C. Medical education reform: the need for global action. Acad Med. 1992;67(11):745–9.

    Article  Google Scholar 

  19. Lam TP, Lam YY. Medical education reform: the Asian experience. Acad Med. 2009;84(9):1313–7.

    Article  Google Scholar 

  20. Lyons OT, Smith C, Winston JS, Geranmayeh F, Behjati S, Kingston O, Pollara G. Impact of UK academic foundation programmes on aspirations to pursue a career in academia. Med Educ. 2010;44(10):996–1005.

    Article  Google Scholar 

  21. Cheng WC, Chen TY, Lee MS. Fill the gap between traditional and new era: The medical educational reform in Taiwan. Ci Ji Yi Xue Za Zhi. 2019;31(4):211–6.

    Google Scholar 

  22. Wu JW, Cheng HM, Huang SS, Liang JF, Huang CC, Yang LY, Shulruf B, Yang YY, Chen CH, Hou MC, et al. Comparison of OSCE performance between 6- and 7-year medical school curricula in Taiwan. BMC Med Educ. 2022;22(1):15.

    Article  Google Scholar 

  23. Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach. 2007;29(7):648–54.

    Article  Google Scholar 

  24. Roff S. The Dundee Ready Educational Environment Measure (DREEM)–a generic instrument for measuring students’ perceptions of undergraduate health professions curricula. Med Teach. 2005;27(4):322–5.

    Article  Google Scholar 

  25. Fishbein M, Ajzen I. Predicting and changing behavior: The reasoned action approach: Taylor & Francis; 2011.

  26. Godin G, Kok G. The theory of planned behavior: a review of its applications to health-related behaviors. Am J Health Promot. 1996;11(2):87–98.

    Article  Google Scholar 

  27. Mtenga SM, Exavery A, Kakoko D, Geubbels E. Social cognitive determinants of HIV voluntary counselling and testing uptake among married individuals in Dar es Salaam Tanzania: Theory of Planned Behaviour (TPB). BMC Public Health. 2015;15:213.

    Article  Google Scholar 

  28. Hadadgar A, Changiz T, Masiello I, Dehghani Z, Mirshahzadeh N, Zary N. Applicability of the theory of planned behavior in explaining the general practitioners eLearning use in continuing medical education. BMC Med Educ. 2016;16(1):215.

    Article  Google Scholar 

  29. Archer R, Elder W, Hustedde C, Milam A, Joyce J. The theory of planned behaviour in medical education: a model for integrating professionalism training. Med Educ. 2008;42(8):771–7.

    Article  Google Scholar 

  30. Tian J, Atkinson NL, Portnoy B, Lowitt NR. The development of a theory-based instrument to evaluate the effectiveness of continuing medical education. Acad Med. 2010;85(9):1518–25.

    Article  Google Scholar 

  31. de Jonge L, Mesters I, Govaerts MJB, Timmerman AA, Muris JWM, Kramer AWM, van der Vleuten CPM. Supervisors’ intention to observe clinical task performance: an exploratory study using the theory of planned behaviour during postgraduate medical training. BMC Med Educ. 2020;20(1):134.

    Article  Google Scholar 

  32. Doménech-Betoret F, Gómez-Artiga A, Abellán-Roselló L. The Educational Situation Quality Model: A New Tool to Explain and Improve Academic Achievement and Course Satisfaction. Front Psychol. 2019;10:1692.

    Article  Google Scholar 

  33. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

    Article  Google Scholar 

  34. Salles A. Self-Efficacy as a Measure of Confidence. JAMA Surg. 2017;152(5):506–7.

    Article  Google Scholar 

  35. Dornan T, Boshuizen H, Cordingley L, Hider S, Hadfield J, Scherpbier A. Evaluation of self-directed clinical education: validation of an instrument. Med Educ. 2004;38(6):670–8.

  36. Preacher KJ. Extreme groups designs. In: The Encyclopedia of Clinical Psychology; 2014. p. 1–4.

  37. Sharma B. A focus on reliability in developmental research through Cronbach’s Alpha among medical, dental and paramedical professionals. Asian Pacific Journal of Health Sciences. 2016;3(4):271–8.

  38. Shelby LB. Beyond Cronbach’s alpha: considering confirmatory factor analysis and segmentation. Hum Dimens Wildl. 2011;16(2):142–8.

  39. Kocak C, Egrioglu E, Yolcu U, Aladag CH. Computing Cronbach alpha reliability coefficient for fuzzy survey data. American Journal of Intelligent Systems. 2014;4(5):204–13.

  40. Biggs J, Kember D, Leung DY. The revised two-factor Study Process Questionnaire: R-SPQ-2F. Br J Educ Psychol. 2001;71(Pt 1):133–49.

  41. Chan DS. Validation of the clinical learning environment inventory. West J Nurs Res. 2003;25(5):519–32.

  42. Boor K, Scheele F, Van Der Vleuten CP, Teunissen PW, Den Breejen EM, Scherpbier AJ. How undergraduate clinical learning climates differ: a multi-method case study. Med Educ. 2008;42(10):1029–36.

  43. Newton JM, Jolly BC, Ockerby CM, Cross WM. Clinical learning environment inventory: factor analysis. J Adv Nurs. 2010;66(6):1371–81.

  44. Dornan T, Muijtjens A, Graham J, Scherpbier A, Boshuizen H. Manchester Clinical Placement Index (MCPI): conditions for medical students’ learning in hospital and community placements. Adv Health Sci Educ. 2012;17(5):703–16.

  45. Pai PG, Menezes V, Srikanth AMS, Shenoy JP. Medical students’ perception of their educational environment. J Clin Diagn Res. 2014;8(1):103.

  46. MOE. Implementation guidelines of clinical placements for medical students in the new medical curriculum (No. 1040050919). Taipei: Ministry of Education, Republic of China (Taiwan); 2015.

  47. Wilson LO. Anderson and Krathwohl–Bloom’s taxonomy revised. Understanding the New Version of Bloom’s Taxonomy. 2016.

  48. Williams MK. John Dewey in the 21st century. Journal of Inquiry and Action in Education. 2017;9(1):7.

  49. Vokes J, Greenstein A, Carmody E, Gorczyca JT. The current status of medical school clerkship grades in residency applicants. J Grad Med Educ. 2020;12(2):145–9.

  50. Newton JM, Jolly BC, Ockerby CM, Cross WM. Student centredness in clinical learning: the influence of the clinical teacher. J Adv Nurs. 2012;68(10):2331–40.

  51. Madrazo L, Lee CB, McConnell M, Khamisa K. Self-assessment differences between genders in a low-stakes objective structured clinical examination (OSCE). BMC Res Notes. 2018;11(1):393.

  52. Blanch DC, Hall JA, Roter DL, Frankel RM. Medical student gender and issues of confidence. Patient Educ Couns. 2008;72(3):374–81.

  53. Wu JH, Du JK, Lee CY, Lee HE, Tsai TC. Effects of anxiety on dental students’ noncognitive performance in their first objective structured clinical examination. Kaohsiung J Med Sci. 2020;36(10):850–6.

  54. Ziegler S, Zimmermann T, Krause-Solberg L, Scherer M, van den Bussche H. Male and female residents in postgraduate medical education: a gender comparative analysis of differences in career perspectives and their conditions in Germany. GMS J Med Educ. 2017;34(5):Doc53.


Acknowledgements
We are grateful to Ms. Vera Tang for the language review on an earlier version of this manuscript.

Role of the funder/sponsor

The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.


Funding
This study was supported by research grants from Taipei Medical University and Taipei Medical University Hospital (108TMU-TMUH-27, 110TMU-TMUH-10) and from the Ministry of Science and Technology (110–2628-H-038–002-MY3).

Author information

Authors and Affiliations



Contributions
Concept and design: JC Wu and KP Tang; acquisition, analysis, or interpretation of data: YK Lin and YT Yang; drafting of the manuscript: JC Wu, KP Tang, and WH Hou; critical revision of the manuscript for important intellectual content: KP Tang, JS Chu, and WH Hou; statistical analysis: YK Lin and YHE Hsu; funding acquisition: JC Wu, YK Lin, and WH Hou. The authors read and approved the final manuscript.

Corresponding authors

Correspondence to Jan-Show Chu, Yen-Kuang Lin or Wen-Hsuan Hou.

Ethics declarations

Ethics approval and consent to participate

The study was conducted in accordance with the Declaration of Helsinki, and it was approved by the Institutional Review Board of Taipei Medical University (TMU-JIRB No.: N201904068). Written informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Wu, JC., Tang, KP., Hsu, YH.E. et al. Medical undergraduates’ self-evaluation: before and after curriculum reform. BMC Med Educ 22, 296 (2022).
