The impact of external accreditation on the program’s quality
The journey of developing accreditation standards is continuous: the focus on improving program quality has become holistic and multidimensional, encompassing an acceptable national or international level of basic medical education, the need for a student-centered curriculum, qualified teaching staff, a healthy learning environment, and meeting society's needs, with the ultimate objective of improving patient care [7, 28, 29]. This drive toward assuring and improving the quality of medical education via external accreditation is further inspired by the World Federation for Medical Education's (WFME) efforts for the international standardization of medical education and the recognition of accreditors ("accreditation of accreditors") [28, 29]. WFME initiated this international standardization to guarantee an acceptable quality of medical education across medical schools [29]. Despite these international efforts, the percentage of countries whose undergraduate medical programs are subject to a national accreditation process remains suboptimal, and those processes vary widely [30].
The role of external accreditation in program improvement can be viewed from different perspectives: while regulators or commissioning agencies view it as an essential tool to meet standards and assure a pre-set level of program quality, other stakeholders may view the accreditation process as a drain on resources and effort, with an unfavorable cost-effectiveness balance. Although accreditation may improve a program's administration and organization, its direct or indirect positive impact on students remains questionable [7, 10, 31,32,33]. The reason for this controversy is the paucity of research exploring such a potential impact [34]. For instance, in a scoping review, Tackett et al. [33] investigated the evidence base of undergraduate medical school accreditation and found limited evidence to support existing accreditation practices or to guide the creation or improvement of accreditation systems; only 30 cross-sectional or retrospective studies were found [33]. Among their findings, the Middle East was one of the regions with the least published research on medical school accreditation up to 2019, which indicates the need for further evidence of accreditation's impact on undergraduate medical programs in our region [33].
Moreover, upon further reflection, another reason for the paucity of research on the relationship between accreditation and the undergraduate medical program is the variable practice of accreditation processes despite common themes among accreditation standards. This variability has resulted in the lack of an agreed-upon research framework that could be adopted internationally with reasonable generalizability across countries or regions [30]. These differing viewpoints have also led most publications on the impact of accreditation to rely on a single indicator, such as document analysis or participants' perceptions of accreditation. For instance, linking accreditation with students' performance in exams is a relatively widely adopted approach [15].
However, such a cross-sectional, single-indicator approach is inherently limited compared with a longitudinal (pre-post) assessment or an approach that considers reproducibility over more than one accreditation cycle [2]. Blouin et al. [10] sought to generate such a framework and to explore potential indicators of accreditation's effectiveness, value, and impact on medical education using a qualitative research design; they surveyed 13 Canadian medical schools that participated in national accreditation [35]. The study suggested general framework themes with direct impact and others with indirect impact. Themes with direct impact include program processes, quality assurance, and the continuous improvement of program quality.
Furthermore, four other themes were considered indirect indicators of accreditation effectiveness: student performance, stakeholder satisfaction, stakeholder expectations, and engagement. Considering this framework, our study focused on assessing scaled students' satisfaction as an indirect measure of accreditation's impact on the medical program. We also adopted a pre-post longitudinal research design over two accreditation cycles, which is considered the most rigorous design for impact evaluation when experimental with-without comparison designs are not feasible [32, 36]. The before-after comparison is based on data collected at baseline (pre), at an intermediate point (during), and after (post) the accreditation.
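As a rough illustration of this before-after grouping, annual mean satisfaction scores can be binned into pre/during/post phases relative to an accreditation cycle and then compared. A minimal sketch follows; the years, phase labels, and score values are hypothetical and do not reproduce the study's data:

```python
# Sketch of a pre/during/post comparison of annual mean satisfaction
# scores. All year labels and scores below are hypothetical examples,
# not the study's actual survey data.
from statistics import mean

# Each academic year is tagged with its phase relative to one
# (hypothetical) accreditation cycle, plus its mean satisfaction score.
annual_scores = {
    2011: ("pre", 3.4),
    2012: ("during", 3.6),
    2013: ("post", 3.8),
    2014: ("post", 3.7),
}

def phase_means(scores):
    """Average the annual mean scores within each accreditation phase."""
    groups = {}
    for year, (phase, score) in scores.items():
        groups.setdefault(phase, []).append(score)
    return {phase: mean(vals) for phase, vals in groups.items()}
```

The pre, during, and post averages produced this way can then be compared, as in the study's before-after analysis.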
Impact of accreditation on students’ satisfaction
In this study, both cycles were associated with an increased students' satisfaction score under the pre-post approach. Although the absolute difference between the scores might be perceived as small, it is important to consider the variability in students' scores: not every student scored at the mean, and this context helps in understanding the scale of the change. It is equally important to note that a Likert scale is a calculated index with no intrinsic meaning, unlike an outcome with meaningful intrinsic values such as a percentage of survival [26]. Therefore, we opted to calculate Cohen's d to demonstrate the meaningfulness and magnitude of the change beyond the absolute difference and statistical significance [27]. The preparatory-phase activities and navigation through the self-study assessment, while challenging the program's competencies, are essential triggers for the quality improvement practices associated with accreditation. The reinforcement of an internal quality improvement system is another major driving force for accreditation to have a meaningful impact [10, 28]. The difference in sustained improvement after accreditation between the two cycles is interesting: while the improvement in students' satisfaction was sustained longer after the first cycle, it was not apparent after the second. However, the short follow-up of one year after the second cycle, compared with three years after the first, makes interpreting this finding difficult and relatively premature. The thematic analysis of the survey revealed interesting results: the positive impact of accreditation on students' satisfaction with course conduct and practical/clinical experience was evident and reproducible over both cycles.
Thus, our study reinforces the earlier study by Al Mohaimeed et al. [12], who, following their first cycle of NCAAA accreditation, described a positive experience with accreditation in educational processes, administration, and curriculum implementation.
Moreover, in our study, the second cycle was associated with a significant impact on most of the survey themes compared with the first. Upon reviewing the self-study reports of both accreditation cycles, this could be related to the restructuring of some of the college's facilities, significant enhancement of student support services, and the temporal relationship with the college's building expansion during the second accreditation. The review of the self-study report and preparation documents also revealed that the second cycle was accompanied by higher engagement of teaching staff through the creation of departmental and college-wide standing committees focused on academic quality and continuous development. Another interesting aspect of the association between students' satisfaction and accreditation in this study is the sustained high satisfaction with teaching staff performance over the study period, which was statistically significant and carried the highest effect size during the second cycle. Although this could be multifactorial, the teaching staff's engagement during the preparation process, which may run over an average of two to three years, could play an essential role in this respect. Furthermore, broader awareness and preparation campaigns among teaching staff were carried out during the second cycle to emphasize a culture of academic quality improvement. Reframing the teaching staff's perspective on external accreditation results in higher acceptance of accreditation as an ongoing improvement tool and strengthens the internal quality improvement system. In a recent qualitative study following an NCAAA accreditation cycle, Alrebish et al. [2] elicited an essential theme of the accreditation experience related to the perspective toward accreditation and its impact on the sustainability of quality improvement in undergraduate medical education.
For instance, viewing accreditation as an external audit, a matter of whether the program passes the exam or not, is less likely to result in a sustainable positive impact on internal quality improvement practices [2].
Viability and utilization of students’ satisfaction as a quality tool
There is no doubt that the utilization of student satisfaction as a quality improvement tool is widely debated. This ongoing debate has generated substantial research on students' evaluation of teaching, with a history dating back to the 1950s [16, 18, 25, 37,38,39,40,41,42,43,44]. For instance, a special volume of New Directions for Institutional Research devoted to this debate suggested that the preponderance of evidence supports the validity of students' evaluation of teaching [38, 40, 42]. Many factors contribute to this controversy, which swings like a pendulum between underrating and overrating its validity. From the faculty's point of view, student satisfaction surveys might be criticized for variable student attitudes; the confounding effect of students' performance; low response rates; the reliability and validity of the survey as an evaluation tool for instruction; vulnerability to recall bias or to bias due to the instructor's gender, personality, or ethnic background; and technical aspects of data collection, analysis, and the construction of survey items [39, 45,46,47,48].
Moreover, alongside this negative perception of students' evaluation, students may view the survey as a futile effort and a burden rather than a way to improve course instruction, particularly when the quality improvement loop is not closed appropriately. On the flip side, authorities tend to overrate students' evaluations and view them as a truly objective measure; they may use them, alone or with other measures, as a summative assessment rather than a formative tool for course instruction and for decisions related to instructors' hiring or promotion [16]. These misperceptions of student satisfaction by different stakeholders are likely the result of its misuse or misinterpretation, and they can trigger a vicious circle of mistrust and resistance among program stakeholders. For instance, faculty tend to resist the notion of students being empowered to evaluate them, whereas reframing student evaluation as formative input or feedback to improve the student experience and course instruction can lead to higher acceptance among teaching staff. Similarly, authorities' misinterpretation of student satisfaction as a surrogate marker for learning effectiveness during course evaluation needs to be reframed by separating the two issues and recognizing that student learning does not equal student experience or satisfaction [35].
The notion that student learning does not equal student satisfaction should not undermine the student experience and its vital role in the learning environment. The two complement each other: students will be more satisfied when they learn better, and they will learn more when they are highly satisfied. There is currently greater emphasis on keeping end-user needs or customer experience at the center of every professional business model or accreditation [29]. To summarize this debate, student evaluation is, and will likely remain, an essential component of teaching and learning quality improvement; however, appropriate interpretation and wise use are of paramount importance for its positive impact. In this study, we found a clear association between the timing of accreditation and an increase in student satisfaction scores when comparing pre- and post-accreditation, demonstrating that accreditation positively impacted students' satisfaction over this 10-year range of data. Although this positive correlation is difficult to label as causal, the evident temporal relationship across two cycles suggests a clear direct or indirect impact of accreditation on students' satisfaction. This impact highlights a very interesting aspect of accreditation's effect on the medical program: it reflects the self-reported perception of the emotional dimension and its interaction with the learning environment, which is not easily measured otherwise and is infrequently considered in longitudinal accreditation research [34].
This study also illustrates that there was a drop in students' satisfaction scores between accreditation cycles. Although this drop was relative and could be within an acceptable range, it illustrates the difficulty of maintaining the momentum associated with accreditation. Thus, there is a need to enhance the continuous internal quality improvement system to fill this gap and bridge consecutive accreditation cycles. Adopting student satisfaction as an essential component of this internal quality improvement system can play an important role in developing a timely, well-integrated quality system that can sustain program improvement longitudinally. The relatively small magnitude and narrow range of change in students' satisfaction scores over the study period should be interpreted with caution, as each year's value reflects the average of a large pool of students' responses across all program courses.
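As noted earlier, Cohen's d was used to express the magnitude of the satisfaction change beyond the absolute score difference. A minimal sketch of that calculation follows, assuming the common pooled-standard-deviation form of Cohen's d; the score samples are hypothetical and do not reproduce the study's survey data:

```python
# Illustrative Cohen's d for a pre-post comparison of satisfaction
# scores (pooled standard deviation). Score samples are hypothetical.
import statistics

def cohens_d(pre, post):
    """Standardized mean difference between two independent samples."""
    n1, n2 = len(pre), len(post)
    s1, s2 = statistics.stdev(pre), statistics.stdev(post)
    # Pooled standard deviation across the two samples
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd

# Hypothetical 5-point Likert means for a pre- and a post-accreditation cohort
pre_scores = [3.2, 3.5, 3.1, 3.8, 3.4, 3.0, 3.6]
post_scores = [3.6, 3.9, 3.4, 4.1, 3.8, 3.5, 3.9]
d = cohens_d(pre_scores, post_scores)  # ≈ 1.38 for these sample values
```

By the conventional benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large), d conveys whether a statistically significant difference is also practically meaningful, which is why it complements the raw score difference here.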
Strengths and limitations
One of the strengths of this study is its responsiveness to needs voiced within the medical education community, nationally and in the international literature, to answer an important question [34, 35]. The outcome measured is hypothesis-driven and in accordance with a previously proposed research framework for exploring accreditation's impact. The pre-post intervention analysis and the longitudinal data collection over a 10-year range, covering two accreditation cycles, add value to this study. Although the dataset is large, it comes from a single institution, which is a relative limitation. The study's generalizability should also be considered with caution, given the national perspective of the NCAAA accreditation standards and the potential effect of cultural differences on students' satisfaction.