
Assessment approaches in undergraduate health professions education: towards the development of feasible assessment approaches for low-resource settings

Abstract

Background

Feasible and effective assessment approaches to measuring competency in health sciences are vital in competency-based education. Educational programmes for health professions in low- and middle-income countries are increasingly adopting competency-based education as a strategy for training health professionals. Importantly, the organisation of assessments and the assessment approaches used must align with the available resources while still ensuring fidelity of implementation. A review of existing assessment approaches, frameworks, models, and methods is essential for the development of feasible and effective assessment approaches in low-resource settings.

Methods

Published literature was sourced from 13 electronic databases. The inclusion criteria were literature published in English between 2000 and 2022 about assessment approaches to measuring competency in health science professions. Specific data relating to the aims of each study, its location, population, research design, assessment approaches (including the outcome of implementing such approaches), frameworks, models, and methods were extracted from the included literature. The data were analysed through a multi-step process that integrated quantitative and qualitative approaches.

Results

Many articles were from the United States and Australia and reported on the development of assessment models, and most included undergraduate medical or nursing students. A variety of models, theories, and frameworks were reported, including the Ideal model, the Predictive Learning Assessment model, the Amalgamated Student Assessment in Practice (ASAP) model, the Leadership Outcome Assessment (LOA) model, the Reporter-Interpreter-Manager-Educator (RIME) framework, the Quarter model, and the TEMM model, which incorporates four assessment methods: the Triple Jump Test, an essay incorporating critical thinking questions, a Multi-station Integrated Practical Examination, and Multiple Choice Questions. Additional models and frameworks included the Entrustable Professional Activities framework, the System of Assessment framework, the Clinical Reasoning framework (which is embedded in the ASAP model), Earl's Model of Learning, an assessment framework based on the Bayer–Fetzer Kalamazoo Consensus Statement, Bloom's taxonomy, the Canadian Medical Education Directions for Specialists (CanMEDS) Framework, the Accreditation Council for Graduate Medical Education (ACGME) framework, the Dreyfus Developmental Framework, and Miller's Pyramid.

Conclusion

An analysis of the assessment approaches, frameworks, models, and methods applied in health professions education lays the foundation for the development of feasible and effective assessment approaches in low-resource settings that integrate competency-based education.

Trial registration

This study did not involve any clinical intervention. Therefore, trial registration was not required.


Background

Implementing competency-based education in health professions education within low-resource settings presents significant challenges. Competency-based education (CBE) is a model for designing and implementing education that focuses on the desired performance characteristics or outcomes of healthcare professionals [1]. The CBE model is set to improve student competence by predefining educational outcomes, also known as Entrustable Professional Activities, that support learning and teaching activities [2, 3]. The fundamental goal of CBE is to empower students with competencies in communication, collaboration, professionalism, and health advocacy, amongst others, to cultivate unique intellectual, emotional, and physical abilities so that they become successful in their professional lives, which will ultimately culminate in the improvement of patient care [1, 4]. However, the aspirations of CBE are rarely achieved in nursing practice in low-resource settings for a range of reasons, among which are poor understanding of what constitutes CBE, unclearly formulated competencies, poor implementation strategies, inadequately prepared educators, and unfeasible assessment methods [1, 5].

Implementers of the CBE model must ensure the feasible use of robust assessment methods that result in valid and reliable scores, which can be used to provide feedback on learning and to make decisions about student progression. In this paper, an assessment method refers to a technique employed to collect information on the performance of a student at a specific instance, for example, a multiple-choice examination [1, 6]. Conventional assessment approaches focus on end-of-course high-stakes examinations of knowledge and may overlook authentic performance [7, 8]. These assessment approaches are mainly summative, may lack detailed feedback, and may be limited in how they enable the monitoring of growth in student performance. These approaches are also not aligned with the aspirations of CBE in health professions education (HPE) [7, 9]. In this study, an assessment approach refers to a set of principles that guide the implementation of assessment in an educational programme. The aspirations of CBE were spelt out by the competency-based medical education collaborators.

The competency-based medical education (CBME) collaborators established a CBME Core Components framework aimed at increasing fidelity in implementing CBME [5, 10]. There are five core components of the CBME framework: an outcomes competency framework; progressive sequencing of the outcomes; learning experiences that are tailored to the competencies in the CBME; teaching that is tailored to the competencies; and assessment following the programmatic assessment (PA) approach [5, 10]. Accordingly, PA should be embedded in the design and implementation of a CBE programme.

PA takes a longitudinal and holistic approach to assessment and emphasises the learning function of assessment by using an array of assessment methods to provide feedback and make plausible assessment decisions [11,12,13]. The PA approach is systematic since it encompasses planning with deliberate choices of assessment methods, scheduling, and feedback strategies [11, 14]. There are twelve principles that underpin the PA approach, which reiterate that every individual assessment method has limitations and that, if a single method is used alone to gather information on student performance, compromises will have to be made when reaching pass-or-fail decisions [14]. In PA, information on student competence and progression is purposively and continually collected and analysed [3, 11, 15]. The concept of ‘data point’ applied in the PA approach refers to information on student performance that is collected from a single assessment method [8, 12]. Single data points are used in formative assessment to provide constructive feedback, which guides learning [13, 16]. The progressive accumulation of multiple data points becomes the basis for pass-or-fail decisions reached by a group of assessment experts [3]. According to the PA approach, the pass-or-fail decisions should not be reached by individuals, but rather by competence committees [14]. Programmatic assessment is instrumental in the implementation of a valid CBE programme since effective assessment is a strong force behind the authenticity of CBE [17].

The implementation of PA, however, is resource-intensive [12, 18]. Increased fidelity of PA implementation therefore requires a number of structures to be in place. There should be a well-established support structure for educators, a supportive administrative department in the institution, and an established group of experts who will make high-stakes assessment decisions affecting students’ progression in the programme. Additional aspects that should be in place include training workshops for educators, mentors, and preceptors in the clinical area, as well as timely and constructive feedback after each assessment and the use of multiple methods of assessment for the collection of data on student performance [11, 12, 18]. A large volume of data on student performance can be collected, given the multiple methods of assessment used in PA; an information management system therefore needs to be established [18]. Since no set number of data collection points is stipulated, institutions often safeguard saturation of information on student performance by setting a minimum requirement for the number of data points to be collected [19]. The minimum number of data points deemed adequate for saturation differs according to the institutional context [19].
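To make the data-point logic concrete, the sketch below (a minimal illustration, not drawn from the reviewed literature) tallies low-stakes data points per student and checks them against an institution-set minimum before a competence committee decision; the student identifiers, methods, and threshold are hypothetical.

```python
from collections import defaultdict

# Illustrative sketch only: identifiers, methods, and the threshold are hypothetical.
MIN_DATA_POINTS = 3  # institution-specific minimum deemed adequate for saturation

def ready_for_committee(data_points, minimum=MIN_DATA_POINTS):
    """Check, per student, whether enough low-stakes data points have accumulated
    for a competence committee to reach a defensible pass-or-fail decision."""
    counts = defaultdict(int)
    for student_id, _method in data_points:
        counts[student_id] += 1
    return {student: n >= minimum for student, n in counts.items()}

# Three data points for one student, each from a different assessment method
points = [("s001", "MCQ test"), ("s001", "OSCE"), ("s001", "portfolio entry")]
print(ready_for_committee(points))  # {'s001': True}
```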

The success of PA implementation is context-dependent [13]. Instances of successful PA implementation are sparse in low-resource contexts and skewed towards institutions in high-income countries [8, 14]. Canada, the United Kingdom, the United States of America, the Netherlands, New Zealand, and Australia are reported as having successfully implemented the PA approach in their undergraduate medical programmes [14]. One low-resourced country in Africa, Uganda, reports the successful implementation of PA [20]. Various reasons explain the low uptake of PA in low-resource contexts, including resource disparities, poor leadership and institutional governance, and the limited adoption of CBE across settings [3]. The positive skew towards countries in Europe and North America may be due to the development, implementation, and review of PA being led by assessment experts from HPE institutions in those regions [3, 21].

There is also an interplay of various factors that influence the fidelity of implementing PA. On the one hand, factors such as the political, economic, and social context of the institution, poor organisational culture, poor leadership engagement, and poor support structures for educators appear to negatively influence implementation [13, 21]. On the other hand, factors such as robust leadership engagement, financial support, adequate support for educators, adequate human resources, and frequent workshops for educators are reported as quintessential to the successful implementation of this approach [18]. The outcome of this interplay of factors positions the implementation of PA as resource-intensive and thus unachievable for institutions that aspire to implement CBE in resource-limited contexts [12]. Failure by HPE institutions in resource-limited settings to implement the recommended PA approach for authentic CBE programmes has various implications, which have ripple effects on the fidelity of CBE. The implications include the adoption of feasible and conventional approaches to assessment, drifting away from the PA approach (which has a negative effect on the implementation of CBE), false positive results of CBE implementation, curriculum drift, and graduates who are not competent or ready for work in the health system [3].

There is thus a need to develop a defensible and feasible assessment approach that can be implemented in CBE programmes, especially in low-resource contexts. The developed assessment approach should aim to maintain a balance between the inherent characteristics of the CBE model and enabling factors in the educational context. This article focuses on a mapping review, which reports on the assessment approaches, frameworks, models, and methods related to the implementation of assessment in undergraduate HPE to inform the development of an assessment approach for institutions implementing CBE models in low-resource contexts.

Methods

The review question for this study was:

What is known about assessment approaches, frameworks, models, and methods in undergraduate health professions education?

Study design

The mapping review study design used for this research was structured to enable the collection of literature specific to the field of assessment approaches. The aim was to develop a better understanding of the characteristics of the assessment approaches, frameworks, models, and methods used in undergraduate health professions education, and to identify potential gaps in previous research.

The mapping review followed a stepwise approach comprising five steps: searching the literature, screening the literature, data extraction, data analysis, and presentation of results.

Step 1: Searching the literature

Searching the literature involves developing a search strategy that comprises the search string and databases to be searched.

Search string

The search string was determined by integrating keywords and synonyms gleaned from the review question. These were combined using Boolean operators and modifiers. The search string was:

(assess* n2 (model* or framework* or theories or theory)) and (educat* or train*) and ("health profession*" or "health science*" or nurs* or medical or clinical or medicine) and (undergraduate* or baccalaur*) and ti assess*
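As an illustration of how the string above is composed, the following sketch assembles an EBSCOhost-style query from the concept groups in the review question. The grouping and balanced parentheses are an assumption based on the string reported above, not the exact syntax submitted to the databases.

```python
# Illustrative sketch: assembles an EBSCOhost-style query from concept groups.
# The grouping (and balanced parentheses) mirrors the string reported above;
# it is an assumption, not the exact syntax submitted to the databases.
concepts = {
    "assessment": "(assess* n2 (model* or framework* or theories or theory))",
    "education": "(educat* or train*)",
    "health_professions": '("health profession*" or "health science*" or nurs* or medical or clinical or medicine)',
    "level": "(undergraduate* or baccalaur*)",
    "title_limit": "ti assess*",
}

search_string = " and ".join(concepts.values())
print(search_string)
```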

Databases

The search for literature was carried out in May 2022 and covered the period from January 2000 to June 2022. The start date of the early 2000s was chosen due to an increase in the adoption of the CBE approach at that time, which saw new assessment approaches [17]. Ten databases were accessed through the EBSCOhost interface by the first author in collaboration with an information specialist at the university library. Table 1 shows the databases accessed and publication records retrieved from each.

Table 1 Databases used and abstracts retrieved

Step 2: Screening the literature

The search yielded 228 records, which were reduced to 135 by the automatic deduplication process. A further manual deduplication removed 14 records, leaving 121 publication records. The following inclusion and exclusion criteria were then used to screen the remaining 121 publication records.

Inclusion/exclusion criteria

Inclusion criteria

Studies included were peer-reviewed literature published on assessment approaches, strategies, theories, and methods in undergraduate programmes in health professions education.

Exclusion criteria

Studies were excluded if 1) the literature focused on assessment in organisations; 2) the content was about postgraduate education; 3) the assessment was in primary or secondary schools; or 4) the literature did not focus on HPE; reviews were also excluded.

Results

The three authors then screened the 121 records against the inclusion/exclusion criteria based on their titles and abstracts. The authors screened these records independently and were blinded to one another’s screening outcomes until a consensus meeting. Discussions among the authors on the screening outcomes were held, and any discrepancies were resolved. A total of ninety (n = 90) abstracts did not meet the inclusion criteria and were eliminated.

Full-text articles for the remaining thirty-one (n = 31) abstracts were retrieved, read, and screened individually by all three authors. A further seventeen (n = 17) articles that did not meet the inclusion criteria were discarded. The remaining fourteen (n = 14) full-text articles were included in this study, as shown in Fig. 1.

Fig. 1 PRISMA flow diagram of the mapping review
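The screening flow can be summarised with the counts reported above; the short sketch below simply tallies them (the variable names are ours, the numbers are taken from the text).

```python
# Sketch of the screening flow using the counts reported in the text.
identified = 228
after_automatic_dedup = 135
after_manual_dedup = 121                  # 14 further records removed manually
excluded_on_title_abstract = 90
excluded_at_full_text = 17

full_text_reviewed = after_manual_dedup - excluded_on_title_abstract  # 31
included_in_review = full_text_reviewed - excluded_at_full_text       # 14

assert included_in_review == 14
print(f"Full-text articles reviewed: {full_text_reviewed}; included: {included_in_review}")
```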

Step 3: Data extraction

Data to answer the review question were extracted from the final fourteen full-text articles (n = 14). Data were extracted verbatim (literatim). A Google form was designed for the data extraction. Data elements extracted from the fourteen articles included the year of publication, the country where the study was conducted, the aim of the study, population, study design, and the assessment models, frameworks, approaches, and methods. A summary of the extracted data can be viewed as a supplementary file in this article.

Step 4: Data analysis

Data were analysed quantitatively and descriptively. Frequencies were used to analyse data about the year of publication and the country where the study was conducted, while descriptive analysis was applied to the aim of the study, population, design, and the assessment approaches, frameworks, models, and methods. The information gathered from the data analysis was used to inform the development of an assessment approach that can be utilised in institutions implementing CBE models in low-resource contexts.
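As an illustration of the kind of frequency tabulation described, the sketch below counts hypothetical extracted records by year and by country; the real records are in the supplementary data extraction sheet linked in this article.

```python
from collections import Counter

# Hypothetical extracted records for illustration; the real data sit in the
# supplementary data extraction sheet linked in this article.
records = [
    {"year": 2004, "country": "Australia"},
    {"year": 2013, "country": "Canada"},
    {"year": 2013, "country": "United States"},
]

publications_per_year = Counter(r["year"] for r in records)
publications_per_country = Counter(r["country"] for r in records)

print(publications_per_year.most_common())     # e.g. [(2013, 2), (2004, 1)]
print(publications_per_country.most_common())  # e.g. [('Australia', 1), ('Canada', 1), ...]
```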

Step 5: Presentation of results

The aim of the study was, through a mapping review, to report on the assessment approaches, frameworks, models, and methods related to the implementation of assessment in undergraduate HPE to inform the development of an assessment approach for institutions implementing CBE models in low-resource contexts. The results of the mapping review are discussed next by providing information about the contextual backgrounds of the different articles retrieved in the review, the characteristics of the studies, and the factors that are essential for the development of a feasible assessment approach.

Contextual background

The contextual background of the studies refers to the number of publications per 5-year period, the geographical distribution of the research output, and the target population of the publications. The years of publication ranged from 2000 to 2022, as illustrated in Fig. 2. Figure 2 also illustrates how research output was almost stagnant at two articles per 5-year period over the 20-year span, with spikes to four and five articles over the periods 2000–2005 and 2011–2015, respectively.

Fig. 2 Number of publications per five-year period

Countries where the studies were published and the number of publications over the 2000–2022 period are reflected in Table 2. The results show higher research output in high-income countries than in other countries.

Table 2 Geographical distribution of the articles

The target population of most of the articles (n = 11) was undergraduate medical students, while the others (n = 3) focused on undergraduate nursing students. Regarding the study designs of the 14 articles, only Lafave, Katz, Vaughn, and Alberta (2013) reported a quasi-experimental design; the rest did not mention their study designs.

In terms of the aims of the studies, half of the articles (n = 7) reported on the implementation of assessment models that were utilised in their institutions [22,23,24,25,26,27,28]. The other half (n = 7) reported on the development of, or proposals to develop, assessment models to be implemented [29,30,31,32,33,34,35].

Components essential for the development of a feasible assessment approach

The development of a feasible assessment approach requires background knowledge of the essential components of an assessment approach. Table 3 illustrates these components as assessment approaches, frameworks, models, and methods. The assessment models can be categorised into either purely clinical assessment models, or both theory and clinical assessment models.

Table 3 Assessment models, frameworks, type, approach and methods and study sample [23,24,25,26,27,28,29, 31,32,33,34,35]

Table 3 shows the various assessment frameworks in use. Frameworks that guide assessment can be categorised into analytic, synthetic, and developmental frameworks. Examples of analytic frameworks are Bloom’s taxonomy, the CanMEDS Framework, and the ACGME framework. Analytic assessment frameworks take a specific approach to the assessment of learning outcomes in which competencies are categorised into individual domains, for example, the psychomotor domain. Students are given feedback on their performance in each aspect of a specific task. The developmental framework is the Dreyfus Developmental Framework. Developmental assessment frameworks focus on the progress of students in developing skills through predetermined levels of competence. The synthetic frameworks are the RIME framework and Miller’s Pyramid. Synthetic frameworks allow for the holistic assessment of students as they apply all domains of competence in carrying out a task. One assessment approach, programmatic assessment, was mentioned in the review.

A wide variety of assessment methods are listed in Table 3. These include OSCEs, scenarios, reflective reading logs, portfolios, laboratory reports, online discussion group submissions, essays, reports, projects, MCQs, problem-based oral examinations, structured short-answer and essay questions, case studies, and viva voces. Assessment methods can be classified into theory methods and practical/clinical methods [35]. Theory assessment methods include SAQs, MCQs, extended matching questions, and oral examinations. Practical assessment methods include long cases, short cases, OSCEs, mini-clinical evaluation exercises, and objective structured long examination records.

Discussion

The mapping review explored assessment approaches, frameworks, models, and methods in undergraduate HPE over the years 2000 to 2022 as a baseline for the development of an assessment approach. The developed assessment approach could be utilised by institutions in low-resource countries that wish to implement CBE models. Important to note here is that structured assessment processes are essential in CBE curricula. Generally, the mapping review revealed that there is limited discourse around the topic of assessment approaches in HPE. In the past 22 years, there has been minimal research output on this topic, and the research that has been published is generally skewed towards high-income countries.

Indeed, the geographic and economic orientation of the research used for the mapping review showed that the majority of publications were from high-income countries like the United States of America, New Zealand, Australia, Canada and Singapore. According to Govaerts et al. [3] there is marked success in the implementation of CBE in high-income countries (HICs), hence the higher publication output. Lema, Kraemer-Mbula and Rakas [36] also concluded that health education professionals in high-income countries can afford to implement various assessment models and publish their outcomes, unlike their counterparts in low-resource countries. Lema, Kraemer-Mbula and Rakas [36] reiterate that research on innovation is generally distributed along income lines and that even though research output on innovation in low- to middle-income countries (LMICs) has grown substantially in the past two decades, it is still skewed towards upper-middle-income countries like China.

Other reasons that explain why the majority of the research on assessment approaches in HPE is from high-income countries include funding constraints in low- and middle-income countries (LMICs), which hinder research output [37, 38]. Inadequate funds lead to poor information technology, unstable power supply, and the inaccessibility of libraries and journals [37]. There also appears to be insufficient human capacity in research, as well as few research mentors and role models and a lack of a research culture, all of which can have a negative impact on research output in HPE institutions in low-income countries. Thus, although there is a significant amount of innovation taking place in HPE institutions in LMICs, there is still limited research output. There also seems to be insufficient networking among research communities in LMICs, meaning that support among researchers is lacking. Limited use of research evidence could, in turn, demotivate researchers from engaging in further research. Unlike in HICs, students in LMICs are often introduced to research late in their academic journeys; in LMIC contexts, research is mostly introduced towards the end of a student’s undergraduate degree. Additionally, limited career options in research mean that potential researchers may only implement research projects as partial fulfilment of their degrees and not as a career pathway. A further issue is the poor reception of research papers in reputable journals, which may also dampen researchers’ enthusiasm [38].

Research theory, however, does provide some insight into the ways in which research in LMICs can be better enabled. Factors that can enable research include: allowing for curricula innovation and high research output, keeping class sizes relatively small, the presence of specialised assessment experts who offer support to faculty, a collegial environment, a centralised funding system, outstanding information technology resources, state-of-the-art clinical simulation centres, a shared educational vision with the leadership of the institution, stakeholder involvement, input from other departments at the university that may have already successfully implemented CBE, a centralised governance structure, and educational consultants who support the programme [37].

This mapping review also revealed that there is some stagnation in the field of assessment in CBE. This stagnation could be linked to the curriculum innovation taking place within many HPE institutions, which means that they are yet to establish feasible assessment approaches in their contexts. To illustrate this point, half of the papers (n = 7) from the review reported on the development of, or proposals to develop assessment models, which are thus yet to be implemented [29,30,31,32,33,34,35]. HPE institutions still seem to be trying to find their footing in terms of assessment in CBE.

The development of a feasible assessment approach should be supported by good assessment frameworks. The assessment approaches used for the assessment of competence in CBE can be structured around Miller’s competency pyramid, as shown in Fig. 3 below [39, 40]. Miller’s Pyramid presents a framework that can be used to assess levels of clinical competence, from the cognitive levels of knowledge (knowing and knowing how), through the application of knowledge (showing), to the practical application of knowledge in a practice setting (doing) [40]. Miller’s pyramid has become the beacon of assessment frameworks in CBE as it allows assessment of all facets of competence, which include knowledge, skills, and attitude. Miller’s pyramid divides the development of clinical competence into four hierarchical levels, with knowledge at the lowest level, tested by written examinations and MCQs. The second level, application of the knowledge, is assessed by essays, clinical problem-solving exercises, and extended MCQs. The third level, clinical skills competency, is assessed by OSCEs. The final level, clinical performance, is assessed by direct observation in real clinical settings [40]. The use of frameworks in collaboration with assessment models will guide the development of an appropriate assessment approach.

Fig. 3 Adaptation of Miller’s pyramid of clinical competence (1990)
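To summarise the mapping described above, the sketch below pairs each level of Miller’s pyramid with the assessment methods the text associates with it; the dictionary structure is purely illustrative.

```python
# Sketch pairing the four levels of Miller's pyramid, as described above, with the
# assessment methods the text associates with each level; the structure is illustrative.
millers_pyramid = {
    "knowledge": ["written examinations", "MCQs"],
    "application of knowledge": ["essays", "clinical problem-solving exercises", "extended MCQs"],
    "clinical skills competency": ["OSCEs"],
    "clinical performance": ["direct observation in real clinical settings"],
}

def methods_for_level(level):
    """Return the assessment methods suggested for a given level of the pyramid."""
    return millers_pyramid.get(level, [])

print(methods_for_level("clinical skills competency"))  # ['OSCEs']
```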

Assessment models and frameworks in CBE are centred around students’ attainment of competency. Miller’s framework guides the assessment of competence in CBE [39] and how a student progresses from novice to expert in their academic growth, as illustrated in Fig. 3 above.

Assessment approaches are associated with the use of assessment methods in collecting data during the academic journey of the student. The choice of assessment method depends on the educational justification for using the method at a particular time [41]. Some authors believe that oral assessment methods, like presentations and viva voces, allow students to express themselves better and give the assessor an opportunity to probe further and gain a clearer picture of the student’s understanding of the content [42]. In terms of written assessments, however, a comparison between SAQs and MCQs may give a false impression of students’ performance. SAQs give students a chance to express their cognitive capabilities and errors [43]. However, the disadvantage of SAQs is that students focus on practising examination techniques rather than on gaining a full understanding of the principles of the subject matter. Hence, SAQs do not prepare students for the clinical tasks they will have to complete with patients, which require them to apply their medical knowledge [43]. A mix of assessment methods is therefore necessary to ensure a comprehensive assessment of the student.

One assessment method that has gained popularity in medical and healthcare education since its introduction by Harden in 1975 is the OSCE [44]. The success of the OSCE rests on its measurement qualities: validity, reliability, feasibility, and credibility. The inherent strength of the OSCE is its objectivity, because examiner and patient variations are eliminated [45]. However, OSCEs require much planning and can be resource-intensive in terms of the budget for training examiners, remunerating simulated patients, and setting up multiple stations in contexts with large student numbers [45, 46].

The uptake and implementation of CBE in HPE institutions have not been flawless, and this has negatively impacted the assessment strategies utilised [47]. The first flaw is often disagreement about what the terms ‘competence’ and ‘competency’ mean. This disagreement has led some nations to contextualise their understanding of the terms, which has, in turn, led to varied implementation strategies for CBE and its assessment strategies. Some countries have instituted CBE but later stepped back from some or all of their curriculum reform strategies. For example, Sweden partly unravelled its earlier CBE approach in 2011, England replaced its competency-based curriculum in 2014 [48], and Japan, Poland [49] and the Flemish community of Belgium [50] are said to have shifted back towards more discipline-focused curricula. All the evidence seems to paint a picture that CBE, and its associated assessment strategies, are not easy to implement. Therefore, HPE institutions that wish to adopt CBE might have to develop an assessment approach that is feasible in their context, yet fulfils the mandate of assessment in CBE.

There is also a discrepancy in the fidelity of implementation due to political influences, as some countries rhetorically announce that they have joined the group of nations that have adopted CBE without adopting the same curriculum reform as those countries [47]. These countries have developed hybrids of the CBE curriculum. Most countries that have been unsuccessful in rolling out a CBE curriculum have altered the ideas of CBE to fit their national political, economic, or cultural contexts. Deng and Peng [51] have shown how China’s competency framework is adapted to its Confucian and socialist context. The United States has combined its competency framework with pragmatism, and the Swedish reform led to the meshing of content-based reforms with competency-based reforms. This hybridity in approaches to curricula supports the overarching argument of this study, which is to stress the need to develop an assessment approach that is feasible for implementation in low-resource settings.

Conclusion

As HPE institutions adopt CBE, maintenance of the fidelity of the CBE is essential. To keep the fidelity of CBE implementation high, the PA approach has to be utilised in the assessment of and for learning. However, the reviewed literature has revealed that PA is a resource-intensive approach. As a result, institutions that cannot afford the implementation of PA will likely adopt CBE, but resort to affordable, traditional assessment methods. This mapping review set out to reveal the importance of establishing what factors are essential in developing an alternative and feasible assessment approach that fulfils the requirements of assessment in CBE to be used in low-resource settings. Based on the results of this mapping review, future research should seek to develop a more feasible assessment approach that can be used in CBE in those contexts where the implementation of PA is costly.

Limitations

Time was a limiting factor in this research, since the review formed part of a study towards a qualification that had to be completed within a specific time frame. Language also limited the number of articles that could be included in the review, since only English-language articles were retrieved.

Recommendations

Educators planning the development of an assessment approach should consider a mapping review that includes non-English articles to broaden the results. Assessment models, frameworks, and methods are essential in structuring the development of a new assessment approach. Educators should therefore be guided in their selection of a model, framework, and assessment methods by what is feasible in their context.

Availability of data and materials

The datasets generated and/or analysed during the current study have been included in the results section and are publicly available. A raw data file has been added in the supplementary material and is publicly accessible. The link to the raw data extraction tool is:

https://docs.google.com/spreadsheets/d/1N410YOAj6KRKFauWm0-rsI5Hf0YMIXeK0wKWMKA_UyI/edit?usp=sharing

Abbreviations

CBE: Competency-based education
HPE: Health professions education
MCQ: Multiple-choice question
RIME: Reporter-Interpreter-Manager-Educator
TEMM: A model consisting of four assessment methods: the Triple Jump Test, essay incorporating critical thinking questions, Multi-station Integrated Practical Examination, and multiple-choice questions
OSCE: Objective structured clinical examination
SAQ: Short-answer question
LMICs: Low- and middle-income countries
HICs: High-income countries

References

  1. Tacettin A, Cem BM. Competency-based education: theory and practice. Psycho-Educ Res Rev. 2021;10(3):67–95.
  2. Crawford L, Cofie N, McEwen L, Dagnone D, Taylor SW. Perceptions and barriers to competency-based education in Canadian postgraduate medical education. J Eval Clin Pract. 2020;26(2020):1124–31.
  3. Govaerts M, Van der Vleuten C, Schut S. Implementation of programmatic assessment: challenges and lessons learned. Educ Sci. 2022;12(717):1–6.
  4. van der Vleuten CPM, Schuwirth LWT. Assessment in the context of problem-based learning. Adv Health Sci Educ. 2019;24(2019):903–14.
  5. Chaney KP, Hodgson JL. Using the five core components of competency-based medical education to support implementation of CBVE. Front Vet Sci. 2021;8(2021):1–8.
  6. Bok HGJ, de Jong LH, O’Neill T, Maxey C, Hecker KG. Validity evidence for programmatic assessment in competency-based education. Perspect Med Educ. 2018;7(2018):362–72.
  7. Asamoah D. Traditional assessment procedures, and performance and portfolio assessment procedures: an in-depth comparison. Int J Educ Res Stud. 2019;1(2):28–30.
  8. Sherbino J, Bandiera G, Frank KD, Holroyd JR, Jones BR. The competency-based medical education evolution of Canadian emergency medicine specialist training. Can Assoc Emerg Phys. 2020;22(1):95–102.
  9. Quansah F. Traditional or performance assessment: what is the right way in assessing learners. Res Humanit Soc Sci. 2018;8(1):21–4.
  10. Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, on behalf of the ICBME Collaborators. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med. 2019;94:102–9.
  11. Van der Vleuten CPM, Heeneman S. On the issue of costs in programmatic assessment. Perspect Med Educ. 2016;5(2016):303–7.
  12. Shrivastava SR, Shrivastava PS. Programmatic assessment of medical students: pros and cons. J Prim Health Care: Open Access. 2018;8(3):1–2.
  13. Torre D, Rice NE, Ryan A, Bok H, Dawson LJ, Bierer B, et al. Ottawa 2020 consensus statements for programmatic assessment – 2. Implementation and practice. Med Teach. 2021;43(10):1149–60.
  14. Heeneman S, de Jong LH, Dawson LJ, Wilkinson TJ, Ryan A, Tait GR, et al. Ottawa 2020 consensus statement for programmatic assessment – 1. Agreement on the principles. Med Teach. 2021;43(10):1139–48.
  15. Torre DM, Schuwirth LWT, Van der Vleuten CPM. Theoretical considerations on programmatic assessment. Med Teach. 2020;42(2):213–20.
  16. Bate F, Fyfe S, Griffiths D, Russell K, Skinner C, Tor E. Does an incremental approach to implementing programmatic assessment work? Reflections on the change process [version 1]. MedEdPublish. 2020;9(55):1–16.
  17. Vasquez JA, Marcotte K, Gruppen LD. The parallel evolution of competency-based education in medical and higher education. J Competency-Based Educ. 2021;6(2):1–7.
  18. Ryan A, Terry J. From traditional to programmatic assessment in three (not so) easy steps. Educ Sci. 2022;12(487):1–13.
  19. de Jong LH, Bok HGJ, Kremer WDJ, van der Vleuten CPM. Programmatic assessment: can we provide evidence for saturation of information? Med Teach. 2019;41(6):678–82.
  20. McKenzie-White J, Mubuuke AG, Westergaard S, Munabi IG, Bollinger RC, Opoka R, et al. Evaluation of a competency based medical curriculum in a Sub-Saharan African medical school. BMC Med Educ. 2022;2022(22):1–9.
  21. Schut S, Maggio LA, Heeneman S, Tartwijk JV, van der Vleuten C, Driessen E. Where the rubber meets the road: an integrative review of programmatic assessment in health care professions education. Perspect Med Educ. 2021;2021(10):6–13.
  22. Hudson JN, Tonkin AL. Evaluating the impact of moving from discipline-based to integrated assessment. Med Educ. 2004;38(2004):832–43.
  23. Rider EA, Hinrichs MM, Lown BA. A model for communication skills assessment across the undergraduate curriculum. Med Teach. 2006;8(5):e127–34.
  24. Lafave MR, Katz L, Vaughn N, Alberta C. Application of “Earl’s assessment as, assessment for, and assessment of learning model” with orthopaedic assessment clinical competence. Athl Train Educ J. 2013;8(4):109–14.
  25. Wissmann J, Hauck B, Clawson J. Assessing nurse graduate leadership outcomes: the typical day format. Nurse Educ. 2002;27(1):32–6.
  26. Abraham RR, Uphadhya S, Torke S, Ramnarayan K. Student perspectives of assessment by TEMM model in physiology. Adv Physiol Educ. 2005;2005(29):94–7.
  27. Violato C, Cullen MJ, Englander R, Murray KE, Hobday PM, Boarman-Shoap E, et al. Validity evidence for assessing entrustable professional activities during undergraduate medical education. Acad Med. 2021;96(7S):S70–6.
  28. Zasadny MF, Bull RM. Assessing competence in undergraduate nursing students: the Amalgamated Students Assessment in Practice model. Nurse Educ Pract. 2015;2015(15):126–33.
  29. Walubo A, Burch V, Parmar P, Raidoo D, Cassimjee M, Onia R, et al. A model for selecting assessment methods for evaluating medical students in African medical schools. Acad Med. 2003;78(9):899.
  30. Taylor JA. Assessment in first year university: a model to manage transition. J Univ Teach Learn Pract. 2008;5(1):19–33.
  31. Gupta P, Shah D, Singh T. Competency-based assessment in pediatrics for the new undergraduate curriculum. Med Educ. 2021;58(2021):775–9.
  32. Pangaro L, ten Cate O. Frameworks for learner assessment in medicine: AMEE Guide No. 78. Med Teach. 2013;2013(35):e1197–210.
  33. Tham KY. Observer-Reporter-Interpreter-Manager-Educator (ORIME) framework to guide formative assessment of medical students. Ann Acad Med. 2013;42(11):603–7.
  34. Colbert-Getz JM, Shea JA. Three key issues for determining competence in a system of assessment. Med Teach. 2020;1(2020):1–3.
  35. Singh T, Anshu, Modi JN. The quarter model: a proposed approach for in-training assessment of undergraduate students in Indian medical schools. Indian Pediatr. 2012;49(2012):871–6.
  36. Lema R, Kraemer-Mbula E, Rakas M. Innovation in developing countries: examining two decades of research. Innov Dev. 2021;11(2–3):189–210.
  37. Casadell V, Tahi S. National innovation systems in low-income and middle-income countries: re-evaluation of indicators and lessons for a learning economy in Senegal. J Knowl Econ. 2022;2022(1):1–31.
  38. Shumba CS, Lusambili AM. Not enough traction: barriers that aspiring researchers from low- and middle-income countries face in global health research. J Glob Health Econ Policy. 2021;1(2021):1–4.
  39. Hanks S, Neve H, Gale T. Preparing health profession students for practice in complex real world settings: how do educators respond to a model of capability? Int J Pract Based Learn Health Soc Care. 2021;9(1):50–63.
  40. Rhind SM, MacKay J, Brown AJ, Mosley CJ, Ryan JM, Hughes KJ, et al. Developing Miller’s pyramid to support students’ assessment literacy. J Vet Med Educ. 2021;48(2):158–62.
  41. van der Vleuten CPM, Heeneman S. On the issue of costs in programmatic assessment. Perspect Med Educ. 2016;1(2016):1–5.
  42. Theobold AS. Oral exams: a more meaningful assessment of students’ understanding. J Stat Data Sci Educ. 2021;29(2):156–9.
  43. Sam AH, Westacott R, Gurnell M, Wilson R, Meeran K, Brown C. Comparing single-best-answer and very-short-answer questions for the assessment of applied medical knowledge in 20 UK medical schools: cross-sectional study. BMJ Open. 2019;9(9):1–7.
  44. De Oliveira FAM, Porto FR, Ribeiro CG, Haddad AE, De Oliveira RG, Ferraz Junior AML. Objective structured clinical examination, OSCEs: an advance in the teaching and learning process in the student’s perception. Rev Odontol UNESP. 2019;2019(48):1–10.
  45. Elshama SS. How to design and apply an objective structured clinical examination (OSCE) in medical education? Iberoam J Med. 2021;3(1):51–5.
  46. Ruwizhu T, Nyamukapa R, Mazhandu F, Mutambara J, Mangezi W, Whitwell S. Piloting the use of objective structured clinical examinations (OSCEs) to assess undergraduate medical students’ clinical competence in psychiatry in Zimbabwe. BJPsych Int. 2021;19(3):75–7.
  47. Anderson-Levitt K, Gardinier MP. Introduction: contextualising global flows of competency-based education: polysemy, hybridity and silences. Comp Educ. 2021;57(1):1–18.
  48. Marope M, Griffin P, Gallagher C. Future competences and the future of curriculum: a global reference for curricula transformation. New York: IBE; 2017.
  49. Wisniewski J, Marta Z. Reforming education in Poland. In: Reimers FM, editor. Audacious education purposes: how governments transform the goals of education systems. Cambridge: SpringerOpen; 2020. p. 181–208.
  50. Loobuyck P. The policy shift towards citizenship education in Flanders. J Curric Stud. 2020;53(1):65–82.
  51. Deng L, Peng Z. A comparative analysis of frameworks for 21st century competencies in Mainland China and United States: implications for national policies. Comp Educ. 2020;57(1):83–98.


Acknowledgements

The authors thank Prof Ruth Albertyn for her critical insight into the article and Mrs du Preez from the Faculty of Health Sciences for her assistance with the literature search.

Funding

No funding was received for this study.

Author information


Contributions

E.M., L.H., and C.N. all contributed to the conceptualisation of the review, the interpretation of the study findings, and the writing of the article.

Corresponding author

Correspondence to Eva Mukurunge.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for this study was obtained from the Health Sciences Research Ethics Committee at the University of the Free State (UFS-HSD2022/0510).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.



Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Mukurunge, E., Nyoni, C.N. & Hugo, L. Assessment approaches in undergraduate health professions education: towards the development of feasible assessment approaches for low-resource settings. BMC Med Educ 24, 318 (2024). https://doi.org/10.1186/s12909-024-05264-x

