
Co-designing Entrustable Professional Activities in General Practitioner’s training: a participatory research study

Abstract

Background

In medical education, Entrustable Professional Activities (EPAs) have been gaining momentum over the last decade. To be implemented successfully, such novel educational interventions must accommodate competing needs: those of curriculum designers and those of users in practice.

Methods

We employed a participatory research design, engaging diverse stakeholders in designing an EPA framework. This iterative approach, carried out over two cycles, allowed for continuous refinement of a comprehensive blueprint comprising 60 EPAs. In the first cycle, we used a modified Delphi methodology with clinical competence committee (CCC) members, asking them whether each EPA should be included. In the second cycle, we conducted semi-structured interviews with General Practitioner (GP) trainers and trainees to explore their perceptions of the framework and refine it accordingly.

Results

During the first cycle, the 14 CCC members agreed that all 60 EPAs should be included in the framework. Regarding the formulation of the EPAs, 20 comments were given and 16 adaptations were made to enhance clarity. In the second cycle, the semi-structured interviews with trainers and trainees echoed these findings, emphasizing the need for the EPA framework to improve workplace-based assessment and its relevance to real-world clinical scenarios. However, trainees and trainers expressed concerns about implementation challenges, such as the large number of EPAs to be assessed and the perception of EPAs as potentially high-stakes.

Conclusion

Accommodating competing stakeholders’ needs during the design process can significantly enhance EPA implementation. Recognizing users as experts in their own experiences empowers them and enables a priori identification of implementation barriers and potential pitfalls. By embracing a collaborative approach, wherein diverse stakeholders contribute their unique viewpoints, we can create effective educational solutions to complex assessment challenges.


Introduction

In recent years, the landscape of medical education has transformed significantly in response to increasing demands for public accountability and changing patient needs. Competency-based medical education (CBME) has emerged in response to these evolving demands and has been gaining popularity in medical education programs [1]. In a CBME paradigm, medical curricula are structured around predefined competencies that physicians should have acquired upon completion of the program [2, 3]. Despite its theoretical underpinnings, the implementation of CBME has encountered various obstacles [4]. In particular, assessing competencies in real clinical environments has been a major barrier to the effective integration of CBME into medical education systems [5]. The concept of Entrustable Professional Activities (EPAs) emerged in recognition of this challenge.

EPAs are essentially tasks or activities that medical professionals should be able to perform competently and independently by the time they complete their training [6, 7]. EPAs are used to assess a learner’s ability to integrate and apply the necessary competencies in real-world clinical practice. They require evaluating a learner’s progress and readiness for independent practice by observing their performance of these key professional activities in the clinical setting [8]. The term “entrustable” indicates that, upon graduation or completion of a specific training period, a supervising physician or mentor should be able to entrust a medical graduate with these activities without direct supervision, considering them proficient and safe to perform these tasks independently for patients [9, 10].

Given this immense potential, the integration and implementation of EPAs have gained rapid momentum across various health professions and medical specialties [11, 12]. Despite this progress, a significant gap persists when it comes to accommodating the competing needs of curriculum designers and those of users in practice, namely trainers and trainees [13]. While EPAs show promise in facilitating CBME, there is a lack of comprehensive evidence incorporating users’ perceptions during the design phase [8, 11, 14]. Therefore, the aim of this study was to design an EPA framework for workplace-based assessment by actively involving clinical educators, trainees and trainers throughout the process.

Methods

Setting and participants

This study took place in the interuniversity postgraduate General Practitioner (GP) Training in Belgium. To standardize GP Training across Flanders, four Flemish universities (KU Leuven, Ghent University, University of Antwerp, and the Flemish Free University of Brussels) collaboratively developed a postgraduate training program. The program consists of three training phases and rotations spread over three years: two rotations take place in a GP practice, while one takes place in a hospital setting.

The GP Training is overseen by the Interuniversity Centre for GP Training (ICGPT). The ICGPT plays a pivotal role in coordinating and managing various aspects of the curriculum. Among its key responsibilities, the ICGPT oversees the allocation of clinical internships, conducts examinations, facilitates regular meetings between trainees and trainers, and maintains trainees’ electronic learning portfolios (e-portfolios).

In 2018, the ICGPT initiated a shift towards CBME. The rationale of CBME was introduced into the curriculum by first integrating the CanMEDS roles. To facilitate this transition, two clinical competence committees (CCCs), comprising medical doctors and clinical educators from the four universities, were appointed. These CCCs were tasked with coordinating workplace-based learning and curriculum and assessment, respectively.

To align the curriculum with patient needs in primary care, the two CCCs designated and defined ten care contexts characteristic of primary care (i.e. short-term care, chronic care, emergency care, palliative care, elderly care, care for children, mental healthcare, prevention, gender-related care, and practice management). Subsequently, in 2022, we initiated the process of designing specific EPAs for the GP Training, with the aim of facilitating and improving workplace-based assessment. The two CCCs participated in the design process, while trainers and trainees were also invited to share their opinions.

Designing the EPA framework

The design of the EPA framework was based on a participatory research design to engage different stakeholders [15]. Participatory research design is a community-based methodology that aims to create solutions for and with the people involved [15]. This iterative research approach encompassed three fundamental design stages in a circular relationship, namely design, evaluation and refinement (Fig. 1). We executed two distinct iterative cycles, each with a specific group of stakeholders (Fig. 2). In cycle 1, we focused on the CCCs, fostering discussions and validating the framework. In cycle 2, we involved clinical trainers and trainees, ensuring cross-validation. In the following sections, we describe each iterative cycle in turn.

Fig. 1 Three design phases for designing the EPA framework

Fig. 2 Process for developing the EPA framework based on participatory design research

In cycle 1, after reviewing the relevant literature, we developed a blueprint of 60 EPAs corresponding to the ten care contexts already integrated in the curriculum [9, 10]. In doing so, we wanted to ensure the practical applicability and relevance of our framework within the established educational environment. We then linked all EPAs to the CanMEDS competency framework [16]. We defined competencies as broad statements describing the knowledge, skills and attitudes that GP trainees should achieve during the different training phases [17]. The CanMEDS framework identifies and describes different competencies for patient-centred care and comprises seven roles: medical expert, communicator, collaborator, leader, health advocate, scholar, and professional. By linking EPAs to CanMEDS, we constructed a matrix that served as a structured guide for integrating the EPAs in the workplace. Together with the CCCs, we also defined behavioural and cognitive criteria to anchor entrustment levels [9]. These criteria described the knowledge, skills, and attitudes required for an EPA to be entrusted.
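To make the structure of the EPA–CanMEDS matrix concrete, the sketch below shows one possible way to represent it in code. This is only an illustration under our own assumptions, not the authors’ tooling; the EPA title, care context, roles and criteria used here are hypothetical placeholders rather than entries from the actual framework.

```python
# Minimal, hypothetical sketch of the EPA-to-CanMEDS matrix described above.
# It is not part of the study; names and criteria are illustrative only.
from dataclasses import dataclass, field

CANMEDS_ROLES = [
    "Medical expert", "Communicator", "Collaborator", "Leader",
    "Health advocate", "Scholar", "Professional",
]

@dataclass
class EPA:
    title: str                       # hypothetical EPA title
    care_context: str                # one of the ten primary-care contexts
    linked_roles: list[str]          # subset of CANMEDS_ROLES
    entrustment_criteria: list[str] = field(default_factory=list)  # behavioural/cognitive anchors

def to_matrix(epas: list[EPA]) -> dict[str, dict[str, bool]]:
    """Build an EPA x CanMEDS-role matrix marking which roles each EPA is linked to."""
    return {e.title: {role: role in e.linked_roles for role in CANMEDS_ROLES} for e in epas}

example = EPA(
    title="Providing palliative care at home",       # placeholder, not from the framework
    care_context="palliative care",
    linked_roles=["Medical expert", "Communicator", "Collaborator"],
    entrustment_criteria=["Recognises symptoms requiring adjustment of the care plan"],
)
print(to_matrix([example]))
```

In such a representation, each row of the matrix is an EPA and each column a CanMEDS role, mirroring the structured guide described above.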

In cycle 2, we aimed at operationalising the EPAs, cross-validating them by interviewing trainers and trainees, and deciding on entrustment levels. Specifically, to operationalise the EPAs, we developed an assessment form, called the Clinical Practice Feedback form (Fig. 3). We chose to link EPA assessments not only to direct and video observations, but also to case-based discussions. Additionally, we agreed upon the entrustment levels and the entrustability scale. Entrustment was anchored to the criteria defined alongside the EPAs. We decided to use the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) scale for validity and reliability reasons (Fig. 4) [18]. The Ottawa scale requires assessors to describe how much supervision they provided to trainees while performing a specific EPA. Concretely, the scale comprises five levels of performance, ranging from the trainer taking over the activity to the trainee performing the activity without supervision (Fig. 3) [18].

Fig. 3 Example of Clinical Practice Feedback form available in the e-portfolio

Fig. 4 Five levels of entrustment based on the O-SCORE scale [19]

Data collection and analysis

In cycle 1, we evaluated the EPA blueprint by employing a modified Delphi methodology with two rounds [19]. We invited the members of the two CCCs (N = 14) to give feedback on the EPA blueprint via e-mail and during meetings scheduled by the ICGPT. Members were asked whether they thought each EPA was necessary for workplace-based assessment and should be included in the framework. They were also encouraged to give feedback on the formulation of the EPAs. Once we had gathered all the comments, we refined the blueprint and sent it back to the CCC members. In cycle 2, we conducted semi-structured interviews with two trainers and two trainees following a ‘think-aloud protocol’ [20,21,22], asking them whether each EPA was necessary and comprehensible for workplace-based assessment. Participants were asked to articulate their thoughts while reading the EPA framework, which enabled us to gain insights into their thought processes and perspectives [22].

Data collection took place from February 2022 until September 2022. For the quantitative data analysis, we calculated descriptive statistics of consensus rates using SPSS 27 (IBM SPSS Statistics 27). We analysed the qualitative data from the CCC members using content analysis in Microsoft Excel. For the data from the interviews with trainers and trainees, we first transcribed the interviews verbatim and then analysed the data using thematic analysis in NVivo (QSR International) [23, 24]. Qualitative data were analysed separately by two researchers to achieve triangulation, and a third researcher was consulted when discrepancies arose [25].
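As a minimal illustration of the quantitative step, the sketch below shows how a per-EPA consensus rate could be computed from Delphi votes. This is an assumption-laden example, not the authors’ SPSS workflow; the EPA labels and votes are entirely hypothetical.

```python
# Hypothetical sketch of a per-EPA consensus-rate calculation for one Delphi round.
# Not the authors' SPSS workflow; labels and votes are illustrative only.
from collections import Counter

# One boolean vote per CCC member and per EPA: True = "include in the framework".
round_one_votes = {
    "EPA 1": [True] * 14,            # e.g. all 14 members agree
    "EPA 2": [True] * 13 + [False],  # e.g. one dissenting vote
}

def consensus_rate(votes: list[bool]) -> float:
    """Proportion of panellists voting to include the EPA."""
    return Counter(votes)[True] / len(votes)

for epa, votes in round_one_votes.items():
    print(f"{epa}: {consensus_rate(votes):.0%} agreement")
```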

Reflexivity and research team

The research team comprised members with different backgrounds. Two members had a background in education, while the other two had a background in biomedical sciences and general practice. All authors had research training and experience in medical education research. Methodological and design decisions were in line with the available literature, and we predefined the methodological steps before commencing the study. To ensure adherence to our design stages, we maintained a detailed logbook to systematically document progress and modifications from our initial protocol. We regularly discussed the results to ensure that our interpretations remained close to the data.

Results

In cycle 1, fourteen members of the CCCs gave feedback on the list of 60 EPAs. In the first feedback round, all members agreed that all 60 EPAs were required in the framework. Twenty comments were given regarding the formulation of the EPAs, and 16 adaptations were made based on these suggestions. Comments on the formulation concerned the use of certain words to make the framework more understandable. In the second feedback round, consensus was reached on the formulation of the EPAs (Table 1).

Table 1 List of EPAs for the Flemish GP Training

In cycle 2, we interviewed two trainers and two trainees. CCC members, trainers, and trainees agreed that all EPAs should be included in the framework. From these interviews, we identified three themes; Table 2 presents the themes alongside their subthemes. The first theme, necessity of EPAs, included shared mindsets about the need for EPAs to improve workplace-based assessment and difficulties with interpreting the CanMEDS roles.

Table 2 Themes from the interviews with trainers and trainees

“The EPAs are better than the CanMEDS. My trainer and I often do not know what we have to assess… He (the trainer) sometimes gives the same feedback for multiple roles.” (trainee 1).

The second theme concerned the relevance of EPAs to clinical practice. Users thought that the EPA framework could easily be linked to their clinical work, promoting assessment and feedback opportunities. They agreed that the EPAs were understandable and formulated in language intuitive to clinical work.

“I think that it (the EPA framework) is quite intuitive. I can see a lot of links between the EPAs and my daily practice.” (trainer 2).

“I like the (EPA) framework. My trainer and I already discuss some of these (activities) during our weekly feedback session.” (trainee 2).

The third theme covered challenges in implementing the EPAs, relating to the large number of EPAs, the perception of high-stakes assessment within the e-portfolio, and limitations inherent to the current e-portfolio. First, users expressed concern about the large number of EPAs: they indicated that only a limited number might be feasible because of time constraints in the clinical workplace. Users also thought that, because of the large number of EPAs, trainees would “pick and choose” EPAs on which they had performed well. Combined with the limited functionalities of the current e-portfolio, they indicated that EPAs might be used to showcase performance rather than for workplace-based assessment and feedback purposes. Trainees in particular expressed hesitation to document EPAs for which they would need further improvement. They perceived the e-portfolio as a tool more suitable for high-stakes assessment than for feedback purposes.

“The list (of EPAs) is quite extensive… I do want to have a nice portfolio, so for sure I will try to include as many as possible. In case something happens (in my curriculum), I want to show how well I have been performing.” (trainee 1).

“I normally do not include patient cases that went wrong in my portfolio. Because different people have access to it (the e-portfolio).” (trainee 2).

Discussion

The aim of this study was to design an EPA framework by actively engaging and collaborating with different stakeholders. To be established as a “good” assessment framework, EPAs should be acceptable to the different stakeholders involved in the assessment process, such as curriculum designers, trainees and trainers [26, 27]. Incorporating their opinions and understanding their different needs must be integral to the design process. However, the literature on EPA design has mainly focused on experts’ opinions, neglecting users in practice [8].

Our findings make apparent that direct involvement and communication among diverse stakeholders are crucial for designing an EPA assessment framework that is useful for everyone. When various groups are involved in developing educational interventions, competing needs can be optimally addressed [28]. This fosters a cohesive approach, ensuring high applicability and effectiveness when the EPA framework is used in practice. The need for users’ involvement in the development process is also demonstrated in the most recent EPA literature [29, 30]. Users’ involvement promotes a common language and shared expectations, enhancing the clarity and effectiveness of EPA interventions, and, most importantly, empowers the users themselves by acknowledging their perspectives [31]. Ultimately, trainees and trainers are the ones using EPA assessment frameworks during daily clinical practice, and they are the ones potentially confronted with unforeseen obstacles.

Additionally, users’ involvement in the process can help identify potential implementation challenges [32, 33]. Our findings indicate differences in opinion regarding the implementation of EPAs. In contrast to the CCC members, users expressed concerns about the large number of EPAs included in the framework. They were particularly concerned about how to use EPA assessments sufficiently and adequately while juggling clinical work. This concern echoes findings from other studies related to assessment burden [34]. In particular, when challenges in assessment processes arise in the clinical workplace, assessment is most probably not performed as intended [35].

Furthermore, our results illustrate tensions between assessment of learning and assessment for learning. Although EPA assessments aim to better prepare trainees for clinical practice, users suggested that the purpose of the EPAs might not be explicit to everyone. Since EPAs are a form of assessment, they could lead to strategic behaviour, such as documenting only successful EPAs, thereby creating a fragmented picture of trainees’ performance in clinical practice. The use of the current e-portfolio for high-stakes assessments only adds to this tension. Trainees, especially, were not comfortable sharing performance evidence for improvement because they perceived the stakes as high [36]. The dilemma between learning and performing has been the Achilles’ heel of workplace-based assessment [37], and the lines between assessment and feedback also seem to be blurred in EPAs [38, 39].

Involving users during the design process can lead not only to early adaptation and refinement of EPAs, but also to better allocation of resources. To ensure successful implementation of EPAs, it is essential to recognize the central role of both trainers and trainees. Future research should focus on training programs designed to equip faculty, trainers, and trainees with a profound understanding of EPAs. Users in practice need rigorous training covering EPA principles, assessment techniques, and feedback strategies [40]. Moreover, fostering a culture of interdisciplinary collaboration among stakeholder groups is imperative. Encouraging review of assessment tools and facilitating the exchange of opinions during design processes can significantly enhance the overall quality of EPA frameworks and, more broadly, of workplace-based assessment practices.

Although EPAs are a valuable framework for assessing competencies in workplace settings, integrating other assessment tools is crucial to capture the full spectrum of skills needed to meet patient needs. Future research should explore combining EPAs with other assessment methods, such as simulation-based assessments with standardized patients or virtual reality, which would allow trainees to demonstrate their clinical and interpersonal skills in safe, controlled environments that closely replicate challenging patient scenarios [41]. Additionally, incorporating multisource feedback and continuous portfolio assessments could offer a comprehensive view of a trainee’s performance across various settings and interactions [42, 43]. Together, these methods would strengthen the EPA framework, ensuring comprehensive assessment of all essential competencies that future physicians should acquire.

Limitations

We need to acknowledge several limitations of this study. First, in medical education research, users’ involvement presupposes a degree of experience with the subject at hand, whereas we involved users early in the design process of the EPA framework. Although aware of this limitation, we intentionally and consciously chose a participatory research design: we believe that users are experts in their own experience and hold the knowledge and capabilities to be involved as partners in the development process. Second, our study involved a small number of users due to recruitment difficulties. This might have meant recruiting participants who were already fully engaged in the educational practices of the GP Training. Nevertheless, our findings are rooted in two methodologies, a modified Delphi method and semi-structured interviews, allowing us to triangulate our results [25]. Finally, although workshops are most common in co-design studies [44], our study coincided with the last COVID-19 lockdown, necessitating adjustments. To cope with these challenges and uncertainties, we opted for the methods that were most feasible for our participants at that moment. Despite these challenges, the contributions from all stakeholders were invaluable, particularly in exploring potential implementation and evaluation issues.

Conclusion

For EPAs to be successful, they need to be acceptable as an assessment framework to different stakeholder groups. Accommodating competing stakeholders’ needs during the design process is crucial for enhancing acceptability and effectiveness during implementation. Our findings highlight the significance of collaborative efforts to design EPAs, emphasizing their potential to empower users, identify implementation barriers, and pinpoint unintended consequences. Through this collaborative approach, wherein diverse stakeholders contribute their perspectives, we can create effective educational solutions to complex assessment challenges.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

GP: General Practitioner

CBME: competency-based medical education

EPA: Entrustable Professional Activity

CanMEDS: Canadian Medical Education Directives for Specialists

ICGPT: Interuniversity Centre for GP Training

CCC: clinical competence committee

References

1. Frank JR, Snell LS, Cate OT, Holmboe ES, Carraccio C, Swing SR, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638–45.

2. Iobst WF, Sherbino J, Cate OT, Richardson DL, Dath D, Swing SR, et al. Competency-based medical education in postgraduate medical education. Med Teach. 2010;32(8):651–6.

3. Frank JR, Snell L, Englander R, Holmboe ES. Implementing competency-based medical education: moving forward. Med Teach. 2017;39(6):568–73.

4. Nousiainen MT, Caverzagie KJ, Ferguson PC, Frank JR. Implementing competency-based medical education: what changes in curricular structure and processes are needed? Med Teach. 2017;39(6):594–8.

5. Lockyer J, Carraccio C, Chan M-K, Hart D, Smee S, Touchie C, et al. Core principles of assessment in competency-based medical education. Med Teach. 2017;39(6):609–16.

6. Ten Cate O, Scheele F. Competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–7.

7. Carraccio C, Englander R, Gilhooly J, Mink R, Hofkosh D, Barone MA, et al. Building a framework of entrustable professional activities, supported by competencies and milestones, to bridge the educational continuum. Acad Med. 2017;92(3):324–30.

8. Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE guide no. 99. Med Teach. 2015;37(11):983–1002.

9. Ten Cate O, Taylor DR. The recommended description of an entrustable professional activity: AMEE guide no. 140. Med Teach. 2021;43(10):1106–14.

10. Carraccio C, Martini A, Van Melle E, Schumacher DJ. Identifying core components of EPA implementation: a path to knowing if a complex intervention is being implemented as intended. Acad Med. 2021;96(9):1332–6.

11. de Graaf J, Bolk M, Dijkstra A, van der Horst M, Hoff RG, Ten Cate O. The implementation of entrustable professional activities in postgraduate medical education in the Netherlands: rationale, process, and current status. Acad Med. 2021;96(7S):S29–35.

12. Keeley MG, Bray MJ, Bradley EB, Peterson CM, Waggoner-Fountain LA, Gusic ME. Fidelity to best practices in EPA implementation: outcomes supporting use of the core components framework from the University of Virginia entrustable professional activity program. Acad Med. 2022;97(11):1637–42.

13. St-Onge C, Boileau E, Langevin S, Nguyen LHP, Drescher O, Bergeron L, et al. Stakeholders’ perception on the implementation of developmental progress assessment: using the theoretical domains framework to document behavioral determinants. Adv Health Sci Educ. 2022;27(3):735–59.

14. Taylor DR, Park YS, Egan R, Chan MK, Karpinski J, Touchie C, et al. EQual, a novel rubric to evaluate entrustable professional activities for quality and structure. Acad Med. 2017;92(11S):S110–117.

15. Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health. 2010;100 Suppl 1(Suppl 1):S40–46.

16. Frank JR, Snell L, Sherbino J. CanMEDS 2015 physician competency framework. Ottawa: Royal College of Physicians & Surgeons of Canada; 2015.

17. Harden RM. Learning outcomes and instructional objectives: is there a difference? Med Teach. 2002;24(2):151–5.

18. Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa surgical competency operating room evaluation (O-SCORE): a tool to assess surgical competence. Acad Med. 2012;87(10):1401–7.

19. de Villiers MR, de Villiers PJ, Kent AP. The Delphi technique in health sciences education research. Med Teach. 2005;27(7):639–43.

20. Patton MQ. Qualitative research & evaluation methods. SAGE Publications; 2002.

21. Sargeant J. Qualitative research part II: participants, analysis, and quality assurance. J Grad Med Educ. 2012;4(1):1–3.

22. Ericsson KA, Simon HA. How to study thinking in everyday life: contrasting think-aloud protocols with descriptions and explanations of thinking. Mind Cult Act. 1998;5(3):178–86.

23. Lumivero. NVivo (Version 14). 2023. www.lumivero.com.

24. Krippendorff K. Content analysis: an introduction to its methodology. Sage; 2018.

25. Carter N, Bryant-Lukosius D, DiCenso A, Blythe J, Neville AJ. The use of triangulation in qualitative research. Oncol Nurs Forum. 2014;41(5):545–7.

26. Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 conference. Med Teach. 2011;33(3):206–14.

27. Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, et al. 2018 consensus framework for good assessment. Med Teach. 2018;40(11):1102–9.

28. Göttgens I, Oertelt-Prigione S. The application of human-centered design approaches in health research and innovation: a narrative review of current practices. JMIR Mhealth Uhealth. 2021;9(12):e28102.

29. Bonnie LHA, Visser MRM, Bont J, Kramer AWM, van Dijk N. Trainers’ and trainees’ expectations of entrustable professional activities (EPAs) in a primary care training programme. Educ Prim Care. 2019;30(1):13–21.

30. van Loon KA, Bonnie LHA, van Dijk N, Scheele F. Benefits of EPAs at risk? The influence of the workplace environment on the uptake of EPAs in EPA-based curricula. Perspect Med Educ. 2021;10(4):200–6.

31. van Loon KA, Scheele F. Improving graduate medical education through faculty empowerment instead of detailed guidelines. Acad Med. 2021;96(2):173.

32. Peters S, Bussières A, Depreitere B, Vanholle S, Cristens J, Vermandere M, et al. Facilitating guideline implementation in primary health care practices. J Prim Care Community Health. 2020;11:2150132720916263.

33. Peters S, Sukumar K, Blanchard S, Ramasamy A, Malinowski J, Ginex P, et al. Trends in guideline implementation: an updated scoping review. Implement Sci. 2022;17(1):50.

34. Szulewski A, Braund H, Dagnone DJ, McEwen L, Dalgarno N, Schultz KW, et al. The assessment burden in competency-based medical education: how programs are adapting. Acad Med. 2023;98(11):1261–7.

35. Thaler RH. Nudge, not sludge. Science. 2018;361(6401):431.

36. Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ. 2018;52(6):654–63.

37. Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53(1):76–85.

38. Gaunt A, Patel A, Rusius V, Royle TJ, Markham DH, Pawlikowska T. “Playing the game”: how do surgical trainees seek feedback using workplace-based assessment? Med Educ. 2017;51(9):953–62.

39. Martin L, Sibbald M, Brandt Vegas D, Russell D, Govaerts M. The impact of entrustment assessments on feedback and learning: trainee perspectives. Med Educ. 2020;54(4):328–36.

40. Bray MJ, Bradley EB, Martindale JR, Gusic ME. Implementing systematic faculty development to support an EPA-based program of assessment: strategies, outcomes, and lessons learned. Teach Learn Med. 2021;33(4):434–44.

41. Lövquist E, Shorten G, Aboulafia A. Virtual reality-based medical training and assessment: the multidisciplinary relationship between clinicians, educators and developers. Med Teach. 2012;34(1):59–64.

42. Norcini JJ. Peer assessment of competence. Med Educ. 2003;37(6):539–43.

43. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE guide no. 31. Med Teach. 2007;29(9):855–71.

44. Slattery P, Saeri AK, Bragge P. Research co-design in health: a rapid overview of reviews. Health Res Policy Syst. 2020;18(1):17.


Acknowledgements

The authors would like to acknowledge the contribution of Mr. Guy Gielis, Mrs. An Stockmans, Mrs. Fran Timmers, and Mrs. Karolina Bystram, who assisted with the coordination of the CCCs. We would also like to thank Prof. dr. Martin Valcke and Dr. Mieke Embo for facilitating this study through the SBO SCAFFOLD project (www.sbo-scaffold.com). Finally, we would like to thank the CCC members and the trainers and trainees who participated in this study.

Funding

This work was supported by the Research Foundation Flanders (FWO) under Grant [S003219N]-SBO SCAFFOLD.

Author information


Contributions

All authors (VA, SP, JE, BS) contributed to designing the study. VA collected the data, led the analysis, and wrote the manuscript. BS analysed the data and critically reviewed the manuscript. SP and JE contributed to critically revising the manuscript. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Vasiliki Andreou.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Social and Societal Ethics Committee (G-2022-5615-R2(MIN)), and all participants signed an informed consent form prior to participation.

Consent for publication

All the participants gave their consent for publishing their data anonymously.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Andreou, V., Peters, S., Eggermont, J. et al. Co-designing Entrustable Professional Activities in General Practitioner’s training: a participatory research study. BMC Med Educ 24, 549 (2024). https://doi.org/10.1186/s12909-024-05530-y
