
Entrustment in physician-patient communication: a modified Delphi study using the EPA approach



Competency-based curricula across the globe stress the importance of effective physician-patient communication. A variety of courses have been used to train physicians for this purpose. However, few of them link competencies with practice, resulting in confusion in implementation and assessment. This issue can be resolved by treating certain specific patient-communication tasks as acts of entrustment, or entrustable professional activities (EPAs). In this study, we aimed to define a competency-based framework for assessing physician-patient communication using the language of EPAs.


A modified Delphi study was conducted in three stages. The first stage was an extensive literature review to identify and elaborate communication-related tasks that could be treated as EPAs. The second stage was content validation by medical education experts for clarity and representativeness. The third stage comprised three iterative rounds of a modified Delphi with predefined consensus levels. The McNemar test was used to check response stability across the Delphi rounds.


Expert consensus resulted in the development of 4 specific EPAs focused on physician-patient communication, with their competencies and respective assessment strategies, all aiming for level 5 (unsupervised practice): providing information to the patient or their family about diagnosis or prognosis; breaking bad news to the patient or their family; counseling a patient regarding their disease or illness; and resolving conflicts with patients or their families.


The EPAs for physician-patient communication are a step toward an integrative, all-inclusive competency-based assessment framework for patient-centered care. They are meant to improve the quality of physician-patient interaction by treating communication as a decision of entrustment. The EPAs can be linked to competency frameworks around the world and provide a useful assessment framework for effective training in patient communication. They can be integrated into any postgraduate curriculum and can also serve as a self-assessment tool for postgraduate training programs across the globe seeking to improve their patient communication curricula.



Almost a quarter into the twenty-first century, advances in technology have changed not only the way medicine is taught [1,2,3] but also the way it is delivered to patients [4]. It would not be incorrect to say that healthcare education and delivery have been revolutionized in the past few decades [5, 6]. However, one core aspect of healthcare delivery remains traditional and occurs millions of times every day in every physician-patient encounter: one conversation at a time. These conversations are in fact the most frequently executed medical procedures [7,8,9], and their results may contribute to a large fraction of healthcare utilization [10]. Training students and residents to partake in these conversations and communicate effectively with patients is now a necessity required by various accrediting bodies [11, 12]. Expertise in interpersonal and communication skills is expected at all levels of medical education. A review of the literature provides evidence of numerous communication curricula [13,14,15], which use various pedagogies and assessment modalities to develop and foster physician-patient communication. Nonetheless, all these programs are individual or institutional attempts to assess students and trainees on a skill that is universal, and the medical profession has yet to agree on standard procedures or validated tools for the teaching and assessment of physician-patient communication skills in undergraduate or postgraduate medical training programs [16, 17].

Around the end of the last century, medical education witnessed a swift shift from the outcomes-based approach for medical curricula toward the development of competencies, giving rise to the Competency Based Medical Education (CBME) movement [18]. The basic philosophy underlying CBME was the formulation of a set of competencies, or predefined abilities, as the outcomes of curricula [19]. CBME shifted emphasis away from time-based curricula in favor of needs-based, learner-centered graduate outcomes. Various competency-based frameworks for undergraduate and postgraduate medical students were introduced worldwide, and over the span of the last two decades, literature both propagating and criticizing CBME has been published [20]. Of the various criticisms of CBME, one of the most widely discussed was the inability of programs worldwide to transform competencies into daily tasks, resulting in confusion around their implementation and assessment. Varied implementations of CBME-based programs around the globe failed to link the training of medical professionals to their practice [21]. A student can acquire a set of competencies but may be incapable of incorporating them into the explicit tasks essential for adept performance.

To counter this claim, the concept of entrustable professional activities (EPAs) was proposed by Olle ten Cate in 2005 [22] with the aim of operationalizing CBME in postgraduate programs. The EPA concept proposes combining various domains of competence to create an act of entrustment, thus bridging the gap between theoretical ideas of competence and the realities of assessment in clinical practice [21]. Although initially proposed only for postgraduate training, the EPA concept is now applied more broadly across health professions education. Currently, EPAs are an integral part of many international medical curricula and have been developed for various subjects and themes covering numerous postgraduate specialties and undergraduate curricula [23]. EPAs unify various competencies to form descriptors of the work that must be performed by the physician [24].

Professionalism and communication are necessary skills expected of all physicians and are included as competencies related to attitudes in almost all EPAs [25]. Yet there are some specific tasks performed during routine physician-patient interaction that require, above all, effective communication with the patient. The physician-patient communication literature shows that these skills should ideally be assessed by multiple assessors at multiple times throughout the continuum of clinical training [26]. This method of assessment aligns with the EPA concept of entrustment, which uses multiple assessors at multiple times along with personalized learning [23]. The case for utilizing the concept of entrustment for verbal procedures was also made by Henry and colleagues [10]. Using an EPA framework for physician-patient communication tasks would thus enable clinicians to address these tasks as recognizable professional activities performed in daily routine work, which can easily be assessed by faculty [27]. No attempt has yet been made to define these specific physician-patient communication tasks as acts of entrustment. Our aim in this study was to develop EPAs for physician-patient communication by expert consensus, along with their competencies, assessment strategies and supervision levels, focusing on the entrustment of patient communication. Two research questions were developed according to our aims: 1. What are the desired EPAs for effective physician-patient communication? 2. What are the respective competencies (knowledge, skills and attitudes), assessment strategies and supervision levels needed for designing physician-patient communication EPAs?


This study was conducted in three stages (Fig. 1). The first stage was an extensive review of the literature to identify the most specific physician-patient communication-related tasks that could be described as entrustable professional activities. The underlying competencies and assessment strategies for each of these EPAs were also identified. The second and third stages were both conducted as online surveys. The second stage was content validation by a small sample of medical education experts (n = 5) for clarity and representativeness. The third stage comprised three iterative rounds of a modified Delphi, with predetermined consensus criteria defined for every round. Items reaching consensus in the first round were resent in the second round to ensure stability of responses. In the last round, the few remaining competencies and assessment strategies were redistributed to experts for final consensus and stability of responses. This process is depicted in Fig. 1. The duration of the study was 9 months, including its conception, data collection, and reporting. The first stage was completed in 4 months, the second stage involving content validation of the proposed EPAs was conducted in 2 months, and the third and final stage involving three rounds of Delphi was concluded in the next 3 months. Ethical approval for the study was obtained from the Ethical Review Committee of Riphah International University (Reference # Riphah/IIMC/IRC/20/134).

Fig. 1 Stages of the study

Stage I: literature review

We conducted an initial review of the literature with the explicit purpose of identifying the communication-related tasks that could be considered EPAs for physician-patient communication (PPC). Various curricula implemented at different levels of undergraduate and postgraduate medical education were studied. National curricular guidelines in Pakistan related to the teaching and assessment of communication skills were also reviewed. These guidelines recommend the integration of behavioral sciences into the undergraduate medical and dental curriculum [28]. Traditionally, communication skills are taught as part of the behavioral science curriculum in Pakistani medical universities [29, 30]; thus, we also reviewed nationally recommended textbooks on behavioral sciences [31, 32]. Internationally, we reviewed published literature on communication skills curricula [15, 33,34,35,36,37] to identify our proposed EPAs. The literature review was extended across four databases and search engines (ERIC, PubMed, PsycINFO and Google Scholar) to elaborate on the available evidence on the curricula, teaching methodologies and assessment strategies for each of these tasks, in an attempt to frame them as entrustable professional activities. A snowballing approach was additionally used to identify pertinent articles. We also reviewed the EPA development literature [23, 25, 38] to ensure that the format we developed for each EPA was according to the recommended guidelines [39].

Literature on the process and conduct of content validity was appraised [40, 41]. Lastly, we also reviewed literature on the modified Delphi process [42,43,44]. The literature review helped develop a questionnaire with an initial 4 EPAs, 68 competencies, assessment strategies and supervision levels. No specific number of competencies per EPA was predefined, and initially multiple competencies with almost similar meanings were retained to allow for subsequent revision or deletion. Published guidelines [45, 46] were consulted to ensure that grammatical and other errors were not present in the constructs of the competencies.

Stage II: content validation

The second stage was a content validity study meant to establish clarity and to improve the phrasing and understanding of the competencies developed for each EPA. Content validity measures how well items correspond to or reflect a specific domain and is measured using quantitative techniques [47]. The competencies identified from the literature were sent to 8 medical education experts who either had extensive experience of developing curricula or were clinicians with an additional formal degree in medical education. All experts had a Masters or Doctoral degree in medical education, were serving in undergraduate, postgraduate or CPD educational institutions, and had a minimum of 5 years' experience in curriculum development. Of the 8 experts initially contacted by email, 5 agreed to participate: four from Pakistan and 1 international expert (Table 1). The email included an introductory statement about the research study and its purpose. These experts reviewed each competency in the 4 EPAs identified from the literature for content clarity and representativeness (for internal content validation). Comment boxes were also provided for qualitative feedback. For content validation, the items were graded for clarity, to calculate the content clarity average (CCA), and for representativeness, to calculate the item-level content validity index (I-CVI). An item's clarity is evaluated on the basis of how clearly it is worded; the entire measure is then considered to decide on the addition or deletion of any item [41].
The experts were asked to rate the competencies on both aspects using 4-point Likert scales (clarity: 1 = item is not clear, 2 = item needs major revisions to be clear, 3 = item needs minor revisions to be clear, 4 = item is clear; representativeness: 1 = item is not representative, 2 = item needs major revisions to be representative, 3 = item needs minor revisions to be representative, 4 = item is representative). Space was also provided for open-ended comments for qualitative feedback on each construct. The content validity survey was open for 4 weeks. This additional step helped refine the framework of the EPAs. Content clarity averages (CCA) and item-level content validity indices (I-CVI) were calculated separately for each competency included in the initial questionnaire [41]. Changes in the phrasing of the competencies were also made according to the experts' suggestions.

Table 1 Characteristics of Study participants

Stage III: the Delphi rounds

A questionnaire was then prepared incorporating all components of the EPAs, i.e., competencies, assessment strategies and supervision levels, and designed in Google Forms for distributing, collecting and analyzing the data. Literature on best practices in Delphi studies [42] suggests that consensus levels should be predefined, and descriptive statistics (mean/median and standard deviation/interquartile range) can be used to calculate consensus amongst an expert panel [48]. In this study, we used percentage agreement, median and interquartile range, with the following predefined criteria for consensus:

  • For Rounds 1, 2 and 3: ≥ 80% participant agreement in the top 2 measures (“extremely” or “very important”), median ≥ 4 and interquartile range ≤ 1 on a 5-point Likert scale

  • The study would conclude on the achievement of consensus. Stability was calculated using the McNemar change test by comparing the responses of successive rounds [48, 49]: a p-value < 0.05 would suggest instability, with responses changing between rounds, while a p-value > 0.05 would indicate consistent results with response stability.
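As an illustration only (not part of the study's SPSS workflow), the predefined consensus criteria for a single item can be sketched in a few lines of Python; the panel ratings shown are hypothetical:

```python
# Sketch of the predefined consensus check for one Likert item, where
# 4 = "very important" and 5 = "extremely important" (the top-2 measures).
from statistics import median, quantiles

def meets_consensus(ratings):
    """True if >= 80% top-2 agreement, median >= 4 and IQR <= 1."""
    pct_top2 = sum(r >= 4 for r in ratings) / len(ratings) * 100
    q1, _, q3 = quantiles(ratings, n=4, method="inclusive")
    return pct_top2 >= 80 and median(ratings) >= 4 and (q3 - q1) <= 1

# Hypothetical panel of 10 experts rating one competency
print(meets_consensus([4, 4, 4, 4, 5, 5, 5, 5, 5, 5]))  # True
print(meets_consensus([2, 3, 3, 4, 4, 4, 5, 5, 5, 5]))  # False (70% top-2)
```

Note that `method="inclusive"` is one of several conventions for computing quartiles; the study does not specify which one SPSS was configured to use.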

Study participants

Various experts around the world who fit the inclusion criteria were contacted via two social media platforms, LinkedIn and WhatsApp. Experts with demonstrated experience of developing and implementing communication curricula in the health professions were also sent personal messages on LinkedIn requesting their participation in the study. All participants were at least Assistant Professors or equivalent with a minimum of 5 years' teaching experience in their respective specialties. All either had a formal medical education degree or had at least completed a certification course in medical education, and all were actively involved in the teaching and assessment of undergraduate medical students or postgraduate medical trainees. Twenty-seven experts agreed to participate in the Delphi study. Participant specialties included general surgery, gynecology, pediatrics, medical education, dermatology, family medicine, psychiatry, anesthesiology, radiology, pathology, forensic medicine, rehabilitation medicine, and dental specialties including prosthodontics and maxillofacial surgery. Geographically, 74.1% of participants were from Asia (Pakistan, Malaysia, Kingdom of Saudi Arabia), 14.8% from North America (U.S.A. and Canada), 7.4% from Africa (Egypt) and 3.7% from Europe (Switzerland) (Table 1).

Delphi round I

The participants were asked to rate and comment on the importance of the identified competencies for physician-patient communication. Five-point Likert items were used for each EPA and their competencies, and participants were asked to rate each statement for level of importance (“not at all important,” “slightly important,” “moderately important,” “very important,” “extremely important”). They were also required to choose relevant assessment strategies for each EPA and to identify the desired supervision level by the end of postgraduate training. Participants were asked to provide any additional EPAs, competencies, assessment strategies or suggestions/feedback via comment boxes accompanying every question. Participants were informed of the approximate time to complete the survey (20 to 30 min), and anonymity was ensured to minimize bias.

Statistical analysis of quantitative data was done using IBM SPSS Statistics 22 (IBM Corporation, USA). Percentage responses, medians and interquartile ranges were calculated. Percentage rankings of assessment strategies and the preferred patient satisfaction tool were also calculated, and those with ≤ 20% consensus were excluded. Items reaching consensus were resent in Round II to check for response stability. For supervision level, ≥ 80% agreement on a specific level was set as the criterion for inclusion. Qualitative data (input from the provided comment boxes) were also analyzed. No new EPAs were identified; however, suggestions regarding the inclusion and further elaboration of a few competencies were addressed in the second round by rephrasing for more clarity.

Delphi round II

For Round II, individual emails were sent to each of the 27 experts who had participated in Round I to ensure their anonymity. The emails contained a response sheet for each Delphi expert. Each sheet contained the entire Round II questionnaire, along with the concerned expert's answers from Round I. The anonymized group response for Round I was also shown alongside the individual response. The next column contained space for the expert's Round II response and, in case of a change of opinion, the reason for the change.

The questionnaire comprised 64 competencies, grouped under 4 EPAs along with the supervision levels, assessment strategies and proposed expiration time for each of the EPAs. Round 2-survey remained active for 3 weeks. Data was collected as the participants replied to the emails. The aim for this round was to establish the stability of the responses, as consensus had been reached on almost all competencies of the EPAs.

Descriptive statistical analysis of quantitative data was done. Medians and interquartile ranges were calculated along with percentage responses. To check the stability of the expert panel responses, inferential statistics were also performed. As the number of participants was less than 30, the McNemar change test was used to assess the stability of responses [49, 50]. This test uses chi-square statistics to compute a p-value indicating a change of responses from one round to another when the responses are dichotomous. The null hypothesis was that responses had not significantly changed from one round to the other.
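To make the stability criterion concrete, a minimal sketch of the continuity-corrected McNemar change test follows; the study itself used SPSS, and the discordant-pair counts below are hypothetical:

```python
# McNemar change test for paired dichotomous responses (agree / not agree)
# from two successive Delphi rounds, with Yates continuity correction.
import math

def mcnemar_p(b, c):
    """b = agree -> not-agree switches, c = not-agree -> agree switches.
    Returns the chi-square p-value (1 df, continuity-corrected)."""
    if b + c == 0:
        return 1.0  # no discordant pairs: responses perfectly stable
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df, via the complementary
    # error function: P(X > x) = erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(chi2 / 2))

# Hypothetical item: 2 vs 3 switches between rounds -> stable (p > 0.05)
print(mcnemar_p(2, 3))
# Hypothetical item: 10 vs 2 switches -> unstable (p < 0.05)
print(mcnemar_p(10, 2))
```

For very small discordant-pair counts, an exact binomial version of the test is often preferred over the chi-square approximation shown here.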

Delphi round III

The Round III questionnaire contained those competencies, supervision levels, assessment strategies and proposed expiration times for which either consensus had not been achieved in Round II or response stability had not been shown. All participants who had completed both the first and second rounds (n = 22) were again emailed individually. Items for which both the consensus criteria had been met and response stability had been seen were excluded from the Round III questionnaire. The aim was to check response stability and consensus for all remaining items. Round III remained active for 3 weeks. Participants were requested to mark the remaining competencies on the same Likert scale as in the previous two rounds. Previous individual and group responses were provided to them in the same format as Round II. Again, an 80% agreement rate was used to accept or reject items from the final list. Participants were also asked to re-mark the relevant supervision levels for each EPA. Consensus had yet to be reached on the top assessment strategies and expiration times for the EPAs. Percentage responses were calculated, and the predefined agreement rate was used to include items in the final list of EPAs and their competencies. The remaining competencies, supervision levels, assessment strategies and proposed expiration times were all evaluated for stability of panel responses by applying the McNemar change test again, with the same null hypothesis as in Round II.


Stage I: literature review

Our literature search enabled us to identify explicit physician-patient communication tasks that could be defined as acts of entrustment. We tried to ensure that each of these tasks was discrete and did not overlap with the others [38, 51], and was focused on specific interactions between the physician and patient according to the contextual environment. This thorough search of the literature helped identify the following tasks that could be developed into entrustable professional activities:

  1. Providing information to the patient about his diagnosis [52,53,54,55,56]
  2. Breaking bad news to a patient/his family [57,58,59,60,61,62,63,64]
  3. Counseling the patient regarding his disease [65,66,67]
  4. Resolving conflicts with patients or their families [68,69,70,71,72]

We further performed a methodological search of the literature available on the curricula, teaching and assessment strategies for each of these EPAs. Our search enabled us to develop an initial EPA document on physician-patient communication, with 4 EPAs, 68 competencies, and proposed assessment strategies and supervision levels.

Stage II: content validation

Five experts agreed to participate in content validation. Clarity and representativeness were rated by the 5 raters on 4-point Likert scales. Items with an average clarity score (CCA) below 3.5 were considered for revision, and those with a CCA below 3 were discarded. The item-level content validity index (I-CVI) for the competencies in the 4 EPAs was calculated on the basis of their representativeness. Competencies with an I-CVI below 0.70 were eliminated; those with an I-CVI between 0.70 and 0.79 were revised; and those with an I-CVI above 0.79 were retained without changes. Qualitative expert feedback was, however, used for final decision-making regarding the placement and construct of the competencies. Content validation resulted in refinement of the 4 EPAs and their competencies, along with their established supervision and entrustment scales for postgraduate medical education.
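For illustration, the two indices used above can be computed as follows; the ratings shown are hypothetical, not the study's data:

```python
# Content clarity average (CCA) and item-level content validity index
# (I-CVI) for one competency rated by a 5-expert panel on 4-point scales.

def cca(clarity_ratings):
    """Mean clarity rating; < 3.5 flags the item for revision, < 3 discards it."""
    return sum(clarity_ratings) / len(clarity_ratings)

def i_cvi(representativeness_ratings):
    """Proportion of experts rating the item 3 or 4 (i.e. representative)."""
    relevant = sum(r >= 3 for r in representativeness_ratings)
    return relevant / len(representativeness_ratings)

# Hypothetical ratings from a 5-expert panel for one competency
print(cca([4, 4, 3, 4, 4]))    # 3.8 -> above 3.5, retained as clear
print(i_cvi([4, 4, 3, 4, 2]))  # 0.8 -> above 0.79, retained unchanged
```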

Content validation of the EPAs for PPC modified the constructs of 2 EPAs and 30 competencies. The final EPA document comprised 4 EPAs with 60 competencies, which were sent to the experts for consensus building in the first round of the Delphi.

Stage III: the Delphi rounds

Delphi round I

Twenty-seven (77%) of the 35 initially contacted experts participated in Round I. Analysis of the Round I results showed that of the 60 competencies included in the 4 EPAs, only 4 did not meet the consensus criteria; the remaining 56 did. Consensus was also not met for the supervision levels of all 4 EPAs, the assessment strategies, or the proposed expiration time. Qualitative feedback was reviewed by all authors, and changes were made to the construct of EPA-I and some competencies to ensure gender neutrality and diversity. The qualitative feedback resulted in the total number of competencies increasing to 64. It was decided to resend the entire Round I questionnaire to the experts who had consented to participate in Round I, along with a few newly added competencies for online communication. Expert responses would be checked for stability (for all competencies, both those that had met consensus and those that had not), while consensus would be sought for the supervision levels, assessment strategies and expiration time of the EPAs.

Delphi round II

The Round II questionnaire was emailed to the 27 experts who had participated in Round I. It contained the previous 60 competencies, 4 new ones, and several statements (the EPA-I phrasing and some competencies) that had been modified or elaborated in response to qualitative feedback. Round II remained active for 3 weeks; reminder emails were sent at the end of each week to experts who had not submitted their responses, which helped increase the response rate. After 3 weeks, 22 (82%) responses had been collected and Round II was closed. The consensus criteria for Round II were the same as for Round I. Analysis of the Round II results revealed that consensus was again achieved on 56 competencies across the 4 EPAs. The 4 new competencies introduced in Round II and 3 of the previously undecided competencies had also achieved consensus but would be sent again in Round III to check response stability. The supervision levels, the remaining assessment strategies and the validated checklist/scale had still not reached consensus, and the proposed expiration time of the EPAs was not agreed upon. To confirm the stability of responses, the McNemar change test was applied; it showed response stability (p > 0.05) for all 56 initially selected competencies.

Delphi round III

In Round III, the competencies which had either been introduced in Round II or had not achieved 80% consensus were resent to the 22 experts for final review, along with the supervision levels, assessment strategies and proposed expiration times for the EPAs. Previous individual and group responses were also sent to the experts. Percentage agreement was sought for all items in the Round III questionnaire. Twenty experts (91%) participated in Round III. The results revealed that 80% consensus was again achieved on all competencies except one; the mean, median and IQR scores for this competency also did not reach the preset criteria, and it was therefore removed. At the end of Round III, the final 4 EPAs with collectively 62 competencies (Additional file 1) and assessment strategies (Table 3), all aiming at supervision level 5 (Table 2), were identified by expert consensus. Consensus was also reached on the proposed expiration times (Table 4).

Table 2 Proposed supervision levels of EPAs for physician patient communication at the end of Round III
Table 3 Percentage Ranking of Proposed Assessment strategies at the end of Round III
Table 4 Proposed Expiration Time of EPAs at the end of Round III

This study had high content validity, ensured not only through the content validation process but also by the involvement of a reasonably large number of experts with knowledge and experience of physician-patient communication, a high expert response rate, the use of successive rounds, and a stringent criterion of response agreement.


To our knowledge, this is the first study of its kind in which the EPA approach has been used to outline competencies and assessment tools for a physician-patient communication course, thus creating an entrustment framework for resident supervision. The EPAs for physician-patient communication are a step toward an integrative, all-inclusive competency-based assessment framework for patient-centered care. They are meant to improve the quality of physician-patient interaction by treating communication as a decision of entrustment. Each EPA developed by expert consensus is a separate, discrete, observable and measurable task, and thus fits the description of an EPA as defined by ten Cate [39]. Entrustment for these EPAs requires proficiency in numerous communication competencies, which can be assessed by several workplace-based assessment tools. Consensus on suitable assessment tools was also achieved during this study.

Recent literature on EPA development [73] provides detailed guidelines on the kind of tasks that can be selected for developing an EPA. The authors advocate that an EPA should not be too broad and should allow for feasible assessment and focused entrustment. Considering all tasks in physician-patient communication as one whole would therefore not be feasible, as it would be difficult to assess the learner on all the tasks involved in patient communication. For example, a first-year postgraduate trainee could be entrusted with providing information to a patient about their diagnosis but might not yet be able to demonstrate the competencies needed for breaking bad news or resolving conflicts with patients. Various other EPAs identified in the literature may also have overlapping knowledge, skills and attitudes [74]. This is because the particular knowledge and skill base developed by a student in the early years of medical training enables them to develop cognitive schemas [75], which aid clinical decision making across various tasks during clinical practice, not in a single task only.

A recently published article on EPAs [73] proposes the use of a construct validity lens for EPA development, to ensure that EPAs accurately reflect the work of a profession or a specialty. Although that article was published after this study was conducted, it is interesting to note that the guidelines its authors provide for ensuring the validity of newly developed EPAs are similar to the steps followed in this study. The authors state that expert selection is a crucial process in determining the content of the EPAs, and three major themes come into consideration. First is the inclusion of experts from different domains [73]: in this study, the content validation process ensured that the EPA document was also reviewed by educational experts with specific experience in curriculum and assessment before the involvement of clinical experts in the Delphi, which helped obtain validation from medical education experts on the placement of the competencies within each EPA [47]. Second, by using a purposeful sampling technique and trying to ensure that experts from the major medical and dental specialties were included, representativeness was established to some extent and bias was minimized. Third, by providing clear instructions to the experts about the development methods and their roles during content validation and the Delphi, confusion and misinterpretation of the content were avoided.

Another recent article, by Tekian et al. [76], discusses strategic points for implementing EPA-based curricula, and notes that an integrated task or activity cannot be entrusted until there is an understanding of the essential components that make up that activity. With this concept in mind, each EPA for physician-patient communication has been developed in such a way that the entire task or activity is broken down into a series of competency domains describing the desired abilities of graduates. This creates a mental model that will facilitate both learners and assessors during implementation of the EPA, ensuring that the benefits of CBME are fully exploited. Another consideration was our prior intention that these EPAs be adaptable to various postgraduate specialties: wherever the EPAs for PPC are implemented, curriculum developers will have an evidence-based format at hand to aid them.

The EPAs for physician-patient communication provide observable, measurable activities for patient-centered communication that can be linked to competency frameworks around the world. They can provide a useful assessment framework for effective training in patient communication. These EPAs identify the most common communication-related entrustment decisions that graduate trainees or residents working in every clinical specialty are required to fulfill. They can be integrated into any postgraduate curriculum and contain the desired competencies related to knowledge, skills and attitudes, as well as suggestions for assessment strategies. The literature on physician-patient communication identifies the educational strategies and instructional tools needed for each EPA [36, 77, 78]. Similarly, expert consensus shows that entrustment would require a combination of assessment tools to assess these EPAs. Faculty feedback was also agreed upon for assessment and must ideally be conducted using standardized, communication-specific checklists.

These EPAs may also serve as a self-assessment tool for postgraduate training programs across the globe seeking to improve their patient communication curricula. By defining specific physician-patient communication tasks in the language of EPAs, we can guide both students and teachers in their respective roles and develop a dedicated physician-patient communication course that can be used not only in postgraduate education but also for continuing professional development in patient communication.

Limitations of the study

An important limitation of this study is that, for high-stakes EPAs leading to unsupervised practice, the literature suggests that expert panels for EPA development should ideally also include patients, learners, and allied health professionals [73]. We were, however, unable to recruit patients who could understand the complex nature of these EPAs and make a useful contribution to our study. Developing a detailed curriculum linking the competencies of these EPAs to milestones was also beyond the scope of this study. The limitations of each EPA, along with the desired performance standards or milestones, still need to be determined.

Our results also showed that experts agreed that a combination of assessment methods should be used for these EPAs. It must be reiterated that communication is a multifactorial competency, so assessment of these EPAs should draw on multiple assessment strategies, ideally in combination with other clinical EPAs rather than in a standalone manner. Several assessment tools that we believe could prove beneficial during application were also ruled out during consensus. Further piloting and evaluation should therefore help improve the implementation of these EPAs before formal adoption into a program of study.

Another limitation of this study is that the validity evidence focused only on content validity. Further research piloting and evaluating these EPAs is suggested to obtain additional validity evidence. Some studies have used internally validated tools to measure the quality of developed EPAs [79], and others have used rubrics [80] to check alignment with purpose. Due to time constraints, we were unable to use either of these formats to evaluate our EPAs.
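Although a full psychometric evaluation was out of scope, the content validation step can be summarized numerically. The sketch below computes an item-level content validity index (I-CVI), the proportion of experts rating an item 3 or 4 on a 4-point relevance scale; the function name and the panel ratings shown are purely illustrative, not data from this study:

```python
def icvi(ratings):
    """Item-level Content Validity Index: the proportion of experts
    who rated the item as relevant (3 or 4 on a 4-point scale)."""
    return sum(r in (3, 4) for r in ratings) / len(ratings)

# Illustrative panel of 8 experts rating one competency for relevance
print(icvi([4, 4, 3, 4, 2, 3, 4, 3]))  # 0.875
```

By common convention, an I-CVI of roughly 0.78 or higher with six or more experts is taken as evidence of good content validity for an item.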


This study has resulted in the development of 4 EPAs comprising a total of 62 competencies, all aiming at supervision Level 5 (unsupervised practice) by the end of postgraduate residency training, along with various assessment strategies. Framing physician-patient communication in terms of EPAs keeps the major goal, optimal patient care, in clear sight.

The need for improvements in the teaching and assessment of physician-patient communication has been highlighted in various national and international studies. This is the first study of its kind to have the content of each EPA validated by expert medical educationists and then to achieve the desired consensus from national and international experts.

The Delphi experts in this study were predominantly of Asian origin; however, the content of these EPAs is such that they are generalizable to diverse cultural backgrounds. Replicating this study in another cultural context would nevertheless be interesting and could provide insight into how the entrustment of communication-related tasks varies across societies. The authors therefore recommend that health communication experts from different cultural backgrounds be invited to review these EPAs, enabling further refinement before implementation.

These EPAs have been developed so that they can be applied alongside any postgraduate program in which trainees are expected to communicate with patients daily. By defining competencies and learning objectives in the language of EPAs, this study provides a roadmap and a source of explicit guidance for both students and faculty in identifying the goals and expectations for physician-patient communication.

It is hoped that this study will provide a basis for the universal acceptance of communication as an act of entrustment and help provide guidelines for the teaching and assessment of physician-patient communication.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.



EPA: Entrustable Professional Activities

CBME: Competency-Based Medical Education

PPC: Physician-Patient Communication

CPD: Continuous Professional Development

CCA: Content Clarity Average

I-CVI: Individual Content Validity Index


  1. Vanderbilt A, Perkins SQ, Muscaro M, Papadimos TJ, Baugh R. Creating physicians of the 21st century: assessment of the clinical years. Adv Med Educ Pract [Internet]. 2017;8:395–8 [cited 2020 Nov 16] Available from: /pmc/articles/PMC5491574/?report=abstract.

  2. Wartman SA. The Empirical Challenge of 21st-Century Medical Education. Acad Med [Internet]. 2019;94(10):1412–5 [cited 2020 Nov 16] Available from:

  3. ten Cate O, Carraccio C. Envisioning a true continuum of competency-based medical education, training, and practice. Acad Med. 2019;94(9):1283–8.

  4. Jayaraman PP, ARM F, Morshed A, Haghighi PD, Kang Y. Healthcare 4.0: A review of frontiers in digital health. WIREs Data Min Knowl Discov [Internet]. 2020;10(2) [cited 2020 Nov 16] Available from:

  5. Yelton SJ, Schoener B. The evolution of healthcare technology management in leading healthcare delivery organizations. Biomed Instrum Technol [Internet]. 2020;54(2):119–24 [cited 2020 Nov 16] Available from:

  6. Tortorella GL, Fogliatto FS, Mac Cawley Vergara A, Vassolo R, Sawhney R. Healthcare 4.0: trends, challenges and research directions. Prod Plan Control [Internet]. 2020;31(15):1245–60 [cited 2020 Nov 16] Available from:

  7. Lipkin M. Sisyphus or Pegasus? The Physician Interviewer in the Era of Corporatization of Care. Ann Intern Med [Internet]. 1996;124(5):511 [cited 2019 Oct 7] Available from:

  8. Coulehan JL, Block MR. The medical interview: mastering skills for clinical practice. FA Davis Company; 1992.

  9. Coulehan JL, Block MR. The medical interview. Mastering Ski Clin Pract. 4th ed. Philadelphia: FA Davis; 2001.

  10. Henry SG, Holmboe ES, Frankel RM. Evidence-based competencies for improving communication skills in graduate medical education: a review with suggestions for implementation. Med Teach. 2013;35(5):395–403.

  11. Holmboe ES, Edgar L, Hamstra S. The milestones guidebook. Acgme [Internet]. 2016;1–41. [cited 2019 Oct 13] Available from:

  12. Frank JR, Danoff D. The CanMEDS initiative: implementing an outcomes-based framework of physician competencies. Med Teach [Internet]. 2007;29(7):642–7 [cited 2019 Aug 1] Available from:

  13. Sanson-Fisher R, Hobden B, Waller A, Dodd N, Boyd L. Methodological quality of teaching communication skills to undergraduate medical students: a mapping review. BMC Med Educ. 2018;18(1).

  14. Bylund CL, Alyafei K, Anand A, Al Marri A, Omer W, Sinha T, et al. Implementing and tailoring a western-developed communication skills training program for graduate medical trainees in Qatar. Int J Med Educ [Internet]. 2017;8:16–8 [cited 2019 Aug 5] Available from:

  15. Bylund CL, Banerjee SC, Bialer PA, Manna R, Levin TT, Parker PA, et al. A rigorous evaluation of an institutionally-based communication skills program for post-graduate oncology trainees. Patient Educ Couns. 2018;101(11):1924–33.

  16. Gillis AE, Morris MC, Ridgway PF. Communication skills assessment in the final postgraduate years to established practice: a systematic review. Postgrad Med J. 2015;91(1071):13–21.

  17. Zill JM, Christalle E, Müller E, Härter M, Dirmaier J, Scholl I. Measurement of physician-patient communication-a systematic review. PLoS One. 2014;9(12):1–20.

  18. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: from Flexner to competencies. Acad Med. 2002;77(5):361–7.

  19. Frank JR, Snell LS, Ten Cate O, Holmboe ES, Carraccio C, Swing SR, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638–45.

  20. Holmboe ES, Sherbino J, Englander R, Snell L, Frank JR, Collaborators I. A call to action: the controversy of and rationale for competency-based medical education. Med Teach. 2017;39(6):574–81.

  21. Touchie C, Ten Cate O. The promise, perils, problems and progress of competency-based medical education. Med Educ. 2016;50(1):93–100.

  22. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39(12):1176–7.

  23. El-Haddad C, Damodaran A, McNeil HP, Hu W. The ABCs of entrustable professional activities: an overview of ‘entrustable professional activities’ in medical education. Intern Med J [Internet]. 2016;46(9):1006–10 [cited 2019 Jul 26] Available from:

  24. Shorey S. Entrustable professional activities in health care education: a scoping review. Med Educ. 2019;53:766–77.

  25. ten Cate O. A primer on entrustable professional activities. Korean J Med Educ. 2018;30(1):1–10.

  26. Brouwers M, Rasenberg E, van Weel C, Laan R, van Weel-Baumgarten E. Assessing patient-centred communication in teaching: a systematic review of instruments. Med Educ. 2017;51(11):1103–17.

  27. Caverzagie KJ, Cooney TG, Hemmer PA, Berkowitz L. The Development of Entrustable Professional Activities for Internal Medicine Residency Training. Acad Med [Internet]. 2015;90(4):479–84 [cited 2020 Sep 18] Available from:

  28. PMDC. Curriculum of MBBS revised 2011. 2011 [cited 2019 Feb 13];(Revised):1–134. Available from:

  29. University of Health Sciences, Lahore, Pakistan. Syllabus of First Professional [Internet]. 1st ed. Lahore: University of Health Sciences; 2003 [cited 2020 Dec 18]. Available from:

  30. Naqvi S, Shaikh AA, Khan JM, Mahmood A, Shah TA, Bhatti F. Curriculum of MBBS (Revised 2011) [Internet]. Islamabad; 2011 [cited 2020 Dec 18]. Available from:

  31. Rana MH, Rana RZ, Mustafa M. Handbook of behavioural sciences [Internet]. 3rd ed. Lahore: Allied Book Company; 2016. p. 229 [cited 2019 Oct 30]. Available from:

  32. Humayun A, Herbert M. Integrating Behavioural sciences in healthcare. Islamabad: The Army Press; 2011. p. 317.

  33. Tate P, Frame F. The doctor’s communication handbook. 8th ed. London: Taylor and Francis Ltd; 2020. p. 13–21.

  34. Wool MS. Teaching and learning communication skills in Medicine (2e). Health Expect. 2005;8(4):363–5.

  35. Berkhof M, van Rijssen HJ, Schellart AJM, Anema JR, van der Beek AJ. Effective training strategies for teaching communication skills to physicians: an overview of systematic reviews. Patient Educ Couns [Internet]. 2011;84(2):152–62 Available from:

  36. Junod P, Sommer J, Louis-Simonet M, Nendaz M. Teaching communication skills: beyond wishful thinking. Swiss Med Wkly [Internet]. 2015;145(0708) [cited 2019 Aug 5] Available from:

  37. Biglu M-H, Nateq F, Ghojazadeh M, Asgharzadeh A. Communication Skills of Physicians and Patients’ Satisfaction. Mater Sociomed [Internet]. 2017;29(3):192–5 [cited 2019 Dec 2] Available from:

  38. Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, Van Der Schaaf M. Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE Guide No. 99. Med Teach [Internet]. 2015;37(11):983–1002 Available from:

  39. ten Cate O, Taylor DR. The recommended description of an entrustable professional activity: AMEE Guide No. 140. Med Teach [Internet]. 2020 [cited 2020 Dec 19]; Available from:

  40. Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar A-R. Design and implementation content validity study: development of an instrument for measuring patient-centered communication. J Caring Sci. 2015;4(2):165–78.

  41. Rubio DM, Berg-Weger M, Tebb SS, Lee ES, Rauch S. Objectifying content validity: Conducting a content validity study in social work research. Soc Work Res [Internet]. 2003;27(2):94–104 [cited 2019 Dec 12] Available from:

  42. Chan TM, Yarris LM, Humphrey-Murto S. Delving into Delphis. Can J Emerg Med. 2019;21(2):167–9.

  43. Jünger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review. Palliat Med [Internet]. 2017;31(8):684–706 [cited 2019 Nov 16] Available from:

  44. Thangaratinam S, Redman CWE. The delphi technique. Obstet Gynaecol. 2005;7(2):120–5.

  45. Storey CP, Gehlbach H. AM Last Page. Acad Med [Internet]. 2012;87(9):1305 [cited 2019 Dec 10] Available from:

  46. Artino AR, Gehlbach H, Durning SJ. AM Last Page: Avoiding Five Common Pitfalls of Survey Design. Acad Med [Internet]. 2011;86(10):1327. [cited 2019 Dec 10] Available from:

  47. Colquitt JA, Sabey TB, Rodell JB, Hill ET. Content validation guidelines: evaluation criteria for definitional correspondence and definitional distinctiveness. J Appl Psychol. 2019;104(10):1243–65.

  48. von der Gracht HA. Consensus measurement in Delphi studies. Review and implications for future quality assurance. Technol Forecast Soc Change. 2012 Oct 1;79(8):1525–36.

  49. Yang YN. Testing the stability of experts’ opinions between successive rounds of Delphi studies; 2003.

  50. Adedokun OA, Burgess WD. Analysis of paired dichotomous data: a gentle introduction to the McNemar test in SPSS. J Multidiscip Eval. 2012;8(17):125–31.

  51. Committee C. Entrustable Professional Activity (EPA) Fast Facts; 2016. p. 4–5.

  52. Husson O, Mols F, van de Poll-Franse LV. The relation between information provision and health-related quality of life, anxiety and depression among cancer survivors: a systematic review. Ann Oncol. 2011;22(4):761–72.

  53. Longacre ML, Galloway TJ, Parvanta CF, Fang CY. Medical communication-related informational need and resource preferences among family caregivers for head and neck Cancer patients. J Cancer Educ. 2015;30(4):786–91.

  54. Johnston MJ, Arora S, Pucher PH, Reissis Y, Hull L, Huddy JR, et al. Improving escalation of care: development and validation of the quality of information transfer tool. Ann Surg. 2016;263(3):477–86.

  55. Adamson M, Choi K, Notaro S, Cotoc C. The doctor–patient relationship and information-seeking behavior: four orientations to cancer communication. J Palliat Care. 2018;33(2):79–87.

  56. Roter DL, Stewart M, Putnam SM, Lipkin M, Stiles W, Inui TS. Communication patterns of primary care physicians. JAMA. 1997;277(4):350–6.

  57. Paul M. Breaking bad news. Acad Med. 2018;93(9):1390.

  58. Schildmann J, Kupfer S, Burchardi N, Vollmann J. Teaching and evaluating breaking bad news: a pre-post evaluation study of a teaching intervention for medical students and a comparative analysis of different measurement instruments and raters. Patient Educ Couns [Internet]. 2012;86(2):210–9 Available from:

  59. Langewitz W. Breaking bad news—quo vadis? Patient Educ Couns. 2017;100(4):607–9.

  60. Abdul Hafidz MI, Zainudin LD. Breaking bad news: an essential skill for doctors. Med J Malaysia. 2016;71(1):26–7.

  61. Berkey FJ, Wiedemer JP, Vithalani ND. Delivering Bad or Life-Altering News; 2018. p. 99–104.

  62. Murdoch K, McDermott E, Hall H, Greene K. Breaking bad news workshop: learning to listen. Med Educ. 2019;5(4):153279–1141.

  63. Wolfe A, Denniston S, Baker J, Catrine K, Regan-Hoover M. Bad news deserves better communication: a customizable curriculum for teaching learners to share life-altering information in pediatrics. MedEdPORTAL Publ. 2016;12:1–10.

  64. Buckman R. Breaking bad news: Why is it still so difficult? Br Med J [Internet]. 1984;288(6430):1597–9 [cited 2020 Sep 17] Available from:

  65. Wouda JC, van de Wiel HBM. Education in patient–physician communication: how to improve effectiveness? Patient Educ Couns. 2013;90(1):46–53.

  66. Jones RN. Practical counselling and helping skills. 1st ed. London: SAGE Publications Ltd; 2005.

  67. Hoad-Reddick G. How relevant is counselling in relation to dentistry? Br Dent J. 2004;197(1):9–14+50.

  68. Furlong GT. The conflict resolution toolbox: models & maps for analyzing, diagnosing, and resolving conflict [Internet]. 1st ed. Ontario: John Wiley & Sons Canada, Ltd; 2005 [cited 2019 Nov 28]. Available from:

  69. Wolfe AD, Hoang KB, Denniston SF. Teaching Conflict Resolution in Medicine: Lessons From Business, Diplomacy, and Theatre. MedEdPORTAL J Teach Learn Resour [Internet]. 2018;14:10672 [cited 2019 Nov 29] Available from:

  70. Nelson CM, Nazareth BA. Nonbeneficial treatment and conflict resolution: building consensus. Perm J [Internet]. 2013;17(3):23–27. [cited 2019 Dec 1] Available from:

  71. Kaufman J. Conflict Management Education in Medicine: Considerations for Curriculum Designers. Online J Work Educ Dev. 2011;V(1):1–17.

  72. Blackall GF, Simms S, Green MJ. Breaking the cycle: how to turn conflict into collaboration when you and your patients disagree [Internet]. 1st ed. Philadelphia: American College of Physicians; 2009. p. 35–9 [cited 2019 Nov 28]. Available from:

  73. Taylor DR, Park YS, Smith C, ten Cate O, Tekian A. Constructing approaches to entrustable professional activity development that deliver valid descriptions of professional practice. Teach Learn Med. 2021;33(1):89–97.

  74. Lomis KD, Ryan MS, Amiel JM, Cocks PM, Uthman MO, Esposito KF. Core Entrustable Professional Activities for Entering Residency Pilot Group Update: Considerations for Medical Science Educators. Med Sci Educ [Internet]. 2016;26(4):797–800 [cited 2019 Dec 1] Available from:

  75. Seel NM. Schema Development. In: Encyclopedia of the Sciences of Learning [Internet]: Springer US; 2012. p. 2936–9. [cited 2020 Sep 17] Available from:

  76. Tekian A, ten Cate O, Holmboe E, Roberts T, Norcini J. Entrustment decisions: Implications for curriculum development and assessment. Med Teach [Internet]. 2020;42(6):698–704 [cited 2020 Sep 17] Available from:

  77. Kalet A, Pugnaire MP, Cole-Kelly K, Janicik R, Ferrara E, Schwartz MD, et al. Teaching communication in clinical clerkships: models from the Macy initiative in health communications. Acad Med. 2004;79(6):511–20.

  78. Vargovich A, Sperry J, Spero R, Xiang J, Williams D. Use of checklists teaches communication skills utilized by specialties. MedEdPublish. 2016;5(2):1–17.

  79. Post JA, Wittich CM, Thomas KG, Dupras DM, Halvorsen AJ, Mandrekar JN, et al. Rating the Quality of Entrustable Professional Activities: Content Validation and Associations with the Clinical Context. J Gen Intern Med [Internet]. 2016;31(5):518–23 [cited 2020 Sep 22] Available from:

  80. Taylor DR, Park YS, Egan R, Chan MK, Karpinski J, Touchie C, et al. EQual, a Novel Rubric to Evaluate Entrustable Professional Activities for Quality and Structure. Acad Med [Internet]. 2017;92(11):S110–7 [cited 2020 Sep 22] Available from:


The authors would like to acknowledge all the experts who took the time to participate in this study. They would also like to thank Professor Dr. Musarat Ramzan, Wah Medical College, Pakistan, and Dr. Elizabeth Ann Rider, Harvard Medical School, USA, for their valuable contributions and feedback during the conduct of this study.


No funding was received to assist in the conduct of the study or the preparation of this manuscript.

Author information

Authors and Affiliations



AY and RA conceptualized the study and its methodology. AY conducted the formal analysis and investigation and prepared the original draft. RA reviewed and edited the article and supervised the study. RY co-supervised the study and reviewed the article. All authors read and approved the final manuscript.

Authors’ information

1. AY is an Assistant Professor in the Department of Medical & Dental Education at Shifa College of Dentistry, Shifa Tameer-e-Millat University, Islamabad, Pakistan. Her interests include curriculum development, assessment, faculty development, healthcare communication, and mentoring. ORCiD ID:

2. RA is an Assistant Dean of Medical Education, Chair of the Assessment Committee, and Professor of Surgery at Riphah International University, Pakistan. His interests include curriculum development and innovation, assessment, and online instructional design. ORCiD ID:

3. RY is Professor of Medical Education and Program Director MHPE, Associate Director (RARE) Medical education and capacity building (ORIC), at Riphah International University, Rawalpindi, Pakistan.

Corresponding author

Correspondence to Ayesha Younas.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for the study was obtained from the Ethical Review Committee of Riphah International University (Reference # Riphah/IIMC/IRC/20/134).

The following consent statement was included in the questionnaire and marked by all participants.

• You have been asked to participate in this research because of your expertise in physician patient communication.

• Participation in this study is voluntary.

• The information you share will be kept fully confidential. Your name and identifying information will not be associated with any part of the written report of the research.

• Participating in this study may not benefit you directly, but your valued opinion will greatly help us in developing EPAs on physician-patient communication for all specialties in health care, as well as in deciding on various strategies for student assessment.

• There are no known or anticipated risks associated with this study.

• The research topic has been reviewed and approved by the Ethics Review Committee Islamic International Medical College.

• If you do not understand something or if you want more information, please don’t hesitate to ask.

• Kindly mark your informed consent before proceeding.

Consent for publication

Not applicable.

Competing interests

The authors have no conflict of interest to declare and no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Percentage agreement and p-values for the final competencies included in each EPA at the end of Round III. (1) The dataset shows the mean percentage agreement of experts on the competencies included in each EPA at the end of Round III of the Delphi process. (2) p-values for each competency were calculated by applying the McNemar test to check response stability across two successive rounds.
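The McNemar stability check described above can be illustrated with a short, self-contained sketch. It runs an exact two-sided McNemar test on the discordant agree/disagree counts between two Delphi rounds; the function name and the counts below are hypothetical illustrations, not the study's own analysis:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value for paired dichotomous data.

    b: experts who agreed in one round but disagreed in the next
    c: experts who changed in the opposite direction
    A large p-value means responses were stable between rounds.
    """
    n = b + c
    if n == 0:
        return 1.0  # nobody changed their answer: perfectly stable
    # Under H0, each change is equally likely in either direction (p = 0.5),
    # so the smaller discordant count follows a Binomial(n, 0.5).
    p = 2 * sum(comb(n, i) * 0.5 ** n for i in range(min(b, c) + 1))
    return min(p, 1.0)

# Hypothetical competency: between two rounds, 2 experts switched from
# agree to disagree and 1 switched the other way.
print(round(mcnemar_exact(2, 1), 3))  # 1.0 -> no evidence of instability
```

Only the discordant counts matter for the test; experts who gave the same answer in both rounds do not enter the computation.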

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Younas, A., Khan, R.A. & Yasmin, R. Entrustment in physician-patient communication: a modified Delphi study using the EPA approach. BMC Med Educ 21, 497 (2021).


  • Assessment of communication
  • Competency based education
  • Communication curriculum
  • Entrustable professional activities
  • Postgraduate studies
  • Modified Delphi study
  • Physician-patient communication