
Educational interventions to train healthcare professionals in end-of-life communication: a systematic review and meta-analysis



Practicing healthcare professionals and graduates exiting training programs are often ill-equipped to facilitate important discussions about end-of-life care with patients and their families. We conducted a systematic review to evaluate the effectiveness of educational interventions aimed at providing healthcare professionals with training in end-of-life communication skills, compared to usual curriculum.


We searched MEDLINE, Embase, CINAHL, ERIC and the Cochrane Central Register of Controlled Trials from inception to July 2014 for randomized controlled trials (RCTs) and prospective observational studies of educational interventions to train healthcare professionals in end-of-life communication skills. To be eligible, interventions had to provide communication skills training related to end-of-life decision making; other interventions (e.g. breaking bad news, providing palliation) were excluded. Our primary outcomes were self-efficacy, knowledge and end-of-life communication scores during standardized patient encounters. Sufficiently similar studies were pooled in a meta-analysis. The quality of evidence was assessed using GRADE.


Of 5727 candidate articles, 20 studies (6 RCTs, 14 observational) were included in this review. Compared to usual teaching, educational interventions to train healthcare professionals in end-of-life communication skills were associated with greater self-efficacy (8 studies; standardized mean difference [SMD] 0.57; 95 % confidence interval [CI] 0.40–0.75; p < 0.001; very low quality evidence), more knowledge (4 studies; SMD 0.76; 95 % CI 0.40–1.12; p < 0.001; low quality evidence), and improvements in communication scores (8 studies; SMD 0.69; 95 % CI 0.41–0.96; p < 0.001; very low quality evidence). There was insufficient evidence to determine whether these educational interventions affect patient-level outcomes.


Very low to low quality evidence suggests that end-of-life communication training may improve healthcare professionals’ self-efficacy, knowledge, and end-of-life communication scores compared to usual teaching. Further studies comparing two active educational interventions are recommended, with a continued focus on contextually relevant, high-level outcomes.

Trial registration

PROSPERO CRD42014012913



Background

Advances in medical care and the aging population have highlighted the need for good end-of-life (EoL) communication and decision-making, in order to ensure that invasive medical treatments are not administered to patients who would prefer less aggressive forms of care at the end-of-life [1]. Unfortunately, health care providers (HCPs) often fail to engage patients in EoL discussions and to document patient wishes in the medical chart [1, 2]. This puts many patients at risk of having unwanted aggressive and potentially futile medical care during their last days of life, which is associated with worsened patient and caregiver quality of life and psychological burden [3].

An important strategy for improving the quality of EoL discussions is to improve EoL communication skills amongst HCPs [4]. The educational need for this skill has been well described for trainees and practitioners alike: medical graduates are currently entering practice ill-prepared to discuss important EoL issues with patients and families [5–7]. In a multicenter Canadian survey, internal medicine residents at five universities identified EoL communication skills as a high learning priority [8], as resident physicians are often responsible for facilitating EoL discussions with hospitalized patients in academic centers [7, 9, 10]. This need persists even in practicing HCPs, such as physicians and nursing staff, who continue to have discomfort in facilitating EoL discussions [11–15]. There are numerous EoL communication skills training programs described in the literature; however, the cumulative evidence on the impact of such educational interventions remains unclear. Therefore, we conducted a systematic review to evaluate the effectiveness of educational interventions to train HCPs in EoL communication skills compared to usual teaching (i.e. standard curriculum). Effectiveness was measured based on the Kirkpatrick training evaluation model (Reaction, Learning, Behaviour and Results) [16], represented by outcomes of self-efficacy, knowledge, communication skills and patient-level effects.


Methods

Protocol and registration

The protocol for the complete review is available in the PROSPERO database (registration number CRD42014012913).

Eligibility criteria

Studies were eligible for our systematic review if they included adult patients over age 18 years, healthcare providers, or trainees, and if they evaluated a communication tool to assist adult patients in EoL decision-making, in comparison to a control group. Our definition of a communication tool included traditional decision aids in any format (paper, video, computer, etc.), and other structured approaches to help with decision-making, including organized meeting plans, reminders to complete advance directives (AD) or educational interventions for patients or healthcare providers. Interventions designed solely for information-sharing (e.g. breaking bad news, providing emotional support) were excluded, because although such interventions may affect EoL decision-making, it is not their sole or explicit purpose to do so. We included randomized controlled trials (RCTs) and prospective observational studies with a control group (including cohort studies and uncontrolled before-after studies in which participants acted as their own control). We restricted the review to studies published in peer-reviewed journals in the English language (See Additional file 1).

Eligible studies were then divided into a subgroup of studies of educational interventions for health care providers, and a subgroup of studies of tools evaluated as clinical interventions. In this paper, we specifically review only the studies of educational interventions directed at health care providers, whether trainees in a health professional training program (e.g. medical or nursing students, postgraduate trainees) or practicing providers receiving continuing medical education. Reviews of the patient-directed end-of-life communication tools will be analyzed and reported elsewhere.

Outcome measures

In the Kirkpatrick model of evaluation, Reaction measures the value learners perceive in the educational intervention, Learning measures improvements in their knowledge, Behaviour measures their capability applied in context, and Results measures the impact of the training on the target outcome – in this case, patient-level outcomes [16].

Within this framework, our primary outcome measures were:

  1. Self-efficacy (a participant’s confidence or estimate of their ability to perform a task) [17].

  2. Knowledge test scores on EoL communication and decision-making.

  3. Communication scores using a standardized checklist during a standardized patient encounter.

Secondary outcomes were patient-level measures such as completion of ADs, health care utilization, patient satisfaction with EoL planning, and patient assessment of clinician communication skills.

Search strategy

A comprehensive search was performed for papers available from database inception to July 2014 in Medline (1946 – July 2014), Excerpta Medica database (EMBASE; 1980 – July 2014), Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 – July 2014), the Cochrane Database of Controlled Clinical Trials (2005 – July 2014) and the Education Resources Information Center (ERIC; 1966 – July 2014). Searches were conducted using terms related to EoL decision-making and communication, including “communication,” “decision-making,” “end-of-life” and “cardiopulmonary resuscitation.” A snowball technique was used to hand-search references for additional papers for review. This review is a subset of a larger systematic review on EoL decision-making interventions; only those studies relevant to medical education are reviewed here. Reviews of communication interventions evaluated in the clinical setting will be analyzed and reported elsewhere.

Study selection

Title and abstracts were screened for relevance independently and in duplicate by two reviewers (SO, HC). Articles that passed initial screening by either reviewer underwent full-text review independently and in duplicate by the same two reviewers. Standardized, piloted eligibility forms were used for both title and abstract screening, and for full-text review. Disagreements about study eligibility were resolved through consensus discussion or resolved by a third reviewer (JY) in the case of ongoing disagreement. Kappa statistics were calculated to assess the inter-rater reliability of title and abstract screening and full-text review [18].
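As a hedged illustration (not the review's actual code), the kappa statistic for two reviewers' include/exclude screening decisions can be computed from observed and chance-expected agreement; the decisions below and the helper name `cohens_kappa` are made up for the example.

```python
# Illustrative sketch of Cohen's kappa for inter-rater reliability [18];
# the screening decisions below are hypothetical.
def cohens_kappa(rater1, rater2):
    """Kappa for two equal-length sequences of categorical judgements."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    # Observed agreement: proportion of items both raters labelled the same.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    p_e = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

r1 = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
r2 = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.67
```

Kappa corrects raw percent agreement for the agreement expected by chance alone, which is why it is preferred for screening reliability.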

Data collection and data items

Data extraction was done using standardized, piloted, online forms by two reviewers (HC, SO), including publication information, study dates and population characteristics, interventions, outcomes, and study methods required to assess the risk of bias. We contacted study authors to obtain missing information relevant to outcomes or risk of bias.

Study quality and risk of bias of individual studies

Educational study quality and risk of bias were assessed in duplicate by two reviewers using the Medical Education Research Study Quality Instrument (MERSQI) and the Newcastle-Ottawa Scale-Education (NOS-E) [19, 20]. The two instruments were used because they assess different aspects of quality and risk of bias and act in a complementary fashion. We considered a score above the sample median MERSQI score (12.5) and NOS-E score (2.5) as the threshold for high methodological quality, as described in other literature [20].

The risk of bias for RCTs was additionally assessed using the Cochrane risk of bias tool, which includes assessments of random sequence generation, allocation concealment, blinding of participants and personnel, incomplete outcome data, and selective reporting. Each domain was assessed independently by both reviewers and reported as being at ‘high,’ ‘low,’ or ‘uncertain’ risk of bias. Studies were considered to be at overall ‘high’ risk of bias if judged to be at ‘high’ risk of bias in any domain; at ‘uncertain’ risk of bias if judged to be at uncertain risk of bias in any one domain, with no domains at high risk of bias; and at overall ‘low’ risk of bias if not judged to be at ‘high’ or ‘uncertain’ risk of bias in any domain [21]. All risk of bias assessments were judged at the outcome level.
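The domain-to-overall rule above is a simple precedence rule; a minimal sketch (our own helper, not the authors' code) makes the logic explicit:

```python
# Sketch of the overall risk-of-bias rule described above, applied to a
# list of per-domain Cochrane judgements ('high'/'low'/'uncertain').
def overall_risk_of_bias(domain_judgements):
    if "high" in domain_judgements:
        return "high"        # any high-risk domain -> overall high
    if "uncertain" in domain_judgements:
        return "uncertain"   # no high, but at least one uncertain domain
    return "low"             # low risk in every domain

print(overall_risk_of_bias(["low", "uncertain", "low", "low", "low"]))  # → uncertain
```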

Publication bias

Publication bias was assessed using visual inspection of funnel plots, where sufficient numbers of studies existed to permit interpretation [22].

Data synthesis, summary measures and sensitivity analysis

Reviewers assessed studies for clinical heterogeneity by investigating study populations, interventions, and comparisons before considering whether to pool data. We assessed statistical heterogeneity for each of the outcomes of interest using the I2 statistic, with values greater than 50 % indicating significant heterogeneity [23].
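The I2 statistic can be sketched from Cochran's Q under fixed-effect inverse-variance weights; this is a hedged illustration with made-up effect estimates and standard errors, and `i_squared` is our own helper name, not a Review Manager function.

```python
# Illustrative computation of the I2 heterogeneity statistic [23].
def i_squared(effects, ses):
    w = [1 / s ** 2 for s in ses]                                # inverse-variance weights
    theta = sum(wi * e for wi, e in zip(w, effects)) / sum(w)    # fixed-effect pooled mean
    q = sum(wi * (e - theta) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    # I2: percentage of total variability due to between-study heterogeneity.
    return 100 * max(0.0, (q - df) / q) if q > 0 else 0.0

print(i_squared([0.3, 0.9, 0.5], [0.15, 0.2, 0.1]))  # hypothetical SMDs and SEs
```

Identical study estimates give Q = 0 and hence I2 = 0 %, matching the no-heterogeneity case reported for self-efficacy below.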

We used Review Manager 5.3 software to calculate pooled estimates of effect using all relevant studies, employing the generic inverse variance method. A random-effects model was used to pool weighted outcomes as standardized mean differences (SMD). The magnitude of effect was interpreted in accordance with the Cohen effect size classification (small 0.2–0.5, moderate 0.5–0.8, large >0.8) [24]. Estimates of the standard deviation for change scores were calculated when not reported or obtainable from study authors [25]. To assess whether methodologic quality materially affected our findings, sensitivity analyses were conducted post hoc by restricting pooling to studies of higher methodologic quality, such as RCTs only, or those with high MERSQI or NOS-E scores.
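Random-effects pooling via the generic inverse-variance method can be sketched with a DerSimonian-Laird estimate of between-study variance; the SMDs and standard errors below are hypothetical, not the review's data, and the function name is our own.

```python
import math

# Illustrative DerSimonian-Laird random-effects pooling of SMDs.
def pool_random_effects(effects, ses):
    w = [1 / s ** 2 for s in ses]                                  # fixed-effect weights
    theta_f = sum(wi * e for wi, e in zip(w, effects)) / sum(w)    # fixed-effect mean
    q = sum(wi * (e - theta_f) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                  # between-study variance
    w_star = [1 / (s ** 2 + tau2) for s in ses]                    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)        # SMD and 95 % CI
```

When studies are homogeneous, tau2 collapses to zero and the estimate reduces to the fixed-effect inverse-variance result; otherwise the between-study variance widens the confidence interval.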

Ratings of quality of evidence

We used the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach to assess the quality of evidence for each outcome of interest. To rate the quality of evidence, the GRADE approach considers, for each outcome of interest, risk of bias within each study; risk of bias across studies (e.g. publication bias); imprecision of results; inconsistency of results; and indirectness of the evidence [26]. Summary of findings tables were generated using GRADEpro software [27].


Results

Study selection

Initial database searches retrieved 5727 articles. After exclusion of duplicate references, conference abstracts, and title and abstract screening, 424 articles were selected for full-text review (κ = 0.65, 95 % CI [0.60, 0.70]). A total of 166 articles were found to be eligible for inclusion after full-text review and additional manual reference screening. Of these, 20 were studies of educational interventions and were reviewed in this paper (Fig. 1).

Fig. 1 Flow diagram of study screening and eligibility. a Marked ‘other’ due to unclear documentation of whether the study was excluded due to duplication or non-relevance. b One educational study overlapped with the inpatient studies and two overlapped with the ICU studies

Study characteristics

Study setting & populations

Most of the studies were completed in the USA (80 %); 17 (90 %) were aimed at medical trainees (14 postgraduate, 3 undergraduate), one at postgraduate medical trainees and nurse practitioners in acute care programs, and two were open to all acute care HCPs (Table 1).

Table 1 Study characteristics

Study interventions

Review of the instructional design showed that the majority of the studies used a combination of didactic lectures (17 studies), small group discussions (16 studies) and role-play with direct observation and feedback (16 studies). A minority of studies included self-study modules (3 studies), video or audio transcript analysis (2 studies), exemplar demonstration (5 studies) and reflective portfolios (1 study). One study used review of a multimedia advance directive decision aid with a patient as the educational intervention. Ten studies distributed the learning over more than one day; ten were workshops or tutorials completed within a day or less.

Study quality and risk of bias

Six studies were RCTs, three of which were considered to be at overall ‘high’ risk of bias and three at ‘uncertain’ risk of bias. Of the 14 observational studies, 10 were of uncontrolled before-after design and 4 were double-arm cohort studies with a control group. The mean NOS-E score was 3.35 (SD 2.13) out of a maximum of 6 points. The mean quality of all studies by MERSQI was 11.97 (SD 2.1) out of a maximum of 18 points. None of the RCTs were rated at overall ‘low’ risk of bias. Eleven and ten studies met the median threshold for high quality by MERSQI and NOS-E criteria, respectively (See Additional file 2).

Synthesis of results

Ratings for the overall quality of evidence and effectiveness of the educational interventions can be seen in the GRADE summary of findings table (See Additional file 3).

  1. Self-efficacy

    Eight studies (2 RCTs [28, 29], 6 observational [4, 6, 30–33]), including 522 participants, found that EoL communication skills training was associated with improved self-efficacy compared to usual training (SMD 0.57; 95 % CI 0.40–0.75; p < 0.001; very low quality evidence). There was no evidence of heterogeneity (I2 = 0 %) (Fig. 2a).

    Fig. 2 Effect of educational interventions on (a) self-efficacy, (b) knowledge and (c) communication scores with standardized patient encounters

  2. Knowledge

    Four studies (2 RCTs [28, 34] and 2 observational [35, 36]), including 290 participants, reported knowledge outcomes. EoL communication skills training was associated with an increase in knowledge scores compared to usual training (SMD 0.76; 95 % CI 0.40–1.12; p < 0.001; low quality evidence) with moderate heterogeneity (I2 = 47 %) (Fig. 2b).

  3. Communication scores

    Eight studies (3 RCTs [7, 29, 37] and 5 observational [31, 38–41]), including 590 participants, found that EoL communication skills training was associated with improvement in communication scores rated during standardized patient encounters (SMD 0.69; 95 % CI 0.41–0.96; p < 0.001; very low quality evidence) with appreciable heterogeneity (I2 = 57 %). Heterogeneity could not be easily explained by qualitative examination of learner demographics, study quality or instructional design; however, in all studies, point estimates of effect were in the direction of benefit for EoL communication skills training (Fig. 2c).

  4. Patient outcomes

    Four studies (2 RCTs and 2 observational) reported patient-important outcomes. Outcome measures were heterogeneous, precluding pooling of data across studies. Overall, the interventions were neutral to positive: one study found no statistically significant change in the overall proportion of ADs completed after a morning educational session [5], whereas another intensive care unit (ICU)-based intervention showed a beneficial effect on earlier completion of ADs and decreased non-beneficial care in the ICU [42]. One study showed improved patient satisfaction with advance care planning [34], while conversely, another showed no improvement in patient-reported quality of EoL care or quality of communication [43].

Publication bias

No asymmetry was detected with a visual inspection of the funnel plot for self-efficacy or communication score outcomes, while knowledge outcomes showed some asymmetry. However, due to the limited number of studies, we could not conclusively comment on possible publication bias (See Additional file 4).

Sensitivity analysis

Sensitivity analyses were conducted in which we restricted our pooled analyses to those studies with higher methodological quality: RCTs only, high MERSQI scores only, or high NOS-E scores only. In these sensitivity analyses, the overall direction and magnitude of the effect remained similar after restricting to studies of higher methodological quality (Fig. 3).

Fig. 3 Sensitivity analysis restricting to studies of higher methodologic quality (RCTs only, higher quality by MERSQI and NOS-E) for (a) self-efficacy, (b) knowledge and (c) communication score


Discussion

In this systematic review, we found very low to low quality evidence from a modest number of studies suggesting that EoL communication training for HCPs may improve self-efficacy, knowledge and communication scores compared to no formal training. Our confidence in the effect of these interventions on self-efficacy, knowledge, and communication scores is very low to low primarily because of the high overall risk of bias of individual studies included in the review, as well as imprecision in the pooled results due to small sample sizes. Several studies used uncontrolled pre-post designs, which may overestimate effects due to concurrent co-interventions and maturation effects [44].

Self-efficacy was a common outcome measure, likely because it is easy to measure; however, it has important limitations. At best, improving self-efficacy may be beneficial because a lack of confidence or negative expectancy may decrease the likelihood that an HCP will voluntarily use beneficial communication behaviours [17]. Otherwise, self-assessed performance measures are generally a poor surrogate marker for competence [44]. Earlier studies have shown a large disconnect between physician confidence and actual ability in EoL discussions [10, 45]. Similarly, knowledge outcomes do not serve as surrogates for adequate communication skills, as learners may cognitively understand what is important in these discussions but lack the appropriate skills to carry them out.

We found the communication skills outcome to be the most relevant in capturing the construct of EoL decision-making communication. Our review found evidence of improvement in these measures in the training group, although of very low quality. As well, estimates of effect were quite heterogeneous, which was not easily explained by known study characteristics. There are also potential issues with using a reductionist approach to assessing competence, as expertise may not be adequately captured by the binary checklist scores used in these studies [46]. While this way of assessing the outcome may be adequate for novice learners, the addition of a subjective global rating may provide a better understanding of learners’ skill in future studies.

We did not find much data on our secondary, patient-level outcomes. We found four studies with conflicting effects on the overall benefit of the intervention. This was not unexpected; although patient-level outcomes are an important measure, educational studies rarely have sufficient power or long-term follow-up to detect these high-level outcomes. Many confounding variables lie between the delivery of an educational intervention and its ultimate effect on patient outcomes. This dilution of effect makes it difficult to design studies with adequate power or follow-up length [47]. We must therefore be careful not to judge the value of an educational intervention on patient outcomes alone.

Limitations and strengths

This study is not without limitations. The broad inclusion criteria required to capture a thorough review of the field may have led to some additional heterogeneity and inconsistency in our data, resulting in low quality of evidence according to GRADE. However, we suspect that even with narrower inclusion criteria, the overall quality of the evidence for our outcomes of interest would still be low due to the limited size and quality of studies in this area. We also restricted our search to studies published in the English language, which may limit the applicability of our results to predominantly English-speaking regions.

Other limitations were intrinsic to the available data. We found that the terminology used in the area of EoL communication and decision making is still not well established and is not uniform, which made our literature search difficult. We believe our manual searching of references adequately mitigates this limitation and improves the comprehensiveness of our search, although it is possible that we still missed some relevant studies.

The strengths of our review include a comprehensive literature search with no time restrictions; inclusion of a broad range of learners, outcomes, and study designs; independent, duplicate screening, eligibility assessment, and quality assessment; and rigorous data collection with secondary verification. We also used multiple measures to assess study quality and risk of bias (the Cochrane tool, MERSQI, and NOS-E) and conducted sensitivity analyses based on these measures. Finally, we performed comprehensive quality assessments of the totality of evidence using the rigorous GRADE approach.

To our knowledge, this is the only systematic review of educational interventions to train healthcare providers in EoL communication skills that has assessed the quality of evidence using GRADE and conducted meta-analyses to obtain pooled estimates of effect. Other reviews included only a narrative summary of the interventions, with less comprehensive search criteria and no assessment of study quality [48, 49]. As well, we looked specifically at interventions aimed at improving communication skills for facilitating and supporting patient decision-making on EoL treatment goals, whereas these other reviews examined a broader set of skills in palliative care symptom management and breaking bad news.


These results generally support the use of structured communication training to improve HCPs’ ability to discuss and facilitate EoL decision-making, since they suggest that such training may be effective in improving HCP communication skills. Unfortunately, justification studies such as these, which compare an intervention against no intervention, tell us little beyond the fact that the intervention works [50]. We cannot infer the comparative effectiveness of different teaching methods, nor can we determine which interventions might be most suitable for a given educational setting or learner population. More studies of higher quality and sound instructional design need to be performed, with contextually relevant outcome data and against other active educational comparators.


Conclusions

In this systematic review, we found consistent, but low-quality evidence that structured communication training, compared to usual curricula, may increase HCP self-efficacy, knowledge, and communication skills for EoL decision-making. While awaiting more robust evidence in this area, educators of health professionals electing to introduce EoL communication skills curricula should continue to design their interventions according to best practice guidelines and base them on a solid theoretical framework.

Ethics approval and consent to participate

Not Applicable.

Consent for publication

Not Applicable.

Availability of data and materials

The datasets supporting the conclusions of this article are included within the article and its additional files. All raw data used in this systematic review were extracted from available published articles. Extracted raw data and RevMan analysis files are available upon request.



Abbreviations

AD: advance directives; CI: confidence interval; EoL: end of life; GRADE: Grading of Recommendations Assessment, Development, and Evaluation; HCP: health care professionals; ICU: intensive care unit; MERSQI: Medical Education Research Study Quality Instrument; NOS-E: Newcastle-Ottawa Scale-Education; RCT: randomized controlled trials; SMD: standardized mean difference

  1. Heyland DK, Barwich D, Pichora D, Dodek P, Lamontagne F, You JJ, et al. Failure to engage hospitalized elderly patients and their families in advance care planning. JAMA Intern Med. 2013;173:778–87.

    Article  Google Scholar 

  2. You JJ, Fowler RA, Heyland DK. Canadian Researchers at the End of Life Network (CARENET). Just ask: discussing goals of care with patients in hospital with serious illness. CMAJ. 2014;186:425–32.

    Article  Google Scholar 

  3. Wright AA, Zhang B, Ray A, Mack JW, Trice E, Balboni T, et al. Associations between end-of-life discussions, patient mental health, medical care near death, and caregiver bereavement adjustment. JAMA. 2008;300:1665–73.

    Article  Google Scholar 

  4. Hales BM, Hawryluck L. An interactive educational workshop to improve end of life communication skills. J Contin Educ Health Prof. 2008;28:241–8. quiz249–55.

    Article  Google Scholar 

  5. Furman CD, Head B, Lazor B, Casper B, Ritchie CS. Evaluation of an educational intervention to encourage advance directive discussions between medicine residents and patients. J Palliat Med. 2006;9:964–7.

    Article  Google Scholar 

  6. Pekmezaris R, Walia R, Nouryan C, Katinas L, Zeitoun N, Alano G, et al. The impact of an end-of-life communication skills intervention on physicians-in-training. Gerontol Geriatr Educ. 2011;32:152–63.

    Article  Google Scholar 

  7. Szmuilowicz E, Neely KJ, Sharma RK, Cohen ER, McGaghie WC, Wayne DB. Improving residents’ code status discussion skills: a randomized trial. J Palliat Med. 2012;15:768–74.

    Article  Google Scholar 

  8. Schroder C, Heyland D, Jiang X, Rocker G, Dodek P. Canadian researchers at the end of life network. Educating medical residents in end-of-life care: insights from a multicenter survey. J Palliat Med. 2009;12:459–70.

    Article  Google Scholar 

  9. Gorman TE, Ahern SP, Wiseman J, Skrobik Y. Residents’ end-of-life decision making with adult hospitalized patients: a review of the literature. Acad Med. 2005;80:622–33.

    Article  Google Scholar 

  10. Tulsky JA, Chesney MA, Lo B. See one, do one, teach one? House staff experience discussing do-not-resuscitate orders. Arch Intern Med. 1996;156:1285–9.

    Article  Google Scholar 

  11. Curtis JR, Wenrich MD, Carline JD, Shannon SE, Ambrozy DM, Ramsey PG. Understanding physicians’ skills at providing end-of-life care perspectives of patients, families, and health care workers. J Gen Intern Med. 2001;16:41–9.

    Google Scholar 

  12. Powazki R, Walsh D, Cothren B, Rybicki L, Thomas S, Morgan G, et al. The care of the actively dying in an academic medical center: a survey of registered nurses’ professional capability and comfort. Am J Hosp Palliat Care. 2014;31:619–27.

    Article  Google Scholar 

  13. White KR, Coyne PJ. Nurses’ perceptions of educational gaps in delivering end-of-life care. Oncol Nurs Forum. 2011;38:711–7.

    Article  Google Scholar 

  14. Wenrich MD, Curtis JR, Ambrozy DA, Carline JD, Shannon SE, Ramsey PG. Dying patients’ need for emotional support and personalized care from physicians: perspectives of patients with terminal illness, families, and health care providers. J Pain Symptom Manage. 2003;25:236–46.

    Article  Google Scholar 

  15. Dunlay SM, Foxen JL, Cole T, Feely MA, Loth AR, Strand JJ, et al. A survey of clinician attitudes and self-reported practices regarding end-of-life care in heart failure. Palliat Med. 2015;29:260–7.

    Article  Google Scholar 

  16. Kirkpatrick DL, Kirkpatrick JD. Evaluating training programs the four levels. San Francisco, CA: Berrett-Koehler; 2006. Print.

  17. Parle M, Maguire P, Heaven C. The development of a training model to improve health professionals’ skills, self-efficacy and outcome expectancies when communicating with cancer patients. Soc Sci Med. 1997;44:231–40.

    Article  Google Scholar 

  18. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37:360–3.

    Google Scholar 

  19. Reed DA, Beckman TJ, Wright SM, Levine RB, Kern DE, Cook DA. Predictive validity evidence for medical education research study quality instrument scores: quality of submissions to JGIM’s Medical Education Special Issue. J Gen Intern Med. 2008;23:903–7.

    Article  Google Scholar 

  20. Cook DA, Reed DA. Appraising the quality of medical education research methods: the medical education research study quality instrument and the Newcastle-Ottawa scale-education. Acad Med. 2015;90:1067–76.

    Article  Google Scholar 

  21. Higgins JPT, Altman DG, Sterne JAC. Chapter 8: Assessing risk of bias in included studies. In: Higgins JPT, Green S,editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011. Available from Accessed 6 Sep 2015.

  22. Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ Group. 1997;315:629–34.

    Article  Google Scholar 

  23. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. Br Med J Publishing Group. 2003;327:557–60.

    Article  Google Scholar 

  24. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Lawrence Erlbaum Associates, Inc; 1988.

    Google Scholar 

  25. Higgins JPT, Deeks JJ. Chapter 7: Selecting studies and collecting data. In: Higgins JPT, Green S,editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011. Available from Accessed 6 Sep 2015.

  26. Schünemann H, Brożek J, Guyatt G, Oxman A. GRADE handbook for grading quality of evidence and strength of recommendations. [Internet]. GradePRO. [cited 2015 Sep 6]. Available from: Accessed 6 Sep 2015.

  27. GRADEpro GDT: GRADEpro Guideline Development Tool [Software]. McMaster University, 2015 (developed by Evidence Prime, Inc.). Available from

  28. Greenberg JM, Doblin BH, Shapiro DW, Linn LS, Wenger NS. Effect of an educational program on medical students’ conversations with patients about advance directives: a randomized trial. J Gen Intern Med. 1993;8:683–5.

  29. Szmuilowicz E, El-Jawahri A, Chiappetta L, Kamdar M, Block S. Improving residents’ end-of-life communication skills with a short retreat: a randomized controlled trial. J Palliat Med. 2010;13:439–52.

  30. Bristowe K, Shepherd K, Bryan L, Brown H, Carey I, Matthews B, et al. The development and piloting of the REnal specific Advanced Communication Training (REACT) programme to improve Advance Care Planning for renal patients. Palliat Med. 2014;28:360–6.

  31. Clayton JM, Butow PN, Waters A, Laidsaar-Powell RC, O’Brien A, Boyle F, et al. Evaluation of a novel individualised communication-skills training intervention to improve doctors’ confidence and skills in end-of-life communication. Palliat Med. 2013;27:236–43.

  32. Schell JO, Green JA, Tulsky JA, Arnold RM. Communication skills training for dialysis decision-making and end-of-life care in nephrology. Clin J Am Soc Nephrol. 2013;8:675–80.

  33. Smith L, O’Sullivan P, Lo B, Chen H. An educational intervention to improve resident comfort with communication at the end of life. J Palliat Med. 2013;16:54–9.

  34. Green MJ, Levi BH. Teaching advance care planning to medical students with a computer-based decision aid. J Cancer Educ. 2011;26:82–91.

  35. Fischer GS, Arnold RM. Feasibility of a brief workshop on palliative care communication skills for medical interns. J Palliat Med. 2007;10:19–23.

  36. Junod Perron N, Morabia A, De Torrenté A. Evaluation of do not resuscitate orders (DNR) in a Swiss community hospital. J Med Ethics. 2002;28:364–7.

  37. Sharma RK, Jain N, Peswani N, Szmuilowicz E, Wayne DB, Cameron KA. Unpacking resident-led code status discussions: results from a mixed methods study. J Gen Intern Med. 2014;29:750–7.

  38. Alexander SC, Keitz SA, Sloane R, Tulsky JA. A controlled trial of a short course to improve residents’ communication with patients at the end of life. Acad Med. 2006;81:1008–12.

  39. Back AL, Arnold RM, Baile WF, Fryer-Edwards KA, Alexander SC, Barley GE, et al. Efficacy of communication skills training for giving bad news and discussing transitions to palliative care. Arch Intern Med. 2007;167:453–60.

  40. Lorin S, Rho L, Wisnivesky JP, Nierman DM. Improving medical student intensive care unit communication skills: a novel educational initiative using standardized family members. Crit Care Med. 2006;34:2386–91.

  41. Williams DM, Fisicaro T, Veloski JJ, Berg D. Development and evaluation of a program to strengthen first year residents’ proficiency in leading end-of-life discussions. Am J Hosp Palliat Care. 2011;28:328–34.

  42. Holloran SD, Starkey GW, Burke PA, Steele G, Forse RA. An educational intervention in the surgical intensive care unit to improve ethical decisions. Surgery. 1995;118:294–8; discussion 298–9.

  43. Curtis JR, Back AL, Ford DW, Downey L, Shannon SE, Doorenbos AZ, et al. Effect of communication skills training for residents and nurse practitioners on quality of communication with patients with serious illness: a randomized trial. JAMA. 2013;310:2271–81.

  44. Norman G, Eva KW. Quantitative research methods in medical education. In: Swanwick T, editor. Understanding medical education. Oxford, UK: Wiley-Blackwell; 2010. pp. 301–22.

  45. Tulsky JA, Chesney MA, Lo B. How do medical residents discuss resuscitation with patients? J Gen Intern Med. 1995;10:436–42.

  46. Hodges B, Regehr G, McNaughton N, Tiberius R, Hanson M. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74:1129–34.

  47. Cook DA, West CP. Perspective: reconsidering the focus on “outcomes research” in medical education: a cautionary note. Acad Med. 2013;88:162–7.

  48. Kottewar SA, Bearelly D, Bearelly S, Johnson ED, Fleming DA. Residents’ end-of-life training experience: a literature review of interventions. J Palliat Med. 2014;17:725–32.

  49. Lloyd-Williams M, Macleod RDM. A systematic review of teaching and learning in palliative care within the medical undergraduate curriculum. Med Teach. 2004;26:683–90.

  50. Cook DA. Randomized controlled trials and meta-analysis in medical education: what role do they play? Med Teach. 2012;34:468–73.



Funding

The study and authors were supported by a grant from the Technology Evaluation in the Elderly Network (Grant #KS 2014 – 06). The funding body was not involved in the study design, analysis, or preparation of the manuscript.

Author information


Corresponding author

Correspondence to Han-Oh Chung.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

HC, SO, and JY conceived the study, conducted the analysis, interpreted the data and drafted the manuscript. LH and LM contributed to the analysis and interpretation of the data and edited the manuscript. All authors approved the final submitted version of the manuscript.

Authors’ information

SO is a Clinical Scholar with the Department of Medicine, McMaster University.

JY is an Associate Professor in the Departments of Medicine and Clinical Epidemiology & Biostatistics at McMaster University.

LH is the Director of Advance Care Planning Canada/Canadian Hospice Palliative Care Association.

LM is an Assistant Professor in the Department of Clinical Epidemiology & Biostatistics at McMaster University and a Research Methodologist for the Biostatistics Unit, St. Joseph's Healthcare Hamilton.

Additional files

Additional file 1:

Search Strategy – Describes our search strategy for various electronic databases. (DOCX 25 kb)

Additional file 2: Table S1.

Assessment of Study Quality – The rating of individual study quality. (PDF 30 kb)

Additional file 3: Table S2.

GRADE Summary of Findings for Primary Outcomes – Overall quality of evidence by GRADE criteria. (PDF 47 kb)

Additional file 4: Figure S1.

Funnel Plot – Funnel plot used to assess for potential publication bias. (PDF 38 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Chung, H.O., Oczkowski, S.J.W., Hanvey, L. et al. Educational interventions to train healthcare professionals in end-of-life communication: a systematic review and meta-analysis. BMC Med Educ 16, 131 (2016).



Keywords

  • End of life care
  • Communication
  • Advance care planning
  • Advance directives
  • Communication training
  • Medical education