
Attitudes towards Interprofessional education in the medical curriculum: a systematic review of the literature

Abstract

Background

There is agreement among educators and professional bodies that interprofessional education needs to be implemented at the pre-registration level. We performed a systematic review of interprofessional learning interventions that measured attitudes towards interprofessional education and involved pre-registration medical students across all years of medical education.

Methods

A systematic literature review was performed using PubMed, PsycINFO, EThOS, EMBASE, PEDro and SCOPUS. Search terms were composed of interprofession*, interprofessional education, inter professional, inter professionally, IPE, and medical student. Inclusion criteria were 1) the use of a validated scale for assessment of attitudes towards IPE, with results for at least 35 medical students; 2) peer-reviewed articles in English or German that included medical students; and 3) results for IPE interventions published after the 2011 Interprofessional Education Collaborative (IPEC) report. We identified and screened 3995 articles. After elimination of duplicates and non-relevant topics, 278 articles remained as potentially relevant for full-text assessment. We used a data extraction form covering study design, training methods, participant data, assessment measures, results, and the medical year of participants for each study. A planned comprehensive meta-analysis was not possible.

Results

This systematic review included 23 articles with a pre-test-post-test design. Interventions varied in their type and topic. Duration of interventions varied from 25 min to 6 months, and interprofessional groups ranged from 2 to 25 students. Nine studies (39%) reported data from first-year medical students, five (22%) from second-year students, six (26%) from third-year students, two (9%) from fourth-year students and one (4%) from sixth-year students. There were no studies including fifth-year students. The most frequently used assessment method was the Readiness for Interprofessional Learning Scale (RIPLS) (n = 6, 26%). About half of study outcomes showed a significant increase in positive attitudes towards interprofessional education after interventions across all medical years.

Conclusions

This systematic review showed some evidence of a post-intervention change of attitudes towards IPE across different medical years studied. IPE was successfully introduced both in pre-clinical and clinical years of the medical curriculum. With respect to changes in attitudes to IPE, we could not demonstrate a difference between interventions delivered in early and later years of the curriculum.

Trial registration

PROSPERO registration number: CRD42020160964.

Background

According to the World Health Organization (WHO), Interprofessional Education (IPE) occurs when “students from two or more professions learn about, from, and with each other to enable effective collaboration and improve health outcomes” [1]. Safe, high-quality, accessible, patient-centred care requires continuous development of interprofessional competencies [2], and IPE has repeatedly been called for, so that healthcare students can enter the workforce as effective collaborators [3,4,5].

A growing body of empirical work shows that IPE can have a beneficial impact on learners’ attitudes, knowledge, skills, and behaviours (the so-called collaborative competencies) [6, 7], and can positively affect professional practice and patient outcomes [8, 9].

IPE may enhance attitudes toward collaboration and teamwork during training, leading to improved attitudes towards interprofessional practice upon graduation. Nevertheless, the complexity of teaching students from different healthcare disciplines simultaneously, together with logistical problems and busy timetables, raises issues concerning the introduction of IPE interventions. The optimal timing for introducing IPE, and whether immersion (i.e. continuous collaborative learning) or exposure (periodic collaborative activities) should be adopted [10], are still subject to debate. Gilbert [11] suggests exposure during the early years and immersion in the graduation year, to ensure the optimal development of students’ professional identity before expecting them to work collaboratively with others. Conversely, delaying the introduction of IPE until later in the curriculum may be hampered by students’ focus on profession-specific clinical practice and by their immersion in vocation-specific stereotypes or negative attitudes [10]. Current undergraduate literature shows a tendency to introduce IPE earlier, even in the first year of studies [11, 12], but the most effective timing for IPE interventions in the medical curriculum remains to be determined.

We undertook a systematic literature review to determine the most effective time to introduce IPE to pre-registration medical students. Additionally, we were interested in exploring the nature of the training, the assessment methods and the study outcomes. Our systematic review was guided by the research question: “What is the optimal time to institute interprofessional education interventions in the medical school curriculum?”

Methods

Study design

We performed a systematic review of the literature focusing on interprofessional learning interventions in pre-registration medical students and applied a review protocol based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) statement [13]. We also aimed to perform a meta-analysis with studies grouped by type of assessment. This systematic review was registered in PROSPERO (www.crd.york.ac.uk) with the number CRD42020160964.

Data sources and selection criteria

The systematic literature search was performed on December 12, 2019, using the databases PubMed, PsycINFO, EThOS, EMBASE, PEDro and SCOPUS. The following keywords and subject headings were used as search terms: interprofession*, interprofessional education, inter professional, inter professionally, IPE, and medical student. We included all peer-reviewed articles in English and German that reported on evaluative studies of IPE interventions including medical students and were published after the 2011 Interprofessional Education Collaborative (IPEC) report [2]. The full search strategy is available in an additional Word file [see Additional file 1]. In addition, we included articles identified in the reference lists of previous reviews on IPE that were retrieved by our search [4, 6, 9, 14,15,16,17,18,19,20,21,22].

Inclusion criteria

We included studies that reported on assessment of knowledge, skills or attitudes (KSA) in connection with an IPE intervention and that reported quantitative results obtained with a validated IPE instrument. We included only studies using instruments that had previously been comprehensively validated with psychometric testing. Validated questionnaires provide reliable and valid results, can be used to benchmark or compare results at an international level [23], and permit statistical comparisons, thereby increasing rigour and allowing for a meta-analysis. One limitation of the use of validated questionnaires is the lack of further piloting or cultural adaptation, which may introduce bias. We also narrowed our search to groups of at least 35 medical students in the same year of their medical education programme, to ensure an adequate sample size for statistical validity. To avoid interventions spanning overlapping years of education, we selected studies reporting on interventions with a duration of at most 6 months (regardless of the type of intervention, the study programme, and the educational year of other students taking part). Although we encountered qualitative IPE studies, we chose a positivist approach because it better aligned with our intention to perform a meta-analysis.

Exclusion criteria

We excluded conference contributions and abstracts without a related peer-reviewed published article. We also excluded all non-validated questionnaires and articles without available full-text in English or German.

Identification of potentially eligible studies

After the primary search, all titles and abstracts were screened, and duplicates or non-relevant articles were excluded. The full text of the remaining articles was read by two authors (JBE and AF) to identify the articles eligible for this review. All potentially eligible articles were imported into a software platform for systematic reviews (http://rayyan.qcri.org) [24] to expedite the screening of abstracts and titles and to determine the final selection of eligible studies. The two authors initially performed the selection blinded to each other’s decisions, with three options: “include”, “exclude” and “maybe”. After this initial individual assessment, results were unblinded and disagreements were resolved by discussing the individual papers until consensus was reached. The study selection process is outlined in the PRISMA flow diagram (Fig. 1).

Fig. 1 PRISMA study flow diagram

Data extraction and synthesis

The data extraction form was developed by two reviewers, informed by the form from Reeves et al. [9] but modified to include important aspects specific to this review, including the ratio of the study year to the total duration of studies and a classification of “early” or “late” depending on whether the IPE intervention occurred in the first or second half of medical studies (see the sketch below). The reviewers extracted additional data regarding the context of the study, recruitment, description of participants, study design, results and conclusions. The analysis of the risk of bias was performed independently, at a later stage. RG moderated in case of disagreement.
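
As an illustration of this classification, the following minimal sketch (in base R; the function name, variable names and the handling of the exact midpoint are our own assumptions, not part of the original extraction form) derives the ratio and the early/late label:

  # Hypothetical example: classify an IPE intervention as "early" or "late"
  # from the ratio of the students' study year to the total programme length.
  classify_timing <- function(study_year, total_years) {
    ratio <- study_year / total_years      # e.g. year 3 of a 6-year programme -> 0.5
    ifelse(ratio <= 0.5, "early", "late")  # first half = early, second half = late
  }
  classify_timing(3, 6)  # "early" (first half of a 6-year curriculum)
  classify_timing(3, 4)  # "late"  (second half of a 4-year US-style curriculum)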

Upon completion of article extraction, data were analysed using the Statistical Package for the Social Sciences (SPSS), version 23.0 (IBM Corp., Armonk, NY, USA). We report descriptive statistics for quantitative data (median, IQR). The extracted data were synthesised in a narrative manner, using an integrative and aggregative approach [25].

Quality assessment and risk of bias

The quality of the included studies was also evaluated by JBE and HC using a standardised critical appraisal tool, the McMaster Critical Review Form for Quantitative Studies [26]. Articles received a score of one for each criterion of the appraisal guidelines they met and zero for each criterion they did not meet. Item scores were then summed to give a total out of a maximum of 16, with 16 indicating excellent methodological rigour. Quality was defined as poor when the overall score was 8 or less, fair if 9–10, good if 11–12, very good if 13–14 and excellent if 15–16 [27] (see the sketch below). This tool was chosen for this systematic review as it is published, freely available, has been used extensively, and can be applied to a range of research designs [28]. Differences in judgment were resolved through discussion.
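
A minimal sketch of this scoring and banding scheme in base R (the function is purely illustrative and is not part of the McMaster tool; it assumes the 16 items are scored as binary values):

  # Sum 16 binary criterion scores (1 = criterion met, 0 = not met) and map the
  # total to the quality bands used in this review.
  rate_quality <- function(item_scores) {
    total <- sum(item_scores)              # maximum possible score = 16
    band  <- cut(total,
                 breaks = c(-Inf, 8, 10, 12, 14, 16),
                 labels = c("poor", "fair", "good", "very good", "excellent"))
    list(score = total, quality = as.character(band))
  }
  rate_quality(c(rep(1, 13), rep(0, 3)))   # total score 13 -> "very good"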

Statistics

A meta-analysis of the studies using the Readiness for Interprofessional Learning Scale (RIPLS) [10, 29,30,31,32,33] was attempted with the R meta package [34], as this scale was the one most often used. Otherwise, descriptive analyses were conducted, including frequencies. Where applicable, scales were reversed by subtracting the mean from the maximum score of the scale to ensure a consistent direction of effects across studies. Weighted means of subscales were calculated for each study using the number of participants as weights. Pooling of estimates at the single-item level was not possible, as Sheu et al. [30] reported only at the subscale level. Estimates of weighted means of subscales are reported with 95% confidence intervals (CIs). A random effects model with the inverse variance method was used to pool estimates across the remaining studies using the RIPLS. Standard deviations of mean changes were not given and had to be calculated according to Cochrane’s Handbook [35], which introduced further uncertainty because a more or less arbitrary correlation coefficient had to be chosen for the standard deviations. The meta-analysis was conducted with the R 3.5.0 statistical package (R Foundation for Statistical Computing, Vienna, Austria) after the related content was extracted; all remaining analyses were conducted with SPSS version 23.0 (IBM Corp., Armonk, NY, USA).
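
To make these two steps concrete, the sketch below (base R only, with illustrative numbers rather than data from the included studies) shows how a standard deviation for the pre-post change can be imputed from the Cochrane Handbook formula with an assumed pre-post correlation of 0.4, and how the resulting study estimates could then be combined with an inverse-variance random-effects (DerSimonian-Laird) model. The actual pooling was attempted with the R meta package [34], so this is an assumption-laden illustration rather than the analysis code used:

  # Impute the SD of the pre-post change from pre and post SDs (Cochrane Handbook,
  # section 16.1.3.2), assuming a correlation r between pre and post scores (0.4 here).
  sd_change <- function(sd_pre, sd_post, r = 0.4) {
    sqrt(sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post)
  }

  # Illustrative per-study mean changes, imputed SDs and sample sizes (not real data).
  mean_change <- c(0.25, 0.10, 0.40)
  sd_chg      <- sd_change(sd_pre = c(0.60, 0.55, 0.70), sd_post = c(0.65, 0.50, 0.75))
  n           <- c(120, 90, 150)

  # Inverse-variance random-effects pooling (DerSimonian-Laird estimate of tau^2).
  var_i  <- sd_chg^2 / n                               # variance of each mean change
  w_fix  <- 1 / var_i                                  # fixed-effect (inverse-variance) weights
  y_fix  <- sum(w_fix * mean_change) / sum(w_fix)      # fixed-effect pooled mean
  q      <- sum(w_fix * (mean_change - y_fix)^2)       # Cochran's Q (heterogeneity)
  tau2   <- max(0, (q - (length(n) - 1)) / (sum(w_fix) - sum(w_fix^2) / sum(w_fix)))
  w_ran  <- 1 / (var_i + tau2)                         # random-effects weights
  pooled <- sum(w_ran * mean_change) / sum(w_ran)      # pooled mean change
  se     <- sqrt(1 / sum(w_ran))
  c(estimate = pooled, lower = pooled - 1.96 * se, upper = pooled + 1.96 * se)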

Results

Trial flow

The literature search retrieved 3995 articles. After applying the inclusion and exclusion criteria and removing duplicates, 23 articles were included in the review [10, 29,30,31,32,33, 36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52] (see PRISMA Flow diagram, Fig. 1). All studies had a pre-test-post-test design. Basic characteristics of educational interventions are presented in Table 1. We present an overview of characteristics of the included studies in Table 2.

Table 1 Categorised description and characteristics of the 23 included studies (Findings of individual studies could belong to more than one category)
Table 2 Extraction grid for selected studies

Participants

In total, 5231 students, of whom 62% (n = 3229) were medical students, experienced an IPE intervention. The median number of medical students in the IPE interventions was 100 [35–464]. Nine studies (39%) reported data for first-year medical students [10, 29,30,31, 36,37,38,39,40], five (22%) for second-year students [41,42,43,44,45], six (26%) for third-year students [32, 46,47,48,49,50], two (9%) for fourth-year [33, 51] and one for sixth-year medical students [52]. No study reported interventions occurring in the fifth year. Most studies (65%) [10, 29,30,31, 36,37,38,39,40,41, 43,44,45,46, 48] were performed in the first half of the medical curriculum. Three studies [10, 45, 50] (13%) involved only medical students. Across the remaining studies, the other professional groups taking part in the IPE interventions included nursing, pharmacy, dental medicine, physical therapy, biomedical science, occupational therapy, physician assistant, radiotherapy and dietetics students (Table 2).

Study designs and locations

The study design was mainly cross-sectional (n = 16). Only two studies (9%) were randomised [39, 40]. Most studies took place in the USA (n = 14) [30,31,32, 37, 38, 40,41,42,43,44, 47, 49,50,51,52] and in Europe (n = 5, Germany, Italy, Spain, Sweden and the United Kingdom) [36, 39, 45, 46, 48].

Interventions

Interventions varied in their type and topic. Most frequently, faculty chose IPE interventions on the topic of chronic care (e.g., Alzheimer’s disease [42], end-of-life issues [49], geriatric care [44], long-term conditions [10, 33, 36, 41, 52]; n = 8) or acute care (n = 4) [30, 32, 43, 51]. Other topics were communication (n = 2) [37, 46], medication plans and errors (n = 3) [38, 44, 47], and teaching aimed at influencing interprofessional knowledge, attitudes and skills [29, 31, 39, 40, 45, 48, 53]. The duration of interventions varied from 25 min [50] to 6 months [37], and interprofessional group size ranged from 2 [42, 48] to 25 [49] students. The main educational strategies were small group discussions (n = 7) [30, 31, 36,37,38, 47, 48], simulations (n = 6) [32, 41, 43, 49,50,51] and workshops (n = 5) [38,39,40, 44, 47]. Nearly half of the reported interventions (48%, n = 11) were held a single time, and 39% (n = 9) lasted less than 6 h.

Assessment measures and outcomes

All studies reported learning outcomes. We could identify 49 different outcome measurements with 46 different assessment methods, but the majority (76%, n = 35) were questionnaires. The most frequent outcomes were attitudes towards IPE and/or other professions (78%, n = 38) and satisfaction (16%, n = 8). Eight studies (35%) used more than one validated instrument to evaluate the experience; four studies [30, 40, 42, 51] used two instruments, and the other four [32, 33, 39, 49] used three. The most commonly used method for assessing attitudes towards IPE was the RIPLS, used in six studies (26%) [10, 29,30,31,32,33], but a total of 22 different scales were used:

  • Attitudes to Health Professionals Questionnaire (AHPQ) [36]

  • Common Ground Instrument (CGI) [36]

  • Scale of Attitudes toward Physician-Pharmacist Collaboration (SATP2C) [38, 40, 44]

  • Sociocultural Attitudes in Medicine Inventory (SAMI) [30]

  • Jefferson Scale of Empathy (JSE) [39, 40]

  • Jefferson Scale of Attitudes toward Physician-Nurse Collaboration (JSAPNC) [39, 48, 49]

  • Jefferson Scale of Physician Lifelong Learning (JeffSPLL) [39]

  • Interprofessional Collaborative Competency Attainment Scale (ICCAS) [41]

  • Attitudes Toward Collaboration Scale (ATCS) [42]

  • Attitudes Toward Interdisciplinary Teams Scale (ATITS) [42]

  • Interprofessional Educative Collaborative Competency Self-Assessment Instrument (IPEC CSI) [43]

  • Interdisciplinary Education Perception Scale (IEPS) [45]

  • University of the West of England Interprofessional Questionnaire (UWE-IP-D) [46]

  • Attitudes Towards Health Care Teams Scale (ATHCTS) [33, 42, 47, 49]

  • Self-Efficacy for Interprofessional Experiential Learning (SEIEL) [50]

  • Teamwork Assessment Scale (TAS) [32]

  • Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) Teamwork Attitude Questionnaire (T-TAQ) [32]

  • Team Skills Scale (TSS) [33]

  • Student Perceptions of Interprofessional Clinical Education (SPICE-R2) [51]

  • Healthcare Stereotypes Scale (HSS) [51]

  • Interprofessional Socialization and Valuing Scale (ISVS) [52]

Findings

Over half of the studies (n = 13) [29, 32, 33, 36,37,38,39, 41, 43, 45, 49, 51, 52] showed a significant increase in positive attitudes towards IPE after the interventions. Nine studies (39%) showed no significant changes in medical students’ attitudes towards IPE [30, 31, 40, 42, 44, 46,47,48, 50], while one demonstrated an increase in negative attitudes towards IPE after the intervention [10]. In years 1 and 2, IPE interventions appeared longer in duration. Late IPE interventions showed a trend towards being longer and more often statistically significant (Fig. 2). The sample size was too small for further comparisons.

Fig. 2 Bar chart of outcome and duration of IPE interventions in the selected articles, according to early (first half) or late (second half) timing within medical school. White bars: statistically significant positive change of attitudes; grey bars: non-significant positive change of attitudes; full line: continuous IPE intervention; dotted line: intermittent IPE intervention

Methodological rigour

There was 91% agreement (kappa = 0.772) between the reviewers on the scores elicited by the McMaster Critical Review Form for Quantitative Studies [26], which represents good inter-rater reliability [54]. Consensus was reached on the disagreements after discussion. Methodological rigour scores ranged from 7 to 15 out of a maximum of 16. An additional word file shows the scoring in more detail [see Additional file 2]. Most studies (n = 18) were rated as either “Good” [10, 31, 36,37,38, 44, 47, 49, 51, 52], “Very Good” [29, 30, 39, 41, 45, 48] or “Excellent” [33].

Meta-analysis

Initially we planned to undertake a meta-analysis of all studies included in the review. However, with such a broad range of instruments covering various different factors, this was not feasible. Instead, we attempted the analysis with the RIPLS, as it was the most frequently used instrument, in the knowledge that this would represent only 26% of the articles in this review.

The heterogeneity in the reporting of RIPLS results hampered a sound estimation of summary scores across studies. Whereas Darlow et al. [33] and Hudson et al. [10] used altered instruments with more than 19 items, Chua et al. [29], Paige et al. [32], Sheu et al. [30] and Sytsma et al. [31] used the original 19-item RIPLS. However, in the article by Paige et al. [32] the item “For small group learning to work, students need to trust and respect each other.” is missing, and the author did not respond to an email requesting further information. Combined with the extensive heterogeneity in reporting, as well as statistically confirmed heterogeneity (p < 0.01 for Cochran’s Q in the meta-analysis of Chua et al. [29], Paige et al. [32], Sheu et al. [30] and Sytsma et al. [31] for the subscales teamwork, identity and roles; see Additional file 3/Table 3: Original RIPLS scores for Chua et al., Paige et al., Sytsma et al. and Sheu et al., supplemental_material_IPE_RIPLS_original_data.xls), combining the single-study data into a summary measure seemed prone to error. Additionally, the original articles reported means and standard deviations, which are not the appropriate summary measures for Likert-scaled items. As Sheu et al. [30] only reported the means and standard deviations of the RIPLS subscales, merging of information for meta-analysis was possible only at the subscale level and not at the single-item level. Furthermore, the standard deviations of the mean changes (the difference between pre-test and post-test scores) were not given and had to be estimated according to Cochrane’s Handbook (section 16.1.3.2, Imputing standard deviations for changes from baseline), which introduced further uncertainty because a rather arbitrary correlation coefficient for the standard deviations had to be chosen (0.4 in our case). Given the pragmatic heterogeneity of interventions across studies, an ordinary pre-test-post-test score difference is too simple a way to capture the information generated by the original studies. All in all, a meta-analysis could not be performed because of the high heterogeneity of the instruments used and the inconsistent data reporting.

Discussion

In this systematic review, we analysed IPE interventions on the basis of 23 studies published between 2011 and 2019. Our findings show that medical students were exposed to IPE interventions at various points in their training, and we found some evidence of the effectiveness of IPE. Three studies involved only medical students and therefore did not meet the WHO definition of IPE; however, they reported on interprofessional interventions and were therefore not excluded from this systematic review.

All years except the fifth study year were represented, so no preference for pre-clinical or clinical years could be observed. However, studies in the first four years of medical education were more frequent. This may reflect variation in the length of pre-registration medical education programmes worldwide. In the USA, medical school consists mainly of 4 years of training (generally preceded by a 3–4-year Bachelor’s degree), while in Europe it averages 6 years (without a preceding program) [55].

In Europe, most medical university programmes are public and rather large cohorts of students are educated (e.g., Germany has 36 public and only two private medical schools, and almost 10,000 new medical students per educational year, leading to an average class size of over 260 students) [56]. In the USA (141 fully accredited medical schools), more than one third of schools are private (n = 56) and class size is much smaller, with an average of 146 students per educational year [56, 57]. This may also explain the higher frequency of studies from the USA, as implementing IPE elements could be more feasible with smaller classes, and private medical schools may face more pressure to evaluate their programmes.

The optimal timing to introduce IPE is still subject to debate [10]. Introducing IPE in the clinical years may seem reasonable, as it allows students’ professional identities to develop before they are expected to work collaboratively with students from other health professions [11]. However, introducing IPE so late in the medical curriculum may be complicated by the students’ focus on profession-specific clinical practice [10]. On the other hand, introducing IPE early in pre-registration healthcare courses may be useful in breaking down negative attitudes and avoiding stereotypes [58,59,60].

From our analysis we could not determine the best time to introduce IPE, as both pre-clinical and clinical IPE interventions showed some degree of success. Late IPE interventions showed a trend towards being longer and more often associated with statistically significant changes. It seems reasonable to conclude that interventions should be introduced in the early years and continued throughout the curriculum. More well-designed studies are needed to address this gap in knowledge.

The published IPE interventions had a pre-test-post-test design, and most studies were cross-sectional. Interventions varied in their type and topic, group sizes were small, and most activities were performed only once. There was also a paucity of studies reporting medium- and long-term outcomes. Most studies (78%) were of good or very good quality, although a small proportion still scored poorly. This is consistent with previous reviews [4, 6, 15, 18]. This trend limits the development of strategies targeting long-term behaviour change and the potential to positively impact patient outcomes. Longer interventions and longitudinal follow-up of learning outcomes are key to identifying robust outcomes that lead to changes in practice. An increasing number of studies now report mid- and long-term outcomes, but, as we can see from our own sample, these are still a minority. More studies of models for pre-licensure IPE interventions are needed (including adequate evaluation of their effectiveness), particularly regarding long-term outcomes [9, 31, 61]. In situations where prolonged IPE training is not feasible due to organizational limitations, intermittent interventions may be a good strategy [47]. The heterogeneity of most outcome measures may also limit the ability to draw conclusions about best practices and, in our case, prevented the completion of a meta-analysis.

The most frequently used assessment instrument was the RIPLS. The Readiness for Interprofessional Learning Scale, developed in 1999, was among the first scales developed for the measurement of attitudes towards interprofessional learning [62]. It has been translated and culturally adapted into several languages [63]. The scale is very popular, but it has not been updated, it fails to embody all the dimensions of the Core Competencies for Interprofessional Collaborative Practice [2], and its conceptual framework has recently been questioned [63]. Additionally, concerns about its low internal consistency at the item and subscale level, raised by the RIPLS authors themselves, perpetuate the debate about what exactly the RIPLS is measuring [64], and there have even been past recommendations to abandon the scale altogether [23, 65]. Finally, some newer scales, more aligned with the IPEC dimensions, have also been successfully tested and culturally adapted [66, 67]. While educators, curriculum planners and policy makers continue to struggle to identify methods of interprofessional education that lead to better practice [9], clearer measures of interprofessional competency are needed to assess the outcomes of health professional degree programmes and to determine which approaches to interprofessional education benefit patients and communities.

The results from this review and from individual studies should be interpreted with caution: students’ educational backgrounds, as well as attitudes, expectations and stereotypes, may vary considerably between institutions and countries and may influence how the IPE interventions are experienced. This probably accounts for many differences in effectiveness of IPE activities in different settings [15]. Additionally, a few studies described a “package” of interprofessional activities, and medical curricula differ significantly, which may introduce more bias. University IPE programmes should agree on a comparable methodology that aligns with research in IPE (e.g., larger cohorts, multi-centre studies) and should focus on fewer instruments to measure IPE, adequately assessed for validity, responsiveness, reliability, and interpretability [45].

There is broad variation in the length of the medical curriculum between continents and countries. Most of the studies did not explain their specific curriculum to the reader. For many articles, we were not able to determine the total length of the medical programme and therefore whether the IPE intervention took place in the final year, which would have been relevant to this literature review. To bridge this gap in knowledge, we propose that future reports briefly describe the specific medical curriculum in which the intervention was embedded.

Our methodology also has limitations. We decided a priori to include only papers with at least 35 medical students, in order to have sufficiently powered studies in the sample. However, this may have led to some selection bias or left out potentially relevant interventions. Because we were interested in IPE effects on medical students, we also excluded all studies that did not report specific results for medical students. This limited the number of positive studies available. Similar to other systematic reviews, our work aimed to exclude all “lower quality” studies (i.e., non-randomised, non-experimental, qualitative studies) [9, 16, 20]. Reflecting on our methods, we question whether they are adequate for social or educational research, as there are repeated appeals for more qualitative reviews in IPE [61].

Unfortunately, several issues made a meta-analysis impossible. First, as the RIPLS uses a Likert scale (and therefore an ordinal scale), measures of central tendency should be reported as medians. However, most studies in this sample chose to report the mean. This is acceptable only if one assumes equal distances between response categories, which is unrealistic. Additionally, students responding to pre- and post-intervention questionnaires were pooled cohorts, and items differed in wording (questionnaires were slightly modified). In some studies, individual items were not reported. In other studies, items were reverse-scored (negative attitudes), and some studies did not report the change in score, which is the outcome of interest for a meta-analysis.

Conclusions

This systematic review showed some evidence of a post-intervention change of attitudes towards IPE across different medical years studied. IPE was successfully introduced both in pre-clinical and clinical years of the medical curriculum. However, we found great variability in the scales chosen to evaluate changes in knowledge, behaviours and attitudes linked with participation in IPE. There was a paucity of studies reporting medium and long-term outcomes. The heterogeneity of results prevents further comparisons or the performance of a rigorous meta-analysis.

Availability of data and materials

All data generated and analysed during this study are included in this published article and its supplementary information files.

Abbreviations

AHPQ: Attitudes to Health Professionals Questionnaire
ATCS: Attitudes Toward Collaboration Scale
ATHCTS: Attitudes Towards Health Care Teams Scale
ATITS: Attitudes Toward Interdisciplinary Teams Scale
CGI: Common Ground Instrument
HSS: Healthcare Stereotypes Scale
ICCAS: Interprofessional Collaborative Competency Attainment Scale
IEPS: Interdisciplinary Education Perception Scale
IPE: Interprofessional Education
IPEC: Interprofessional Education Collaborative
IPEC CSI: Interprofessional Educative Collaborative Competency Self-Assessment Instrument
ISVS: Interprofessional Socialization and Valuing Scale
JeffSPLL: Jefferson Scale of Physician Lifelong Learning
JSAPNC: Jefferson Scale of Attitudes toward Physician-Nurse Collaboration
JSE: Jefferson Scale of Empathy
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-analyses
PROSPERO: International Prospective Register of Systematic Reviews
RIPLS: Readiness for Interprofessional Learning Scale
SAMI: Sociocultural Attitudes in Medicine Inventory
SATP2C: Scale of Attitudes toward Physician-Pharmacist Collaboration
SEIEL: Self-Efficacy for Interprofessional Experiential Learning
SPICE-R2: Student Perceptions of Interprofessional Clinical Education
TAS: Teamwork Assessment Scale
TSS: Team Skills Scale
T-TAQ: Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) Teamwork Attitude Questionnaire
UWE-IP-D: University of the West of England Interprofessional Questionnaire

References

  1. Framework for action on interprofessional education and collaborative practice [https://www.who.int/hrh/resources/framework_action/en/]. Accessed 5 May 2020.

  2. Interprofessional Education Collaborative Expert Panel. Core Competencies for Interprofessional Collaborative Practice: Report of an Expert Panel. Washington, DC: Interprofessional Education Collaborative; 2011.

  3. Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, Fineberg H, Garcia P, Ke Y, Kelley P, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376(9756):1923–58.

  4. Reeves S, Fletcher S, Barr H, Birch I, Boet S, Davies N, McFadyen A, Rivera J, Kitto S. A BEME systematic review of the effects of interprofessional education: BEME guide no. 39. Med Teach. 2016;38(7):656–68.

  5. Cox M, Cuff P, Brandt B, Reeves S, Zierler B. Measuring the impact of interprofessional education on collaborative practice and patient outcomes. J Interprof Care. 2016;30(1):1–3. https://doi.org/10.3109/13561820.2015.1111052.

  6. Abu-Rish E, Kim S, Choe L, Varpio L, Malik E, White AA, Craddick K, Blondon K, Robins L, Nagasawa P, et al. Current trends in interprofessional education of health sciences students: a literature review. J Interprof Care. 2012;26(6):444–51.

  7. Makino T, Shinozaki H, Hayashi K, Lee B, Matsui H, Kururi N, Kazama H, Ogawara H, Tozato F, Iwasaki K, et al. Attitudes toward interprofessional healthcare teams: a comparison between undergraduate students and alumni. J Interprof Care. 2013;27(3):261–8.

  8. Kent F, Keating J. Patient outcomes from a student-led interprofessional clinic in primary care. J Interprof Care. 2013;27(4):336–8.

  9. Reeves S, Perrier L, Goldman J, Freeth D, Zwarenstein M. Interprofessional education: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2013;28(3):CD002213.

  10. Hudson JN, Lethbridge A, Vella S, Caputi P. Decline in medical students' attitudes to interprofessional learning and patient-centredness. Med Educ. 2016;50(5):550–9.

  11. Gilbert JH. Interprofessional learning and higher education structural barriers. J Interprof Care. 2005;19(Suppl 1):87–106.

  12. Kozmenko V, Bye EJ, Simanton E, Lindemann J, Schellpfeffer SE. The optimal time to institute Interprofessional education in the medical school curriculum. Med Sci Educ. 2017;27:259–66.

  13. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

  14. Lapkin S, Levett-Jones T, Gilligan C. A systematic review of the effectiveness of interprofessional education in health professional programs. Nurse Educ Today. 2013;33(2):90–102.

  15. Olson R, Bialocerkowski A. Interprofessional education in allied health: a systematic review. Med Educ. 2014;48(3):236–46.

  16. Kent F, Keating JL. Interprofessional education in primary health care for entry level students--a systematic literature review. Nurse Educ Today. 2015;35(12):1221–31.

  17. Kent F, Hayes J, Glass S, Rees CE. Pre-registration interprofessional clinical education in the workplace: a realist review. Med Educ. 2017;51(9):903–17.

  18. Nelson S, White CF, Hodges BD, Tassone M. Interprofessional team training at the Prelicensure level: a review of the literature. Acad Med. 2017;92(5):709–16.

  19. Visser CLF, Ket JCF, Croiset G, Kusurkar RA. Perceptions of residents, medical and nursing students about Interprofessional education: a systematic review of the quantitative and qualitative literature. BMC Med Educ. 2017;17(1):77–96.

  20. Guraya SY, Barr H. The effectiveness of interprofessional education in healthcare: a systematic review and meta-analysis. Kaohsiung J Med Sci. 2018;34(3):160–5.

  21. Fox L, Onders R, Hermansen-Kobulnicky CJ, Nguyen TN, Myran L, Linn B, Hornecker J. Teaching interprofessional teamwork skills to health professional students: a scoping review. J Interprof Care. 2018;32(2):127–35.

  22. O'Leary N, Salmon N, Clifford A, O'Donoghue M, Reeves S. 'Bumping along': a qualitative metasynthesis of challenges to interprofessional placements. Med Educ. 2019;53(9):903–15.

  23. Mahler C, Berger S, Reeves S. The readiness for Interprofessional learning scale (RIPLS): a problematic evaluative scale for the interprofessional field. J Interprof Care. 2015;29(4):289–91.

  24. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.

  25. Hannes K, Lockwood C. Pragmatism as the philosophical foundation for the Joanna Briggs meta-aggregative approach to qualitative evidence synthesis. J Adv Nurs. 2011;67(7):1632–42.

  26. Law M, Stewart D, Letts L, Pollock N, Bosch J, Westmorland M. Guidelines for critical review form: quantitative studies. McMaster University Occupational Therapy Evidence-Based Practice Research Group; 1998.

  27. Wilson B, Bialocerkowski A. The effects of Kinesiotape applied to the lateral aspect of the ankle: relevance to ankle sprains--a systematic review. PLoS One. 2015;10(6):e0124214.

  28. Katrak P, Bialocerkowski AE, Massy-Westropp N, Kumar VS, Grimmer KA. A systematic review of the content of critical appraisal tools. BMC Med Res Methodol. 2004;4(1):22.

  29. Chua AZ, Lo DY, Ho WH, Koh YQ, Lim DS, Tam JK, Liaw SY, Koh G. The effectiveness of a shared conference experience in improving undergraduate medical and nursing students' attitudes towards inter-professional education in an Asian country: a before and after study. BMC Med Educ. 2015;15:233–42.

  30. Sheu L, Lai CJ, Coelho AD, Lin LD, Zheng P, Hom P, Diaz V, O'Sullivan PS. Impact of student-run clinics on preclinical sociocultural and interprofessional attitudes: a prospective cohort analysis. J Health Care Poor Underserved. 2012;23(3):1058–72.

  31. Sytsma TT, Haller EP, Youdas JW, Krause DA, Hellyer NJ, Pawlina W, Lachman N. Long-term effect of a short interprofessional education interaction between medical and physical therapy students. Anat Sci Educ. 2015;8(4):317–23.

  32. Paige JT, Garbee DD, Yu Q, Rusnak V. Team training of inter-professional students (TTIPS) for improving teamwork. BMJ Simul Technol Enhanc Learn. 2017;3(4):127–34.

  33. Darlow B, Coleman K, McKinlay E, Donovan S, Beckingsale L, Gray B, Neser H, Perry M, Stanley J, Pullon S. The positive impact of interprofessional education: a controlled trial to evaluate a programme for health professional students. BMC Med Educ. 2015;15:98.

  34. Balduzzi S, Rucker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22(4):153–60.

  35. Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane handbook for systematic reviews of interventions. New York: Wiley; 2019.

  36. Hawkes G, Nunney I, Lindqvist S. Caring for attitudes as a means of caring for patients--improving medical, pharmacy and nursing students' attitudes to each other's professions by engaging them in interprofessional learning. Med Teach. 2013;35(7):e1302–8.

  37. Hess R, Hagemeier NE, Blackwelder R, Rose D, Ansari N, Branham T. Teaching communication skills to medical and pharmacy students through a blended learning course. Am J Pharm Educ. 2016;80(4):1–10.

  38. Quesnelle KM, Bright DR, Salvati LA. Interprofessional education through a telehealth team based learning exercise focused on pharmacogenomics. Curr Pharm Teach Learn. 2018;10(8):1062–9.

  39. Tuiran-Gutierrez GJ, San-Martin M, Delgado-Bolton R, Bartolome B, Vivanco L. Improvement of inter-professional collaborative work abilities in Mexican medical and nursing students: a longitudinal study. Front Psychol. 2019;10:1–5.

  40. Van Winkle LJ, Bjork BC, Chandar N, Cornell S, Fjortoft N, Green JM, La Salle S, Lynch SM, Viselli SM, Burdick P. Interprofessional workshop to improve mutual understanding between pharmacy and medical students. Am J Pharma Educ. 2012;76(8):150.

  41. Haber J, Hartnett E, Allen K, Crowe R, Adams J, Bella A, Riles T, Vasilyeva A. The impact of Oral-systemic health on advancing Interprofessional education outcomes. J Dent Educ. 2017;81(2):140–8.

  42. McCaffrey R, Tappen RM, Lichtstein DM, Friedland M. Interprofessional education in community-based Alzheimer's disease diagnosis and treatment. J Interprof Care. 2013;27(6):534–6.

  43. Pinto C, Possanza A, Karpa K. Examining student perceptions of an inter-institutional interprofessional stroke simulation activity. J Interprof Care. 2018;32(3):391–4.

  44. Shrader S, Hummel H, Byrd L, Wiley K. An interprofessional geriatric medication activity within a senior mentor program. Am J Pharma Educ. 2013;77(1):15.

  45. Zanotti R, Sartor G, Canova C. Effectiveness of interprofessional education by on-field training for medical students, with a pre-post design. BMC Med Educ. 2015;15:121–6.

  46. Berger S, Mahler C, Krug K, Szecsenyi J, Schultz JH. Evaluation of interprofessional education: lessons learned through the development and implementation of an interprofessional seminar on team communication for undergraduate health care students in Heidelberg - a project report. GMS J Med Educ. 2016;33(2):Doc22.

  47. Bridgeman MB, Rusay M, Afran J, Yeh DS, Sturgill MG. Impact of an interprofessional medication error workshop on healthcare student perceptions. Curr Pharm Teach Learn. 2018;10(7):975–81.

  48. Friman A, Wiegleb Edstrom D, Edelbring S. Attitudes and perceptions from nursing and medical students towards the other profession in relation to wound care. J Interprof Care. 2017;31(5):620–7.

  49. Erickson JM, Blackhall L, Brashers V, Varhegyi N. An interprofessional workshop for students to improve communication and collaboration skills in end-of-life care. Am J Hosp Palliat Care. 2015;32(8):876–80.

  50. Oza SK, Boscardin CK, Wamsley M, Sznewajs A, May W, Nevins A, Srinivasan M, EH K. Assessing 3rd year medical students' interprofessional collaborative practice behaviors during a standardized patient encounter: a multi-institutional, cross-sectional study. Med Teach. 2015;37(10):915–25.

  51. Lockeman KS, Appelbaum NP, Dow AW, Orr S, Huff TA, Hogan CJ, Queen BA. The effect of an interprofessional simulation-based education program on perceptions and stereotypes of nursing and medical students: a quasi-experimental study. Nurse Educ Today. 2017;58:32–7.

  52. Seaman K, Saunders R, Dugmore H, Tobin C, Singer R, Lake F. Shifts in nursing and medical students' attitudes, beliefs and behaviours about interprofessional work: an interprofessional placement in ambulatory care. J Clin Nurs. 2018;27(15–16):3123–30.

  53. Hayashi T, Shinozaki H, Makino T, Ogawara H, Asakawa Y, Iwasaki K, Matsuda T, Abe Y, Tozato F, Koizumi M, et al. Changes in attitudes toward interprofessional health care teams and education in the first- and third-year undergraduate students. J Interprof Care. 2012;26(2):100–7.

  54. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360–3.

  55. Wijnen-Meijer M, Burdick W, Alofs L, Burgers C, ten Cate O. Stages and transitions in medical education around the world: clarifying structures and terminology. Med Teach. 2013;35(4):301–7.

  56. Zavlin D, Jubbal KT, Noe JG, Gansbacher B. A comparison of medical education in Germany and the United States: from applying to medical school to the beginnings of residency. Ger Med Sci. 2017;15:Doc15.

  57. Tables and Graphs for Fiscal Year 2018 [https://www.aamc.org/data-reports/report/tables-and-graphs-fiscal-year-2018 ]. Accessed 22 Apr 2020.

  58. Parsell G, Bligh J. Interprofessional learning. Postgrad Med J. 1998;74(868):89–95.

  59. Ahmad MI, Chan SW, Wong LL, Tan ML, Liaw SY. Are first-year healthcare undergraduates at an Asian university ready for interprofessional education? J Interprof Care. 2013;27(4):341–3.

  60. Areskog NH. The need for multiprofessional health education in undergraduate studies. Med Educ. 1988;22(4):251–2.

  61. Thistlethwaite J. Interprofessional education: a review of context, learning and the research agenda. Med Educ. 2012;46(1):58–70.

  62. Parsell G, Bligh J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ. 1999;33(2):95–100.

  63. Visser CLF, Wilschut JA, Isik U, van der Burgt SME, Croiset G, Kusurkar RA. The Association of Readiness for Interprofessional learning with empathy, motivation and professional identity development in medical students. BMC Med Educ. 2018;18(1):125.

  64. Mahler C, Rochon J, Karstens S, Szecsenyi J, Hermann K. Internal consistency of the readiness for interprofessional learning scale in German health care students and professionals. BMC Med Educ. 2014;14:145.

  65. Schmitz CC, Brandt BF. The readiness for interprofessional learning scale: to RIPLS or not to RIPLS? That is only part of the question. J Interprof Care. 2015;29(6):525–6.

  66. Norris J, Carpenter JG, Eaton J, Guo JW, Lassche M, Pett MA, Blumenthal DK. The development and validation of the Interprofessional attitudes scale: assessing the Interprofessional attitudes of students in the health professions. Acad Med. 2015;90(10):1394–400.

  67. Pedersen T, Cignacco E, Meuli J, Berger-Estilita J, Greif R. The German Interprofessional Attitudes Scale (G-IPAS): translation, cultural adaptation and validation. GMS J Med Educ. 2020;37(3):Doc32.

Acknowledgements

We thank Jeannie Wurz for proofreading this manuscript.

Funding

None.

Author information

Contributions

JBE, AF and RG contributed to study design and analysed all the articles. MH conducted the literature search and performed the meta-analysis. HC and JBE performed the article quality analysis. All authors contributed to the interpretation of the results. All authors contributed important intellectual content to the paper and approved the final version.

Authors’ information

Joana Berger-Estilita, Dr. med., Consultant in Anaesthesiology and Intensive Care, MMEd (Dundee).

Alexander Fuchs, Dr., Resident in Anaesthesiology.

Markus Hahn, Dr., Resident in Anaesthesiology, Msc Epidemiology.

Hsin Chiang, Dr., Resident in Anaesthesiology.

Robert Greif, Prof. Dr. med., Professor in Anaesthesiology and Intensive Care, MMEd (Bern), FERC

Corresponding author

Correspondence to Joana Berger-Estilita.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

This manuscript follows the applicable PRISMA guidelines for systematic reviews.

Supplementary information

Additional file 1.

Literature search for the review of interprofessional education for medical students. Detailed description of the search, including extracted hits, stratified by database.

Additional file 2.

Methodological rigour assessment of the included studies using the modified McMaster Critical Review Form for Quantitative Studies.

Additional file 3.

Table 3: Original RIPLS scores for Chua et al., Paige et al., Sytsma et al. and Sheu et al.


About this article

Cite this article

Berger-Estilita, J., Fuchs, A., Hahn, M. et al. Attitudes towards Interprofessional education in the medical curriculum: a systematic review of the literature. BMC Med Educ 20, 254 (2020). https://doi.org/10.1186/s12909-020-02176-4
