
Exploring how differently patients and clinical tutors see the same consultation: building evidence for inclusion of real patient feedback in medical education

Abstract

Background

Undergraduate medical education recognises that patient feedback is potentially valuable for student learning and development as a component of multi-source feedback. However, greater exploration of how patient feedback perspectives differ from those of clinical educators is required for curriculum development and for improving student feedback literacy. This study aimed to determine how two sources of feedback, patients and clinical tutors, compare on the same patient-centred, interpersonal criteria.

Methods

A patient feedback instrument designed for the undergraduate medical education setting was used to compare patients’ feedback with clinical tutors’ feedback following a student-patient consultation in the learning context. Assessments from 222 learning consultations involving 40 medical students were collected. Descriptive statistics for tutors and patients were calculated for each question, and correlations between patient and tutor ratings were explored using Spearman’s rank-order correlation. Mixed effects ordered logistic regression was used to compare each question with an overall rating for tutors and patients, in addition to comparing patient with tutor ratings.

Results

Clinical tutor and patient assessments had a weak but significant positive correlation in all areas except questions related to respect and concern. When making judgements relative to the overall assessment, patients’ ratings of respect, concern, communication and being understood in the consultation had a greater effect. After eliminating the effect of generally higher ratings by patients compared with tutors using comparative ordered logistic regression, patients rated students relatively less competent in areas of personal interaction.

Conclusion

This study provides insight about patient feedback, which is required to continue improving the use and acceptability of this multisource feedback to students as a valuable component of their social learning environment. We have revealed the different perspective-specific judgement that patients bring to feedback. This finding contributes to building respect for patient feedback through greater understanding of the elements of consultations for which patients can discriminate performance.


Background

Building the case for involvement of patients as a critical feedback provider in the medical education curriculum is the context for this study. Work readiness in desirable medical graduates includes capabilities for lifelong learning and development through incorporating feedback into practice, an appreciation of self-regulation for improvement, and a capacity to refine a patient-centred professional identity. Including patients and their feedback perspectives in medical teaching can contribute to these outcomes [1, 2].

Our previous participatory research developed a conceptual map for Requirements of Patient-Centred Care Systems (ROPCCS) [3]. The findings state the need for “Mechanisms to give patients a voice at all points of care and in all settings” as well as “Measuring with patients whether patient-centred care is achieved and feedback to staff about these outcomes” [4]. These statements reinforce the need to formalise and value patients’ feedback to students, in addition to that of the clinical tutors, about students’ engagement with patients and patient-centred capabilities. Evidence that communication in medical consultations is rated differently by patients and physicians suggests that perspective matters and that each player applies different concepts influenced by different value judgements and internal reference standards [5].

The supposition driving this enquiry is that if educators and students understand the perspective-specific feedback differences more explicitly we can improve the use, acceptability and valuing of these different perspectives [6] in the learning environment to further develop understanding of patient-centredness [1, 7].

Feedback concepts and practices have evolved in response to new understandings of what is required for most effective learning, including the role different and multiple actors need to play in feedback processes [6, 8]. In considering the patient as a source of feedback for students, we are reminded by Baines et al. [9] that the subjective nature of patient narratives, which provide powerful insights about what matters to patients, should be viewed as a strength for learning processes, not a weakness. What matters to a patient, and their individual perception of a health consultation encounter, is the authentic feedback only real patients with lived experience of illness and health concerns can provide. Literature focussing on feedback provided by standardised patients (SPs - actors or patients trained in a vignette for a character role with a health condition) indicates this approach offers students a different perspective and can be impactful for improving communication skills [10, 11]. However, nearly two decades ago Howley and Martindale [12] recommended further research to understand the nature of feedback offered by SPs and how this differs from that delivered by medical professionals. It appears that only a broad understanding of difference is known, not the detailed components of difference. There are a growing number of studies investigating processes and impact of patient feedback for postgraduate trainees or junior doctors [13,14,15]. However, very little is written about community patient feedback in pre-registration medical education, and there are no studies which articulate the patient-centred consultation attributes on which real community patients, rather than trained SPs, can provide feedback, or how these provide a particular point of difference from the way clinical tutors may view a student’s patient-centred approach.

A scoping review of understanding of feedback in medical education [16] revealed only 2.2% of studies examined used patient feedback to help learners improve. Abudu [17], a senior medical student, has highlighted the reality that medical schools underutilise patient feedback and rely on clinical tutors’ impressions of the patient perspective, rather than asking the patient to comment about humanistic skills.

References to multisource feedback in medical student learning tend to focus on peer and self-assessment, not patient assessment. However, recently Chua and Bogetz [18] shifted their focus from the post-graduate realm to explore medical students’ solicitation of patient feedback in the acute setting, and this work provides useful insights which can guide processes for students seeking meaningful patient feedback. Despite this, the perspective-specific role patients play as a feedback source in medical school learning environments has yet to be fully investigated: specifically, what students can expect from patients that may differ from what they receive from their clinical tutors, to enable them to synthesise and utilise such feedback in their development.

Baines et al. [13] suggested that “… facilitated reflection appears integral to transforming initial patient feedback reactions into measurable behavioural change, quality improvement initiatives or educational tasks” (p 182), suggesting that clinical tutors will benefit as facilitators in clinical learning environments if they have a deeper understanding of what the content of patient feedback contributes compared to what they, tutors, provide students. It is advantageous for both educators and students to therefore broaden the scope of the consultation feedback [19] to include patients and have clarity about how the two sources of feedback on the same patient-centred, interpersonal criteria compare in order to value the information as relevant when delivered.

There is an increased focus on the impact of social learning environments and social learning processes on health professional education [19,20,21], which intersects with more patient-centred educational approaches and patient involvement in these activities. Gruppen et al.’s [20] scoping review of interventions to improve the health professional learning environment identified a gap in knowledge about the patients’ impact on the learning environment.

We aim to contribute to understanding the social dimensions of learning with patients. By highlighting what patients uniquely add to the feedback discourse of the student-patient interaction, we aim to offer insights which may improve the learning environment to support the student more effectively. The scope of this study and paper does not extend to an evaluation of the student learning or acceptability of use of written paper feedback. Students’ response to patient feedback is an area for another study.

Methods

Context

The patient-centred medical education program, the Patient Partner Program (P3, [22]), for medical students in the 4th year of a 5-year undergraduate degree, is the context of this study. P3 provides a dedicated social learning environment and is underpinned by the “explicit teaching of patient-centred care with consideration of appropriate staging of skill development and repeated opportunities for practice” [23]. The patient partners who volunteer for the program are community members with chronic illness who involve themselves as the learning context and their feedback contributes towards students’ necessary growth, improvement and application to practice. P3 has established the provision of a safe environment for students and patients to partner in learning. This involves formal consent processes for patient partners, enabling students to explore issues and hone consultation skills and patients to be honest about themselves and about the student performances.

P3 is conducted at our clinical school weekly in small teaching groups: 3–4 students, one patient partner and one clinical tutor for each session. The P3 consultation focuses on chronic health management skills, where the student, patient and clinical tutor engagement enables students to learn to negotiate doctor and patient agendas focusing on what matters to the patient for optimal health management outcomes while building a nuanced, integrated consultation approach in a patient-centred context. Since P3’s inception, the clinical tutors who teach and mentor in the program have provided students with immediate verbal and written feedback on their patient-centred consultation skills according to our clinical tutor feedback instrument, the Rating Instrument for Clinical Skills (RICS) [22, 24]. Immediate verbal patient partner feedback for students has always been sought but formalising this feedback process with both verbal and written patient feedback has become an important focus. Verbal feedback is provided to students in the group setting, while written feedback is provided individually after the session. The routine educational offering in P3 now involves the Medical Student Interpersonal Skills Questionnaire (MSISQ), chosen for its utility in a patient teaching program closely aligned to the patient-centred learning setting for this study [24, 25], and is completed by all patient partners at the end of each P3 consultation for the purpose of providing students with feedback.

The MSISQ (Supplementary file 1) was developed for use in a similar patient-partner teaching program after added benefits of written patient feedback to usual verbal patient feedback and tutor verbal and written feedback were identified by author JM and colleagues [24]. Review of available tools for written patient feedback identified the Doctors’ Interpersonal Skills Questionnaire (DISQ) as the best candidate for modification for medical student teaching settings [25]. As patient teaching partners did not consider the DISQ ideal for their feedback to students, a modification was developed in collaboration with them: the MSISQ. The MSISQ is a 10-item questionnaire with a 5-point Likert scale, which asks patients to rate a student’s performance in the learning setting from their perspective. The patient-centred aspects encompassed by the MSISQ include: respect, being listened to, knowledge, hearing clear language and an opportunity to express concerns (feedback items are available in results, Table 2). The patient partners are given an explanation about completing the MSISQ honestly according to their experience in the consultation, but no formal training about providing feedback is given. The MSISQ has been utilised in this study to deepen understanding of the differences between the perspectives of patients and clinical tutors regarding students’ patient-centred approach in the learning context.

Design

Data collection of matched tutor and patient partner MSISQ assessments in P3 spanned one full academic year and one cohort of 46 medical students. In addition to usual assessment, tutors were invited to complete the MSISQ for study purposes for each student in their group who was directly involved in patient interaction during the consultation, after the session was complete, according to their view of how the student interacted with the patient partner. In all other respects verbal and written patient feedback to students was as described under ‘context’ above. As each MSISQ rating relates to one individual consultation, no repeated assessments were possible because every assessment triad was different, that is, different student groupings, tutor and patient combinations. Three participant groups were recruited to the study:

  (i) Students – scheduled for P3 clinical encounters over the year;

  (ii) Clinical tutors – rated students using RICS and MSISQ;

  (iii) Patient Partners – assessed student encounters using MSISQ.

Ethical approval was obtained from the Tasmania Human Research Ethics Committee (reference H0016358).

Statistical methods

Relationships between patient partner and clinical tutor MSISQ assessments were first explored using descriptive statistics (range, mean, SD) and Spearman’s rank-order correlation analysis for each MSISQ question. Comparison of the judgements of the skills of the students made by patient partners and clinical tutors was made using mixed effects ordered logistic regression, as the most appropriate statistical method given the underlying assumptions about the data (the responses were inherently rank-ordered). An interaction analysis was used to compare the strength of the response to each of the questions (Q1–9) with the overall judgement (Q10) made by the clinical tutors and patients separately. Subsequently, the judgements of the patients were compared with those of the clinical tutors, indicating whether patients and clinical tutors perceive the student-patient encounter differently. This was conducted using repeated measures mixed effects ordered logistic regression. The strength of the response to Questions 1–9 compared with Question 10, and the difference between those responses by the clinical tutors and patients, were compared by interaction analysis. Regression coefficients are expressed as odds ratios (OR; 95% confidence intervals; P-values). The analysis was performed using StataMP2 version 14.2 [26].
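The analysis itself was run in Stata; purely as an illustration of how Spearman’s rank-order correlation behaves on tied 5-point Likert ratings of the kind the MSISQ produces, a minimal pure-Python sketch might look like the following. The ratings shown are hypothetical examples, not data from this study, and the functions (`avg_ranks`, `spearman_rho`) are illustrative names rather than part of any study code.

```python
from statistics import mean

def avg_ranks(xs):
    # Assign average ranks, so tied values share a rank -
    # essential for 5-point Likert data, where ties are common.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values starting at i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        r = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = avg_ranks(x), avg_ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical patient and tutor ratings for one MSISQ item:
patient = [5, 4, 5, 5, 3, 4, 5, 4]
tutor = [4, 3, 4, 5, 3, 3, 4, 4]
print(round(spearman_rho(patient, tutor), 2))  # → 0.79
```

The tie-averaged ranking matters here: with only five response categories across 222 assessments, most observations are tied with many others, and the naive rank-difference formula for Spearman’s rho is not valid without the tie correction that ranking by averages provides.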

Results

Eighty-five patient partners and all eight clinical tutors participated in the study, with 40 students (87% of cohort) consenting to utilise their MSISQ feedback. We collected 222 matched assessments. The full 5-point scale was used by patients for 4 of 10 items and by clinical tutors for 5 of 10 items. For the remaining items only the four upper ratings were utilised. Average ratings were consistently higher for patients compared with clinical tutors (Table 1).

Table 1 Descriptive analyses of individual questions to global rating

There were weak but significant positive correlations between clinical tutors and patients for all items except Q7 (The respect shown to me by this student doctor was...) and Q9 (The concern the student doctor showed for me as an individual in this consultation was…) where there was no correlation (Table 2).

Table 2 Correlation between patient and clinical tutor ratings for each question

Ordered logistic regression allowed us to rank the assessments of individual items relative to an overall assessment (Q10) for clinical tutors and patients individually (refer to patient and tutor results on the left-hand side of Table 3). Relative to their overall assessment of the students (Q10), the clinical tutors’ assessment of students was higher for rapport and communication (Q1, 2, 3), concern shown for individuals (Q9), exploration of patient concerns (Q6), and respect (Q7), whilst their assessment of the students was lower for knowledge (Q4) relative to their overall performance. Relative to their overall assessment (Q10), the patients’ assessment of students was higher for concern shown for individuals (Q9), exploration of patient concerns (Q6) and respect (Q7), whilst their assessment of students was lower for knowledge and confidence in ability (Q4, 5) (Table 3). It is notable that clinical tutors, but not patients, assessed students more highly on rapport and communication (Q1, 2, 3) compared with their overall rating, indicating that tutors were less influenced than patients by these elements of the consultation when considering their overall score. Conversely, patients but not clinical tutors rated confidence in students’ ability lower than the overall assessment, indicating that patients were less influenced than clinical tutors by this in considering their overall score.

Table 3 Patient and clinical tutor judgements comparing individual questions with the overall judgement of the student

Ordered logistic regression analyses compared the strength of the favourability of the judgements of the patients in each of the questions with that of the clinical tutors, and between the overall judgement (Q10) and each of the individual judgements (Q1–9) (comparison results on the right-hand side of Table 3). This allowed comparison between patients’ and clinical tutors’ assessments, correcting for the systematically more favourable judgement being made by the patients. This analysis showed that patients assessed students as relatively less competent in areas of personal interaction: less openness and ease (Q1), less ability to listen (Q2), and less clear language used (Q3). There was less difference between clinical tutors and patients in questions of performance related to knowledge (Q4), engendering confidence (Q5), elucidation of concerns (Q6), showing respect (Q7), personal understanding (Q8) and concern shown for the individual (Q9). Despite a lack of correlation at an individual student level between clinical tutors and patients in their ratings of students on respect (Q7) and concern (Q9) (Table 2), at an aggregate level there was consistency in the assessment of these aspects of the consultation compared with overall performance, with both groups rating them relatively more highly than other items on the instrument (left side of Table 3).

Discussion

Broadening our scope of feedback by understanding patients’ contribution to it is important if learning development outcomes are to adequately shift towards partnership in health care delivery. Generating a deeper understanding of the feedback perspectives provided to students from multiple sources is imperative to improving feedback processes and embedding acceptance and utilisation for learning. Findings from this study provide insights into how patient feedback, as a regular, integral component of patient-centred medical education curricula, should be a factor in the social component of the learning environment [20].

Different perspective-specific feedback

Our study has shown that different perspectives provide different but important information about student patient-centred performance. While this is not unexpected, these findings contribute to an appreciation of how patients may influence students’ insights into their consultation approach and what is expected in partnerships [27]. Moonen-van Loon et al.’s [28] investigation of reliability of multi-source feedback in junior doctor training recommends further research to investigate which competencies are best assessed by patients to allow for successful implementation in competency-based assessment. We have begun this investigation in the pre-registration setting, showing the areas in which patients bring their expertise and a different perspective to the assessment process.

The correlation between clinical tutors’ and patients’ ratings of students is weak but significant for 8 of 10 items in the MSISQ questionnaire, with the notable exceptions being the items relating to respect and concern, for which there was no correlation. Ordered logistic regression shows that the patient and clinical tutor groups placed different emphases on individual items relative to the overall rating, each thereby using the scale differently but consistently from their particular perspective. Both groups can be considered ‘expert’ groups for feedback, with each placing importance on different things. However, we did find both groups were alike in their ability to detect our novices’ deficiencies, assessing the students as having inferior knowledge relative to their overall performance.

The study shows empirically that patients and clinical tutors have different perspectives of patient-student consultations when asked identically worded questions. Clinical tutors were more influenced by skills and knowledge than patients when providing an overall assessment which is consistent with the review by Lee, Brain and Martin [29] who found expert clinical raters, in direct observation settings, had greater stringency in making interpretations when rating interviewing and physical examinations than less experienced clinicians. Clinical tutors also rated students’ rapport and communication skills higher relative to the overall, whilst patients did not. This finding suggests that clinical tutors have more difficulty assessing the patient-centred aspects of communication within the consultation which is not surprising given observers do not experience the patient perspective. We believe that these findings provide important insights about patient-centred, interpersonal criteria that patients consider matter to them in interactions as well as useful information for clinical tutors about what they can reliably assess in terms of patient-centred experience.

Patients systematically made more favourable overall assessments (Q10) than clinical tutors indicating different types of judgements being made between patients and clinical tutors. While we don’t know if participants interpreted it as a ‘recommender score’ or a ‘global assessment,’ we have identified that there are different things influencing the ‘overall’ for each group. There are significant differences between clinical tutors’ and patients’ overall judgement of a student’s performance. Ratings to specific items demonstrated some important nuanced differences – notably relatively lower ratings by patients on aspects of the consultation involving personal interactions. The mechanisms driving these differences in patient feedback, particularly the ‘overall’ judgement are beyond the scope of this study and worthy of further exploration.

Respect and concern

Our results show that one of the strengths of patients’ perspectives in the learning environment is in rating respect, concern, communication and being understood in a consultation. It is logical that patients can make a definitive assessment on these because they experience and feel these components as the recipients of the interaction. Being treated like a person and as an equal ‘like I matter’, are both common meanings of respect held by diverse groups of patients [30]. As Baines et al. [9] purport, the patient narrative is a form of evidence, the patients’ ‘constructed version of reality’ which must be respected. The findings in relation to perceptions of ‘respect’ showed no correlation between patients and clinical tutors. However both groups rated this item highly relative to the overall question possibly suggesting a level of importance respect has to each group. Given that the MSISQ tool was co-designed collaboratively with patients we can be confident that the items in the MSISQ instrument are important to patients and these results indicate ‘respect’ and ‘concern’ have a greater relative importance than other items on the MSISQ. Despite this, what respect actually means to patients and clinical tutors appears different, suggesting that there is no common understanding of what respect means for patients. Given that one person cannot ‘feel respected’ for someone else and therefore a tutor is unable to determine the experience of the patient from observing an interaction [31], it follows that the patient must be the voice to inform students how they have been treated. It is known that respect matters to patients and is considered a fundamental right in terms of how they ought to be treated [30, 32]. Empirical evidence indicates it is the key predictor of overall physician ratings [33] and for some patients respect is demonstrated as the doctor showing concern when asking questions, linking respect and concern as important components of communication. 
Our study shows that, on each form of analysis, patients have a perspective different from that of tutors in their performance ratings relating to patient-centredness in a consultation interaction. Therefore, feedback provided on these elements is worthy of attention by student learners. Future work focussing on building a shared understanding of respect, and on measuring it, will add value to feedback in learning and practice.

Implications for social learning in patient-centred curriculum

The involvement of patients in education is recognised to align with social learning theory, bringing opportunities for authentic experience and perspectives to a learning interaction [1]. This study contributes to understanding the impact patients can have on the social learning environment of medical students [1, 20] namely, understanding feedback capacity of patients and the perspective-specific content of their feedback.

There is a call for a person-centred care skills framework for health professional education to improve consistency of teaching and assessment of such skills [34]. This concept, we argue, should incorporate patient involvement in the provision of feedback, given our patient partners have shown the person-centred practice qualities on which they can offer a unique perspective. The assessments made by patient partners in this study show that their ratings of respect, concern and communication provide feedback about qualities that are oriented towards others and promote relationships, which is core to person-centred care and partnerships [27]. Ideally, the curriculum environment for effective feedback is a dialogic one with different kinds of interactions and actors [8], so providing the opportunity within patient-student learning contexts for multi-source feedback which includes both senior clinicians and patients collaboratively will assist students in interpreting the feedback information and discerning what is required to improve.

Two insights arise from the study results in relation to how our educationally engaged patients approach their MSISQ assessment responsibility. Firstly, patients generally rate higher on the scale than clinical tutors, a finding consistent with Moonen-van Loon et al. [28], who found non-clinician assessors were more lenient than clinician assessors in a study of multi-source feedback in the clinical setting. However, our patients demonstrated a willingness to utilise the entire scale for some items in their assessments, indicating a capacity to discriminate between student performances. Secondly, it is evident that in this setting, with a safe environment established through a structured learning partnership where honesty is fundamental, patients are very prepared to make the call on student performance.

These findings add confidence for educators contemplating incorporating patient feedback into student assessment, particularly considering we provided no formalised feedback training to patients, only a general explanation of the MSISQ questionnaire and the value of their perspective. Patients’ understanding of their own capacity to provide valuable feedback should be a focus of attention, so that they clearly understand which specific learning goals they are contributing to. Our study pinpoints the elements of the consultation where they provide a unique and valuable perspective. This responds to Chua and Bogetz’s [18] call for patients to be empowered to see themselves as teachers who have an impact on student learning through feedback.

What remains unclear from this study is a deeper understanding of the factors that influence patients’ feedback capacity beyond their interaction experience within that consultation. If we consider the patient as another expert in the room alongside the clinical tutor, then perhaps Johnson et al.’s [35] work with educators clarifying feedback behaviours has relevance for how we might build greater patient understanding and readiness for high-quality feedback. We understand that feedback episodes are shaped by context, the individuals involved and culture [35], so if we are to build a culture of patient feedback within the practice of multi-source feedback, further research is required to investigate the factors influencing patient feedback behaviours.

Future investigation of the patient experience of providing feedback will allow the complexity of patient feedback to be contextualised and further supported for maximum benefit. Further research is also necessary to understand the student experience when receiving and integrating patient feedback from the MSISQ.

Limitation

This study has been conducted at a single site in the out-of-ward learning setting with patients with chronic illness and therefore further research is required to investigate the feedback correlation in other settings including the acute, work-based setting where consultations are observed and assessed by clinical supervisors and could include patient feedback. It is acknowledged that this study using the MSISQ tool has a limited focus on understanding what the patient’s perspective brings to feedback in learning. Further research is required to understand other factors influencing patient ratings in a consultation and the differential meanings of the constructs respect and concern.

Conclusion

The findings from this study have provided some insight required to continue improving student and tutors’ patient feedback understanding and utilisation within the out-of-ward learning context as well as building the case for social learning relationships which involve patients and integrate their feedback for development of person-centred graduates.

Our study shows that patients and clinical tutors largely accord in their assessments, with patients additionally highlighting a valuable different perspective. We have identified the perspective-specific judgements that patients, compared with clinical tutors, bring to medical education feedback: respect, concern, communication and being understood in a consultation. We believe it is possible to build respect for this source of feedback through greater understanding of the elements of consultations for which patients can discriminate performance. Educators and students ought to make the effort to actively pursue accessing and learning from these specific insights into patient interactions from the most valuable resource – the patient.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

SPs:

Standardised patients

P3:

Patient Partner Program

PTA:

Patient Teaching Associates

MSISQ:

Medical Student Interpersonal Skills Questionnaire

ROPCCS:

Requirements of Patient-Centred Care Systems

References

  1. Fong S, Tan A, Czupryn J, Oswald A. Patient-centred education: how do learners’ perceptions change as they experience clinical training? Adv Health Sci Educ. 2019;24(1):15–32. https://doi.org/10.1007/s10459-018-9845-y.
  2. Barr J, Bull R, Rooney K. Developing a patient focussed professional identity: an exploratory investigation of medical students’ encounters with patient partnership in learning. Adv Health Sci Educ. 2014;20(2):325–38.
  3. Ogden K, Barr J, Greenfield D. Determining requirements for patient-centred care: a participatory concept mapping study. BMC Health Serv Res. 2017;17(1):780. https://doi.org/10.1186/s12913-017-2741-y.
  4. Ogden K, Barr J, Greenfield D. Determining requirements for patient-centred care: a participatory concept mapping study. BMC Health Serv Res. 2017;17(1):4.
  5. Röttele N, Schöpf-Lazzarino AC, Becker S, Körner M, Boeker M, Wirtz MA. Agreement of physician and patient ratings of communication in medical encounters: a systematic review and meta-analysis of interrater agreement. Patient Educ Couns. 2020;103(10):1873–82. https://doi.org/10.1016/j.pec.2020.04.002.
  6. Carless D, Boud D. The development of student feedback literacy: enabling uptake of feedback. Assess Eval High Educ. 2018;43(8):1315–25. https://doi.org/10.1080/02602938.2018.1463354.
  7. de Groot E, Schönrock-Adema J, Zwart D, Damoiseaux R, Van den Bogerd K, Diemers A, et al. Learning from patients about patient-centredness: a realist review: BEME guide no. 60. Med Teach. 2019;42(4):380–92. https://doi.org/10.1080/0142159X.2019.1695767.
  8. Boud D, Molloy E. Rethinking models of feedback for learning: the challenge of design. Assess Eval High Educ. 2013;38(6):698–712. https://doi.org/10.1080/02602938.2012.691462.
  9. Baines R, Denniston C, Munro J. The transformative power of patient narratives in healthcare education. BMJ Opinion. 2019. Available from: https://blogs.bmj.com/bmj/2019/07/08/the-transformative-power-of-patient-narratives-in-healthcare-education/. Accessed 7 July 2019.
  10. Qureshi AA, Zehra T. Simulated patient’s feedback to improve communication skills of clerkship students. BMC Med Educ. 2020;20(1):15. https://doi.org/10.1186/s12909-019-1914-2.
  11. Block L, Brenner J, Conigliaro J, Pekmezaris R, DeVoe B, Kozikowski A. Perceptions of a longitudinal standardized patient experience by standardized patients, medical students, and faculty. Med Educ Online. 2018;23(1):1548244. https://doi.org/10.1080/10872981.2018.1548244.
  12. Howley LD, Martindale J. The efficacy of standardized patient feedback in clinical teaching: a mixed methods analysis. Med Educ Online. 2004;9(1):4356. https://doi.org/10.3402/meo.v9i.4356.
  13. Baines R, Regan de Bere S, Stevens S, Read J, Marshall M, Lalani M, et al. The impact of patient feedback on the medical performance of qualified doctors: a systematic review. BMC Med Educ. 2018;18(1):173.
  14. Bogetz AL, Rassbach CE, Chan T, Blankenburg RL. Exploring the educational value of patient feedback: a qualitative analysis of pediatric residents’ perspectives. Acad Pediatr. 2017;17(1):4–8. https://doi.org/10.1016/j.acap.2016.10.020.
  15. Bogetz AL, Orlov N, Blankenburg R, Bhavaraju V, McQueen A, Rassbach C. How residents learn from patient feedback: a multi-institutional qualitative study of pediatrics residents’ perspectives. J Grad Med Educ. 2018;10(2):176–84. https://doi.org/10.4300/JGME-D-17-00447.1.
  16. Bing-You R, Hayes V, Varaklis K, Trowbridge R, Kemp H, McKelvy D. Feedback for learners in medical education: what is known? A scoping review. Acad Med. 2017;92(9):1346–54. https://doi.org/10.1097/ACM.0000000000001578.
  17. Abudu B. Where is the patient voice in clinical clerkship evaluations? Acad Med. 2019;94(5):610–1. https://doi.org/10.1097/ACM.0000000000002615.
  18. Chua IS, Bogetz AL. Patient feedback requirements for medical students: do perceived risks outweigh the benefits? Clin Pediatr (Phila). 2018;57(2):193–9. https://doi.org/10.1177/0009922817696464.
  19. van der Leeuw R, Teunissen P, van der Vleuten C. Broadening the scope of feedback to promote its relevance to workplace learning. Acad Med. 2018;93(4):556–9. https://doi.org/10.1097/ACM.0000000000001962.
  20. Gruppen LD, Durning S, Maggio L. Interventions designed to improve the learning environment in the health professions: a scoping review. MedEdPublish. 2018;7(3):73.
  21. Noble C, Sly C, Collier L, Armit L, Hilder J, Molloy E. Enhancing feedback literacy in the workplace: a learner-centred approach. In: Billett S, Newton J, Rogers G, Noble C, editors. Augmenting health and social care students’ clinical learning experiences: outcomes and processes. Wiesbaden: Springer Nature; 2019. p. 283–306. https://doi.org/10.1007/978-3-030-05560-8_13.
  22. Barr J, Ogden K, Rooney K. Committing to patient-centred medical education. Clin Teach. 2014;11(7):503–6. https://doi.org/10.1111/tct.12196.
  23. Ogden K, Barr J, Greenfield D. Determining requirements for patient-centred care: a participatory concept mapping study. BMC Health Serv Res. 2017;17(1):2.
  24. Lai MMY, Roberts N, Mohebbi M, Martin J. A randomised controlled trial of feedback to improve patient satisfaction and consultation skills in medical students. BMC Med Educ. 2020;20(1):277. https://doi.org/10.1186/s12909-020-02171-9.
  25. Hogan N, Li H, Pezaro C, Roberts N, Schmidt E, Martin J. Searching for a written patient feedback instrument for patient-medical student consultations. Adv Med Educ Pract. 2017;8:171–8. https://doi.org/10.2147/AMEP.S119611.
  26. StataCorp. Stata Statistical Software: Release 14. College Station: StataCorp LP; 2015.
  27. Langlois S, Mehra K. Teaching about partnerships between patients and the team: exploring student perceptions. J Patient Exp. 2020;0(0):2374373520933130.
  28. Moonen-van Loon JM, Overeem K, Govaerts MJ, Verhoeven BH, van der Vleuten CP, Driessen EW. The reliability of multisource feedback in competency-based assessment programs: the effects of multiple occasions and assessor groups. Acad Med. 2015;90(8):1093–9. https://doi.org/10.1097/ACM.0000000000000763.
  29. Lee V, Brain K, Martin J. Factors influencing mini-CEX rater judgments and their practical implications: a systematic literature review. Acad Med. 2017;92(6):880–7. https://doi.org/10.1097/ACM.0000000000001537.
  30. Beach MC, Branyon E, Saha S. Diverse patient perspectives on respect in healthcare: a qualitative study. Patient Educ Couns. 2017;100(11):2076–80. https://doi.org/10.1016/j.pec.2017.05.010.
  31. Wilkinson E. The patients who decide what makes a good doctor. BMJ. 2018:k1829. https://doi.org/10.1136/bmj.k1829.
  32. Alfred Emergency Education. Compassion in care: the patient experience [webinar]. 2020. Available from: https://emergencyeducation.org.au/compassion-webinar-series/. Accessed 29 Sept 2020.
  33. Quigley DD, Elliott MN, Farley DO, Burkhart Q, Skootsky SA, Hays RD. Specialties differ in which aspects of doctor communication predict overall physician ratings. J Gen Intern Med. 2014;29(3):447–54. https://doi.org/10.1007/s11606-013-2663-2.
  34. Moore HL, Farnworth A, Watson R, Giles K, Tomson D, Thomson RG. Inclusion of person-centred care in medical and nursing undergraduate curricula in the UK: interviews and documentary analysis. Patient Educ Couns. 2021;104(4):877–86. https://doi.org/10.1016/j.pec.2020.09.030.
  35. Johnson CE, Keating JL, Farlie MK, Kent F, Leech M, Molloy EK. Educators’ behaviours during feedback in authentic clinical practice settings: an observational study and systematic analysis. BMC Med Educ. 2019;19(1):129. https://doi.org/10.1186/s12909-019-1524-z.
Acknowledgements

This project was supported by the Launceston Clinical School, University of Tasmania and Eastern Health Clinical School, Monash University. We acknowledge the contributions of Michelle Horder, Patient Partner Program Coordinator who facilitated data collection for the study; all Patient Partners, clinical tutors and the student cohort who generously participated in the study.

Funding

No funding was received for this research.

Author information

Contributions

JB and KO contributed to the concept and design of the study. JB, KO, IR and JM contributed to the analysis and interpretation of data. JB wrote the original manuscript and all authors participated in critical revision of the article and approved the final article.

Corresponding author

Correspondence to Jennifer Barr.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was obtained from the Tasmania Human Research Ethics Committee (reference H0016358). All participants received written and verbal information about the research project before signing a consent form to participate.

Competing interests

None declared.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Medical Student Interpersonal Skills Questionnaire (MSISQ).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article

Barr, J., Ogden, K., Robertson, I. et al. Exploring how differently patients and clinical tutors see the same consultation: building evidence for inclusion of real patient feedback in medical education. BMC Med Educ 21, 246 (2021). https://doi.org/10.1186/s12909-021-02654-3
Keywords

  • Patient feedback
  • Multisource feedback
  • Medical education