Improving the assessment of communication competencies in a national licensing OSCE: lessons learned from an experts’ symposium

Abstract

Background

As the communication competencies of physicians are crucial for providing optimal patient care, their assessment in the context of the high-stakes Objective Structured Clinical Examination (OSCE) is of paramount importance. Despite abundant literature on the topic, evidence-based recommendations for the assessment of communication competencies in high-stakes OSCEs are scarce. As part of a national project to improve communication-competencies assessments in the Swiss licensing exam, we held a symposium with national and international experts to derive corresponding guidelines.

Methods

Experts were invited on account of their recognized expertise either in teaching or assessing communication competencies, or in conducting national high-stakes OSCEs. They were asked to propose concrete solutions related to four potential areas for improvement: the station design, the rating tool, the raters’ training, and the role of standardized patients. Data generated in the symposium were available for analysis and consisted of video recordings of the plenary sessions, of the written summaries of group work, and of the cards with participants’ personal take-home messages. Data were analyzed using a thematic analysis approach.

Results

Nine major suggestions for improving communication-competencies assessments emerged from the analysis and were classified into four categories, namely, the roles of the OSCE scenarios, rating tool, raters’ training, and simulated patients.

Conclusion

In the absence of established evidence-based guidelines, an experts’ symposium facilitated the identification of nine practical suggestions for improving the assessment of communication competencies in the context of high-stakes OSCEs. Further research is needed to test the effectiveness of these suggestions and how they contribute to improvements in the quality of high-stakes communication-competencies assessment.

Background

Good physician-patient communication is crucial for optimal patient care, as evidenced by its positive impact on outcomes such as patients’ health, compliance, trust, and related healthcare costs [1,2,3]. On the other hand, poor communication accounts for a significant portion of patients’ complaints [4]. As in many countries, the Swiss framework for undergraduate medical training requires medical faculties to provide communication-skills training and to conduct assessments of communication competencies throughout the undergraduate curriculum [5, 6]. Assessment of physician-patient communication competencies can be performed through direct observation of interactions with real patients, rating of encounters with standardized patients, rating of interactions recorded on audio or video, or patient or multisource questionnaires [7, 8]. Direct observation of clinical encounters with standardized patients is widely used by medical schools and residency programs, since it provides an evaluation of communication and interpersonal skills in a high-fidelity, authentic setting [7]. Since 2011, the clinical skills component of the Swiss Federal Licensing Exam (FLE), which consists of a 12-station Objective Structured Clinical Examination (OSCE) [9], has been used for the systematic assessment of communication competencies at all stations. Alongside the USA and Canada, Switzerland is one of the few countries in the world to have introduced the OSCE format as part of its national licensing exams. Despite the recognized appropriateness of the OSCE format for assessing complex communication skills [10, 11], the reliable and valid assessment of these competencies poses challenges.
For instance, the multiplicity of frameworks developed over the last decade to describe what constitutes “good physician-patient communication” and the “communication tasks” to be accomplished during medical encounters [12,13,14] has led to the development of many assessment instruments for communication competencies without an agreed-upon gold standard for the OSCE setting [15,16,17]. Furthermore, compared to other clinical skills, communication competencies seem to be harder to assess reliably [18]. Indeed, research shows low inter-case reliability as a consequence of high context-specificity, and low inter-rater reliability, which is probably intrinsic to the subjective nature of an assessment of communication [18]. During the clinical skills component of the FLE, communication competencies are assessed at every station using a global rating scale derived from the Analytic Global OSCE Ratings developed by Hodges and McIlroy [19, 20], which measures four dimensions: (a) addressing the patient’s needs, (b) structure of the conversation, (c) verbal expression, and (d) nonverbal expression. Several aspects were considered in the selection of this instrument: (1) the differences among Swiss medical schools in their instructional models and assessment tools for communication competencies; (2) ease of use by examiners without a need for extensive training; (3) sufficient generality to ensure that students of different medical faculties are not placed at a disadvantage; and (4) the requirement that the assessment of communication competencies be completed in less than 2 min. In the Swiss FLE context, this scale has shown good internal consistency among the four dimensions over the years, with a Cronbach’s alpha ranging from 0.85 to 0.90. However, an internal quality analysis (data not published) suggested that the scale’s high internal consistency might be due to the raters’ inability to differentiate between its four dimensions.
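For readers who want to reproduce this kind of internal-consistency check, the following minimal sketch computes Cronbach’s alpha for a four-dimension rating scale. The data here are synthetic and purely illustrative (they are not FLE exam data): 200 hypothetical examinees are each scored on four dimensions derived from one shared latent score, so the items correlate the way highly consistent rater data would.

```python
from statistics import variance
from random import Random

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-examinee item-score rows.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    where k is the number of items (here, the four scale dimensions).
    """
    k = len(scores[0])
    item_vars = [variance(col) for col in zip(*scores)]          # per-item sample variance
    total_var = variance([sum(row) for row in scores])           # variance of the total score
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Synthetic, illustrative ratings: 200 examinees, 4 dimensions on a 1-4 scale,
# each row generated from a shared latent score so the dimensions correlate.
rng = Random(0)
ratings = []
for _ in range(200):
    latent = rng.gauss(2.5, 0.6)
    ratings.append([min(4, max(1, round(latent + rng.gauss(0, 0.4)))) for _ in range(4)])

alpha = cronbach_alpha(ratings)
```

With strongly correlated dimensions such as these, alpha lands in the high range reported for the FLE scale; the point of the paragraph above is precisely that such a high value can reflect raters’ inability to separate the dimensions rather than a genuinely multidimensional assessment.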

These reflections led to the question of how to improve assessments of communication competencies in the Swiss FLE. The Swiss Federal Office of Public Health funded a national project in 2014 to address this challenge. The first step of the project consisted of a nationwide survey collecting data from instructors and students on how Swiss medical schools train students to apply communication skills [5]. The second consisted of a literature review and a second survey exploring the perspectives of FLE candidates, raters, and communication-competency instructors on how to improve assessments of physician-patient communication for the Swiss FLE. This allowed us to identify four potential areas for improvement: (1) the station design, (2) the rating tool, (3) the raters’ training, and (4) the role of standardized patients (SPs). The next step aimed at developing concrete measures pertaining to these areas by organizing a symposium with international experts. This article reports and discusses the main themes that emerged from this symposium. Given the scarcity of evidence-based recommendations for physician-patient communication assessment in high-stakes licensing OSCEs [12, 18, 21,22,23], we believe this article may offer practical suggestions to all those involved in organizing this type of OSCE.

Methods

Participants

Twenty-nine communication and assessment experts met for a 2-day symposium held in February 2016 in Bern, Switzerland. The four international experts (from Canada, Italy, the Netherlands, and the United Kingdom) were recognized for their expertise and research activity either in teaching or assessing physician-patient communication competencies, or in conducting national high-stakes OSCEs. The twenty-five experts from the five Swiss medical schools were communication instructors and researchers, members of the experts’ group in charge of the conceptualization and quality improvement of the national OSCE, and faculty members in charge of the communication-competencies curricula.

Symposium delivery and methods

The symposium started with a presentation of the above-mentioned national survey. Each of the international experts then presented their perspective on the assessment of communication competencies. Participants were then divided into small groups and discussed possible improvements regarding the four predefined areas (station design, rating tool, raters’ training, and SPs’ role). At the end of each day, every group presented and discussed its results in a plenary session. The last part of the symposium consisted of a plenary discussion of the main lessons learned. Participants were asked to propose and discuss concrete implications for future communication-skills assessment within the clinical skills component of the Swiss FLE. The plenary sessions were video-recorded. At the end of the symposium, we asked each participant to write down a personal take-home message.

Data analysis

Data generated in the symposium were available for qualitative analysis and consisted of video recordings of the two plenary sessions (2 h 55 min), of the written summaries of group work, and of the cards with personal take-home messages. Data were analyzed using a five-phase thematic analysis approach [24]. Thematic analysis is a flexible approach to the analysis of different types of qualitative data [24]. Since analysis began several weeks after the symposium and the data were derived from different sources, thematic analysis seemed to us an appropriate method for analyzing and organizing the available material, thus minimizing the risk of memory distortions. All authors of this manuscript participated in the symposium. CK, MM, RB, SH, and KS conducted phases 1 to 3 (i.e., they became familiar with the data, generated codes, and identified themes). All authors were involved in reviewing and labelling the themes (phases 4 and 5). Discrepancies were discussed until consensus was achieved. Themes were categorized into the four predefined areas. A theme was reported only if it was addressed in all three settings (plenary sessions, work groups, and take-home messages), as an indicator of its importance to the participants.

Results

The thematic analysis helped us highlight nine major themes, which were classified according to the four pre-established areas (Table 1). Statements written in quotation marks and italics correspond to the verbatim transcription of excerpts from the video recordings.

Table 1 Major suggestions identified through the thematic analysis

How can the design of OSCE stations improve the assessment of physician-patient communication competencies?

The variety of contexts to which candidates are exposed during the FLE already covers a large sample of situations. However, to better discriminate between levels of communication competence, participants also suggested designing stations with a specific focus on communication:

  1)

    Scenarios measuring the adequacy of examinees’ responses to a pre-specified patient situation with emotional distress

Participants suggested “enriching some of the traditional stations with specific communication challenges, for example, by introducing specific emotional cues or concerns in the OSCE case”. These cases aim to measure the appropriateness of the examinee’s responses to emotional distress portrayed by the SP. “The emotional cues or concerns could be expressed verbally or non-verbally and should be related to the medical problem” or its perceived consequences. A suggestion was made to use the Verona Coding Definitions of Emotional Sequence (VRCoDES) [25, 26] as a framework for the development of such scenarios, and to develop a measure to assess the appropriateness of the examinee’s response. Participants anticipated a potential pitfall of this approach “if the examinees expected an emotional agenda in every patient encounter”, and consequently, would adopt a non-authentic, test-induced communication style. Hence, participants suggested limiting the number of stations with specific emotional stimuli and varying the types and intensity of the emotional states to be portrayed (e.g., anger, sadness, or anxiety).

  2)

    Scenarios with the main focus on specific communication situations

Participants suggested “developing specific OSCE scenarios where communication would be the core of the clinical encounter”. For such stations, specific communication situations (e.g., breaking bad news or a motivational interview) were proposed. To ensure content validity, participants proposed the development of OSCE scenarios that were built on validated communication models (e.g., the SPIKES-model for breaking bad news) [20]. The importance of selecting communication models corresponding to those taught during medical training was stressed. Participants also suggested creating “a platform for the communication instructors from the five medical schools to exchange information and impressions about such models”.

  3)

    Involvement of communication experts in the development of the OSCE stations

Given the specificity of the communication models used in the new OSCE stations, the participants anticipated that “not all clinicians would be familiar with these concepts”. Therefore, a recommendation was made to pair communication experts with clinical experts for the case-writing process.

  4)

    Balance between authenticity and standardization

The participants stressed the “need to strive for high levels of authenticity and standardization in the context of high-stakes assessments”. They pointed out the difficult trade-off between these two characteristics. For example, if, in the attempt to achieve higher standardization, case writers develop very detailed SP scenarios, the SPs would have less flexibility in adjusting their role-plays to correspond to the quality of the examinee’s communication. “A one-fits-all response of the SP to all examinees’ interactions would, therefore, decrease the authenticity of the experience”. For this reason, some participants proposed allowing some flexibility in the SP’s portrayal, based on whether or not the examinee adopted the expected communication attitude.

How can the rating scale improve the assessment of physician-patient communication competencies?

Medical content (medical history, physical examination, and management) is currently assessed using a case-specific checklist, while communication competencies are assessed at all stations using a global rating scale derived from the Analytic Global OSCE Ratings [19, 20].

  5)

    Ensure the presence of items that capture case-specific communication outcomes

Participants stressed the importance of “ensuring that items on the checklists and the global rating scale systematically capture the specific communication goals of each station”. From the perspective of a licensing examination, participants pointed out how it might be more meaningful to assess the “outcome of the encounter” (e.g., “Did the candidate obtain relevant patient information?” “Did the candidate ease the patient’s fear?” or “Did the patient understand the candidate’s explanation?”), than the technique the candidate used to achieve the results (e.g., “Did the candidate use the correct communication technique or model?”).

  6)

    Having a global rating scale for the assessment of general physician-patient communication competencies

Given the concerns about the inability of raters to differentiate between the four dimensions of the Analytic Global OSCE Ratings, participants proposed a focus on familiarizing raters with the scale, rather than changing it. To achieve this, they proposed “having all faculty members in the undergraduate curriculum use this scale for both summative and formative assessments”.

How can simulated patients contribute to the assessment of physician-patient communication competencies?

  7)

    Additional assessment of communication by SPs

Some participants suggested involving SPs in the evaluation of communication competencies in order to discriminate good communication more accurately. They argued that “assessments by SPs could be complementary to those of physician raters because they perceive other dimensions of communication”. The feasibility of this proposal depends on the time interval between stations: a short interval (e.g., 2 min in our setting) might prevent SPs from conducting a thorough evaluation.

  8)

    Adapting SPs’ training to the new stations for the assessment of communication competencies

Participants expressed concern that the introduction of scenarios in which SPs have to portray pre-determined verbal and non-verbal emotional cues at a pre-defined level of intensity would increase the complexity of the SPs’ role-play and training. The standardization imperatives of the OSCE require SPs to provide the same information to each candidate, regardless of the quality of their communication competencies. Hence, some participants suggested allowing greater flexibility in the SPs’ role-plays. Although such role-plays and the corresponding SP training are challenging, “standardization can be ensured by providing SPs with examples of how to respond to ‘unsatisfactory, intermediate, and good’ responses by students/candidates” and by training them to react differently to such behaviors.

How can the raters’ training be improved?

  9)

    Adapt the raters’ training to the stations dedicated to the assessment of communication competencies

With the introduction of OSCE stations dedicated to communication competencies, raters’ training must address all aspects related to the assessments conducted at such stations. In particular, raters should know how to use specific assessment criteria (e.g., those inspired by the Verona Coding Definitions of Emotional Sequence) and models, especially if they are not communication experts. As mentioned previously, frequent use of the same rating scale for the clinical skills component of the FLE throughout the undergraduate curriculum could also be considered as a type of training for raters. Finally, participants emphasized the importance of keeping rating scales as simple and intuitive as possible, thereby simplifying the raters’ training.

Discussion

Our symposium identified significant elements for improvement, mainly concerning the design and development of OSCE cases and the assessment instruments.

Quality in the assessment of physician-patient communication competencies at the Swiss FLE has so far been ensured by at least three elements: the blueprint of OSCE stations, which provides a large sample of clinical situations (emergency, acute, chronic, and palliative) across different clinical settings (hospital or ambulatory) and disciplines; the systematic assessment of general communication competencies at each OSCE station; and the monitoring of the psychometric properties of the rating tool over the years [9]. This, however, should not be considered a sufficient condition for the thorough evaluation of communication competencies.

Our suggestion to introduce concrete elements in OSCE scenarios, so as to expose candidates to specific communication aspects (Suggestions 1 and 2), is corroborated by a recent analysis by the National Board of Medical Examiners (NBME), which reviewed all components of the United States Medical Licensing Examination (USMLE) [23]. This analysis showed that in OSCE scenarios, the biomedical content dominated over the psychosocial or emotional details. The latter, although present, often lacked sufficient detail to elicit the desired communication skills. This may partly explain why examinees focus more on data gathering than on patient-centered behaviors [23]. This supports the importance of developing OSCE scenarios linked to concrete station endpoints [13, 14]. For example, de Haes and colleagues developed a very interesting framework, named the six-function model of medical communication [14], which provides opportunities to focus on (1) fostering the relationship, (2) gathering information, (3) information provision, (4) decision making, (5) enabling disease- and treatment-related behavior, and (6) responding to emotions. For each of these functions, it is possible to use validated, specific communication models, e.g., the SPIKES model for breaking bad news [27], or the Verona Coding Definitions of Emotional Sequence (VRCoDES) [25, 26]. The VRCoDES was developed to analyze emotional communication during patient encounters, and it classifies clinicians’ responses to patients’ cues and emotions along two main dimensions: explicitness of the response (explicitly or not explicitly referring to the cue/concern) and provision of space (the response provides or reduces space for further disclosure of the cue/concern) [25, 26]. Zhou and colleagues successfully applied the VRCoDES in the context of OSCEs with good reliability [28].
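The two-dimensional VR-CoDES response grid described above can be sketched as a small data structure. This is an illustrative simplification only: the class and label names below are hypothetical and do not reproduce the official VR-CoDES category labels; the sketch merely shows how crossing the two dimensions yields four response types a rater could score against.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponseCode:
    """One cell of the (hypothetical, simplified) 2x2 response grid."""
    explicit: bool        # does the reply explicitly refer to the patient's cue/concern?
    provides_space: bool  # does the reply invite further disclosure, or close it down?

    @property
    def label(self) -> str:
        explicitness = "explicit" if self.explicit else "non-explicit"
        space = "provides space" if self.provides_space else "reduces space"
        return f"{explicitness}, {space}"

# Crossing the two dimensions gives the four broad response types.
grid = [ResponseCode(e, s) for e in (True, False) for s in (True, False)]
labels = [code.label for code in grid]
```

A station-specific rating item of the kind proposed in Suggestion 5 could then ask which of these four cells best describes the examinee’s response to a scripted emotional cue, rather than scoring the communication technique in the abstract.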
To ensure the validity and transparency of the assessment criteria for communication, the models used should (a) be validated and (b) be those used during training, and communication experts should be involved in case development (Suggestion 3). This prompted the idea of creating a platform to facilitate communication among the instructors of the five medical schools, allowing them to exchange information and impressions about the models used in teaching.

Concerning the importance of ensuring authenticity in the SPs’ portrayals and the validity of the assessment, we already develop scenarios inspired by real patient narratives [27], which we enrich with psychosocial details [23]. An interesting suggestion was to enable SPs to vary their portrayals depending on whether or not the examinee adopts the expected communication attitude (Suggestion 4). Standardization of the biomedical content of cases can be maintained by providing SPs and raters in training with examples of adequate and inadequate responses to the emotional cues in the scenarios. The training of SPs and raters therefore requires attention to the balance between standardization and authenticity (Suggestions 8 and 9).

Another condition evoked to ensure the validity of the assessment of communication competencies is the selection of an appropriate rating tool (Suggestions 5 and 6). There is an ongoing debate in the literature about which type of rating tool best suits the assessment of communication competencies in the OSCE setting. Recent research has suggested some advantages of global rating scales over checklists, such as greater internal consistency [15, 29, 30] and the ability to capture multiple aspects of performance when used by experienced clinicians with adequate training [15, 31,32,33]. On the other hand, checklists seem to be better at capturing specific details of communication behaviors, to be less prone to rater bias, and to be useful to non-experts [31, 34].

Our suggestion is consistent with the finding that combining the global rating scale and checklists increases the reliability and content validity of the communication-competencies assessment [15, 22, 29, 30].

As for the appropriateness of keeping the Analytic Global OSCE Ratings [19, 20] (Suggestion 6) for the assessment of general communication competencies at every station, we did not find a more suitable tool in the literature, nor did existing alternatives offer advantages from a psychometric point of view [8, 16, 17]. Moreover, the alternative tools were lengthy, with 10 to 36 items [16], and hence unsuitable in a setting where examiners complete rating scales while the OSCE encounter is in progress and the time between two examinees is only 2 min. The Analytic Global OSCE Ratings, with its four items, seems to be the best fit for our FLE context in terms of reliability, feasibility, and acceptance [10, 19, 35].

Our suggestion is that a well-suited evaluation tool for communication-competencies assessments in the licensing OSCE setting should be a validated scale with good psychometric properties that is well known and accepted by raters, feasible in the exam setting, with dimensions consistent with instruction and items able to capture the communication goals of the station.

Limitations

The symposium was neither aimed at nor empowered to produce consensus statements. Indeed, agreement on the results of the thematic analysis was obtained only from the authors of this article, not from the other participants. Furthermore, the results were not additionally reviewed by external experts. For these reasons, we considered it appropriate to speak of suggestions rather than recommendations. However, we believe our conclusions are credible for the following reasons. First, all authors actively participated in the symposium and were therefore able to capture the essence of the discussions. Second, the material was analyzed separately by different authors and comes from original, verifiable sources such as the video recordings of the plenary discussions. Third, the analysis also took into account each participant’s individual take-home message. Such a symposium can provide major directions; detailed concepts, however, must be developed on the basis of these proposals. Nevertheless, to our knowledge, this is one of the few publications of this type, and the results should be useful to all professionals involved in the assessment of communication skills.

Conclusion

This article offers nine practical suggestions, at both structural and process levels, to improve the assessment of communication competencies in high-stakes licensing OSCEs. Further research is needed to test to what extent the implementation of all these suggestions will effectively contribute to improvements in the quality of communication-competencies assessments.

Availability of data and materials

The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

OSCE(s):

Objective Structured Clinical Examination(s)

FLE:

Federal Licensing Exam

SP(s):

Standardized patient(s)

VRCoDES:

Verona Coding Definitions of Emotional Sequence

References

1. Dwamena F, et al. Interventions for providers to promote a patient-centred approach in clinical consultations. Cochrane Database Syst Rev. 2012;12:CD003267. https://doi.org/10.1002/14651858.CD003267.pub2.

2. Joosten EA, et al. Systematic review of the effects of shared decision-making on patient satisfaction, treatment adherence and health status. Psychother Psychosom. 2008;77(4):219–26. https://doi.org/10.1159/000126073.

3. Zolnierek KB, Dimatteo MR. Physician communication and patient adherence to treatment: a meta-analysis. Med Care. 2009;47(8):826–34. https://doi.org/10.1097/MLR.0b013e31819a5acc.

4. Reader TW, Gillespie A, Roberts J. Patient complaints in healthcare systems: a systematic review and coding taxonomy. BMJ Qual Saf. 2014;23(8):678–89. https://doi.org/10.1136/bmjqs-2013-002437.

5. Junod Perron N, et al. How do Swiss medical schools prepare their students to become good communicators in their future professional careers: a questionnaire and interview study involving medical graduates, teachers and curriculum coordinators. BMC Med Educ. 2018;18(1):285. https://doi.org/10.1186/s12909-018-1376-y.

6. CIMS-SMIFK. PROFILES - Principal Relevant Objectives and Framework for Integrative Learning and Education in Switzerland for the training of medical students. 2017 [cited 2020]. Available from: https://www.profilesmed.ch.

7. Duffy FD, et al. Assessing competence in communication and interpersonal skills: the Kalamazoo II report. Acad Med. 2004;79(6):495–507. https://doi.org/10.1097/00001888-200406000-00002.

8. Kiessling C, et al. General principles to consider when designing a clinical communication assessment program. Patient Educ Couns. 2017;100(9):1762–8. https://doi.org/10.1016/j.pec.2017.03.027.

9. Berendonk C, et al. The new final Clinical Skills examination in human medicine in Switzerland: Essential steps of exam development, implementation and evaluation, and central insights from the perspective of the national Working Group. GMS Z Med Ausbild. 2015;32(4):Doc40. https://doi.org/10.3205/zma000982.

10. Hodges B, et al. Evaluating communication skills in the OSCE format: reliability and generalizability. Med Educ. 1996;30(1):38–43.

11. Wass V, et al. Assessment of clinical competence. Lancet. 2001;357(9260):945–9. https://doi.org/10.1016/s0140-6736(00)04221-5.

12. Deveugele M, et al. Teaching communication skills to medical students, a challenge in the curriculum? Patient Educ Couns. 2005;58(3):265–70. https://doi.org/10.1016/j.pec.2005.06.004.

13. Brown RF, Bylund CL. Communication skills training: describing a new conceptual model. Acad Med. 2008;83(1):37–44. https://doi.org/10.1097/ACM.0b013e31815c631e.

14. de Haes H, Bensing J. Endpoints in medical communication research, proposing a framework of functions and outcomes. Patient Educ Couns. 2009;74(3):287–94. https://doi.org/10.1016/j.pec.2008.12.006.

15. Setyonugroho W, Kennedy KM, Kropmans TJ. Reliability and validity of OSCE checklists used to assess the communication skills of undergraduate medical students: A systematic review. Patient Educ Couns. 2015. https://doi.org/10.1016/j.pec.2015.06.004.

16. Comert M, et al. Assessing communication skills of medical students in objective structured clinical examinations (OSCE)--a systematic review of rating scales. PLoS One. 2016;11(3):e0152717. https://doi.org/10.1371/journal.pone.0152717.

17. Schirmer JM, et al. Assessing communication competence: a review of current tools. Fam Med. 2005;37(3):184–92.

18. Brannick MT, Erol-Korkmaz HT, Prewett M. A systematic review of the reliability of objective structured clinical examination scores. Med Educ. 2011;45(12):1181–9. https://doi.org/10.1111/j.1365-2923.2011.04075.x.

19. Hodges B, McIlroy JH. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37(11):1012–6.

20. Scheffer S, et al. Assessing students' communication skills: validation of a global rating. Adv Health Sci Educ Theory Pract. 2008;13(5):583–92. https://doi.org/10.1007/s10459-007-9074-2.

21. Norcini J, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 conference. Med Teach. 2011;33(3):206–14. https://doi.org/10.3109/0142159X.2011.551559.

22. Daniels VJ, Harley D. The effect on reliability and sensitivity to level of training of combining analytic and holistic rating scales for assessing communication skills in an internal medicine resident OSCE. Patient Educ Couns. 2017;100(7):1382–6. https://doi.org/10.1016/j.pec.2017.02.014.

23. Hoppe RB, et al. Enhancement of the assessment of physician-patient communication skills in the United States medical licensing examination. Acad Med. 2013;88(11):1670–5. https://doi.org/10.1097/ACM.0b013e3182a7f75a.

24. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. https://doi.org/10.1191/1478088706qp063oa.

25. Piccolo LD, et al. Verona coding definitions of emotional sequences (VR-CoDES): conceptual framework and future directions. Patient Educ Couns. 2017;100(12):2303–11. https://doi.org/10.1016/j.pec.2017.06.026.

26. Zimmermann C, et al. Coding patient emotional cues and concerns in medical consultations: the Verona coding definitions of emotional sequences (VR-CoDES). Patient Educ Couns. 2011;82(2):141–8. https://doi.org/10.1016/j.pec.2010.03.017.

27. Baile WF, Buckman R, Lenzi R, Glober G, Beale EA, Kudelka AP. SPIKES—A Six Step Protocol for Delivering Bad News: Application to the Patient with Cancer. Oncologist. 2000;5:302–11. https://doi.org/10.1634/theoncologist.5-4-302.

28. Zhou Y, et al. How do medical students respond to emotional cues and concerns expressed by simulated patients during OSCE consultations?--a multilevel study. PLoS One. 2013;8(10):e79166. https://doi.org/10.1371/journal.pone.0079166.

29. Barry M, Bradshaw C, Noonan M. Improving the content and face validity of OSCE assessment marking criteria on an undergraduate midwifery programme: a quality initiative. Nurse Educ Pract. 2013;13(5):477–80. https://doi.org/10.1016/j.nepr.2012.11.006.

30. Moineau G, et al. Comparison of student examiner to faculty examiner scoring and feedback in an OSCE. Med Educ. 2011;45(2):183–91. https://doi.org/10.1111/j.1365-2923.2010.03800.x.

31. Regehr G, et al. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73(9):993–7.

32. Lang F, et al. Communication assessment using the common ground instrument: psychometric properties. Fam Med. 2004;36(3):189–98.

33. Regehr G, et al. OSCE performance evaluations made by standardized patients: comparing checklist and global rating scores. Acad Med. 1999;74(10 Suppl):S135–7.

34. Cunnington JP, Neville AJ, Norman GR. The risks of thoroughness: reliability and validity of global ratings and checklists in an OSCE. Adv Health Sci Educ Theory Pract. 1996;1(3):227–33. https://doi.org/10.1007/BF00162920.

    Article  Google Scholar 

  35. 35.

    Hodges B, et al. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74(10):1129–34.

    Article  Google Scholar 

Download references

Acknowledgements

We thank all of the participants of the symposium for their valuable input. We also thank the Federal Office of Public Health, Switzerland, and the Exam Committee of the Federal Licensing Exam (President Prof. Andre Perruchoud) for supporting this project.

Funding

(Also see article paragraph: “Role of the funding source”)

The Swiss Federal Office of Public Health funded a national project, from 2014 to 2018, aimed at addressing the challenges of improving the assessment of communication competencies. The symposium from which this article originates was part of this project and was supported by the sponsor. The funding source had no involvement in the conceptualization or conduct of this research. The sponsor had no involvement in any of the following steps: study design, collection of data, analysis and interpretation of data, writing of the report, or preparation of the article.

According to Swiss legislation concerning federal examinations, the Federal Office of Public Health must approve the submission of any publication concerning the Federal Licensing Examinations, in order to verify the absence of sensitive or confidential information concerning the Swiss Federal Licensing Exam. The Swiss Federal Office of Public Health approved the submission of this article for publication on 4 June 2019.

Author information

Affiliations

Authors

Contributions

Also see annexed document: “CRediT authors statement”. All authors made substantial contributions to each of the following steps: Conception and design of the study, Revising the article critically for important intellectual content, Final approval of the version to be submitted. Conceptualization: S. Huwendiek M. Monti, K. P. Schnabel, J. Breckwoldt, N. Junod-Perron, S. Feller, R. Bonvin. Methodology: M. Monti, R. Bonvin, S. Huwendiek. Formal Analysis: M. Monti, C. Klöckner-Cronauer, K. P. Schnabel, R. Bonvin, S. Huwendiek. Investigation: M. Monti, C. Klöckner-Cronauer, S. C. Hautz, K. P. Schnabel, J. Breckwoldt, N. Junod-Perron, S. Feller, R. Bonvin, S. Huwendiek. Resources (Provision of study materials, instrumentation, or other analysis tools): M. Monti, C. Klöckner-Cronauer, S. C. Hautz, R. Bonvin. Data Curation: M. Monti, C. Klöckner-Cronauer, K. P. Schnabel, R. Bonvin, S. Huwendiek. Writing – Original Draft: M. Monti, C. Klöckner-Cronauer, R. Bonvin, S. Huwendiek. Writing – Review & Editing: M. Monti, C. Klöckner-Cronauer, S. C. Hautz, K. P. Schnabel, J. Breckwoldt, N. Junod-Perron, S. Feller, R. Bonvin, S. Huwendiek. Visualization (Preparation, creation of the published work, data presentation): M. Monti, R. Bonvin, S. Huwendiek. Project Administration: S. Huwendiek & K.P. Schnabel. Funding Acquisition: S. Huwendiek & K. P. Schnabel. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Matteo Monti.

Ethics declarations

Ethics approval and consent to participate

The local ethics committee exempted the study from approval because, under Swiss law, approval is not required for studies in which non-personal health-related data are collected (article 2 of the Swiss Federal Act on Research involving Human Beings, Federal Council. Federal Act on Research involving Human Beings. https://www.admin.ch/opc/en/classified-compilation/20061313/index.html. Accessed 9 Dec 2019).

Consent for publication

Not applicable.

Competing interests

Raphael Bonvin, Jan Breckwoldt, Kai Schnabel and Sören Huwendiek are Associate Editors of BMC Medical Education.

All other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Monti, M., Klöckner-Cronauer, C., Hautz, S.C. et al. Improving the assessment of communication competencies in a national licensing OSCE: lessons learned from an experts’ symposium. BMC Med Educ 20, 171 (2020). https://doi.org/10.1186/s12909-020-02079-4
