
Script concordance test acceptability and utility for assessing medical students’ clinical reasoning: a user’s survey and an institutional prospective evaluation of students’ scores

Abstract

Script Concordance Testing (SCT) is a method for assessing clinical reasoning in health-care training. Our aim was to assess SCT acceptability and utility with a user survey and an institutional prospective evaluation of students’ scores. With an online user survey, we collected opinions and satisfaction data from all graduate students and teachers involved in the SCT setting. We also performed a prospective analysis comparing the scores obtained with SCT to those obtained with the national standard evaluation modality.

General opinions about SCT were mostly negative, and students tended to express more negative opinions and perceptions than teachers. The teachers’ satisfaction survey contained a lower proportion of negative responses and a higher proportion of neutral responses, and teachers gave a higher proportion of positive responses across all questions. Progressive clinical case (PCC) scores increased significantly each year, whereas SCT scores increased only between the first and second tests. PCC scores were significantly higher than SCT scores for the second and third tests. Overall, medical students’ and teachers’ global opinion of SCT was negative. SCT scores were initially quite similar to PCC scores, but PCC scores progressed more over time.

Highlights

• This study is the first to evaluate students’ and teachers’ opinions and perceptions about SCT and to compare SCT grades to those obtained with standard examination modalities.

• General students’ and teachers’ opinions about SCT were mostly negative.

• There was a higher progression through time for students’ scores obtained with standard examination modalities than with SCT.


Background

Script concordance testing (SCT) is a method used in the field of clinical reasoning assessment in health professions [1,2,3,4,5,6,7,8,9]. The reliability and validity of SCT for pre-graduate, graduate and post-graduate health students have been widely evaluated to date [10,11,12]. However, some threats to validity in the use of SCT have also been described [13]. Still, many issues surrounding SCT and its use to certify competence development have been raised, and many improvements have been proposed to date [13,14,15,16].

Uncertainty is inherent to medical reasoning, and one objective of medical education is to make students skilled in dealing with uncertainty [17]. SCT aims at assessing clinical reasoning under conditions of uncertainty in complex situations [5, 17]. It is designed to evaluate whether examinees’ knowledge is efficiently organized for clinical action [2]. SCT construction has been extensively described [8, 18]. An SCT begins with a short clinical scenario (vignette) depicting an authentic situation in which examinees must interpret data in order to make decisions. Each scenario is followed by a series of questions that call for judgment and reasoning about diagnostic possibilities or management options according to new elements provided by each question. It is mandatory that uncertainty, ambiguity or incompleteness be embedded in each case in order to simulate the ambiguous conditions observed in real life. The SCT scoring system is designed to measure the degree of concordance between examinees’ answers and those of a panel of experts. SCT thus takes into account the observed variability of experts’ responses to particular clinical situations. For each question, the answer provided by the greatest number of panel members (modal response) is considered the gold-standard reasoning under such circumstances. Other panel members’ answers reflect a difference of interpretation that can still be clinically valuable and worthy of partial credit, depending on the number of experts who have given the same answer [5, 10].
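The aggregate scoring rule described above can be sketched as follows (a minimal illustration with hypothetical panel data, not the authors’ implementation): the modal panel answer earns full credit, and every other answer earns partial credit proportional to the number of experts who chose it.

```python
# Sketch of aggregate SCT scoring: credit for an answer equals the number of
# panelists who chose it divided by the number who chose the modal answer.
from collections import Counter

def sct_credits(panel_answers):
    """Map each observed answer to its credit (modal answer -> 1.0)."""
    counts = Counter(panel_answers)
    modal_votes = max(counts.values())
    return {answer: votes / modal_votes for answer, votes in counts.items()}

# Hypothetical panel of 15 experts answering one question on a -2..+2 Likert scale
panel = [+1] * 8 + [0] * 4 + [+2] * 2 + [-1] * 1
credits = sct_credits(panel)

print(credits[+1])  # modal answer: full credit, 1.0
print(credits[0])   # 4/8 = 0.5
print(credits[-1])  # 1/8 = 0.125
```

An examinee’s score on the question is then the credit attached to the answer they selected, so clinically defensible minority interpretations still earn partial credit.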

SCT is quite different from the current examination modalities in French medical faculties, which consist mainly of multiple-choice questions (MCQ) and progressive clinical cases (PCC).

MCQ and PCC aim at evaluating knowledge, whereas SCT aims at assessing reasoning competency under uncertainty. French medical students and medical teachers are not familiar with SCT, which has only recently been implemented in a few institutions such as the Medical School of the University of Angers, France. Thus, it seemed interesting to compare the standard examination modalities (PCC and MCQ) with SCT.

This was a prospective study in which all students in our Medical School were included and followed for 3 years. The aims of this study were, first, to evaluate students’ scores and their progression with an institutional prospective evaluation and, second, to evaluate SCT acceptability and utility for assessing medical students’ clinical reasoning with a user survey.

Methods

This was a prospective study in which all students at our medical school were included, with a 3-year follow-up. The aim was to compare students’ scores in a paired analysis and to evaluate their evolution through time. We also performed a survey to evaluate students’ and teachers’ acceptance of SCT as a clinical reasoning test.

Participants

This study was set at the Medical School of the University of Angers, France. Script concordance testing was used as a university examination modality, in combination with the usual examination modalities, for third-, fourth- and fifth-year graduate medical students. All students and medical teachers involved in this SCT setting between September 2017 and January 2020 were included in the survey (3 academic years: 2017–2018, 2018–2019 and 2019–2020). The medical teachers who were interviewed were involved as SCT designers and/or as expert panelists. We also prospectively analyzed the examination scores of all students who went through 3 successive examinations during this period: the first examination or test 1 (T1) (first year of the study), the second examination or test 2 (T2) (second year of the study) and the third examination or test 3 (T3) (third year of the study). All SCTs were structured similarly: a vignette (short clinical scenario) followed by a series of 1 or 3 questions that aimed at exploring any field of medical reasoning. All SCTs had been validated beforehand by the teacher in charge of the concerned subject and of the content of the examination questions, and by the referent teacher responsible for the whole examination session. An example of an SCT used in this study is provided in the Supplementary data. For each SCT, a minimum of 15 experts was required. All students and teachers had been institutionally prepared for SCT: teachers attended a 1-h preparation conference on SCT conception, and all SCTs were reviewed by a referent teacher before being submitted to the students; students attended a 2-h preparation conference, including a training example, before taking the examinations.

Survey procedure and analysis

All participants were invited to access an online survey between March 1st and March 15th, 2020. Invitations to participate in the survey were sent by e-mail (one invitation followed by 2 reminders). The survey was available through Microsoft Forms (Office 365 A1 license for Angers University). The design and validation of the survey were performed by all authors, who were also the 4 pedagogical referents of our institution. The survey is reported in Table 1. Five-item Likert scales were used for questions 1 to 20. Questions 1 to 17 assessed students’ and teachers’ opinions (Likert items: “strongly agree”, “agree”, “neutral”, “disagree” and “strongly disagree”) and questions 18 to 20 their satisfaction (Likert items: “very satisfied”, “satisfied”, “neutral”, “unsatisfied” and “very unsatisfied”). Questions were also divided into 4 groups: perceptions about SCT (questions 1 to 6), opinions about how SCT should be implemented and for what academic purposes (questions 7 to 14), opinions about SCT’s overall utility (questions 15 to 17) and satisfaction (questions 18 to 20). In order to facilitate the overview of the results, all answers were also classified as “positive”, “neutral” or “negative” depending on how they were considered with regard to SCT.
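The positive/neutral/negative classification step can be sketched as follows. This is a hypothetical illustration: the mapping of Likert items to a polarity, and the `pro_sct` flag for questions worded against SCT, are assumptions, since the exact per-question polarity is given in Table 1 rather than here.

```python
# Hypothetical sketch: map a 5-point Likert answer to "positive", "neutral"
# or "negative" with respect to SCT, flipping polarity for negatively
# worded questions.
LIKERT_POLARITY = {
    "strongly agree": 2, "agree": 1, "neutral": 0,
    "disagree": -1, "strongly disagree": -2,
}

def classify(answer, pro_sct=True):
    """Classify one Likert answer relative to SCT.

    pro_sct=True assumes agreeing with the question is favorable to SCT;
    pass pro_sct=False for questions worded against SCT.
    """
    score = LIKERT_POLARITY[answer.lower()]
    if not pro_sct:
        score = -score
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("agree"))                 # positive
print(classify("agree", pro_sct=False))  # negative
print(classify("neutral"))               # neutral
```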

Table 1 Medical students’ and teachers’ satisfaction and opinion outcomes (Question 6 was for students only)

A qualitative evaluation was also performed to document the opinion of students and teachers. Question 21 was an optional open question that intended to gather comments which were not addressed by the survey. Response to question 21 was not mandatory. The original version of the survey was in French. It is available as supplementary data.

Comparative analysis of examination results before and after SCT setting


All students were evaluated at the end of each semester with a standard examination: 4 to 5 SCTs and 4 to 5 progressive clinical cases (PCC), each of which included 15 multiple-choice questions (MCQ) (the standard examination modality in French medicine faculties). All PCC, MCQ and SCT used in the present study were designed in line with national guidelines, in order to be as similar as possible to what is expected for the French national ranking examination (“Examen National Classant”) [19].

SCT and PCC scores were compared with one another for each student at each of the three tests (T1, T2 and T3). Progression scores were measured for all students who went through the 3 successive examinations during the study period.

Ethics

Students’ and teachers’ participation was anonymous and voluntary. All participants were informed of their participation in the study by e-mail. No written consent was required for publication. The experimental protocol was conducted in accordance with institutional guidelines and relevant regulations.

Statistical analysis

Statistical analyses were performed with SPSS 15.0 (IBM Corp., Armonk, NY, USA) and Systat v13 (Systat Software, Inc., San José, CA, USA). All data are expressed as means ± standard deviation. Qualitative and quantitative variables were compared using Chi-square and Mann–Whitney tests, respectively. Differences between SCT and PCC scores were assessed for each subject and compared using a Wilcoxon test, with paired analysis performed for each student. The Spearman rank correlation test was used to assess correlations. Statistical significance was defined as p < 0.05.
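The paired comparison described above can be sketched as follows. This is an illustration on hypothetical scores (the study’s raw data are in the supplementary files), using SciPy rather than the SPSS/Systat packages the authors actually used; the score distributions are invented for the example.

```python
# Sketch of the paired SCT-vs-PCC comparison: Wilcoxon signed-rank test on
# paired scores for the same students, plus a Spearman rank correlation.
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(0)
n_students = 50
pcc = rng.normal(60.0, 8.0, n_students)       # hypothetical PCC scores (/100)
sct = pcc - rng.normal(7.0, 5.0, n_students)  # paired SCT scores, lower on average

# Wilcoxon signed-rank test on the paired differences
stat, p_value = wilcoxon(pcc, sct)

# Spearman rank correlation between the two modalities
rho, p_rho = spearmanr(pcc, sct)

print(f"Wilcoxon p = {p_value:.4f}, Spearman rho = {rho:.2f}")
```

With paired data like these, the Wilcoxon test is preferred over an unpaired Mann–Whitney comparison because each student serves as their own control.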

Results

Participants

596 medical students and 126 medical teachers were invited to participate in the study. The overall response rate to the survey was 33% (241/722). The students’ response rate was 33% (200/596) and the teachers’ response rate was 32% (41/126). There was no significant difference between the 2 response rates (p = 0.953).

Survey analysis

The results of the students’ and teachers’ opinion and satisfaction surveys are summarized in Tables 2 and 3. An overall view of the mean results of both surveys is provided in Table 4. Teachers’ and students’ general positions (opinions and perceptions) regarding all questions tended to be negative: 47% and 58%, respectively. The proportion of neutral responses for satisfaction was higher for teachers than for students (47% vs 15%, respectively; p = 0.05), whereas the overall proportion of neutral responses for each survey was similar for students and teachers (17% vs 20%, respectively; p = 0.844). There was a lower proportion of negative responses in the teachers’ satisfaction results than in the students’ (25% vs 60%, respectively; p = 0.046): students were globally less satisfied (60% not satisfied), whereas teachers were globally more undecided about their satisfaction (47% neutral). There was a higher proportion of negative positions across all questions among students (58%) than among teachers (47%) (p = 0.04), and a higher proportion of positive positions among teachers (33%) than among students (25%) (p = 0.041).

Table 2 Students’ opinion and satisfaction outcomes
Table 3 Teachers’ opinion and satisfaction outcomes
Table 4 Overall opinion and satisfaction outcomes

Qualitative outcomes: expressed opinions

Negative and positive comments raised by the students and teachers who answered the optional open question (Q21) are summarized in Table 5. In total, 44% of the students (88/200) and 27% of the teachers (11/41) who participated in the study provided qualitative comments by answering question 21. Students’ and teachers’ feedback was globally negative as well. Fourteen negative points and five positive points were raised by students; eight negative points and one positive point were raised by teachers. Negative points were thus raised more frequently than positive ones by both students and teachers. Some points were often mentioned by both students and teachers: “SCT are confusing”, “SCT are too ambiguous” and “a too high variability exists between experts’ responses”. One teacher raised the point that “SCT prevent students from good medical reasoning”. Technical difficulties were also raised by some teachers, such as the difficulty of recruiting enough experts. Another negative point raised by some students was the possible mismatch between the answers expected in SCT and the content of their lectures.

Table 5 Negative and positive elements of students’ and teachers’ feedback (question 21)

Comparative analysis of examination results obtained with SCT and progressive clinical cases

The results of the comparative analysis of students’ SCT and PCC examination scores are shown in Table 6 and Fig. 1. PCC scores progressively increased each year, with a significant difference between each year (p < 0.001) and a yearly mean progression of 9.25 ± 3.85 points (out of 100). In contrast, SCT scores increased significantly only between the first and second tests (p = 0.004) (+ 4 points out of 100); the difference between the second and third tests was not significant (p = 0.770) (+ 2 points out of 100). PCC scores were significantly higher than SCT scores for the second and third tests (p < 0.001) (+ 7 points and + 11.5 points, respectively).

Table 6 Results of comparative analysis of SCT and PCC examinations scores obtained by students
Fig. 1

Evolution of the mean scores obtained by students on Progressive Clinical Cases (■) and Script Concordance Tests (□), expressed in absolute value out of 20, at year one (T1), year two (T2) and year three (T3) of the study

Discussion

The response rates for the online survey were satisfactory for both teachers and students (approximately 33% each). This response rate can be considered fairly high, especially compared with other similar studies, in which reported response rates varied from 7 to 20% [20,21,22,23]. This suggests that the study population felt concerned by the topic. No incentive had been proposed to increase the response rate. It is also interesting to note that the response rates were the same for both students and teachers.

The present work takes place in a context of profound changes in medical studies in France [19]. The reform of the undergraduate curriculum will be effective in 2023, switching from a traditional objective-based approach to a competence-based approach. The final national examination ranking will thus be replaced by an evaluation system that will assign each student a level based on three criteria: theoretical knowledge, clinical skills and the student’s progress training chart. Theoretical knowledge assessment will undergo a major diversification, with the introduction of rich-context multiple-choice questions (MCQ), key-feature problems (KFP) and SCT. The assessment of clinical skills will be carried out through Objective Structured Clinical Examinations (OSCE). Consequently, SCT will be a mandatory new examination modality for every French medical student. It is thus interesting to compare the standard examination modalities (PCC with MCQ) with SCT.

The existing literature has demonstrated the validity, reliability and feasibility of SCT at both undergraduate and graduate levels for assessing clinical reasoning skills in contexts of uncertainty across a wide range of healthcare curricula [10,11,12]. Some threats to validity in the use of SCT have, however, been raised to date [13]. Yet, even though SCT is now mandatory as part of the national ranking examination for all French undergraduate curricula, and even though medical schools have been instructed to train teachers and students in this assessment method for more than 4 years, we must admit that many French medical teachers remain unfamiliar with SCT. The results of the present study also demonstrate this fact. It seems obvious that, considering this specific French context, the topic of this article and these results warrant consideration.

These results might be explained by a distrust of innovation in an environment that has only known one kind of assessment tool, namely MCQ. These opinion questionnaires might reflect a lack of training in the technique and a lack of information on the concepts underlying the evaluative process. For instance, the following aspects of SCT are critical in order to obtain sufficient acceptance from both students and teachers: understanding the concept of clinical reasoning in a context of uncertainty, the SCT scoring method (which no longer allows for a single correct answer) and the SCT construction method (which is diametrically different from that of MCQ). All these aspects are challenges to overcome in order to improve students’ and teachers’ acceptance of SCTs.

It could be interesting to find ways to improve teachers’ and students’ satisfaction with and acceptance of SCT. An interesting option could be the recently described “evolving SCTs” (E-SCTs), which participants consider more representative of real-life clinical reasoning than usual SCTs [6]. In E-SCTs, the patient’s clinical history “evolves” with the thoughtful integration of new information at each stage, so that decisions related to clinical decision-making are supposed to become increasingly clear [6]. Improvements in students’ training, teachers’ training and/or organizational modalities could also be useful.

Uncertainty is inherent to medical reasoning, and one objective of medical education is to train students to deal with uncertainty [17]. SCT appears to be a standardized, validated and reproducible tool for educating students about uncertainty in clinical practice, but it is not the only one [5, 9, 17, 24]. We think that, despite controversial opinions among medical students and teachers, SCT remains an interesting tool in this field.

The present study is the first to evaluate students’ and teachers’ opinions and perceptions about SCT and to compare SCT grades with those obtained with standard examination modalities (PCC). Medical students’ and teachers’ general opinion of the SCT setting in our center was globally negative, with a higher proportion of positive positions among teachers than among students. PCC scores significantly increased each year, but SCT scores increased only between the first and second tests. PCC scores were significantly higher than SCT scores for the second and third tests.

The neutral response rates were globally low for both teachers and students. This also indicates that the study population felt concerned and that participants had strong opinions about SCT. However, the proportion of neutral responses in the teachers’ satisfaction part of the survey was very high, indicating that teachers were more torn than students regarding their satisfaction with the SCT setting. Almost twice as many students as teachers expressed feedback about SCT in question 21. Feedback was mostly negative for both teachers and students as well.

The negative perceptions and opinions of SCT users, and the fact that SCT scores progressed less than those obtained with traditional examination modalities, should be discussed. The negative perceptions could mainly be linked to the novelty of SCT and to a lack of preparation of students and even teachers. Regarding the scores, the results seem positive, since they eliminate the hypothesis of an absence of correlation between students’ knowledge and their SCT results. Thus, the negative perceptions and opinions of SCT users could also be linked to insufficient information, education and training about SCT for teachers and students.

Surprisingly, only one previous study evaluating students’ perceptions of SCT can be found in the international English or French literature [25]. In that study, which aimed to evaluate SCT with undergraduate nursing students, it was shown that students appreciated SCT as part of a specific educational setting [25]. Since such data are lacking, we have no reference against which to compare our results. However, SCT seems to be largely used in Canada at every stage of medical studies [2, 11, 26, 27], and it seems to have been the case for years now. As a result, we can hypothesize that SCTs are better accepted by Canadian medical students and teachers than by their French counterparts. Differences exist between countries in how people or organizations deal with individual error, which is highly culture-dependent [28]. Generally, a higher tolerance for mistakes is observed in North America than in European countries such as France [29]. We hypothesize that this cultural difference in the perception of errors may explain teachers’ and students’ experience of and opinions about SCTs.

It is important to note that the students in our study were only graduate students. No postgraduate students, i.e., residents, had been solicited, since SCT had not been set up for postgraduate examinations. The perceptions and opinions of postgraduate students could have been different from those of graduate students.

We analyzed the evolution of PCC and SCT scores over 3 years. The students were initially inexperienced in both examination modalities, as confirmed by the fact that the scores obtained with PCC and SCT were similar during the first year. We then observed, during the second and third years, an increase in the scores for the two examination methods, but in different proportions. Indeed, PCC scores became significantly better than SCT scores, and the gap even widened with time. These results could appear surprising. Performance improvement in SCT has been demonstrated in a few disciplines [30, 31]. Furthermore, it has been shown that SCT performance correlates with clinical performance evaluations, unlike MCQ [32]. But in the same study, SCT also appeared initially less reliable and less preferred by students [32]. Similarly to our results, some studies reported that SCT scores also appeared to correlate with those obtained on classical MCQ tests for undergraduate students [12]. In addition, recent large studies carried out within French faculties have confirmed the utility of SCT in the current context, with good acceptability from the students’ point of view and without any pejorative arguments from the teachers’ point of view [12, 33].

A few limitations of the present study should be raised. First, despite very good response rates, most of the solicited students and teachers did not answer the online survey. Consequently, a recruitment bias is possible, considering that the students and teachers who answered the survey may have had stronger opinions than those who did not participate. Another limitation is the data collection tool itself: other tools, such as focus group interviews, would have allowed a more in-depth assessment of the opinions and perceptions of the study participants. One last limitation of the present study is obviously its monocentric nature: the results could have been different in other French centers, and even more so in centers abroad. Despite these few limitations, the present study provides valuable data, since it is the first to evaluate students’ and teachers’ opinions and perceptions about SCT and to compare SCT grades with those obtained with standard examination modalities.

Finally, we should not give the wrong impression about SCTs. As already demonstrated in the literature, SCTs are a major improvement in medical education. However, our study shows that students and teachers might have some concerns during their initial experience with SCT. This does not mean that SCT is negative: it means that it is all the more important to train both students and teachers and to explain the importance of SCT.

Conclusions

SCT is a recent examination modality for French medical faculties. The aim of this study was to evaluate the first three years of SCT use for faculty examinations of graduate medical students in our institution by examining students’ and teachers’ opinions and satisfaction and students’ scores evolution through time. A prospective comparison between SCT and PCC examination results was also performed.

Medical students’ and teachers’ global opinion of the SCT setting in our center was negative. This may largely be explained by the novelty of the SCT setting and by the unusual mode of medical reasoning it requires. Furthermore, SCT scores were initially quite similar to PCC scores, but a higher progression of PCC scores was observed over time. Despite these results, SCT could be critical for medical students’ training, especially for advanced students. According to these outcomes, actions should be taken in French medical schools in order to improve students’ and teachers’ acceptance of SCT. Information documents and training programs for both students and teachers might be necessary in all French medical faculties.

Availability of data and materials

The datasets used and analyzed for this study are available from the corresponding author on reasonable request. Raw data are available as supplementary files.

Abbreviations

MCQ:

Multiple Choice Questions

PCC:

Progressive Clinical Cases

SCT:

Script Concordance Testing

References

  1. Charlin B, Brailovsky C, Leduc C, Blouin D. The Diagnosis Script Questionnaire: A New Tool to Assess a Specific Dimension of Clinical Competence. Adv Health Sci Educ Theory Pract. 1998;3:51–8. https://doi.org/10.1023/A:1009741430850.


  2. Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C. The Script Concordance test: a tool to assess the reflective clinician. Teach Learn Med. 2000;12:189–95. https://doi.org/10.1207/S15328015TLM1204_5.


  3. Charlin B, Tardif J, Boshuizen HP. Scripts and medical diagnostic knowledge: theory and applications for clinical reasoning instruction and research. Acad Med. 2000;75:182–90. https://doi.org/10.1097/00001888-200002000-00020.


  4. Charlin B, van der Vleuten C. Standardized assessment of reasoning in contexts of uncertainty: the script concordance approach. Eval Health Prof. 2004;27:304–19. https://doi.org/10.1177/0163278704267043.


  5. Lubarsky S, Dory V, Duggan P, Gagnon R, Charlin B. Script concordance testing: from theory to practice: AMEE guide no. 75. Med Teach. 2013;35:184–93. https://doi.org/10.3109/0142159X.2013.760036.


  6. Cooke S, Lemay J-F, Beran T. Evolutions in clinical reasoning assessment: the evolving script concordance test. Med Teach. 2017;39:828–35. https://doi.org/10.1080/0142159X.2017.1327706.


  7. Charlin B, Boshuizen HP, Custers EJ, Feltovich PJ. Scripts and clinical reasoning. Med Educ. 2007;41:1178–84. https://doi.org/10.1111/j.1365-2923.2007.02924.x.


  8. Charlin B, Gagnon R, Sibert L, Van der Vleuten C. Le test de concordance de script, un instrument d’évaluation du raisonnement clinique. Pédagogie médicale. 2002;3:135–44. https://doi.org/10.1051/pmed:2002022.


  9. Charlin B, Kazi-Tani D, Gagnon R, Thivierge R. Le test de concordance comme outil d’évaluation en ligne du raisonnement des professionnels en situation d’incertitude. Revue internationale des technologies en pédagogie universitaire. 2005;2:22–7. https://doi.org/10.18162/ritpu.2005.79.


  10. Nouh T, Boutros M, Gagnon R, Reid S, Leslie K, Pace D, et al. The script concordance test as a measure of clinical reasoning: a national validation study. Am J Surg. 2012;203:530–4. https://doi.org/10.1016/j.amjsurg.2011.11.006.


  11. Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten CP. Script concordance testing: a review of published validity evidence. Med Educ. 2011;45:329–38. https://doi.org/10.1111/j.1365-2923.2010.03863.x.


  12. Aubart FC, Papo T, Hertig A, Renaud M-C, Steichen O, Amoura Z, et al. Are script concordance tests suitable for the assessment of undergraduate students? A multicenter comparative study. Rev Med Interne. 2021;42:243–50. https://doi.org/10.1016/j.revmed.2020.11.001.


  13. Lineberry M, Kreiter CD, Bordage G. Threats to validity in the use and interpretation of script concordance test scores. Med Educ. 2013;47:1175–83. https://doi.org/10.1111/medu.12283.


  14. Lineberry M, Hornos E, Pleguezuelos E, Mella J, Brailovsky C, Bordage G. Experts’ responses in script concordance tests: a response process validity investigation. Med Educ. 2019;53:710–22. https://doi.org/10.1111/medu.13814.


  15. Gawad N, Wood TJ, Cowley L, Raiche I. The cognitive process of test takers when using the script concordance test rating scale. Med Educ. 2020;54:337–47. https://doi.org/10.1111/medu.14056.


  16. Power A, Lemay J-F, Cooke S. Justify your answer: the role of written think aloud in script concordance testing. Teach Learn Med. 2017;29:59–67. https://doi.org/10.1080/10401334.2016.1217778.


  17. Belhomme N, Jego P, Pottier P. Gestion de l’incertitude et compétence médicale: une réflexion clinique et pédagogique. Rev Med Interne. 2019;40:361–7. https://doi.org/10.1016/j.revmed.2018.10.382.


  18. Fournier JP, Demeester A, Charlin B. Script concordance tests: guidelines for construction. BMC Med Inform Decis Mak. 2008;8:18. https://doi.org/10.1186/1472-6947-8-18.


  19. Official Journal of the French Republic : Arrêté du 21 décembre 2021 relatif à l'organisation des épreuves nationales donnant accès au troisième cycle des études de médecine. NOR : ESRS2138083A. Vol. 301. Paris; 2021.

  20. Payne KFB, Wharrad H, Watts K. Smartphone and medical related App use among medical students and junior doctors in the United Kingdom (UK): a regional survey. BMC Med Inform Decis Mak. 2012;12:121. https://doi.org/10.1186/1472-6947-12-121.


  21. Aitken C, Power R, Dwyer R. A very low response rate in an on-line survey of medical practitioners. Aust N Z J Publ Heal. 2008;32:288–9. https://doi.org/10.1111/j.1753-6405.2008.00232.x.


  22. Scott A, Jeon S-H, Joyce CM, Humphreys JS, Kalb G, Witt J, et al. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Med Res Methodol. 2011;11:126. https://doi.org/10.1186/1471-2288-11-126.

    Article  Google Scholar 

  23. Walldorf J, Fischer MR. Risk factors for a delay in medical education: Results of an online survey among four German medical schools. Med teach. 2018;40:86–90.

    Article  Google Scholar 

  24. Sam AH, Wilson RK, Lupton M, Melville C, Halse O, Harris J, et al. Clinical prioritisation questions: A novel assessment tool to encourage tolerance of uncertainty? Med Teach. 2020;42:416–21. https://doi.org/10.1080/0142159X.2019.1687864.

    Article  Google Scholar 

  25. Deschênes M-F, Goudreau J. L'apprentissage du raisonnement clinique infirmier dans le cadre d'un dispositif éducatif numérique basé sur la concordance de scripts. Pédagogie Médicale 2020;21. https://doi.org/10.1051/pmed/2020041

  26. Ruiz JG, Tunuguntla R, Charlin B, Ouslander JG, Symes SN, Gagnon R, et al. The script concordance test as a measure of clinical reasoning skills in geriatric urinary incontinence. J Am Geriatr Soc. 2010;58:2178–84. https://doi.org/10.1111/j.1532-5415.2010.03136.x.

    Article  Google Scholar 

  27. Lubarsky S, Chalk C, Kazitani D, Gagnon R, Charlin B. The Script Concordance Test: a new tool assessing clinical judgement in neurology. Can J Neurol Sci. 2009;36:326–31. https://doi.org/10.1017/s031716710000706x.

    Article  Google Scholar 

  28. Zotzmann Y, van der Linden D, Wyrwa K. The relation between country differences, cultural values, personality dimensions, and error orientation: An approach across three continents–Asia, Europe, and North America. Saf Sci. 2019;120:185–93. https://doi.org/10.1016/j.ssci.2019.06.013.

    Article  Google Scholar 

  29. Gelfand MJ, Frese M, Salmon E. Cultural influences on errors: Prevention, detection, and management. A Hofmann & M Frese (Eds), The organizational frontiers series (SIOP) Errors in organizations: Routledge/Taylor & Francis Group.; 2011. p. 273–315.

  30. Collard A, Gelaes S, Vanbelle S, Bredart S, Defraigne JO, Boniver J, et al. Reasoning versus knowledge retention and ascertainment throughout a problem-based learning curriculum. Med Educ. 2009;43:854–65. https://doi.org/10.1111/j.1365-2923.2009.03410.x.

    Article  Google Scholar 

  31. Fournier J-P, Thiercelin D, Pulcini C, Alunni-Perret V, Gilbert E, Minguet J-M, et al. Évaluation du raisonnement clinique en médecine d’urgence: les tests de concordance des scripts décèlent mieux l’expérience clinique que les questions à choix multiples à contexte riche. Pédagogie médicale. 2006;7:20–30. https://doi.org/10.1051/pmed:2006020.

    Article  Google Scholar 

  32. Kelly W, Durning S, Denton G. Comparing a script concordance examination to a multiple-choice examination on a core internal medicine clerkship. Teach Learn Med. 2012;24:187–93. https://doi.org/10.1080/10401334.2012.692239.

    Article  Google Scholar 

  33. Peyrony O, Hutin A, Truchot J, Borie R, Calvet D, Albaladejo A, et al. Impact of panelists’ experience on script concordance test scores of medical students. BMC Med Educ. 2020;20:1–8. https://doi.org/10.1186/s12909-020-02257-4.

    Article  Google Scholar 

Acknowledgements

The authors are greatly indebted to Mrs. Aude Izar and Mrs. Charlène Laurier for their valuable help in designing the study. The authors would also like to thank Pr. Denis Angoulvant and Dr. Camille Rerolle for their help and proofreading.

The authors would like to thank the students and teachers who took part in the study.

Funding

No external funding was received. This project was supported by the Faculty for Health Sciences of Angers University, France.

Author information

Contributions

JDKD and SL: design of the study, project development; data collection and management; data analysis and interpretation; manuscript writing/editing. NL and CA contributed substantially to the study design, data analysis and interpretation, manuscript writing and critical revision of the article. All authors had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. The author(s) read and approved the final manuscript.

Authors’ information

J.D. Kün-Darbois, MD, PhD, is specialized in maxillofacial surgery. He is Associate Professor at the Faculty for Health Sciences and Medicine, University of Angers, Angers, France and head of the maxillofacial surgery department of Angers University Hospital, Angers, France. He is in charge of the faculty development program of Script Concordance Testing.

C. Annweiler, MD, PhD, is specialized in geriatric medicine. He is Professor and Director of the Medicine Department at the Faculty for Health Sciences and Medicine, University of Angers, Angers, France and head of the Geriatric department of Angers University Hospital, Angers, France.

N. Lerolle, MD, PhD, is specialized in intensive care medicine. He is Professor and Dean of the Faculty for Health Sciences and Medicine, University of Angers, Angers, France.

S. Lebdai, MD, PhD, is specialized in urology. He is Associate Professor at the Faculty for Health Sciences and Medicine, University of Angers, Angers, France.

Corresponding author

Correspondence to Jean-Daniel Kün-Darbois.

Ethics declarations

Ethics approval and consent to participate

Students’ and teachers’ participation was anonymous and voluntary. All participants were informed by email. The questionnaires were completed anonymously. No written consent was required for publication.

The experimental protocol was conducted in accordance with institutional guidelines and relevant regulations. The study conformed to the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

The study was evaluated by the Ethics Committee of Angers University Hospital, France (project number 2021/128). The Ethics Committee confirmed that, in this context, French legislation does not require submission to an ethics committee or to a committee for the protection of individuals for work carried out during the evaluation or training of nursing staff, medical staff, or care students. In France, this type of work can be carried out without the advice and/or approval of an ethics committee.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests. The authors alone are responsible for the content and writing of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary data 1.

Example of SCT used in the study.

Additional file 2: Supplementary data 2.

Example of multiple choice questions (MCQ) that can be found in progressive clinical cases (PCC).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Kün-Darbois, JD., Annweiler, C., Lerolle, N. et al. Script concordance test acceptability and utility for assessing medical students’ clinical reasoning: a user’s survey and an institutional prospective evaluation of students’ scores. BMC Med Educ 22, 277 (2022). https://doi.org/10.1186/s12909-022-03339-1

Keywords

  • Script concordance test
  • Evaluation
  • Usability and acceptability
  • Medical education
  • Clinical reasoning assessment tools
  • Uncertainty