
Just-in-time faculty development: a mobile application helps clinical teachers verify and describe clinical reasoning difficulties



Although clinical teachers can often identify struggling learners readily and reliably, they can be reluctant to act upon their impressions, resulting in failure to fail. In the absence of a clear process for identifying and remediating struggling learners, clinical teachers can be put off by the prospect of navigating the politically and personally charged waters of remediation and potential failing of students.


To address this gap, we developed a problem-solving algorithm to support clinical teachers from identification through remediation of learners with clinical reasoning difficulties, which carry significant implications for patient care. Based on this algorithm, a mobile application (Pdx) was developed and assessed in two emergency departments at a Canadian university, from 2015 to 2016, using interpretive description as our research design. Semi-structured interviews were conducted before and after a three-month trial with the application. Interviews were analysed both deductively, using pre-determined categories, and inductively, using emerging categories.


Twelve clinical teachers were interviewed. Their experience with the application revealed their need to first validate their impressions of difficulties in learners and to find the right words to describe them before difficulties could be addressed. The application was unanimously considered helpful regarding both these aspects, while the mobile format appeared instrumental in allowing clinical teachers to quickly access targeted information during clinical supervision.


The value placed on verifying impressions and finding the right words to pinpoint difficulties should be further explored in endeavours that aim to address the failure to fail phenomenon. Moreover, just-in-time mobile solutions, which mirror habitual clinical practices, may be used profitably for knowledge transfer in medical education, as an alternative form of faculty development.



Background

Clinical teachers are exposed to learners’ actual performances when they supervise them in a real clinical setting [1]. In this context, clinical teachers are generally able to identify struggling learners readily and spontaneously, and they have been shown to do so reliably [2]. When learners’ difficulties are addressed early by clinical teachers during clerkships, more time remains for adequate remediation [3]. In addition, in-training remediation that is integrated into regular clinical activities and supervised by clinical teachers is often regarded as more effective and time-efficient than extracurricular activities [1]. Early identification and remediation of learners’ difficulties by clinical teachers have therefore been described as a best practice when supervising learners in clinical settings [4].

Conversely, failure to address learners’ difficulties delays remediation, sometimes allowing critical incidents with tangible consequences for patients to occur before a red flag is raised [3, 5]. This situation is often referred to as failure to fail, and it occurs because clinical teachers are often reluctant to address learners’ difficulties in the absence of clear or familiar steps to follow once a learner in difficulty is identified [6]. Because clinical teachers can be put off by the prospect of having to navigate the politically and personally charged waters of remediation and potential failing of students, they often express a wish to be guided through these steps, particularly the first steps before formally identifying a problem [7].

Clinical teachers have expressed a need for guidance when addressing clinical reasoning difficulties more specifically [8]. These difficulties are among the most frequent causes of clinical underperformance [5] and while they lead to errors in diagnosis and treatment with potentially serious consequences, they also have a good prognosis for remediation [9]. However, when asked to supervise learners who presented such difficulties, clinical teachers surveyed by Audétat et al. [8] explicitly expressed a wish to have “a tool that would tell them what to do”.

Yepes et al. [10] identified “unsatisfactory evaluator development and evaluation tools as a barrier to failing underperforming trainees”, calling on “health professions educators to develop effective solutions” to address such barriers (p.1092–1093). To date, a few solutions have been explored to facilitate clinical teachers’ task of assessing trainee performance accurately during clinical supervision. In the existing literature, suggested solutions aimed at the clinical teacher level have mainly revolved around targeting areas for faculty development [6, 11]. While one study has demonstrated increased knowledge on how to supervise struggling learners after a faculty development workshop [12], much less is known about how such knowledge later translates into practice [13]. Another suggested solution has been to give clinical teachers feedback on their assessment of trainees [14], and one study did in fact find that the quality of documented assessment could be improved by giving clinical teachers repeated feedback [15]. None of these solutions has addressed all steps of the clinical teacher’s task from identification to remediation strategies nor explored actual use during supervision.

This article describes one endeavour to fill this gap by developing a mobile application designed to guide clinical teachers in the clinical supervision of learners with clinical reasoning difficulties. The assessment of clinical reasoning difficulties during clinical supervision is a lower-stakes context for the evaluation of learners, one involving multiple assessments. Use of an educational tool by clinical teachers in such a context is generally acknowledged to be based on their subjective perceptions of the tool and its coherence with their usual practices, rather than on the psychometric qualities of the tool [16, 17]. For this reason, clinical teachers’ impressions of the application in this study were explored qualitatively. In addition, in order to foster greater coherence with clinical teachers’ usual practices, we designed our educational tool to draw an explicit parallel between clinical and educational diagnostic approaches. Indeed, both diagnostic approaches are aimed at problem-solving [18], using similar steps [19] and strategies [20], based on specialized knowledge organized in scripts [21,22,23,24].

Because our tool was developed to assess the clinical reasoning competence of medical trainees, our explorative approach was driven by Van der Vleuten’s conceptual framework for gauging tools used in the assessment of professional competence in medical education [16]. In this conceptual framework, the perceived utility of a tool is determined by its educational impact and acceptability to intended users, in addition to the tool’s validity and reliability. Considering that users’ subjective perception of a tool is the main predictor of actual use in such a low stake context with multiple assessors [16, 17], this study focused on the acceptability of the tool. According to Van der Vleuten, acceptability is influenced by the opinions, sentiments and traditions of teachers regarding a tool and by the perceived feasibility of using an educational tool within time constraints [16, 25].


Methods

Audétat et al. [8] have developed a reference guide and taxonomy of the five most frequent prototypes of clinical reasoning difficulties. Based on this guide, we first elaborated a decision tree. While the reference guide addresses the five prototypes separately, listing their respective signs, differential diagnoses and remediation strategies, our decision tree groups the five prototypes in one figure (Fig. 1). To increase ease of use during supervision, the decision tree was designed to begin with familiar signs of clinical reasoning difficulties as entry points, in the way that they would first be observed by clinical teachers in the field. Depending on their answers, clinical teachers are led by arrows to the relevant pedagogical diagnoses and remediation strategies.

Fig. 1 Decision tree based on Audétat et al.’s taxonomy of clinical reasoning difficulties

Using Xcode 7.2, we then translated our decision tree into a mobile application named Pdx (Fig. 2). To do so, we transposed junctions in the decision tree into clickable options, leading users to the next suggested step. The resulting application lets clinical teachers know what to observe when working with learners in difficulty, helps them make documented pedagogical diagnoses and suggests targeted remediation strategies. We designed a bilingual application, in English and French, bearing in mind that the language used should not be overly technical, while preserving the core pedagogical taxonomy of the reference guide. We chose to develop the application for iPhones and iPads as an initial platform, due to the prevalent use of these devices among physicians [26].
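The transposition described above — junctions in the decision tree rendered as clickable options that lead the user to the next suggested step, ending in a pedagogical diagnosis and remediation strategy — can be sketched as a simple keyed data structure. The sketch below is illustrative only: the node names, prompts and labels are hypothetical placeholders, not the actual content of the Pdx taxonomy.

```python
# Illustrative sketch: a decision tree as a dictionary of junctions.
# Each junction has a prompt and a set of clickable options; each option
# points to the next node. Leaves carry a pedagogical diagnosis and a
# suggested remediation strategy. All labels are invented examples.

TREE = {
    "start": {
        "prompt": "Which sign did you observe during supervision?",
        "options": {
            "Disorganized case presentation": "dx_data_organization",
            "Premature closure on one diagnosis": "dx_premature_closure",
        },
    },
    "dx_data_organization": {
        "prompt": "Pedagogical diagnosis: difficulty organizing clinical data. "
                  "Suggested remediation: have the learner summarize the case aloud.",
        "options": {},  # leaf node: no further choices
    },
    "dx_premature_closure": {
        "prompt": "Pedagogical diagnosis: premature closure. "
                  "Suggested remediation: ask for a differential before ordering tests.",
        "options": {},
    },
}

def next_node(current: str, choice: str) -> str:
    """Follow one clickable option from the current junction to the next node."""
    return TREE[current]["options"][choice]

def is_leaf(node: str) -> bool:
    """A leaf offers no further options: it states a diagnosis and a strategy."""
    return not TREE[node]["options"]
```

In this representation, a supervision session is a walk from `"start"` to a leaf: for example, `next_node("start", "Premature closure on one diagnosis")` lands on a leaf whose prompt holds the diagnosis and remediation text, mirroring how the app surfaces targeted information in response to the user's answers.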

Fig. 2 Sample screens of the Pdx application

Once the beta version of the application was ready, we met the co-authors of the reference guide [8] for a roundtable to verify the fidelity and accuracy of both the decision tree and the application. Minor changes were made to both following this consultation, mainly to clarify certain terms.

We then proceeded to assess the application in two academic emergency departments in Canada. We chose this setting because we anticipated that the readily accessible format of the mobile application would suit the fast pace and multiple-supervisor teaching context of an emergency department. Emergency medicine is also a primary care setting where many initial diagnoses are made, which can facilitate the detection of clinical reasoning difficulties by clinical teachers.

We evaluated clinical teachers’ supervising experience with the mobile application during two semi-structured interviews, before and after a three-month trial period (November 20, 2015, to January 20, 2016). Although participation was voluntary, continuing professional development credits were offered as an incentive. The project was approved by the institutional ethics committee and written consent was obtained from all participants.

Twelve clinical teachers volunteered to participate. We first met each one individually, to document previous pedagogical experience and training, and clinical use of mobile technology (Table 1).

Table 1 Characteristics of participating clinical teachers

We then briefly explained the main categories of clinical reasoning difficulties, installed the application on the clinical teacher’s device, and went through a case example. This initial interview was designed to provide a portrait of the clinical supervision context in which participants’ descriptions would later be interpreted. We conducted a second set of individual interviews at the end of the trial period, to explore clinical teachers’ experience with the application. In accordance with the objectives of the study, perceived acceptability and feasibility were assessed during this second interview. The interview guide used for both interviews was developed specifically for this study (Additional file 1). As the interviews were conducted in French, excerpts have been translated for the purpose of this article.

We analysed the results using an interpretive description approach. A thematic coding framework was first developed by all members of the research team (EB, MCA, CSTO), based on the themes explored during the initial interviews and the adopted Van der Vleuten framework. We later expanded this first coding framework with categories emerging through open analysis of the verbatim transcripts of the interviews. We then applied the final coding framework to all transcripts (EB), using NVivo 11. We reached our final analysis and interpretations by consensus.


Results

Clinical teachers’ overall opinion of the tool was that it was concise, concrete and easy to use. These opinions did not vary with previous level of experience, pedagogical training or use of mobile technology.

Clarifying learners’ difficulties

Participants unanimously stated that the application had helped them clarify the specific difficulties of their learners. As a first step, they reported that the application had helped them validate – or invalidate – their subjective impressions that a learner was experiencing difficulties.

During the initial interviews, clinical teachers had identified their greatest challenge when supervising struggling learners as having to identify where learners’ difficulties lay.

“The most difficult part for me is to identify where the student’s problem lies. Spotting that a student has more difficulty is pretty easy. But being able to pinpoint exactly what the problem is, I find that particularly difficult.” P08A

Yet, when the study was over, this was precisely the aspect of the application that participants had found most helpful and they now reported feeling more confident with this task.

“Sometimes you just get a feeling that the shift did not go very well. But this app helped me address these issues differently and ask: ‘Ok, where did it go wrong?’” P02B

Furthermore, most participants found that the tool had helped them find the ‘right words’ to name the problem.

“The app helped me to better define the problem, with educational words, not just: ‘You’re not good’!... It helped me find the right words.” P01B

Being able to pinpoint difficulties helped participants give more specific, useful feedback, and helped some feel more efficient.

“Usually, I’ll give an example and they’ll say: ‘That doesn’t count, because….’ They often find excuses and our message doesn’t seem to get through. This time when the resident said: ‘Yes, but this was because of this’, I said: ‘Wait, wait, I have 3 more examples!’ I felt much more confident saying: ‘These are specifically the areas you need to work on’. The resident ended up agreeing.” P01B

Feedback was now perceived to be more credible and reliable, and some participants reported an increased sense of competence when using the application:

“I know what I know from personal experience, but not because I was taught. So the app allowed me to offer the student something more concrete and evidence-based.” P06B

With respect to the remediation portion of the tool, however, the pedagogical terminology appeared to be a double-edged sword. Indeed, a few participants cited the need for further examples to understand how to apply the suggested strategies. That being said, most participants also stated that they did not consider remediation to be part of their task during clinical supervision.

“I never got to that part in the app. I couldn’t see myself initiating remediation strategies... We’ll give a few tips here and there, but that’s not the main part of what we do.” P01B

A format that facilitates learning and translation into practice

The application was considered easy to use during clinical supervision by all participants. During the three-month trial period, all but one used it in the field to solve concrete supervision issues during emergency shifts. During this period, all but two participants worked with between one and three learners in difficulty, which is consistent with the rates of 10 to 15% of struggling medical learners reported in the literature [27, 28]. These ten participants used the application on each occasion when they supervised a learner who they felt was experiencing more difficulty. Participants estimated that using the application increased supervision time by approximately 5 min per shift, which was deemed very feasible. During the initial interviews, all participants had reported consulting applications regularly to help them through various steps of their clinical tasks. The application therefore resonated with their well-integrated habit of consulting mobile applications to solve clinical problems during their shifts.

Moreover, the format itself was reported to facilitate learning. Some participants mentioned that they had previously attended formal workshops on the same topics to no avail, having largely forgotten the content by the time they needed to use it.

One participant had initially mentioned:

“I attended the same workshop on clinical reasoning 2 or 3 times. It seems like each time, it remains a bit vague. It doesn’t change the way I teach.” P08A

Yet this same participant described a different experience with the application.

“This time, when I worked with the same student again, I was better prepared to reevaluate him. Even afterwards, with another student who had similar problems, I was better equipped to help him.” P08B

The mobile format also allowed repetitive access to the same information, which reportedly induced effortless learning over time for some users.

“This is the way I usually work, I learn new material by using it.” P05B

Thus, many participants felt that their need to follow the application step-by-step would decrease as they gradually integrated its content.


Discussion

Acceptability of the application

Participants’ responses reflected a good level of acceptability for the application, as it was mostly viewed as a helpful resource to pinpoint problems. A few participants, however, reported not consulting the remediation strategies section of the tool because they did not consider remediation part of their supervising task. Yet the clinical context is where remediation strategies are considered to be most effective [29]. This portion of the application may have elicited less acceptability because it did not reflect these clinical teachers’ perception of their role. This finding is coherent with the results of a study conducted by Audétat et al. [30]. The authors found, consistent with the Theory of Reasoned Action, that clinical teachers were reluctant to engage in remediation strategies as a result of low self-efficacy beliefs toward remediation and a belief that their role as clinical teachers rested mainly on role-modeling, in keeping with the apprenticeship model traditionally used in medical training.

Using the application during clinical supervision was considered feasible by all participants and all but one did in fact use it during their emergency shifts. These findings are particularly relevant when interpreted in the context of emergency medicine, where time for teaching is notoriously scarce [31, 32].

The right words to define learners’ difficulties

Participants in this study unanimously felt that the application had helped them clarify learners’ difficulties, by helping them find the right words. At the beginning of the study, being able to better document and name learners’ difficulties had been identified as an important need of the clinical teachers interviewed. This echoes the finding of Dudek et al. [6] that lack of knowledge of what to document constitutes a major barrier to reporting poor performances. Interestingly, during the post-trial interviews, participants reported that documenting and naming difficulties was what the tool had been most successful in helping them with.

Moreover, discovering the right terminology to describe learners’ performances more accurately seems to have helped clinical teachers validate whether a learner was in fact experiencing difficulties. One hypothesis to explain why this was particularly meaningful relates to the context in which feedback is given in the emergency room. Bearman et al. [1] have identified seeking a second opinion as one strategy used by clinical teachers to validate their subjective impressions. Because clinical teachers in the emergency room work one-on-one with learners who change daily, they rarely have the opportunity to discuss their impressions with colleagues who have worked with the same learner before giving feedback. In this context, the mobile tool may have been useful to provide a “second opinion” to validate clinical teachers’ initial impressions.

Although no elaborate training had been given to participants on how to use the application, all participants stated that they had found it useful to better define their learners’ difficulties. That this finding remained consistent regardless of years of experience and previous educational training suggests that the application could be used favorably for faculty development at all levels of learning. Furthermore, the application required minimal to no training for faculty to integrate it into their supervision, and its deployment among faculty required no additional resources. While these advantages hold true for the mobile tool in this study, we postulate that a mobile format could be used profitably in other areas of faculty development to induce self-regulated learning by clinical teachers.

A just-in-time format mirrors clinical problem-solving practices

The mobile format provided just-in-time information at the point of care, which seems to have acted favourably on knowledge translation and actual use of the application by participants. Specific approaches to clinical reasoning difficulties were indeed used by participants during the trial period, whereas some had mentioned having attended formal workshops on this topic to no avail. The successful use of the mobile format can probably be attributed, at least partly, to the fact that consulting a mobile application to solve educational problems is coherent with clinical teachers’ well-integrated habits of clinical problem solving. Moreover, the algorithmic format of the application reflects the clinical decision rules commonly used by clinicians [33], allowing targeted information to be consulted on an as-needed basis, in response to the answers given by the user. A just-in-time solution is also coherent with principles of cognitive psychology whereby using new knowledge for problem solving is instrumental to efficient learning [34]. Finally, the mobile format allowed repeated access to the same information, which reportedly induced effortless learning over time for some users.


Limitations

An important limitation of this study is that all participating clinical teachers were recruited from the same university, through volunteer sampling. Thus, it is possible that their experience with the application reflected a local culture with regard to clinical teaching, or a prior interest in medical education, mobile technology, or both.

It seems reasonable to expect that this application could be found useful in a variety of clinical teaching contexts. Indeed, the positive results obtained in the busy setting of an emergency department suggest that using the application during supervision may be feasible in a wider range of clinical contexts. Moreover, the approach presented in the application is based on Audétat et al.’s guide, which has itself been validated with clinical teachers from various clinical domains, ranging from medicine to nutrition and physiotherapy [8]. Anecdotally, this same reference has also been used by one author (MCA) to guide clinical teachers in North American as well as European contexts.

Therefore, as a next step, a study should be conducted with a larger sample in a different clinical context in order to confirm our initial findings. If confirmed, the value placed on verifying impressions and finding the right words to pinpoint difficulties should be further explored in endeavours that aim to address the failure to fail phenomenon.


Conclusions

A salient outcome in this study was that although no elaborate training had been given on clinical reasoning, all participants were still able to use the decision-support tool effectively to better define their learners’ difficulties. That this finding remained consistent regardless of years of experience and previous educational training suggests that the tool may be used for faculty development at all levels of learning. Furthermore, deployment of such a mobile tool among faculty requires no resources. While these outcomes hold true for our application in a limited setting, we postulate that a mobile format could be used profitably in other areas of faculty development to induce self-regulated learning by clinical teachers.

While the pedagogical terminology used in our tool was mostly viewed by participants as helpful to pinpoint problems, for some, terminology could also represent an obstacle that kept them from using the remediation strategies. Since a glossary was available within the application but not consulted, further development of the tool could focus on adding integrated examples and descriptions, in order to maintain the advantages of precise terms, while making them more accessible to clinical teachers. We posit, however, that the clarification of terms would not be sufficient for all clinical teachers to apply remediation strategies. While making pedagogical terminology more accessible to clinical teachers may increase their sense of self-efficacy, it remains likely that clinical teachers’ perception of their role as educators would have to be addressed for the majority of clinical teachers to engage in remediation strategies in the field, where these strategies are perceived to be most effective.

Finally, our outcomes support the use of a just-in-time and algorithmic format as an alternative medium for knowledge transfer in medical education. They suggest that optimal use of a mobile format in such contexts should combine concise, targeted and consolidated information at the point of care, and that, to be most efficient for educational purposes, the dissemination platform should mirror the habitual clinical practices of intended users.



Christina St-Onge


Elisabeth Boileau


Marie-Claude Audétat


  1. Bearman M, Molloy E, Ajjawi R, Keating J. ‘Is there a plan B?’: Clinical educators supporting underperforming students in practice settings. Teach High Educ. 2013;18(5):531–44.

    Article  Google Scholar 

  2. Weller JM, Misur M, Nicolson S, Morris J, Ure S, Crossley J, Jolly B. Can I leave the theatre? A key to more reliable workplace-based assessment. Br J Anaesth. 2014;112(6):1083–91.

    Article  Google Scholar 

  3. Steinert Y. The "problem" learner: whose problem is it? AMEE guide no. 76. Med Teach. 2013;35(4):e1035–45.

    Article  Google Scholar 

  4. Evans DE, Alstead EM, Brown J. Applying your clinical skills to students and trainees in academic difficulty. Clin Teach. 2010;7(4):230–5.

    Article  Google Scholar 

  5. Yao DC, Wright SM. National survey of internal medicine residency program directors regarding problem residents. Jama. 2000;284(9):1099–104.

    Article  Google Scholar 

  6. Dudek NL, Marks MB, Regehr G. Failure to fail: the perspectives of clinical supervisors. Acad Med. 2005;80(10 Suppl):S84–7.

    Article  Google Scholar 

  7. Hinson JP, Griffin A, Raven PW. How to support medical students in difficulty: tips for GP tutors. Educ Prim Care. 2011;22(1):32–5.

    Article  Google Scholar 

  8. Audetat MC, Laurin S, Sanche G, Beique C, Fon NC, Blais JG, Charlin B. Clinical reasoning difficulties: a taxonomy for clinical teachers. Medical teacher. 2013;35(3):e984–9.

    Article  Google Scholar 

  9. Audétat M-C, Laurin S, Dory V, Charlin B, Nendaz MR. Diagnosis and management of clinical reasoning difficulties: part II. Clinical reasoning difficulties: management and remediation strategies. Medical teacher. 2017;39(8):797.

    Article  Google Scholar 

  10. Yepes-Rios M, Dudek N, Duboyce R, Curtis J, Allard RJ, Varpio L. The failure to fail underperforming trainees in health professions education: a BEME systematic review: BEME guide no. 42. Medical teacher. 2016;38(11):1092–9.

    Article  Google Scholar 

  11. Monrouxe LV, Rees CE, Lewis NJ, Cleland JA. Medical educators' social acts of explaining passing underperformance in students: a qualitative study. Adv Health Sci Educ. 2011;16(2):239–52.

    Article  Google Scholar 

  12. Steinert Y, Nasmith L, Daigle N, Franco ED. Improving teachers' skills in working with 'problem' residents: a workshop description and evaluation. Med Teach. 2001;23(3):284–8.

    Google Scholar 

  13. Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, Prideaux D. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach. 2006;28(6):497–526.

    Article  Google Scholar 

  14. Fazio SB, Papp KK, Torre DM, Defer TM. Grade inflation in the internal medicine clerkship: a national survey. Teach Learn Med. 2013;25(1):71–6.

    Article  Google Scholar 

  15. Dudek NL, Marks MB, Bandiera G, White J, Wood TJ. Quality in-training evaluation reports--does feedback drive faculty performance? Acad Med. 2013;88(8):1129–34.

    Article  Google Scholar 

  16. Van Der Vleuten CP. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ. 1996;1(1):41–67.

    Article  Google Scholar 

  17. Nelson MS, Clayton BL, Moreno R. How medical school faculty regard educational research and make pedagogical decisions. Acad Med. 1990;65(2):122–6.

    Article  Google Scholar 

  18. Vaughn LM, Baker RC, DeWitt TG. The Problem Learner. Teach Learn Med. 1998;10(4):217–22.

    Article  Google Scholar 

  19. Langlois JP, Thach S. Managing the difficult learning situation. Fam Med. 2000;32(5):307–9.

    Google Scholar 

  20. Evans AEM, Brown J. Applying your clinical skills to students and trainees in academic difficulty. Clin Teach. 2010;7(4):230–5.

    Article  Google Scholar 

  21. Irby DM. What clinical teachers in medicine need to know. Acad Med. 1994;69(5):333–42.

    Article  Google Scholar 

  22. Audetat MC, Laurin S. Clinicien et superviseur...même combat! Le Médecin du Québec. 2010;45(5):53–7.

    Google Scholar 

  23. Irby DM. Excellence in clinical teaching: knowledge transformation and development required. Med Educ. 2014;48(8):776–84.

    Article  Google Scholar 

  24. Cote L, Bordage G. Content and conceptual frameworks of preceptor feedback related to residents' educational needs. Acad Med. 2012;87(9):1274–81.

    Article  Google Scholar 

  25. Durning AA, Boulet J, La Rochelle J, Van der Vleuten C, Arze B, Schuwirth L. The feasibility, reliability, and validity of a post-encounter form for evaluating clinical reasoning. Med Teach. 2012;34(1):30–7.

    Article  Google Scholar 

  26. Zeman E. Doctors favor iPhone, iPad over android. In: Information week. San Francisco: UBM; 2011.

    Google Scholar 

  27. Faustinella F, Orlando PR, Colletti LA, Juneja HS, Perkowski LC. Remediation strategies and students' clinical performance. Med Teach. 2004;26(7):664–5.

    Article  Google Scholar 

  28. Yates J, James D. Predicting the "strugglers": a case-control study of students at Nottingham University medical school. BMJ (Clinical research ed). 2006;332(7548):1009–13.

    Article  Google Scholar 

  29. Hauer KE, Ciccone A, Henzel TR, Katsufrakis P, Miller SH, Norcross WA, Papadakis MA, Irby DM. Remediation of the deficiencies of physicians across the continuum from medical school to practice: a thematic review of the literature. Acad Med. 2009;84(12):1822–32.

    Article  Google Scholar 

  30. Audetat MC, Dory V, Nendaz M, Vanpee D, Pestiaux D, Junod Perron N, Charlin B. What is so difficult about managing clinical reasoning difficulties? Med Educ. 2012;46(2):216–27.

    Article  Google Scholar 

  31. Bécotte G, Hamel P, St-Onge M, Vanier L. Le profil du médecin d'urgence à temps plein. Québec: Association des médecins d'urgence du Québec (AMUQ); 2009.

    Google Scholar 

  32. Chisholm CD, Weaver CS, Whenmouth L, Giles B. A task analysis of emergency physician activities in academic and community settings. Ann Emerg Med. 2011;58(2):117–22.


  33. Elstein AS. Thinking about diagnostic thinking: a 30-year perspective. Adv Health Sci Educ. 2009;14(Suppl 1):7–18.


  34. Tardif J. Pour un enseignement stratégique : l'apport de la psychologie cognitive. Montréal: Éditions Logiques; 1997.




The authors would like to thank the research assistants from the Paul Grand’Maison de la Société des Médecins de l’Université de Sherbrooke Medical Education Research Chair who conducted the interviews: Élise Vachon Lachiver, Jean-Sébastien Dion and Kathleen Ouellet.

The authors are also grateful to the co-authors of the cited taxonomy of clinical reasoning difficulties who reviewed and commented on the algorithm and application during the validation process: Caroline Béïque, Suzanne Laurin and Gilbert Sanche.


Funding for this project was received from a Medical Education Development Fund granted by the Société des Médecins de l’Université de Sherbrooke. The funder had no role in the design of the study; in the collection, analysis, and interpretation of data; or in writing the manuscript.

Availability of data and materials

The original datasets generated and analysed during the current study are available upon request. As this study was conducted and analysed in French, these datasets are only available in their original language.

Author information

Authors and Affiliations



EB developed the algorithm and application assessed in this study, based on a taxonomy of clinical reasoning difficulties for which MCA had been the principal investigator. Interviews were conducted by research assistants. A thematic coding framework was co-developed by EB, MCA, and CSTO. EB later applied the final coding framework to all transcripts and wrote the first draft of the manuscript. All authors contributed to analysis and interpretation of the data, revised the manuscript and approved its final version.

Corresponding author

Correspondence to Christina St-Onge.

Ethics declarations

Authors’ information

Élisabeth Boileau, MD MSc CCFP(EM), is an Emergency Physician and an Associate Professor in the Department of Family and Emergency Medicine at the Université de Sherbrooke (Sherbrooke, Canada).

Marie-Claude Audétat, MPs, MA (Ed), PhD, is an Associate Professor in the Faculty of Medicine at the Family Medicine Unit (UIGP) and the Unit of Development and Research in Medical Education (UDREM) at the Université de Genève (Geneva, Switzerland), and an Associate Professor in the Department of Family and Emergency Medicine at the Université de Montréal (Montreal, Canada).

Christina St-Onge, PhD, holds the Paul Grand’Maison de la Société des Médecins de l’Université de Sherbrooke Medical Education Research Chair. She is an Associate Professor in the Department of Medicine and is affiliated with the Centre de pédagogie des sciences de la santé at the Université de Sherbrooke.

Ethics approval and consent to participate

This study was approved by the Ethics Committee for Research, Education and Social Sciences of the Université de Sherbrooke (Reference number #2015–33). Written consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Interview Guide. Interview guide used in the study, for the initial (t1) and follow-up (t2) interviews. (DOCX 19 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Boileau, E., Audétat, MC. & St-Onge, C. Just-in-time faculty development: a mobile application helps clinical teachers verify and describe clinical reasoning difficulties. BMC Med Educ 19, 120 (2019).
