
Pilot study of the influence of self-coding on empathy within an introductory motivational interviewing training



Abstract

Background

Motivational interviewing (MI) is a framework for addressing behavior change that is often used by healthcare professionals. Expression of empathy during MI is associated with positive client outcomes, while absence of empathy may produce iatrogenic effects. Although training in MI is linked to increased therapeutic empathy in learners, no research has investigated individual training components’ contribution to this increase. The objective of this study was to test whether a self-coding MI exercise using smartphones completed at hour 6 of an 8-h MI training was superior in engendering empathy to training as usual (watching an MI expert perform in a video clip for the same duration at the same point in the training).


Methods

This was a pilot study at two sites using randomization and control groups with 1:1 allocation. Allocation was achieved via computerized assignment (site 1, United Kingdom) or facedown playing card distribution (site 2, United States). Participants were 58 students attending a university class at one of two universities; an 8-h segment of each class was dedicated to a standardized MI training. Fifty-five students consented to participate and were randomized. The intervention was an MI self-coding exercise using smartphone recording and a standardized scoring sheet. Students were encouraged to reflect on areas of potential improvement based on their self-coding results. The main outcome measure was score on the Helpful Responses Questionnaire, a measure of therapeutic empathy, collected prior to and immediately following the 8-h training. Questionnaire coding was completed by 2 blinded external reviewers and assessed for interrater reliability, and students were assigned averaged empathy scores from 6 to 30. Analyses were conducted via repeated-measures ANOVA using the general linear model.


Results

Fifty-five students were randomized, and 2 were subsequently excluded from analysis at site 2 due to incomplete questionnaires. The study itself was feasible, and overall therapeutic empathy increased significantly and substantially among students. However, the intervention was not superior to the control condition in this study.


Conclusions

Replacing a single passive learning exercise with an active learning exercise in an MI training did not result in a substantive boost to therapeutic empathy. However, consistent with prior research, this study identified significant overall increases in empathy following introductory MI training. A much larger study examining the impact of selected exercises and approaches would likely be useful and informative.



Background

Motivational interviewing (MI)

Motivational Interviewing (MI) has a 35-year research history and is considered an efficacious clinical framework for resolving ambivalence and addressing behavior change, especially related to behavioral healthcare and addictions [1]. For example, MI is often included as an element in education and training on screening, brief intervention, and referral to treatment (SBIRT) [2]. As research on MI training and applications has progressed, increasing focus has been placed on the positive influence of therapeutic empathy on MI-consistent counseling behaviors [3], synchrony of language used between client and counselor [4], direct client-level behavioral outcomes [5], and general cohesion with the spirit of MI [6]. Notably, low therapist empathy may predict poor treatment outcomes [5]. There is therefore value in focusing specifically on acquisition of therapeutic empathy within MI training.

At the same time, measurement of MI training outcomes is complicated by the fact that training formats vary in delivery and methods. For example, one meta-analysis of 28 MI training studies identified seven studies lasting fewer than 8 h, 16 studies lasting between 9 and 16 h, and five studies featuring extended timeframes [7]. MI trainings typically are delivered in a workshop format, though trainings can also include add-ons such as teleconferencing and booster sessions [8]. Research has indicated that a variety of workshop-driven formats, including those incorporating feedback and coaching, but also standalone workshops, produce superior proficiency to self-study controls [9]. MI skills development appears to be more sustainable when coaching and feedback are provided post-training [8]. Of particular interest for this study, researchers have also used the Helpful Responses Questionnaire (HRQ) [10], a measure of learner empathy, as a means of assessing the impact of MI training [11,12,13]. This work has generally found that MI training improves HRQ scores by a significant and meaningful amount.

Teaching techniques within MI workshops

The existence of a formal Motivational Interviewing Network of Trainers (MINT) and competency requirements [14] provides some internal consistency of training workshop components. MI workshops with a MINT trainer often begin with a two-day workshop (e.g., [15]). The workshop generally includes didactic content, role-play and real-play (role-play in which the individual processes a scenario as themselves in a realistic context), and video observation of expert MI practitioners. Role-play and real-play are thought to be especially important, not only in terms of practicing applicable skills, but also because the type of learning that occurs in the context of self-reflection produces stronger outcomes than those attributed to an exclusively didactic style of delivery [16].


The present investigation began with a supposition, based on the lead author’s observations, that a self-coding exercise was the point in his own MI training workshops where learners seemed to grasp the clinical application of MI. There has been little research into MI self-coding within workshops, with one notable exception [17], and no research has investigated the effects of specific components of MI training workshops on learner outcomes, including development of therapeutic empathy. At the same time, the importance of investigating ‘within workshop’ MI training elements was noted in a recent editorial outlining necessary directions for MI research [18]. General health and medical education research suggests that a self-coding exercise following a brief real-play may be an especially effective MI training element, as it combines aspects of experiential adult learning [19, 20] with structured assessment following role-play [21].

This paper therefore describes a pilot study conducted among undergraduate students in both the United States (USA) and United Kingdom (UK). The study investigated whether a standard eight-hour MI workshop with an MI self-coding exercise (intervention) delivered 6 hours into the workshop was superior in building participant empathy when compared with the same workshop with students watching a video of an MI expert performing MI (control) in place of the self-coding exercise.



Methods

The institutional review boards at both study sites approved this study (Sheffield Hallam University, #ER5231303, and Indiana State University, #1151112–2).


Participants

During the semester designated for the study, recruitment targeted all students who registered for and attended either an undergraduate screening, brief intervention, and referral to treatment elective class within the Department of Social Work at Indiana State University, USA, or a third-year undergraduate nutrition class at Sheffield Hallam University, UK; 8 h of each class were dedicated to MI training. These potential participants were healthcare students studying to become either social workers or nutritionists. The MI approach can be used in a wide variety of fields and has been taught to numerous healthcare disciplines, including social work and nutrition [22]. Thus, the only exclusion criterion was refusal to participate after reading the study information sheet. Excluded students still participated in the eight-hour training but were not asked to complete any study questionnaires.


All participants first received a six-hour training block of introductory MI training conducted by one of two study authors (TS and MD), who are members of MINT; the training content was commensurate with recommendations by MINT for an introductory MI training [23]. Then, participants randomized to the intervention were led to a separate area to complete a self-coding exercise with a partner. Participants randomized to the control group remained in the classroom and watched a video of an expert performing MI. All participants completed the remainder of the MI training (approximately 100 additional minutes) after completing either the intervention or the control exercise.

The self-coding intervention was a real-play experience in which each participant was asked to identify an aspect of their life that they felt ambivalent about changing and were comfortable both discussing with a classmate and recording. Exemplar topics included physical activity, diet, smoking, and alcohol consumption, but no topic was specifically excluded. Each member of each pair counseled the other about the identified behavior using applicable MI skills. Participants were instructed to audio-record their session as the helping professional. Audio recording was completed using each participant’s personal smartphone (using memo recording, voice recording, or a camera function without video enabled), with recording devices placed between members of the pair. After recording was completed for both partners, each participant listened to their own recording (where they were the helping professional) and completed a self-coding exercise using a coding sheet developed by the first author (see Additional file 1).

For the coding exercise, participants were instructed to mark the appropriate box for both MI-consistent (e.g., Affirmations) and MI-inconsistent (e.g., Authoritarian statements) behaviors using tally marks to indicate the number of times each behavior occurred. Space was also provided for participants to add examples. Participants were told that they could pause, rewind, and re-play the recording as needed. Finally, participants were asked to reflect to themselves, after completing the coding sheet, what went well during their recorded sessions and what, if anything, they would change about their practice in subsequent sessions. To reduce social desirability bias, the self-coding sheet was neither collected nor evaluated by the instructor.

Study structure

This study was a pilot project using a two-group parallel, randomized controlled design with 1:1 allocation.

Outcome measure

The HRQ is a six-item free-response questionnaire measuring therapeutic empathy [10] and is commonly used to assess learner outcomes in MI training [7]. Participants completed the HRQ at the beginning of the study, and again at the end of the eight-hour training. The tool asked participants to respond to a series of vignettes in an open-ended style; they were instructed to “think about each paragraph as if you were really in the situation… in each case write the next thing that you would say if you wanted to be helpful” (p. 444) [10]. HRQ scoring was completed by independent expert reviewers using standard criteria. Each open-ended response was scored from one to five, with a ‘1’ indicating not only the absence of reflection but also a ‘roadblock’ (a response that interrupts dialogue between counselor and client), and a ‘5’ indicating a complex reflection of the client’s feeling (or a similar metaphor) with no roadblock content present. Total scores can therefore range from 6 to 30. The reviewers were not part of the study team and were blinded to both group assignment (intervention/control) and administration time (pre/post). HRQ scores were the mean of the coders’ ratings for each individual at each administration point.
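The scoring arithmetic above can be sketched with short, hypothetical helper functions (the names and structure are illustrative, not taken from the study’s materials): each coder’s six item ratings (1–5) sum to a 6–30 total, and a participant’s final score is the mean of the two blinded coders’ totals.

```python
def hrq_total(item_scores):
    """Sum one coder's HRQ ratings: six items, each scored 1-5, so totals span 6-30."""
    assert len(item_scores) == 6 and all(1 <= s <= 5 for s in item_scores)
    return sum(item_scores)

def participant_empathy(coder_a_items, coder_b_items):
    """Final empathy score for one administration: mean of the two coders' totals."""
    return (hrq_total(coder_a_items) + hrq_total(coder_b_items)) / 2
```

For example, coder totals of 6 and 30 would yield an averaged empathy score of 18.0.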

Interrater reliability

Interrater reliability of the two coders was calculated at baseline and follow-up using Krippendorff’s alpha [24], with the level of measurement set as interval and 1000 bootstrap samples used to generate confidence intervals. A value of ‘1’ represents perfect reliability, while values at or near zero indicate agreement no better than chance. At both baseline and follow-up, coders exhibited excellent agreement (Baseline: α = .965, LL95%CI = .944, UL95%CI = .983; Follow-Up: α = .961, LL95%CI = .940, UL95%CI = .975).
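For the special case used here — two coders, interval-level data, no missing ratings — Krippendorff’s alpha reduces to a closed form that can be sketched in a few lines of Python. This is a minimal illustration of the point estimate only; the bootstrapped confidence intervals reported above would additionally require resampling units.

```python
def krippendorff_alpha_interval(coder_a, coder_b):
    """Krippendorff's alpha for two coders, interval data, no missing values.

    alpha = 1 - D_o / D_e, where D_o is the mean squared disagreement within
    units and D_e is the mean squared difference over all ordered pairs of
    pooled values (the chance-expected disagreement).
    """
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    m = len(coder_a)                      # number of units (participants)
    pooled = list(coder_a) + list(coder_b)
    n = len(pooled)                       # total pairable values (2m)
    # Observed disagreement: mean squared difference between coders within units.
    d_o = sum((a - b) ** 2 for a, b in zip(coder_a, coder_b)) / m
    # Expected disagreement, via sum_{i,j} (v_i - v_j)^2 = 2n*sum(v^2) - 2*(sum v)^2.
    d_e = (2 * n * sum(v * v for v in pooled) - 2 * sum(pooled) ** 2) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e
```

Identical ratings across coders give alpha = 1; disagreement pushes the statistic downward in proportion to the squared differences.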

Sample size and randomization

There was no precedent for estimating the effect size of a training modification such as this intervention on learners’ therapeutic empathy. Because of this, and given the naturalistic setting of our pilot study within preexisting university classes, we did not conduct an a priori power analysis, choosing instead to invite all enrolled students to participate in the study (n = 79 eligible students, n = 53 analytic sample; see Participant flow).

In the US cohort, simple randomization was achieved using facedown playing cards, and in the UK it was achieved using a computerized random number generator to separate participants [25]. We selected which card suits (US) or numbers (UK) were intervention and control indicators prior to using the mechanisms to sort participants. In the US, an assistant, rather than a member of the study team, passed out the facedown cards. In the UK, a study team member applied the randomly sequenced numbers to the participants as generated. In this way, allocation concealment can be inferred. All individuals generating outcome measure scores (the ‘coders’) were blinded to both group assignment and measurement point (pre/post).
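The card-based mechanism can be illustrated with a hypothetical “balanced deck” sketch: conditions are pre-assigned to card types, the deck is shuffled, and one facedown card is dealt per participant. (Function and label names are illustrative; the UK arm instead applied a computerized random number sequence to the same end.)

```python
import random

def deal_allocation(participant_ids, seed=None):
    """Shuffle a balanced 'deck' of condition cards and deal one per participant."""
    rng = random.Random(seed)
    n = len(participant_ids)
    deck = ["intervention", "control"] * (n // 2)
    if n % 2:  # with an odd class size, the leftover card is chosen at random
        deck.append(rng.choice(["intervention", "control"]))
    rng.shuffle(deck)
    return dict(zip(participant_ids, deck))
```

Because the deck is built balanced before shuffling, an even-sized class always splits exactly 1:1, mirroring the study’s allocation ratio.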

Statistical assumptions and methodology

The outcome of interest was the interaction effect of HRQ administration time and group allocation, as it was expected that both groups would naturally display improved therapeutic empathy, but that the experimental group’s improvement would be significantly greater. Thus, repeated-measures ANOVA was used to generate statistical estimates of effect size and significance via the general linear model in IBM SPSS Statistics 25, and the plot of means was then interpreted [26, 27]. Separate analyses of pre-post data by group were completed using Student’s t-test and included in Table 1 to more clearly illustrate changes in measured therapeutic empathy over time as a result of the full training, but these analyses should not be used to interpret the effects of the intervention.
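The descriptive pre-post comparisons use the ordinary paired (Student’s) t statistic, which can be sketched with the standard library alone — a generic illustration of t = mean(d) / (sd(d) / sqrt(n)), not a reproduction of the SPSS output:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post scores."""
    assert len(pre) == len(post) and len(pre) > 1
    diffs = [b - a for a, b in zip(pre, post)]   # post minus pre, per participant
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1
```

The statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain the p values reported in Table 1.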

Table 1 Comparison of pre and post-training scores by group assignment

Data exhibited high levels of skewness and kurtosis, especially at baseline (skew = 2.346 [SE = .327]; kurt = 4.549 [SE = .644]), and Shapiro-Wilk tests of normality indicated violations in both cases (Baseline w = .544, df = 53, p < .001; Follow-Up w = .928, df = 53, p = .003). This is typical for pilot data of this type [28]. There was one univariate outlier slightly exceeding an absolute value of Z = 3.29, but this case did not meaningfully affect overall skewness and kurtosis, so it was retained [29]. Multiple transformations (log, modified log, reciprocal, exponential) were attempted but were unable to achieve non-significant Shapiro-Wilk test values. However, parametric comparison of means is generally robust to violations of normality in the absence of extreme outliers and at least 20 degrees of freedom [29]. Parametric tests also allow for estimation of effect size, in keeping with CONSORT 2010 recommendations [30]. Therefore, the planned comparison strategy was retained over the potential alternative of using non-parametric tests [31].
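As a check on the reported values, the conventional large-sample standard-error formulas for sample skewness and excess kurtosis (the formulas SPSS reports) can be sketched below; with the analytic sample of n = 53 they reproduce the SEs of .327 and .644 quoted above. The formulas themselves are standard; their application to this dataset is the only assumption.

```python
import math

def se_skewness(n):
    """Standard error of sample skewness for a sample of size n."""
    return math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))

def se_kurtosis(n):
    """Standard error of sample (excess) kurtosis for a sample of size n."""
    return 2 * se_skewness(n) * math.sqrt((n * n - 1) / ((n - 3) * (n + 5)))
```

Relatedly, the univariate outlier screen at |Z| > 3.29 corresponds to a two-tailed normal probability of about .001.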


Results

Participant flow

Seventy-nine undergraduates (n = 50 UK, n = 29 US) were eligible for this trial. Only the first 29 students in the UK arm were included in the analysis, to avoid potential overrepresentation bias from different instructors, fields of study, or course locations in the UK versus the US. After potential participants were provided with a study information sheet, three US students declined to participate. The remaining 55 students were randomized into the self-coding (n = 27) intervention group and the video viewing (n = 28) control group. One US student failed to complete the pre-test (but completed the post-test), and a separate US student failed to complete the post-test (but completed the pre-test). Both students were excluded from primary analyses, but their data were included in calculations of interrater reliability. A full participant flow diagram is included as Fig. 1.

Fig. 1

Participant Flow Chart

Empathy characteristics

At baseline, both the control and experimental groups demonstrated little therapeutic empathy, with mean scores of 7.00 (SD = 2.74) and 8.17 (SD = 3.79), respectively, (within a possible range of 6 to 30). Both groups presented significantly improved empathy (p < .001) by the end of the MI training, with mean scores of 12.48 (SD = 4.40) and 15.41 (SD = 4.05), respectively (see Table 1).

Primary analysis

A mixed ANOVA using the general linear model found a significant main effect for the MI training program across all students (F1,51 = 110.83, p < .001). The partial η2 statistic (.685, LL90%CI = .554, UL90%CI = .757) suggested that the training resulted in a large increase in mean therapeutic empathy for all students in aggregate. Although baseline differences between the control and experimental groups were, by definition, random, the between-subjects main effect of group allocation was significant (F1,51 = 5.79, p = .020) with a partial η2 statistic of .102 (LL90%CI = .001, UL90%CI = .240).

The interaction effect measured the degree to which the change in therapeutic empathy over time differed between the experimental and control groups. This effect was non-significant (F1,51 = 2.12, p = .151), with a partial η2 statistic of .040 (LL90%CI = .000, UL90%CI = .154), a small effect but one with potential practical implication [32] (see Table 2). The plot of estimated marginal means (Fig. 2) illustrates the implications of the GLM output, as the slope of the experimental group’s increase is somewhat sharper, but both groups increased relatively uniformly.

Table 2 Mixed ANOVA (General Linear Model)
Fig. 2

Graph of Estimated Marginal Means



Discussion

The notion that experiential learning is useful alongside, or instead of, didactic delivery of information is not new. Role-playing and self-evaluation are often used when developing adult learning curricula [33]. Whether a single exercise within an MI workshop might, by itself, increase therapeutic empathy above more passive information transfer via observation of an expert was heretofore unexplored. This pilot study used randomization and a control group to test the hypothesis that a self-coding exercise at hour six of an eight-hour MI training was superior in building therapeutic empathy to watching a video of an MI expert performing MI. The study outcome did not support rejecting the null hypothesis.

While we had speculated that the isolated self-coding exercise might, in and of itself, result in a substantial boost in therapeutic empathy relative to passive learning, our measured effect was non-significant and small (.040), even at the upper bound of the 90% CI. One possible implication of failing to reject the null hypothesis may be that there is no one single point where learners experience a large increase in ability to express empathy, but rather that each separate component of the MI training synergistically builds on the others in increments, resulting in the aggregate gain in therapeutic empathy at workshop conclusion observed in this and other studies. An assessment of whether that is the case would require a larger sample size and, ideally, multiple study arms testing additional learning conditions and approaches.

In addition to the general finding about MI workshops, there are two supplemental areas where education research might be influenced. First, prior to this study, the range of realistic effects on therapeutic empathy that might be expected from a single exercise within an MI workshop was unknown. While it is not recommended to base study power analyses solely on effect sizes from pilot tests [34], data from this study suggest that a medium or large effect would likely not be reasonable to expect from a single training modification of this type. Second, our failure to reject the null hypothesis does not imply that the self-coding exercise did not support building therapeutic empathy, but rather that it was not measurably superior, within the context of an introductory MI training, to a passive learning exercise (video viewing). Madson and colleagues [18] described a need to “seek to better understand the effective training ingredients.” For practitioners interested in this work, the present study is one of the first steps in this undoubtedly long and complex process.

Strengths and limitations

This study has several limitations. First, outcomes were observed only among undergraduate students, so extrapolation of the findings to other commonly trained groups (e.g., experienced therapists) should be done with caution. Second, both trainers involved in the present investigation are members of MINT, limiting generalizability to workshops run by trainers who are not MINT members (and who may be less experienced). Third, prior experience with MI was not elicited at enrollment. At the same time, since these were undergraduate courses, it is somewhat unlikely that any student had extensive prior MI experience. Finally, the study focused solely on therapeutic empathy, so findings cannot be generalized to other potential outcomes of MI training, such as lower-level skills (e.g., use of affirmations). This study also has several strengths: it included students from two countries (the USA and UK) and from several disciplines, increasing generalizability beyond social work to other health-supportive fields that may use MI. We also note a correspondence with prior research on MI workshops that captured HRQ data, as the overall significance and effect size of the MI training’s impact on therapeutic empathy in this study mirror that work [11,12,13]. This supports the overall validity of the study.


Conclusions

Our findings suggest that a single active learning exercise within an MI workshop for undergraduate learners in social work and nutrition may not be superior to a passive learning exercise in building therapeutic empathy. However, the pilot study itself was highly feasible, with few barriers to completion even across continents, raising the potential for a larger and more thorough assessment of MI workshop content to optimize within-training outcomes across desired domains such as empathy. Further, our findings reinforce the expectation that even brief (8-h) MI training workshops are likely to increase participants’ empathy.

Availability of data and materials

Data are available from the corresponding author on request.



Abbreviations

HRQ: Helpful Responses Questionnaire

MI: Motivational Interviewing

MINT: Motivational Interviewing Network of Trainers

SBIRT: Screening, brief intervention, and referral to treatment

UK: United Kingdom

USA: United States of America


References

1. Miller WR, Rollnick S. Motivational interviewing: helping people change. 3rd ed. New York: The Guilford Press; 2013.

2. Reho K, Agley J, DeSalle M, Gassman RA. Are we there yet? A review of screening, brief intervention, and referral to treatment (SBIRT) implementation fidelity tools and proficiency checklists. J Prim Prev. 2016;37(4):377–88.

3. Pace BT, Dembe A, Soma CS, Baldwin SA, Atkins DC, Imel ZE. A multivariate meta-analysis of motivational interviewing process and outcome. Psychol Addict Behav. 2017;31(5):524–33.

4. Lord SP, Sheng E, Imel ZE, Baer J, Atkins DC. More than reflections: empathy in motivational interviewing includes language style synchrony between therapist and client. Behav Ther. 2015;46:296–303.

5. Moyers TB, Miller WR. Is low therapist empathy toxic? Psychol Addict Behav. 2013;27(3):878–84.

6. Miller WR, Rose GS. Toward a theory of motivational interviewing. Am Psychol. 2009;64(6):527–37.

7. Madson MB, Loignon AC, Lane C. Training in motivational interviewing: a systematic review. J Subst Abus Treat. 2009;36(1):101–9.

8. Schwalbe CS, Oh HY, Sweben A. Sustaining motivational interviewing: a meta-analysis of training studies. Addiction. 2014;109:1287–94.

9. Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. J Consult Clin Psychol. 2004;72(6):1050–62.

10. Miller WR, Hedrick KE, Orlofsky DR. The helpful responses questionnaire: a procedure for measuring therapeutic empathy. J Clin Psychol. 1991;47(3):444–8.

11. Baer JS, Rosengren DB, Dunn CW, Wells EA, Ogle RL, Hartzler B. An evaluation of workshop training in motivational interviewing for addiction and mental health clinicians. Drug Alcohol Depend. 2004;73(1):99–106.

12. Lazare K, Moaveni A. Introduction of a motivational interviewing curriculum for family medicine residents. Fam Med. 2016;48(4):305–8.

13. Zeligman M, Dispenza F, Chang CY, Levy DB, McDonald CP, Murphy T. Motivational interviewing training: a pilot study in a master’s level counseling program. Counsel Outcome Res Eval. 2017;8(2):91–104.

14. Motivational Interviewing Network of Trainers. Pathways to membership. 2018. Retrieved 9 November 2018.

15. Simper TN, Breckon JD, Kilner K. Effectiveness of training final-year undergraduate nutritionists in motivational interviewing. Patient Educ Couns. 2017;100(10):1898–902.

16. Kolb DA. Experiential learning: experience as the source of learning and development. Upper Saddle River: Pearson Education, Inc.; 2014.

17. Schoo AM, Lawn S, Rudnik E, Litt JC. Teaching health science students foundational motivational interviewing skills: use of motivational interviewing treatment integrity and self-reflection to approach transformative learning. BMC Med Educ. 2015;15:228.

18. Madson MB, Schumacher JA, Baer JS, Martino S. Motivational interviewing for substance use: mapping out the next generation of research. J Subst Abus Treat. 2016;65:1–5.

19. Cronin M, Connolly C. Exploring the use of experiential learning workshops and reflective practice within professional practice development for post-graduate health promotion students. Health Educ J. 2007;66(3):286–303.

20. Poore JA, Cullen DL, Schaar GL. Simulation-based interprofessional education guided by Kolb’s experiential learning theory. Clin Simul Nurs. 2014;10(5):e241–7.

21. Joyner B, Young L. Teaching medical students using role play: twelve tips for successful role plays. Med Teach. 2006;28(3):225–9.

22. Madson MB, Landry AS, Molaison EF, Schumacher JA, Yadrick K. Training MI interventionists across disciplines: a descriptive project. Motiv Interviewing. 2014;1(3):20–4.

23. Motivational Interviewing Network of Trainers. Training motivational interviewing. 2019. Retrieved 25 January 2020.

24. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1:77–89.

25. Suresh KP. An overview of randomization techniques: an unbiased assessment of outcome in clinical research. J Hum Reprod Sci. 2011;4(1):8–11.

26. Vickers AJ. Analysis of variance is easily misapplied in the analysis of randomized trials: a critique and discussion of alternative statistical approaches. Psychosom Med. 2005;67(4):652–5.

27. Vickers AJ. Parametric versus non-parametric statistics in the analysis of randomized trials with non-normally distributed data. BMC Med Res Methodol. 2005;5:35.

28. Blanca MJ, Arnau J, López-Montiel D, Bono R, Bendayan R. Skewness and kurtosis in real data samples. Methodology. 2013;9(2):78–84.

29. Tabachnick BG, Fidell LS. Using multivariate statistics. 6th ed. Upper Saddle River: Pearson Education, Inc.; 2013.

30. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomized trials. J Clin Epidemiol. 2010;63(8):e1–e37.

31. Glass GV, Peckham PD, Sanders JR. Consequences of failure to meet assumptions underlying the fixed effects analyses of variance and covariance. Rev Educ Res. 1972;42(2):237–88.

32. Ferguson CJ. An effect size primer: a guide for clinicians and researchers. Prof Psychol Res Pr. 2009;40(5):532–8.

33. Carpenter-Aeby T, Aeby VG. Application of andragogy to instruction in an MSW practice class. J Instruct Psychol. 2013;40(1):3–13.

34. Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA. Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006;63(5):484–9.



Acknowledgements

The authors would like to thank Stephanie Dickinson and Dr. Mikyoung Jun for their review of the statistical analyses.


Funding

This study was not directly funded by any entity. However, the US arm of the trial took place within a course that was offered to students using funding from the Substance Abuse and Mental Health Services Administration (SAMHSA) via award TI025977 to Jennifer Todd. SAMHSA did not have any direct role in design, collection, analysis, interpretation, or writing of this manuscript. The views and findings expressed in this manuscript do not necessarily represent the views of SAMHSA.

Author information




Authors’ contributions

TS conceptualized the study, implemented one arm of the intervention, and was a major contributor in writing the manuscript. JA helped conceptualize the study, was a major contributor in writing the manuscript, and conducted statistical analyses with the guidance of two individuals acknowledged above. MD helped conceptualize the study and implemented one arm of the intervention. TD conducted literature reviews and helped write the manuscript. JT helped conceptualize the study and helped write the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jon Agley.

Ethics declarations

Ethics approval and consent to participate

This study was approved by both sites’ institutional review boards (Sheffield Hallam University, #ER5231303, and Indiana State University, #1151112–2). Consent was collected via written study information sheets.

Consent for publication

N/A (no individual person’s data included).

Competing interests

JA, JT, and TD report no conflicts of interest related to the content of this manuscript. TS and MD are both members of the Motivational Interviewing Network of Trainers (MINT).

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Simper, T., Agley, J., DeSalle, M. et al. Pilot study of the influence of self-coding on empathy within an introductory motivational interviewing training. BMC Med Educ 20, 43 (2020).



Keywords

  • Empathy
  • Motivational interviewing
  • MI
  • Health professionals
  • Education