Development and implementation of a formative instructional coaching program using the Teaching Practices Inventory within a health professions program

Abstract

Background

A growing body of literature describes teaching practices that are positively associated with student achievement. Observing, characterizing, and providing feedback on these teaching practices is necessary for improving teaching quality, yet remains a significant challenge. This study describes the design, implementation, and evaluation of an instructional coaching program created to provide formative feedback to instructors based on their use of evidence-based teaching practices.

Methods

The program was designed for formative purposes using an instrument adapted from the Teaching Practices Inventory. All faculty were invited to participate in the program on a voluntary basis when the program launched in Fall 2019. Program coaches included any School personnel who completed the required training. Two rounds of instrument piloting were conducted with multiple observers, and interrater reliability was assessed using Krippendorff’s Alpha. The program was evaluated using an anonymous post-session survey.

Results

Interrater reliability of the form improved over two rounds of piloting, and no differences were found in scoring between trainees and education professionals. Seventeen observations were completed by nine coaches. Instructors indicated that feedback was practical, timely, specific, and collegial, and suggested that including student perspectives (e.g., focus groups, student course evaluations) in the coaching program might be helpful.

Conclusions

Creating programs that emphasize and foster the use of evidence-based teaching is critical for health professions education. Additional research is needed to further develop coaching programs that ensure teaching practices in the health professions are optimizing student learning.

Background

Utilizing teaching practices that effectively promote student development is critical to preparing the next generation of healthcare providers [1]. Despite widespread emphasis on evidence-based teaching [2,3,4,5,6], health professions schools have faced considerable scrutiny concerning teaching quality [7, 8]. Studies indicate that, although health professions educators are experts in the content they teach, they rarely receive training on effective teaching practices [7, 8].

Observing, characterizing, and providing feedback on teaching practices is necessary for improving teaching quality, yet remains a significant challenge [9]. Debate surrounds the role and qualifications of the observer, the characteristics and behaviors of the instructor, the observation process and criteria, and the use of results [10,11,12,13]. While student evaluations of teaching are a common strategy, research suggests that students lack crucial knowledge regarding how to appropriately evaluate effective teaching skills and that results are often subjective [11, 14,15,16,17]. Student evaluations can also be biased against women, who are often rated according to personality and appearance, and faculty of color, who are subject to systemic biases such as racial stereotyping [16, 17].

Peer observations are often recommended as additional evidentiary sources of teaching quality [11]. Two types of peer observation exist: summative and formative [18]. Summative observation, commonly known as “peer evaluation” or “peer review,” is designed to provide information that informs institutional decision-making and is intended for use by others. These observations are routinely used by institutions for promotion, tenure/post-tenure review, reappointment, and merit awards, among other purposes. As such, practically all health professions institutions have a summative peer observation process, and several have published descriptions of their programs [19,20,21,22,23].

Formative observation (also called peer feedback, peer observation, or peer coaching) is designed to provide feedback intended for the instructor’s personal use in improving the quality of their teaching. While summative observation is a necessary component for faculty, literature suggests that faculty also appreciate the availability of formative observation [24]. Institutions that have implemented formative observation programs have reported associations with increased collegiality, acceptance of evidence-based changes in teaching pedagogy, and validation of good teaching practices [24,25,26,27,28,29,30]. However, formative observation processes can be resource and time intensive, and some have expressed concern that only an expert in adult learning or curricular design could adequately conduct an observation [24,25,26,27,28,29,30].

Another core challenge of student and peer observations is instrument quality [31, 32]. Observation instruments are often limited by lack of specificity, poor psychometrics, and confusing directions [10]. Research suggests, for example, that observer ratings are often biased by preconceived notions of what constitutes effective teaching and the tendency to look for characteristics of themselves in the teaching of others [33].

Taken together, the complexities associated with observing and generating feedback about teaching practices can hinder programs as they strive to assess and improve instructional practice. The overarching purpose of this work was to develop a formative, evidence-based instructional coaching program (ICP) aimed at supporting and encouraging instructors to implement effective pedagogical strategies. This paper describes the development of the ICP, design of the ICP instrument, and initial evaluation of the ICP implementation.

Methods

Program development

The School’s Center for [BLINDED for REVIEW] at the [BLINDED for REVIEW] was asked by School leadership to develop a standardized peer observation process for faculty. While peer observations were already required at the School, observers used various instruments and criteria for the observations. The following steps describe the process utilized to develop, implement, and evaluate the ICP (Fig. 1). During each step, decisions were vetted with School leadership to ensure buy-in and support for the program.

Fig. 1 Process utilized to develop an instructional coaching program

Step 1: Establish a clear purpose

A development team was established that consisted of four faculty from varying tracks (i.e., fixed-term, tenure-track) and ranks (i.e., Assistant, Associate, Full) with experience and expertise in teaching across multiple degree programs at the School. The first decision in the development of the process was whether it should be formative, summative, or a combination of the two. In other words, what were the purpose and intended outcomes of the program? The development team agreed that a formative process was needed to provide instructors with evidence-based feedback about their teaching practices for professional growth and development. The team further agreed that combining formative and summative evaluations into one program could overshadow the value of the formative feedback and bias the evaluation provided by observers. Therefore, the team created two separate processes: one solely dedicated to summative purposes, and one solely dedicated to formative purposes. The formative process described below was termed the ICP.

Step 2: Develop a peer observation instrument

The ICP instrument was adapted from the Teaching Practices Inventory (TPI), which was developed by Wieman and Gilbert to reduce the subjectivity commonly associated with characterizing college teaching [9]. Each TPI item is evidence-based; in other words, each item was derived from research that demonstrated the extent to which different teaching practices were associated with student learning [9]. An item was assigned points based on the number of times the practice was used and its related effect size from the literature; for example, “Posed a question followed by a small group discussion” was scored as a “2” if the practice was used more than once and scored as a “0” if the practice was not used at all [9].
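
To illustrate the quantification described above, the following sketch (in Python) converts an observed frequency count into item points and sums a total score. The thresholds follow the single example given in the text; the value assigned to a single use and the per-item maximum are illustrative assumptions, not the published TPI rubric or Table 1.

```python
# A minimal sketch of the frequency-to-points conversion described above.
# The thresholds follow the single example in the text ("2" if the practice
# was used more than once, "0" if not used at all); the score for exactly one
# use and the per-item maxima are illustrative assumptions.

def score_item(frequency: int, max_points: int = 2) -> int:
    """Convert an observed frequency count for one practice into points."""
    if frequency <= 0:
        return 0                   # practice not observed
    if frequency == 1:
        return min(1, max_points)  # assumed value for a single use
    return max_points              # practice used more than once


def total_score(frequencies: dict[str, int]) -> int:
    """Sum item points across the observed practices."""
    return sum(score_item(count) for count in frequencies.values())


# Hypothetical item names and counts from one observation
observed = {
    "small_group_discussion": 3,
    "wait_time_after_question": 1,
    "summarized_take_home_points": 0,
}
print(total_score(observed))  # -> 3
```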

Given the ability of the TPI to objectively and reliably characterize teaching practices [9, 34,35,36], it was selected as the foundation for our coaching instrument. To focus the instrument on observable classroom teaching practices and enable its use during a teaching observation, we identified 10 TPI items that aligned with the teaching philosophy of the School, which emphasizes learner-centered teaching pedagogies such as the flipped classroom model and problem-based learning [37]. The development of the instrument was led by three faculty members, who shared and vetted the instrument with various stakeholders, including academic leadership, course instructors, and educational researchers. The TPI items were piloted and evaluated, as described below. Feedback and edits were incorporated, and the final ICP instrument contained the TPI items along with narrative feedback for instructors, including strategies to keep, strategies to start, and strategies to stop (i.e., Keep, Start, Stop), and a summary with 1–3 prioritized evidence-based recommendations.
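
The sketch below shows one possible data structure for a completed ICP observation, mirroring the instrument components described above (adapted TPI item scores, Keep/Start/Stop narrative feedback, and 1–3 prioritized recommendations). Field names and example entries are hypothetical, not the instrument's actual wording.

```python
# A minimal sketch of one possible record for a completed ICP observation,
# mirroring the instrument components described above. Field names and the
# example entries are hypothetical, not the instrument's actual wording.
from dataclasses import dataclass, field


@dataclass
class ICPObservation:
    instructor: str
    course: str
    item_scores: dict[str, int] = field(default_factory=dict)  # adapted TPI items -> points
    keep: list[str] = field(default_factory=list)               # strategies to keep
    start: list[str] = field(default_factory=list)              # strategies to start
    stop: list[str] = field(default_factory=list)               # strategies to stop
    recommendations: list[str] = field(default_factory=list)    # 1-3 prioritized, evidence-based


observation = ICPObservation(
    instructor="Instructor A",
    course="Hypothetical biostatistics course",
    item_scores={"small_group_discussion": 2, "wait_time_after_question": 1},
    keep=["Learning objectives stated at the start of class"],
    start=["Pose a question followed by a small group discussion"],
    stop=["Reading slides verbatim without pausing for questions"],
    recommendations=["Summarize take-home points at the end of the class session"],
)
print(observation.recommendations)
```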

Step 3: Design a coaching program

The following program design was established with the hope of generating and providing feedback to instructors that promoted the awareness and uptake of evidence-based teaching practices. Ideally, two to three observers (also called “coaches”) would be available for each observation, with one serving as the lead coach (e.g., facilitating communication with instructor, collecting and aggregating observation data). Prior to the observation, the instructor and coaches would review the observation process, identify instructor needs/interests, and share any relevant materials. During the observation, the coaches would arrive (e.g., for in-person teaching) or log in virtually (e.g., for online teaching) and sit towards the back of the room or turn off video to minimize distractions; coaches would be instructed to not participate or intervene during an observation. The observations occurred in a typical teaching setting. Following the observation, the coaches would discuss their observations and create recommendations for the instructor. During a post-observation meeting with the instructor, the coaches would provide feedback, recommendations, and relevant handouts with supporting literature. At the conclusion of the post-observation meeting, instructors would be encouraged to watch the recording of their class session and reflect on their teaching practices and the feedback provided. Since the ICP was designed to provide formative feedback, ICP results would be provided only to the instructor, with the instructor allowed to share their own observation results at their discretion.

Step 4: Identify and train coaches

Based on the findings from the instrument evaluation, it was determined that, with appropriate training and a reliable observation instrument, any member of the School (e.g., faculty, staff, trainees) could serve as an observer for any classroom instructor. Participation as a coach was voluntary and was incentivized by recognizing the effort as service to the School during the annual review process. Coach training consisted of a pre-training assignment, which involved watching a class recording and completing the observation form, followed by a 60-minute in-person training session in which coaches discussed how they marked each item and provided suggestions for improving the instrument.

Step 5: Implement and evaluate the coaching program

Since the focus of the program was growth and development, it was decided that participation would be voluntary for all School instructors. Reminders about the program were shared regularly via email and announcements at meetings. On an annual basis, the center worked with School leadership to identify any additional recommendations for ICP participants. As described in more detail below, data from ICP observation instruments and participant surveys were collected for each session.

Step 6: Identify improvements

Key to the success and sustainability of any program is the ability to adapt and improve. Feedback was solicited from instructors and coaches in an effort to identify opportunities for improvement of the instrument and program.

ICP evaluation

To develop and refine the ICP instrument, two pilot evaluations were conducted. In the first round, 14 individuals completed the instrument while watching a recorded 50-minute biostatistics classroom session. The instrument asked for frequency counts representing the number of times each teaching practice was observed. The researchers converted the frequency counts to true scores, according to the quantification schema of the original TPI (Table 1). The total score for the instrument could range from 0 to 13. Based on results and feedback from round 1, five items were revised with minor wording changes. In round 2, the revised instrument was provided to 11 new pilot observers, who completed the same video observation. Observer position (e.g., faculty, postdoctoral fellow, student, staff) was collected from all pilot observers to examine differences in scores based on position. Convenience sampling was used to identify and recruit all pilot observers, and all agreed to participate. Overall interrater reliability was calculated using Krippendorff’s Alpha [38]. According to Krippendorff [38], alpha levels above 0.67 are considered acceptable, with alpha levels above 0.80 considered ideal. To examine group differences, Mann-Whitney U tests were applied for continuous items and chi-square tests for categorical items, with Fisher’s Exact Test used as needed. Nonparametric tests were utilized due to small sample sizes.
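
As an illustration of these analyses, the sketch below computes Krippendorff’s Alpha and the nonparametric group comparisons on hypothetical data. It assumes the third-party Python package krippendorff and SciPy; the data shapes, variable names, and all values are placeholders rather than the study’s data.

```python
# A minimal sketch of the pilot analyses described above, assuming the
# third-party `krippendorff` package (pip install krippendorff) and SciPy.
# All numbers are hypothetical placeholders, not the study's data.
import numpy as np
import krippendorff
from scipy import stats

# Interrater reliability: rows = pilot observers, columns = instrument items;
# np.nan marks a missing rating.
categorical_ratings = np.array(
    [[1, 0, 2, np.nan],
     [1, 0, 2, 1],
     [2, 0, 2, 1]],
    dtype=float,
)
alpha_nominal = krippendorff.alpha(
    reliability_data=categorical_ratings,
    level_of_measurement="nominal",  # use "interval" for continuous items
)

# Group comparison of total scores between trainees and education professionals
# (nonparametric because of the small samples).
trainee_totals = [6, 7, 5, 8]
professional_totals = [7, 6, 8, 7, 9]
u_stat, p_continuous = stats.mannwhitneyu(
    trainee_totals, professional_totals, alternative="two-sided"
)

# Categorical items: chi-square test, with Fisher's exact test for sparse 2x2 tables.
contingency = np.array([[3, 1],   # trainees: item marked present / absent
                        [5, 2]])  # professionals: item marked present / absent
if contingency.shape == (2, 2) and (contingency < 5).any():
    _, p_categorical = stats.fisher_exact(contingency)
else:
    _, p_categorical, _, _ = stats.chi2_contingency(contingency)

print(f"alpha = {alpha_nominal:.2f}, Mann-Whitney p = {p_continuous:.2f}, "
      f"categorical p = {p_categorical:.2f}")
```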

Table 1 Round 2 modified TPI questions and true score values

To evaluate the ICP program, participating instructors were emailed an anonymous 3-item open-text survey regarding their experience with the program after each ICP session. Descriptive statistics were used to summarize the number of faculty participants, observations, coaches, and time dedicated to the ICP. Qualitative data collected through open-ended responses on the participant survey were thematically coded by one researcher. Results are presented as median (range). This study was submitted to the University of North Carolina Institutional Review Board, which determined that it did not constitute human subjects research as defined under United States federal regulations [45 CFR 46.102 (d or f) and 21 CFR 56.102(c)(e)(l)]. Verbal consent was obtained.

Results

In the ICP instrument evaluation, pilot observers were a combination of trainees (7 in round 1 and 4 in round 2) and education professionals (7 in round 1 and 7 in round 2). No differences were found in instrument scores between groups based on observer position. In addition, interrater reliability increased for categorical items from round 1 (α = 0.33) to round 2 (α = 0.60) and for continuous items from round 1 (α = 0.61) to round 2 (α = 0.73).

Since the ICP launched in Fall 2019, we have completed 17 classroom observations of 16 individual instructors who requested coaching. Twelve observations were of professional program courses and five were of graduate education courses. Professional program courses included foundational courses and clinically focused courses. Fifteen of the instructors observed were faculty (8 assistant, 5 associate, 2 full) and one was a postdoctoral fellow. Two faculty were tenured, three were untenured on the tenure track, and ten were fixed-term. Six faculty (four fixed-term assistant, one tenured associate, one fixed-term full) and three postdoctoral fellows completed training and served as a coach for at least one observation. Nineteen coaching hours were spent observing class sessions, eight hours on post-session coach debriefs, 8.5 hours on post-session instructor debrief preparation (e.g., creating summary documents), and 17 hours on post-session instructor debriefs.

Ten instructors (response rate = 63%) completed the optional, anonymous survey regarding their ICP experience. Participants found the feedback provided in the “Keep, Start, Stop” format with specific evidence-based recommendations particularly useful. As one participant shared, “I thought that the use of keep/start/stop was very helpful. The feedback was practical, timely, and specific and I plan to incorporate into my teaching right away.” Common areas of teaching that peer observers highlighted included wait time after prompting for questions, use of small group discussions, summarizing take-home points at the end of the class session, and assessments within pre-class materials. Participants also appreciated the opportunity to connect with other instructors serving as coaches and to have two-way dialogue, which, as one participant described, was “an opportunity to discuss what my [teaching] concerns were and to get feedback on if they’re actual things I need to work on or just misconceptions I have about teaching and learning.”

When asked for suggestions for improvement of the ICP, three participants suggested including a second observation as part of the ICP. Additional suggestions included having a pre-meeting with the instructor before the session being observed, having the instructor provide a written reflection on the class ahead of the session, and providing the instructor with key papers that support the recommendations provided. When asked to select additional aspects of teaching that they would find helpful in the ICP, participants most frequently selected student perspectives (e.g., coach-led focus groups with students from class) [n = 6], course evaluation review (e.g., discussion of most recent student course evaluation results) [n = 6], and pre-class review (e.g., organization, clarity, length of materials) [n = 5].

Discussion

Conducting peer observations of teaching is a complex undertaking that can be influenced by a number of factors, including observer bias and instrument quality [26, 39]. By design, the TPI was developed to address these issues; however, it has largely been used as a self-reflection instrument since its release [9]. This study explored the utility of the instrument for observing classroom instruction as part of a formative coaching program. Our findings advance previous research on the TPI, namely by (1) expanding the use of the TPI to formative peer observations and (2) demonstrating that an adapted TPI can be utilized by various observers to generate feedback aimed at supporting educator development.

Studies suggest that health professions schools, and higher education in general, fall short of their potential to assess the teaching practices of their educators [11]. Students, peer-colleagues, and administrators are often not trained in effective teaching practices [40,41,42]. Continued development and use of instruments that limit subjectivity and promote evidence-based teaching practices are crucial for improving health professions curricula. Although other instruments exist, such as the Classroom Observation Protocol for Undergraduate STEM (COPUS) [43] and the Practice Observation Rubric to Assess Active Learning (PORTAAL) [44], the TPI provides a relatively simple, frequency-based system for characterizing research-based teaching practices with little to no training.

Summative peer observations are frequently utilized for personnel and award decisions, yet their usefulness for individual faculty growth and development is limited. The ICP provides a framework and process for providing formative feedback to instructors and engaging in discussion that emphasizes teaching practices known to promote student achievement. One strength of our program was the post-observation meeting with the instructor and coaches. This allowed dedicated time for the instructor to reflect on their class session, receive feedback and recommendations from the coaches, and discuss potential ideas and next steps for their teaching. Since mentoring can also play an important role in the development of educators, consideration should be given to the potential role of mentors in instructional coaching [45].

The number of coaches trained for the ICP also reduced the time and effort required of any one person. This was possible because the adapted TPI observation form and the coach training program enabled any School member to serve as a coach. We also believe that keeping ICP feedback confidential (i.e., providing results only to the instructor) fostered a trusting environment with the main emphasis on providing constructive feedback and specific recommendations for improvement.

Despite the promising results of this study, many questions about the TPI and formative coaching programs remain. While the instrument and ICP described in this study may hold promise for providing more objective and reliable evidence of observable teaching practices for the purpose of improving teaching, they may not be appropriate for all learning environments. Namely, the TPI was developed using literature on classroom-based learning in which the instructor plays an active role in facilitating learning. As such, the TPI may not be well suited to learning environments in which the instructor is less active (e.g., small group facilitation) or learning is experiential (e.g., clinical rotations).

There are several noteworthy limitations to this work. First, the results were drawn from a small, convenience sample, which may reduce the generalizability of these findings. The sample size also limited statistical power and constrained the types of analyses that could be completed. Second, the interrater reliability results obtained while piloting the adapted TPI were lower than expected, suggesting that individuals may have interpreted items differently. While interrater reliability improved from round 1 to round 2, future efforts may focus on further refinement to increase interrater reliability. Lastly, the long-term impact of the ICP is unknown, including the percentage of recommendations implemented within an instructor’s own teaching practices and ICP coach satisfaction. Since the ICP is ongoing, more research will be done to advance this work.

Conclusions

Understanding the effectiveness of teaching practices in the health professions is a complex undertaking. Common strategies for peer observation of teaching are often limited by observer bias, poor instrumentation, and a lack of focus on practices known to be associated with student learning. Further, educator development during peer observation is often hindered by the pressures and stakes associated with summative use of the results. The formative instrument and program developed in this study focused on observable classroom teaching practices and feedback that can enhance educator understanding of evidence-based teaching. The tool and strategies described could be applied to a variety of health professions educational settings (e.g., pharmacy, medicine, nursing, social work) with the goal of informing teaching practices, enhancing student learning, and ultimately improving patient outcomes. Additional research is needed to further develop the instrument and program to ensure that teaching practices in the health professions are preparing students for healthcare practice.

Availability of data and materials

The datasets generated and analyzed during the current study are not publicly available due to the small sample size and possibility of compromising anonymity/individual privacy; however, data may be made available from the corresponding author on reasonable request.

References

  1. Loyola S. Evidence-based teaching guidelines: transforming knowledge into practice for better outcomes in healthcare. Crit Care Nurs Q. 2010;33(1):19–32. https://doi.org/10.1097/CNQ.0b013e3181c8e309.

  2. Bandiera G, Kuper A, Mylopoulos M, Whitehead C, Ruetalo M, Kulasegaram K, Woods N. Back from basics: integration of science and practice in medical education. Med Edu. 2017;52(1):78–85. https://doi.org/10.1111/medu.13386.

  3. Riley BA, Riley G. Innovation in graduate medical education – using a competency based medical education curriculum. Int J Osteopath Med. 2017;23:36–41. https://doi.org/10.1016/j.ijosm.2016.07.001.

  4. Liaison Committee on Medical Education. Function and structure of a medical school: standards for accreditation of a medical education programs leading to the MD Degree. https://med.virginia.edu/ume-curriculum/wp-content/uploads/sites/216/2016/07/2017-18_Functions-and-Structure_2016-03-24.pdf.

  5. Hammer D, Piascik P, Medina M, Pittenger A, Rose R, Creekmore F, Soltis R, Bouldin A, Schwarz L, Scott S. Recognition of teaching excellence. Am J Pharm Edu. 2010;74(9):164.

  6. American Association of Colleges of Nursing. About AACN. https://www.aacnnursing.org/

  7. Hartford W, Nimmon L, Stenfors T. Frontline learning of medical teaching: “you pick up as you go through work and practice.” BMC Med Edu. 2017;17(1):1–0.

  8. MacDougall J, Drummond MJ. The development of medical teachers: an enquiry into the learning histories of 10 experienced medical teachers. Med Edu. 2005;39(12):1213–20.

  9. Wieman C, Gilbert S. The teaching practices inventory: a new tool for characterizing college and university teaching in mathematics science. CBE-Life Sci Edu. 2014;13:552–69. https://doi.org/10.1187/cbe.14-02-0023.

  10. Arah OA, Hoekstra JBL, Bos AP, Lombarts KMJMH. New tools for systematic evaluation of teaching qualities of medical faculty: results of an ongoing multi-center survey. PLoS ONE. 2011;6(10). https://doi.org/10.1371/journal.pone.0025983.

  11. Berk RA. Top five flashpoints in the assessment of teaching effectiveness. Med Teach. 2013;35(1):15–26. https://doi.org/10.3109/0142159X.2012.732247.

  12. Fan Y, Shepherd LJ, Slavich E, Waters D, Stone M, Abel R, Johnston EL. Gender and cultural bias in student evaluations: why representation matters. PLoS ONE. 2019;14(2). https://doi.org/10.1371/journal.pone.0209749.

  13. Parpala A, Lindblom-Ylanne S, Rytkonen H. Students’ conceptions of good teaching in three different disciplines. Assess Eval High Edu. 2011;36(5):549–63. https://doi.org/10.1080/02602930903541023.

  14. Hammonds F, Mariano GJ, Ammons G, Chambers S. Student evaluations of teaching: improving teaching quality in higher education. Perspect: Pol Pract Edu. 2017;21(1):26–33. https://doi.org/10.1080/13603108.2016.1227388.

  15. Marsh HW, Roche LA. Making students’ evaluations of teaching effectiveness effective: the critical issues of validity, bias, and utility. Am Psychol. 1997;52(11):1187–97. https://doi.org/10.1037/0003-066X.52.11.1187.

  16. Mitchell KMW, Martin J. Gender bias in student evaluations. Pol Sci Politics. 2018;51(3):648–52. https://doi.org/10.1017/S104909651800001X.

  17. Reid LD. The role of perceived race and gender in the evaluation of college teaching on RateMyProfessors.com. J Div Higher Edu. 2010;3(3):137–52. https://doi.org/10.1037/a0019865.

  18. Davis TS. Peer observation: A faculty initiative. Curr Pharm Teach Learn. 2011;3:106–15.

  19. Buchanan J, Parry D. Engagement with peer observation of teaching by a dental school faculty in the United Kingdom. Euro J Dent Edu. 2019;23(1):42–53.

  20. Cobb KL, Billings DM, Mays RM, Canty-Mitchell J. Peer review of teaching in Web-based courses in nursing. Nurse Educator. 2001;26(6):274–9.

  21. Hansen LB, McCollum M, Paulsen SM, Cyr T, Jarvis CL, Tate G, Altiere RJ. Evaluation of an evidence-based peer teaching assessment program. Am J Pharm Edu. 2007;71(3):45.

  22. Lundeen JD, Warr RJ, Cortes CG, Wallis F, Coleman JJ. The Development of a Clinical Peer Review Tool. Nursing Edu Pers. 2018;39(1):43–5.

  23. Richard CL, Lillie E, Mathias K, McFarlane T. Impact and Attitudes about Peer Review of Teaching in a Canadian Pharmacy School. Am J Pharm Edu. 2019;83(6):6828.

  24. Schultz KK, Latif D. The planning and implementation of a faculty peer review teaching project. Am J Pharm Edu. 2006;70(2):32.

  25. Carlson K, Ashford A, Hegagi M, Vokoun C. Peer Coaching as a Faculty Development Tool: A Mixed Methods Evaluation. J Grad Med Edu. 2020;12(2):168–175.

  26. Kumar P, Bostwick JR, Klein KC. A pilot program featuring formative peer review of faculty teaching at a college of pharmacy. Curr Pharm Teach Learn. 2018;10(9):1280–1287.

  27. Pattison AT, et al. Foundation Observation of Teaching Project – A Developmental Model of Peer Observation of Teaching. Med Teach. 2012;34(2):e136-42.

  28. Sullivan PB, Buckle A, Nicky G, Atkinson SH. Peer observation of teaching as a faculty development tool. BMC Med Edu. 2012;12:26.

  29. Trujillo JM, DiVall MV, Barr J, Gonyeau M, Van Amburgh JA, Matthews SJ, Qualters D. Development of a peer teaching-assessment program and a peer observation and evaluation tool. Am J Pharm Edu. 2008;72(6):147.

  30. Wellein MG, Ragucci KR, Lapointe M. A peer review process for classroom teaching. Am J Pharm Edu. 2009;73(5):79.

  31. Abrami PC. Improving judgments about teaching effectiveness using rating forms. The student ratings debate: are they valid? How can we best use them? In: Theall M, Abrami PC, Mets LA, editors. New Directions for Institutional Research, No. 109. San Francisco, CA: Jossey-Bass; 2001. p. 59–87.

  32. Ory JC, Ryan K. How do student ratings measure up to a new validity framework? The student ratings debate: are they valid? How can we best use them? In: Theall M, Abrami PC, Mets LA, editors. New Directions for Institutional Research, No. 109. San Francisco, CA: Jossey-Bass; 2001. p. 27–44.

  33. Courneya CA, Pratt DD, Collins J. Through what perspective do we judge the teaching of peers?. Teach and Teach Edu. 2008;24(1):69–79.

  34. Williams CT, Walter EM, Henderson C, Beach AL. Describing undergraduate STEM teaching practices: a comparison of instructor self-report instruments. Int J STEM Edu. 2015;2(18). https://doi.org/10.1186/s40594-015-0031-y.

  35. Hsieh S-J. Teaching practices inventory for engineering education. Paper presented at: American Society for Engineering Education 123rd Annual Conference; June 26–29, 2016; New Orleans, LA. https://peer.asee.org/teaching-practices-inventory-for-engineering-education.pdf.

  36. Wieman C. A better way to evaluate undergraduate teaching. Change. 2015;41(1):6–15. https://doi.org/10.1080/00091383.2015.996077.

  37. Roth MT, Mumper RJ, Singleton SF, et al. A renaissance in pharmacy education at the University of North Carolina at Chapel Hill. N C Med J. 2014;75(1):48–52. https://doi.org/10.18043/ncm.75.1.48.

  38. Krippendorff K. Content analysis: an introduction to its methodology. 2nd ed. Thousand Oaks, CA: Sage Publications, Inc.; 2004.

  39. Terry CB, Heitner KL, Miller LA, Hollis C. Predictive relationships between students’ evaluation ratings and course satisfaction. Am J Pharm Edu. 2017;81(3):53. https://doi.org/10.5688/ajpe81353.

  40. Clayson DE. Student evaluations of teaching: are they related to what students learn?: a meta-analysis and review of the literature. J Marketing Edu. 2008;31(1):16–30.

  41. Hornstein HA. Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Educ. 2017;4:1–8. https://doi.org/10.1080/2331186X.2017.1304016.

  42. Linse AR. Interpreting and using student ratings data: guidance for faculty serving as administrators and on evaluation committees. Stud Edu Eval. 2017;54:94–106. https://doi.org/10.1016/j.stueduc.2016.12.004.

  43. Smith MK, Jones FHM, Gilbert SL, Wieman CE. The classroom observation protocol for undergraduate STEM (COPUS): a new instrument to characterize university STEM classroom practices. CBE-Life Sci Edu. 2013;12:618–27. https://doi.org/10.1187/cbe.13-08-0154.

  44. Eddy SL, Converse M, Wenderoth M-P. PORTAAL: a classroom observation tool assessing evidence-based teaching practices for active learning in large science, technology, engineering, and mathematics classes. CBE-Life Sci Edu. 2017;14(2):23. https://doi.org/10.1187/cbe.14-06-0095.

  45. Minshew LM, Zeeman JM, Olsen AA, Bush AA, Patterson JH, McLaughlin JE. Qualitative Evaluation of a Junior Faculty Team Mentoring Program. Am J Pharm Edu. 2021;85(4).

Acknowledgements

The research team would like to thank and acknowledge the faculty members, postdoctoral fellows, students, and staff who helped refine and revise the instrument used in this study, including but not limited to Thomas Angelo, EdD and Scott Singleton, PhD. In addition, we would like to thank those who participated as evaluators in this study and coaches in the ICP.

Funding

None.

Author information

Contributions

AO made substantial contributions to the design of the work, data analysis and interpretation, and writing of the manuscript. KM made substantial contributions to the design of the work, data analysis and interpretation, and writing of the manuscript. SZ contributed to the design of the work, data collection and analysis, and critical review of the manuscript. JZ contributed to the design of the work, data interpretation, and critical review of the manuscript. AP contributed to the design of the work and critical review of the manuscript. AB contributed to the design of the work and critical review of the manuscript. JM made substantial contributions to the design of the work, data analysis and interpretation, and writing of the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Jacqueline E. McLaughlin.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and determined to be exempt by the University of North Carolina institutional review board. All methods were carried out in accordance with relevant guidelines and regulations. Informed verbal consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

There are no financial disclosures or conflicts of any kind regarding this study.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Olsen, A.A., Morbitzer, K.A., Zambrano, S. et al. Development and implementation of a formative instructional coaching program using the Teaching Practices Inventory within a health professions program. BMC Med Educ 22, 554 (2022). https://doi.org/10.1186/s12909-022-03616-z
