The PEAK program is a feasible educational program for promoting physical therapists' use of research evidence to inform clinical practice across three clinical sites in a university-based healthcare system. All participants completed the 6-month educational program and most reported high levels of involvement. The group developed, and agreed to implement, a Best Practices List consisting of 38 evidence-based behaviors for caring for individuals with lumbar spine conditions. Participants' reaction to the PEAK program was consistently positive, and quantitative measures demonstrated that the program was associated with improvements in EBP self-efficacy and self-reported behaviors. Four themes from our mixed-methods analysis provide insight into the program and implications for its future use: the benefits of the program's collaborative nature, improved self-efficacy for integrating research evidence, the need for a more detailed understanding of statistics, and the belief that informing clinical practice with research evidence improved patient care.
Most physical therapy-specific KT studies have focused on changing clinical decision-making around a single clinical practice guideline [26–29] or pre-packaged evidence summary. Two studies have focused on the development of more generalizable EBP and KT skills [4, 31]; both reported limited change in therapist outcomes. The PEAK program addresses the need for physical therapists to use a wide variety of resources (as opposed to a single clinical practice guideline) to support clinical decision-making. In addition, it addresses not only individual-level barriers to EBP but also takes into account the need to address organizational resources and cultural issues to support KT across the continuum of care in a dispersed healthcare system.
The PEAK program’s foundation in social cognitive theory offers an explanation for the individual change observed among participants. By using small groups to generate a sense of community, participants felt engaged and motivated to use the knowledge and skills they had gained to search for and critically appraise the research evidence. They accepted verbal knowledge from a credible source during the 2-day workshop, observed each other searching for and critically appraising key journal articles, and experienced personal success through guided learning. Each of these elements is likely to have motivated participants to repeat their behaviors, ultimately leading to successful completion of the Best Practices List and the self-reported increase in use of research evidence in practice.
The use of adult learning theory concepts resulted in a program that was driven by participants, for their own practical benefit. Participants selected the topic for the Best Practices List and self-selected into small groups that worked independently towards meeting an immediate clinical need. Similarly, participants reported that the creation of the Best Practices List was the most important part of the program and that they were motivated by a commitment to provide high quality patient care. It is likely that this helped to generate a sense of ownership in the process. Further, use of the PARiHS and Knowledge to Action Cycle frameworks drove elements of the program that were deemed important by participants, including: leadership support, provision of resources, and emphasis on adapting research evidence to support local needs.
Despite the feasibility of the program, we learned several lessons that we expect will improve future versions. Most importantly, qualitative and quantitative data strongly suggest that participants needed additional knowledge and skills to understand and interpret statistics. Although the 2-day workshop and monthly meetings included some education around statistics and interpretation of results, it was clearly insufficient. This challenge must be met with sensitivity to the fact that it may not be feasible to expect clinicians to become experts in statistics. We also learned that some participants needed more assistance with technologic resources and that while monthly meetings for supplemental education and discussion were valuable, poor performance of our video conferencing system was frustrating for all.
This study is the first, to our knowledge, to use the CREATE model as a foundation for assessing EBP learning. The CREATE model provided a cohesive method for evaluating the complexity of component interventions within our educational program. By comparing quantitative and qualitative results across the CREATE framework we gained a deeper understanding of which components were valued by participants, and how these contributed to improved self-report scores. Based on the early work by Kirkpatrick, the CREATE assessment categories are expected to build on each other – from the most direct impact (reaction to the educational program) to the most complex (improving patient outcomes). Yet, although participants experienced quantitative change in self-efficacy and self-reported behavior, we did not observe a quantitative change in the intervening categories of knowledge and skills. Qualitative data suggest that while knowledge may not have changed, participants felt that their skills in searching for, appraising, and integrating research evidence into practice had improved. This suggests that the mFT may be an insufficient tool for identifying changes in EBP skills distinct from EBP knowledge. Furthermore, while we did not assess change in patient outcomes, therapists felt strongly that their patients had benefited. This supports future work to assess patient-reported outcomes and clinical improvement in association with therapist participation in PEAK.
This study has four important limitations. First, from the perspective of quantitative results, the number of participants was small. Although the population was relatively diverse (age, years of experience, degrees, clinical setting), a larger sample size with a subset used for the qualitative analysis would have been a stronger design. Second, the participant population lacked diversity in that they all worked at USC. There is a selection bias among individuals who pursue, and are given the opportunity for, work at a university teaching hospital or clinic. While previous studies have established that physical therapists routinely report strongly positive attitudes about EBP [2, 15, 34], our volunteer participants may represent the far end of the spectrum for positive attitudes. Additionally, and perhaps more importantly, all participants had access to a high quality medical library and medical librarian. Replication of the PEAK program without full-text access to most rehabilitation journals will pose an additional challenge. Third, this analysis does not assess long-term outcomes. Further study is needed to determine whether improvements associated with participation were sustained and whether the Best Practices List was effectively implemented in patient care. Finally, two of the standardized quantitative assessment tools (EBP Belief Scale and EBP Implementation Scale) were modified to ensure that a single domain (EBP attitudes and behavior for using research evidence, respectively) was being assessed. While the items used from each tool had strong face validity, neither has been validated in its abbreviated format. We felt that these modifications were reasonable for a feasibility study given that better single-domain tools were not available. However, this is an important area for development to support further investigations of the PEAK program and implementation research in general.
Finally, it is important to note that the PEAK program was designed to influence one component of clinical decision-making—the integration of research evidence. Clinical decision-making is influenced by a complex host of issues (e.g., cultural, emotional, moral, and political factors) and often involves tensions between scientific reason and social reality. While the PEAK program addressed the integration of research evidence with patient perspective, it did not explicitly address the broader context of collaborative and patient-centered shared decision-making.