Promoting physical therapists’ use of research evidence to inform clinical practice: part 2 - a mixed methods evaluation of the PEAK program
© Tilson et al.; licensee BioMed Central Ltd. 2014
Received: 10 October 2013
Accepted: 10 June 2014
Published: 25 June 2014
Clinicians need innovative educational programs to enhance their capacity for using research evidence to inform clinical decision-making. This paper and its companion paper introduce the Physical therapist-driven Education for Actionable Knowledge translation (PEAK) program, an educational program designed to promote physical therapists’ integration of research evidence into clinical decision-making. This second of two papers reports a mixed methods feasibility study of the PEAK program among physical therapists at three university-based clinical facilities.
A convenience sample of 18 physical therapists participated in the six-month educational program. Mixed methods were used to triangulate results from pre-post quantitative data analyzed concurrently with qualitative data from semi-structured interviews and focus groups. Feasibility of the program was assessed by evaluating change in participants’ attitudes, self-efficacy, knowledge, skills, and self-reported behaviors, in addition to their perceptions of, and reactions to, the program.
All 18 therapists completed the program. The group experienced statistically significant improvements in evidence-based practice self-efficacy and self-reported behavior (p < 0.001). Four themes were supported by integrated quantitative and qualitative results: 1. The collaborative nature of the PEAK program was engaging and motivating; 2. PEAK participants experienced improved self-efficacy, creating a positive cycle where success reinforces engagement with research evidence; 3. Participants’ need to understand how to interpret statistics was not fully met; 4. Participants believed that the utilization of research evidence in their clinical practice would lead to better patient outcomes.
The PEAK program is a feasible educational program for promoting physical therapists’ use of research evidence in practice. A key ingredient seems to be guided small group work leading to a final product that guides local practice. Further investigation is recommended to assess long-term behavior change and to compare outcomes to alternative educational models.
The World Confederation for Physical Therapy asserts that physical therapists have a responsibility to integrate research evidence into practice as a foundation of patient care. While most physical therapists embrace this concept in principle, the reality of integrating research evidence into everyday clinical practice has proven challenging [2–6]. One potential method for addressing this problem is the use of theoretically founded, evidence-based educational programs to improve therapists’ capacity and proclivity to use research in practice. This, the second of two companion papers, introduces the Physical therapist-driven Education for Actionable Knowledge translation (PEAK) program – an educational program designed to promote physical therapists’ integration of research evidence into clinical decision-making at the individual and organizational level.
In the companion to this paper we describe the pedagogical foundations, frameworks, and research evidence used to develop the PEAK program. The program’s pedagogy is based on social cognitive theory and adult learning theory. Further, the organizational implementation of research evidence is informed by the Promoting Action on Research Implementation in Health Services (PARiHS) and Knowledge to Action cycle frameworks for knowledge translation (KT). Finally, previous work identifying successful educational models for promoting the use of research in practice was consulted [12–14], as were useful descriptions of barriers to research implementation [2, 5, 15].
A mixed methods “triangulation design model” was adopted to simultaneously collect and analyze quantitative and qualitative data from a single cohort of physical therapists. As quantitative and qualitative data were compared, a deeper understanding of the program’s key components and their impact on physical therapists’ clinical practice emerged. This is important for establishing the feasibility of the program and for facilitating a deeper evaluation of its underpinning theoretical foundations.
Therapists practicing in one of three geographically dispersed University of Southern California (USC) patient care centers (2 outpatient; 1 inpatient) were invited to participate through staff meetings and individual email. Therapists were required to have a minimum of 6 months clinical experience, be providing patient care at USC at least 20 hours per week, be able to attend both days of an introductory workshop, and be willing to commit to study activities at least 1 hour per month for 6 months. Therapists who also served as onsite clinic managers were included as long as they met the inclusion criteria. The study was approved by the USC Health Science Campus Institutional Review Board (HS-10-00593) and all participants gave consent to participate.
The PEAK program, described in detail in this paper’s companion manuscript, was 6 months in duration and consisted of four consecutive, interdependent components: 1) securing resources and leadership support; 2) a two-day training workshop; 3) guided small group work to develop a locally relevant list of evidence-based actionable behaviors – the “Best Practices List”; and 4) review, revision, and agreement to implement the Best Practices List (Figure 1).
All components of the PEAK program supported a participant-driven learning experience: working as a group to generate a Best Practices List around a common, participant-selected clinical area. The Best Practices List is a locally generated list of evidence-based, actionable behaviors that participants agreed (as a group) to implement in their clinical practice. Participants self-organized into small groups to review literature and generate evidence-based actionable behaviors. The actionable behaviors were reviewed and revised through a process of peer and expert review until all participants felt that they could implement the Best Practices List in practice.
Evaluation of the PEAK program
Participants completed four standardized assessments collated into one computer-based survey immediately before and after participating in the educational program. This evaluation will focus on changes observed immediately following the PEAK program.
Attitudes toward EBP were assessed using the attitude items from the EBP Beliefs Scale. The 16-item EBP Beliefs Scale measures EBP attitudes and self-efficacy and has demonstrated construct and criterion validity among practicing nurses. To exclusively measure attitudes about EBP, we summed responses to six Likert-type items from the EBP Beliefs Scale (1, 4, 5, 9, 11, 13); higher scores (total possible = 30) indicate more positive attitudes. Because it has established face and content validity among healthcare professionals, including physical therapists, the Evidence-based Practice Confidence (EPIC) Scale was used to assess self-efficacy for EBP. The EPIC Scale consists of 11 items with responses ranging from 0 to 100% confidence (in 10-percentile increments); responses are averaged to generate a mean confidence between 0 and 100%. EBP knowledge and skills were assessed with the 13-item modified Fresno Test (mFT), which has demonstrated reliability as well as content and construct validity among physical therapists. The mFT consists of open-ended questions graded with a standardized scoring rubric and yields scores from 0 to 224, with higher scores representing better knowledge and skills. Self-reported EBP behavior was assessed using the EBP Implementation Scale, which has demonstrated construct and criterion validity among nurses. The 18-item EBP Implementation Scale assesses implementation of EBP and the collection, analysis, presentation, and reaction to patient data. Because the PEAK program did not ask participants to collect, analyze, present, or react to patient data, 5 items addressing these behaviors (5, 7, 15-17) were not relevant to our study and risked masking any observable changes in self-reported EBP behavior.
Therefore, to exclusively measure self-reported behaviors associated with the 5-steps of EBP, we summed responses to 13 items from the EBP Implementation Scale (1-4,6,8-14,18); higher scores (total possible = 65) indicate greater frequency of EBP implementation.
Participants were asked to rate their participation in developing the Best Practices List and to rate the educational value of 11 elements of the PEAK program [2-day workshop, USC medical library resources, customized library webpage, Backpack™ online collaboration tool, local Skype™ access (with webcams), EndNote Web® library, monthly video conference meetings, small group tutorial sessions, access to study librarian, intra-clinic collaboration, inter-clinic collaboration, developing the Best Practices List] on a 5-point Likert-type scale. The complete assessment consisted of 70 individual items and was expected to take 60-75 minutes. Scoring was computerized with the exception of the mFT, which was scored by a trained, blinded assessor.
Participants attended either a face-to-face semi-structured interview (1 participant:1 interviewer) or focus group (3-4 participants:1 interviewer) within 2 weeks of completing the PEAK program. A common interview template was developed to explore the range of experiences and subjective reactions to the PEAK program (Appendix A). Questions addressed all seven categories of the CREATE framework. Participants were initially asked to describe their own engagement in, and reaction to, the PEAK program. They were then facilitated to describe the impact of the program on their EBP attitudes, self-efficacy, knowledge, skills and practice behaviors. They were also asked to consider, from their professional experience, whether the program provided a benefit to patients. Finally, they were asked to comment on the feasibility of transferring the PEAK program to other clinical settings (e.g. other institutions and/or patient populations).
Individual interviews and focus groups were conducted by an independent and experienced investigator. The investigator explained to participants that the purpose of the interview was to understand the feasibility of the program and that both supportive and critical comments were welcome. Clinic manager participants were not interviewed with non-manager participants. Interviews and focus groups were conducted at USC. Individual interviews averaged 39 minutes (SD = 14; range = 20-76 minutes) and focus groups averaged 66 minutes (SD = 13; range = 58-78 minutes). All interview sessions were audio-recorded and transcribed in full. Participants were assigned ID numbers and identifying comments were removed from transcripts to ensure participant anonymity.
Using a triangulation design model, quantitative and qualitative data were first analyzed independently (in parallel). This parallel analysis was followed by integrated discussion and analysis between two authors (JT, SM) to achieve concurrent triangulation. All analyses used the CREATE model as the organizing theoretical framework.
Independent quantitative analysis
Change in standardized quantitative assessments was assessed using paired two-tailed t-tests for normally distributed data (Shapiro-Wilk p > 0.05). When the normality assumption was met, parametric tests were used for Likert-type scales (EPIC, EBP Beliefs and Implementation Scales) on the basis that each scale represents an underlying continuous concept and has relatively equal intervals. Alpha was set at 0.05 and 95% confidence intervals were calculated. Because this was a feasibility study, only complete data sets were analyzed. Quantitative analyses were conducted using SPSS 18.0.
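The study's analyses were run in SPSS 18.0; an equivalent sketch of the normality check and paired t-test in Python with SciPy is shown below. The pre/post scores are invented for illustration and do not reflect the study data.

```python
# Illustrative sketch (not the authors' SPSS analysis): Shapiro-Wilk check on
# difference scores, then a paired two-tailed t-test with a 95% CI for the
# mean pre-post change. All data below are invented.
import numpy as np
from scipy import stats

pre  = np.array([52.0, 61.5, 48.0, 70.5, 55.0, 63.5, 58.0, 66.0])
post = np.array([60.5, 72.0, 55.5, 78.0, 64.0, 71.5, 63.0, 74.5])

diff = post - pre
sw_stat, sw_p = stats.shapiro(diff)  # normality assumption holds if sw_p > 0.05

t_stat, p_value = stats.ttest_rel(post, pre)  # paired two-tailed t-test
ci_low, ci_high = stats.t.interval(
    0.95, len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff)
)  # 95% confidence interval for the mean change
```

Running the normality check on the difference scores (rather than on pre and post separately) matches the assumption that paired t-tests actually make.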
Independent qualitative analysis
All participants were sent their own electronic transcripts for clarification and validation and no changes were requested. Text from all interviews was read and allocated to the appropriate categories on the CREATE framework. The initial coding process was piloted independently by two authors (JT, SM) across four transcripts. Differences in coding were discussed and a final coding system agreed upon that included additional topics beyond the CREATE framework. All transcripts were independently coded by two authors using NVivo software (QSR International). Coding differences were resolved through discussion. It was noted that for the last few interviews, no new ideas emerged, suggesting that saturation was reached after 11 individual interviews and 3 focus groups.
Integrated data analysis
Following independent analyses, an iterative process of comparison and further analysis was conducted to integrate the quantitative and qualitative data sets. Preliminary qualitative themes suggested that there might be important differences in participant responses on the standardized questionnaires depending on which of the 5 EBP steps a particular item addressed. We anticipated that item-by-item analysis of standardized assessments that showed overall statistical change would provide exploratory information about participants’ responses across the 5 steps of EBP. Hence, post-hoc Wilcoxon signed-rank tests were conducted for individual items when a statistically significant change in the entire assessment was observed (EPIC and EBP Implementation scales). With this new quantitative information, concurrent thematic analysis of qualitative data continued consistent with recommendations by Pope and colleagues. Explanatory themes were noted as repetitive clusters of meaning that, when combined with quantitative results, offered insight into the PEAK program’s feasibility.
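The item-level post-hoc step described above can be sketched as a Wilcoxon signed-rank test on a single item's paired pre/post ratings, repeated per item only when the full-scale change was significant. The ratings below are invented for illustration.

```python
# Hypothetical sketch of one item-level post-hoc comparison: a Wilcoxon
# signed-rank test on paired pre/post Likert ratings for a single item.
# Ratings are invented; in the study this was repeated per item, and only
# for scales whose overall change was statistically significant.
from scipy import stats

pre_item  = [1, 2, 1, 3, 2, 1, 2, 2, 1, 3, 2, 1]
post_item = [3, 3, 2, 4, 3, 2, 3, 3, 2, 4, 3, 3]

stat, p = stats.wilcoxon(pre_item, post_item)  # paired, nonparametric
```

The nonparametric test is the appropriate choice at the item level, where single Likert responses cannot be treated as normally distributed.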
Both investigators responsible for analyzing the qualitative data have academic appointments with responsibility for teaching and facilitating the implementation of EBP and KT. One investigator (JKT) was the primary developer of the PEAK program. Both investigators used their experience educating healthcare practitioners in EBP to inform the analysis. They remained open to identifying positive and negative feedback and to identifying expected and unexpected explanations. Authors were sensitive to the complexity of this educational program and recognized that negative feedback provided opportunity for improvement.
[Table 1. Participant characteristics: age, mean (range); years in practice, mean (range); clinical hours per week; primary clinic setting]
Reaction to the educational experience
Ten of eleven elements of the PEAK program were rated as having ‘good’ or ‘excellent’ value by at least 80% of participants. Of note, ‘Local Skype™ access with webcams’ was rated as having ‘good’ or ‘excellent’ value by just 47% of participants.
“I think it was good to collaborate with each other and [for] the final product we were going through each of the behaviors and discussing that as a group. I think that was the most beneficial.” (Participant [P] 18)
“I think we all felt accountability for each other that we had this group project that we had to work together for that was going to improve the way we deliver care. And I feel like pretty much everyone was dedicated to that.” (P 10)
“Developing a best practices guideline, going through the literature, looking at clinical guidelines, and actually every single person agreeing that these are good, these need to be used.” (P1)
“Backpack was almost like Facebook, where it kind of provides us this medium to see what everybody else is thinking and sharing information.” (P14)
“I didn't participate as much in this group as much as I think I could have otherwise… I don't have the time to keep up with that technology.” (P8)
“And those Skype meetings made it really easy although we had a lot of Skype™ problems. … We spent an hour trying to get everybody online. So that was a little frustrating.” (P17)
“It’s great to see how all the different clinicians across the board can get together and actually focus on clinical practice. And it’s great to see how therapists in different settings can contribute.” (P1)
“It takes more time and its more energy and all that. But really, you know, you're doing the best thing for your patients. ” (P4)
“The managers here definitely are supportive of implementing this [but] from a time perspective, are challenged.” (P5)
[Table 2. Mean (SD) % of scale and mean (95% CI) % improvement for: EBP Beliefs Scale (6 attitude-specific items, high score = 30); Evidence-based Practice Confidence Scale (11 items, high score = 100%); knowledge and skills, Modified Fresno Test (13 items, high score = 224); EBP Implementation Scale (13 study-specific items, high score = 65)]
“I definitely think that evidence based practice is a very essential component of patient management. I think it’s the gold standard of patient care and so something I take seriously and I understand the importance of it” (P14)
“And I feel like I can better access the literature very efficiently with those patients. I'll do it on my phone or very, very quickly. And so I feel like I do that more because I'm more efficient. Whereas before I would go, oh, well, that might take me a little while.” (P5)
“I think confidence is a huge validation that you're doing the right thing, this is a huge piece in making sure the patient gets better and certainly, if I feel confident in the way I'm treating you you're going to feel more like you're in the right place to be treated.” (P21)
“I would have to say that my ability to look at statistics is not as strong as I would like it to be even still.” (P20)
Knowledge and skills
The PEAK program was not associated with any change in knowledge and skills as measured by the mFT (p = 0.10; Table 2). After the program, the cohort had comparatively high scores for knowledge and skills associated with searching (62-84% of possible points) and low scores for items requiring statistical knowledge and skill (5-29% of possible points).
“So I think it’s very important, but I'll have to admit, I don't know how to analyze it properly when you get into the detailed aspects of statistics.” (p15)
“Oh, I'm so much better. I've revised all the way I've done the clinical questions. And so that's been kind of a learning opportunity for me.” (F5)
“…learning how to narrow my questions…initially I thought I was missing out on a lot, but now it’s very focused.” (P19)
“My approach has changed in that often I'm trying to find clinical guidelines in different areas of treatment to get a broad array of, what are the approaches that current experts in the field are taking?” (F15)
“Being able to go onto a database like Pub Med and have a more effective, efficient search was very helpful…the more efficient you are at something, the less time you spend doing it, the more apt you are to do it again.” (P14)
“Being a more savvy consumer of research, looking at methods and results, is this a reliable and valid study, does it match my patient population, is their sample size large enough, how did they select, include or exclude their test subjects, to be able to look at a strength of a study based on that, and then to make my independent assessment of the results without having the author's coloring of the data, I think has been very nice.” (P20)
The PEAK program was associated with a 20.4% increase in mean score for the 13 items of the EBP Implementation Scale (p = 0.001; Table 2). Post-hoc analyses showed statistically significant increases from baseline for 7 items (item number and topic): 2 and 7 - reading and critically appraising research studies; 3 - generating PICO questions; 6 - presenting evidence to more than 2 colleagues; 8 - sharing an EBP guideline with a colleague; 10 - sharing evidence with a multi-disciplinary team; 12 - accessing Cochrane Systematic Reviews; and 13 - accessing the National Guidelines Clearinghouse.
“I find more that I’m incorporating it in my education of the patients. I’m always talking to them. This is why we’re doing this and so on. And I feel like they really appreciate that. Or I’ll use it to ask them, ‘these are the recommendations, what do you really prefer?’” (F5)
“I see people sitting around the table discussing articles, sharing articles, and people that weren't involved in that previously are now involved.” (P2)
“Let me figure out why I haven't done that with my last patient and see if there's a reason. So now I can check myself and see, am I doing those things, am I not doing those things, why or why not and what is the evidence that supports it?” (F7)
“You’re putting the patients’ needs first, you’re using your own past experience just to manage the patient, and then you're using the top evidence. So I think that would be the key and I think then your average patient would want to participate or have a clinician treating and using those standards” (P14)
“We’ve started kind of monthly in-services and having discussion. What’s the latest within the literature, what are the recommended outcome measures we should be using for these various patient populations?” (P3)
Best Practices List
Participants selected the topic of ‘Lumbar Spine Conditions’ with five sub-topics (outcome measures, stenosis, spine tumors, non-specific low back pain, and post-surgical) for which they reviewed the literature and identified locally relevant, actionable, evidence-based behaviors that should be implemented in their practice. At the conclusion of the program, participants had created a Best Practices List consisting of 38 evidence-based behaviors (see Additional file 1) drawn from clinical practice guidelines, systematic reviews, randomized controlled trials, cohort studies, case series, and narrative reviews. Fourteen participants (77%) rated themselves as being ‘Involved’ or ‘Very Involved’ in development of the Best Practices List.
Explanatory themes from mixed methods analysis
The collaborative nature of PEAK was engaging and motivating.
PEAK participants experienced improved self-efficacy which created a positive cycle where success reinforced engagement with the research evidence.
Participants’ need to understand how to interpret statistics was not fully met by the PEAK program.
Participants believed that the process of using relevant research evidence to develop the Best Practices List would lead to better patient outcomes.
Participants felt that the process of identifying and appraising research evidence to develop the Best Practices List would lead to better clinical outcomes for patients. They emphasized the practical benefits of developing consistent and routine patterns of care that were informed by research evidence. Furthermore, participants described being able to provide more effective care with higher confidence – both elements they expected would have positive effects on patient outcomes. Finally, they anticipated improved continuity of care as they all agreed to use the locally generated Best Practices List in their clinical practice.
The PEAK program is a feasible educational program for promoting physical therapists’ use of research evidence to inform clinical practice across three clinical sites in a university-based healthcare system. All participants completed the 6-month educational program and most reported high levels of involvement. The group developed, and agreed to implement, a Best Practices List consisting of 38 evidence-based behaviors around caring for individuals with lumbar spine conditions. Participants’ reaction to the PEAK program was consistently positive, and quantitative measures demonstrated that the program was associated with improvements in EBP self-efficacy and self-reported behaviors. Four themes from our mixed-methods analysis provide insight into the program and implications for its future use: the benefits of the program’s collaborative nature, improved self-efficacy for integrating research evidence, the need for a more detailed understanding of statistics, and the belief that informing clinical practice with research evidence improved patient care.
Most physical therapy-specific KT studies have focused on changing clinical decision-making around a single clinical practice guideline [26–29] or a pre-packaged evidence summary. Two studies have focused on development of more generalizable EBP and KT skills [4, 31]; both reported limited change in therapist outcomes. The PEAK program addresses the need for physical therapists to use a wide variety of resources (as opposed to a single clinical practice guideline) to support clinical decision-making. In addition, it addresses not only individual-level barriers to EBP but also takes into account the need to address organizational resources and cultural issues to support KT across the continuum of care in a dispersed healthcare system.
The PEAK program’s foundation in social cognitive theory offers an explanation for the individual change observed among participants. By using small groups to generate a sense of community, participants felt engaged and motivated to use the knowledge and skills they had gained to search for and critically appraise the research evidence. They accepted verbal knowledge from a credible source during the 2-day workshop, observed each other searching for and critically appraising key journal articles, and experienced personal success through guided learning. Each of these elements is likely to have motivated participants to repeat their behaviors, ultimately leading to successful completion of the Best Practices List and the self-reported increase in use of research evidence in practice.
The use of adult learning theory concepts resulted in a program that was driven by participants, for their own practical benefit. Participants selected the topic for the Best Practices List and self-selected into small groups that worked independently towards meeting an immediate clinical need. Similarly, participants reported that the creation of the Best Practices List was the most important part of the program and that they were motivated by a commitment to provide high quality patient care. It is likely that this helped to generate a sense of ownership in the process. Further, use of the PARiHS and Knowledge to Action cycle frameworks drove elements of the program that were deemed important by participants, including: leadership support, provision of resources, and emphasis on adapting research evidence to support local needs.
Despite the feasibility of the program, we learned several lessons that we expect will improve future versions. Most importantly, qualitative and quantitative data strongly suggest that participants needed additional knowledge and skills to understand and interpret statistics. Although the 2-day workshop and monthly meetings included some education around statistics and interpretation of results, it was clearly insufficient. This challenge must be met with sensitivity to the fact that it may not be feasible to expect clinicians to become experts in statistics. We also learned that some participants needed more assistance with technologic resources and that while monthly meetings for supplemental education and discussion were valuable, poor performance of our video conferencing system was frustrating for all.
This study is the first, to our knowledge, to use the CREATE model as a foundation for assessing EBP learning. The CREATE model provided a cohesive method for evaluating the complexity of component interventions within our educational program. By comparing quantitative and qualitative results across the CREATE framework we gained a deeper understanding of which components were valued by participants, and how these contributed to improved self-report scores. Based on the early work of Kirkpatrick, the CREATE assessment categories are expected to build on each other – from the most direct impact (reaction to the educational program) to the most complex (improving patient outcomes). Yet, although participants experienced quantitative change in self-efficacy and self-reported behavior, we did not observe a quantitative change in the intervening categories of knowledge and skills. Qualitative data suggest that while knowledge may not have changed, participants felt that their skills in searching for, appraising, and integrating research evidence into practice had improved. This suggests that the mFT may be an insufficient tool for identifying changes in EBP skills distinct from EBP knowledge. Furthermore, while we did not assess change in patient outcomes, therapists felt strongly that their patients had benefited. This supports future work to assess patient-reported outcomes and clinical improvement in association with therapist participation in PEAK.
This study has four important limitations. First, from the perspective of quantitative results, the number of participants was small. Although the population was relatively diverse (age, years of experience, degrees, clinical setting), a larger sample size with a subset used for the qualitative analysis would have been a stronger design. Second, the participant population lacked diversity in that all participants worked at USC. There is a selection bias among individuals who pursue, and obtain, positions at a university teaching hospital or clinic. While previous studies have established that physical therapists routinely report strongly positive attitudes about EBP [2, 15, 34], our volunteer participants may represent the far end of the spectrum for positive attitudes. Additionally, and perhaps more importantly, all participants had access to a high quality medical library and medical librarian. Replication of the PEAK program without full-text access to most rehabilitation journals will pose an additional challenge. Third, this analysis does not assess long-term outcomes. Further study is needed to determine whether improvements associated with participation were sustained and whether the Best Practices List was effectively implemented in patient care. Finally, two of the standardized quantitative assessment tools (EBP Beliefs Scale and EBP Implementation Scale) were modified to ensure that a single domain (EBP attitudes and behavior for using research evidence, respectively) was being assessed. While the items used from each tool had strong face validity, neither was validated in its abbreviated format. We felt that these modifications were reasonable for a feasibility study given that better, single-domain tools were not available. However, this is an important area for development to support further investigations of the PEAK program and implementation research in general.
Finally, it is important to note that the PEAK program was designed to influence one component of clinical decision-making—the integration of research evidence. Clinical decision-making is influenced by a complex host of issues (e.g. cultural, emotional, moral, and political factors) and often involves tensions between scientific reason and social reality. While the PEAK program addressed the integration of research evidence with patient perspective, it did not explicitly address the broader context of collaborative and patient-centered shared decision-making.
The multifaceted, learner-centered PEAK program is a feasible educational program for promoting physical therapists’ use of research evidence in clinical decision-making. Four themes informing feasibility of the program relate to 1) the collaborative nature of the program; 2) improved self-efficacy for using research evidence to inform practice; 3) the need for greater learning around statistics; and 4) participant expectation that the list of evidence-based practices developed and agreed to by the group would lead to better patient outcomes. A key ingredient seems to be guided small group work leading to a final product (Best Practices List) that guides local practice. Further investigation is recommended to assess long-term behavior change and to compare outcomes to alternative educational models.
Core interview template
Introduction, Reaction to the EBP Experience:
What small group were you in?
How was the Fellowship?
What aspects were most helpful to you?
How has your work environment impacted your ability to engage in the Fellowship?
How have you used the librarian and library resources?
Knowledge about EBP principles:
Has the Fellowship influenced the way you think about and interpret research?
Has there been a change in how you use clinical guidelines in your practice?
Skills for performing EBP:
Has the Fellowship taught you new skills?
[if YES] How did the Fellowship help you learn these skills?
How do you keep up to date with the research evidence?
Behavior as part of patient care:
Has this Fellowship impacted the way you work with your patients?
Has the Fellowship influenced the way you work with your colleagues?
Self-Efficacy for conducting EBP:
In what way has this Fellowship changed your ability to use research evidence (ask across 5 steps)?
What will you do differently, as a result of participating in this Fellowship?
Benefits to patients associated with EBP training:
Have there been any benefits for your patients, from your involvement in the Fellowship?
Attitudes about EBP:
Does EBP impact the way you work with your patients?
What is the most important aspect of EBP in your clinical practice?
If you could change one thing in your current work to improve EBP, what would it be?
If this Fellowship were to be expanded to over 100 people, what would you do differently?
The authors would like to recognize important contributions from the following individuals: Pamela C. Corley, MLS, study librarian, gave extensive support to authors and participants; Adrianne Green, SPT, research assistant, provided valuable administrative support; and Linda Fetters, PhD, PT, FAPTA provided valuable feedback on the manuscript. The University of Southern California, Division of Biokinesiology and Physical Therapy provided financial support. Faculty reviewers (Joe Godges, DPT; Daniel Kirages, PT, DPT, OCS, FAAOMPT; Kornelia Kulig, PhD, PT, FAPTA; Mike O’Donnell, PT, DPT, OCS, FAAOMPT; Marisa Perdomo, PT, DPT) provided thoughtful and informative feedback to participants. Finally, we are grateful for the time, effort, and dedication of each and every participant.
- World Confederation for Physical Therapy: Policy Statement: Evidence Based Practice. [http://www.wcpt.org/policy/ps-EBP]
- Jette DU, Bacon K, Batty C, Carlson M, Ferland A, Hemingway RD, Hill JC, Ogilvie L, Volk D: Evidence-based practice: beliefs, attitudes, knowledge, and behaviors of physical therapists. Phys Ther. 2003, 83: 786-805.
- Salbach NM, Guilcher SJT, Jaglal SB, Davis DA: Factors influencing information seeking by physical therapists providing stroke management. Phys Ther. 2009, 89: 1039-1050. doi:10.2522/ptj.20090081.
- Schreiber J, Stern P, Marchetti G, Provident I: Strategies to promote evidence-based practice in pediatric physical therapy: a formative evaluation pilot project. Phys Ther. 2009, 89: 918-933. doi:10.2522/ptj.20080260.
- Bridges PH, Bierema LL, Valentine T: The propensity to adopt evidence-based practice among physical therapists. BMC Health Serv Res. 2007, 7: 103. doi:10.1186/1472-6963-7-103.
- Thomas S, Mackintosh S, Halbert J: Determining current physical therapist management of hip fracture in an acute care hospital and physical therapists’ rationale for this management. Phys Ther. 2011, 91: 1490-1502. doi:10.2522/ptj.20100310.
- Tilson J, Mickan S, Sum J, Zibell M, Dylla J, Howard R: Promoting physical therapists’ use of research evidence to inform clinical practice: part 1 - theoretical foundation, evidence, and description of the PEAK program. BMC Med Educ. 2014, 14: 125. doi:10.1186/1472-6920-14-125.
- Bandura A: Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev. 1977, 84: 191-215.
- Knowles MS: Andragogy in Action. 1984, San Francisco: Jossey-Bass.
- Rycroft-Malone J, Harvey G, Seers K, Kitson A, McCormack B, Titchen A: An exploration of the factors that influence the implementation of evidence into practice. J Clin Nurs. 2004, 13: 913-924. doi:10.1111/j.1365-2702.2004.01007.x.
- Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, Robinson N: Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006, 26: 13-24. doi:10.1002/chp.47.
- Coomarasamy A, Khan KS: What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. Br Med J. 2004, 329: 1017-1019. doi:10.1136/bmj.329.7473.1017.
- Flores-Mateo G, Argimon JM: Evidence based practice in postgraduate healthcare education: a systematic review. BMC Health Serv Res. 2007, 7: 8. doi:10.1186/1472-6963-7-8.
- Menon A, Korner-Bitensky N, Kastner M, McKibbon KA, Straus S: Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review. J Rehabil Med. 2009, 41: 1024-1032. doi:10.2340/16501977-0451.
- Salbach NM, Jaglal SB, Korner-Bitensky N, Rappolt S, Davis D: Practitioner and organizational barriers to evidence-based practice of physical therapists for people with stroke. Phys Ther. 2007, 87: 1284-1303; discussion 1304-1306. doi:10.2522/ptj.20070040.
- Craig P, Dieppe P, Macintyre S, Nazareth I, Petticrew M: Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008, 337: a1655. doi:10.1136/bmj.a1655.
- Campbell NC, Murray E, Darbyshire J, Jon E, Farmer A, Griffiths F, Guthrie B, Lester H, Wilson P, Kinmonth AL: Designing and evaluating complex interventions to improve health care. BMJ. 2007, 334: 455-459. doi:10.1136/bmj.39108.379965.BE.
- Creswell JW, Fetters MD, Ivankova NV: Designing a mixed methods study in primary care. Ann Fam Med. 2004, 2: 7. doi:10.1370/afm.104.
- Tilson J, Kaplan S, Harris J, Hutchinson A, Ilic D, Niederman R, Potomkova J, Zwolsman S: Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ. 2011, 11: 78. doi:10.1186/1472-6920-11-78.
- Melnyk BM, Fineout-Overholt E, Mays MZ: The evidence-based practice beliefs and implementation scales: psychometric properties of two new instruments. Worldviews Evid Based Nurs. 2008, 5: 208-216. doi:10.1111/j.1741-6787.2008.00126.x.
- Salbach NM, Jaglal SB: Creation and validation of the evidence-based practice confidence scale for health care professionals. J Eval Clin Pract. 2010, 17: 794-800.
- Tilson JK: Validation of the modified Fresno test: assessing physical therapists’ evidence based practice knowledge and skills. BMC Med Educ. 2010, 10: 38. doi:10.1186/1472-6920-10-38.
- Rauscher L, Greenfield BH: Advancements in contemporary physical therapy research: use of mixed methods designs. Phys Ther. 2009, 89: 91-100. doi:10.2522/ptj.20070236.
- Carifio J, Perla RJ: Ten common misunderstandings, misconceptions, persistent myths and urban legends about Likert scales and Likert response formats and their antidotes. J Soc Sci. 2007, 3: 106.
- Pope C, Ziebland S, Mays N: Qualitative research in health care. Analysing qualitative data. BMJ. 2000, 320: 114-116. doi:10.1136/bmj.320.7227.114.
- Rebbeck T, Maher CG, Refshauge KM: Evaluating two implementation strategies for whiplash guidelines in physiotherapy: a cluster randomised trial. Aust J Physiother. 2006, 52: 165-174. doi:10.1016/S0004-9514(06)70025-3.
- Bekkering GE, van Tulder MW, Hendriks EJM, Koopmanschap MA, Knol DL, Bouter LM, Oostendorp RAB: Implementation of clinical guidelines on physical therapy for patients with low back pain: randomized trial comparing patient outcomes after a standard and active implementation strategy. Phys Ther. 2005, 85: 544-555.
- Gross DP, Lowe A: Evaluation of a knowledge translation initiative for physical therapists treating patients with work disability. Disabil Rehabil. 2009, 31: 871-879. doi:10.1080/01443610802355965.
- Leemrijse CJ, Plas GM, Hofhuis H, van den Ende CH: Compliance with the guidelines for acute ankle sprain for physiotherapists is moderate in the Netherlands: an observational study. Aust J Physiother. 2006, 52: 293-299. doi:10.1016/S0004-9514(06)70010-1.
- Brown CJ, Gottschalk M, Van Ness PH, Fortinsky RH, Tinetti ME: Changes in physical therapy providers’ use of fall prevention strategies following a multicomponent behavioral change intervention. Phys Ther. 2005, 85: 394-403.
- Stevenson K, Lewis M, Hay E: Do physiotherapists’ attitudes towards evidence-based practice change as a result of an evidence-based educational programme? J Eval Clin Pract. 2004, 10: 207-217. doi:10.1111/j.1365-2753.2003.00479.x.
- Kaufman DM: Applying educational theory in practice. BMJ. 2003, 326: 213-216. doi:10.1136/bmj.326.7382.213.
- Kirkpatrick D: Evaluation of training. In: Training and Development Handbook. Edited by Craig RL, Bittel LR. New York: McGraw-Hill; 1967.
- Barnard S, Wiles R: Evidence-based physiotherapy: physiotherapists’ attitudes and experiences in the Wessex area. Physiotherapy. 2001, 87: 115-124. doi:10.1016/S0031-9406(05)61078-4.
- Trede F: Emancipatory physiotherapy practice. Physiother Theory Pract. 2012, 28: 466-473. doi:10.3109/09593985.2012.676942.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/14/126/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.