  • Research article
  • Open access

A modified evidence-based practice- knowledge, attitudes, behaviour and decisions/outcomes questionnaire is valid across multiple professions involved in pain management



Abstract

Background

A validated and reliable instrument was developed to assess knowledge, attitudes and behaviours with respect to evidence-based practice (EBP-KABQ) in medical trainees, but it requires further adaptation and validation before it can be applied across different health professions.


Methods

A modified 33-item evidence-based practice scale (EBP-KABQ) was developed to evaluate EBP perceptions and behaviours in clinicians. An international sample of 673 clinicians interested in the treatment of pain (mean age = 45 years; 48% occupational therapists/physical therapists; 25% had more than 5 years of clinical training) completed an online English version of the questionnaire and provided demographic information. Scaling properties (internal consistency, floor/ceiling effects) and construct validity (association with EBP activities, comparator constructs) were examined. A confirmatory factor analysis was used to assess the 4-domain structure (EBP knowledge, attitudes, behaviour, outcomes/decisions).


Results

The EBP-KABQ scale demonstrated high internal consistency (Cronbach’s alpha = 0.85), no evident floor/ceiling effects, and support for a priori construct validity hypotheses. A 4-factor structure provided the best fit statistics (CFI = 0.89, TLI = 0.86, and RMSEA = 0.06).


Conclusions

The EBP-KABQ scale demonstrates promising psychometric properties in this sample. Areas for improvement are described.



Background

Evidence-based practice (EBP) is defined as the integration of the best research evidence with patients’ interests and clinical circumstances in decision making [1]. As EBP is associated with improved clinical decision-making and patient care [2], health professional organizations have advocated for increased EBP training for all health care professionals at all levels of education [3],[4]. Understanding how EBP is perceived and implemented across different health professions can identify educational needs and outcomes, and predict where new research evidence is more likely to be implemented. As such, a validated and reliable instrument is required to evaluate an individual’s perceptions of EBP.

A systematic review [5] of 104 instruments on EBP suggested that evaluation of EBP could be divided into the following definable components: EBP knowledge, attitudes toward EBP, application/use of EBP, and practitioners’ EBP behaviors in the clinical setting. Knowledge about EBP means that clinicians understand fundamental EBP concepts and terminology, including concepts related to the quality or level of evidence. It also includes the ability to search the literature and critically appraise the evidence for its validity, impact and applicability. Attitude toward EBP includes the intuitive appeal of EBP, the likelihood of adopting EBP given professional requirements to do so, openness to new practices, and the perceived divergence between research-based/academically developed interventions and current practice [6]. Application and use of EBP refers to whether health professionals are able to apply their EBP knowledge to specific clinical scenarios. This includes the capability to generate clinical questions regarding disease prevention, diagnosis and management, as well as the implementation of evidence in a manner consistent with clinical circumstances. EBP behaviors refer to practitioners’ performance of the instrumental activities associated with EBP, such as searching for and obtaining higher-quality evidence in their own practice.

Although the rise of EBP awareness has led to the development of instruments to assess its integration into clinical practice, there are gaps in the evidence supporting these tools [5]. There is a lack of empirical data that can be applied to a wider range of experience and types of clinicians, in particular nurses and allied health professionals [3]. Moreover, as most scales have targeted samples with minimal experience in clinical practice, the questionnaires may not accurately reflect the perception of EBP by clinicians who have been practicing in different clinical settings.

Among available scales, one that has taken a multi-dimensional approach and shown early promise is the knowledge, attitude and behaviour questionnaire (KAB) originally developed by Johnston and colleagues [7]. The KAB scale was designed to evaluate EBP teaching and learning in the undergraduate medical education setting. With permission from the developers, two study authors (JCM and ML) developed a modified KAB scale (EBP-KABQ) applicable to health professionals other than physicians, using expert review and pilot testing. This process resulted in the removal of items that were perceived by users as redundant or unclear.

The goal of this study was to validate the modified scale (EBP-KABQ) for use in a multidisciplinary group of clinicians by determining: (1) scaling properties: internal consistency and floor/ceiling effects; (2) construct validity: based on predetermined hypotheses about the relationships among subcomponents of EBP; and (3) structural validity: the integrity of a 4-domain structure based on confirmatory factor analysis.


Methods

The EBP-KABQ incorporates 33 items in four domains of EBP: knowledge (8 items, 6 ordinal), attitudes (14 items, 14 ordinal), behaviour (8 items, 5 ordinal) and outcomes/decisions (3 items, 3 ordinal). The knowledge items retain a 7-point Likert scale, with lower scores indicating a lower level of EBP knowledge. The attitudes towards EBP items also retain a 7-point Likert scale; after several items are reverse-scored, higher scores indicate a more positive attitude. For EBP behaviour, lower scores indicate a lower frequency of using EBP in current practice. A 6-point Likert scale is used for responses to the items in the outcomes/decisions domain, where lower scores indicate unfavourable patient outcomes and poor evidence-based clinical decision making. Details of the EBP-KABQ scale and a summary of the changes to the original scale are presented in Additional files 1 and 2.
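The scoring conventions above (Likert responses, reverse-scored attitude items, subscale sums) can be sketched as follows. This is a minimal illustration, not the published scoring key: the item positions flagged as reverse-scored here are hypothetical.

```python
# Illustrative scoring sketch for EBP-KABQ-style ordinal items.
# The reverse-scored positions below are ASSUMED for demonstration only;
# the actual key is given in the instrument's Additional files.

REVERSE_SCORED_ATTITUDE = {2, 5}  # hypothetical 1-based item positions

def reverse_score(response: int, scale_max: int = 7) -> int:
    """Flip a Likert response so a high score always means a positive attitude."""
    return scale_max + 1 - response

def score_subscale(responses, reversed_idx=frozenset(), scale_max=7):
    """Sum a subscale, reverse-scoring the flagged items first."""
    return sum(
        reverse_score(r, scale_max) if i in reversed_idx else r
        for i, r in enumerate(responses, start=1)
    )

# Example: a 5-item attitude fragment on a 7-point scale
attitude = [6, 2, 7, 5, 3]
total = score_subscale(attitude, REVERSE_SCORED_ATTITUDE)
```

Reverse-scoring before summing keeps all items pointed in the same direction, so subscale totals are directly comparable across items.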

Subject recruitment and data collection

All participants were recruited from a clinical trial assessing the use of pain research evidence [8]. Eligible practitioners were (1) physicians, nurses, occupational therapists (OTs), physical therapists (PTs), or psychologists currently working in clinical practice at least one day/week; (2) fluent in English; (3) able to access a computer at home or at work with unrestricted access to the World Wide Web; (4) in possession of an active email account; and (5) willing to consent to participate in this research study. A total of 870 clinicians met the inclusion criteria and were invited to participate. From August 2011 to February 2013, 673 clinicians (physicians, nurses, OTs/PTs, psychologists, etc.) completed an online EBP-KABQ scale prior to receiving new pain information. Demographic and practice characteristics were also obtained. The study received ethics approval from the McMaster University Research Ethics Board.

Data analysis

Quality checks, descriptive statistics and checks for normality were completed prior to analysis. Item 33 “I don’t use evidence-based practice for another reason (specify)” was removed from the analyses because the specified reason varied across respondents, making it a nonstandard item. Therefore, 27 ordinal items across the following four domains of EBP were analyzed in this study: knowledge (n = 6 items), attitudes (n = 13 items), behavior (n = 5 items) and outcomes/decisions (n = 3 items).

Scaling properties (internal consistency and floor/ceiling effects)

Internal consistency was assessed for both the full EBP-KABQ scale and its 4 subscales using Cronbach’s alpha, where >0.7 was considered the minimum acceptable value [9] and >0.9 desirable [10]. Floor/ceiling effects, defined as >15% of respondents scoring at the minimum or maximum of a scale/subscale, were also assessed [11].

Construct validation

Four hypotheses were tested to assess the construct validity of the EBP-KABQ scale. First, we hypothesized that the mean item score in the “knowledge” domain would be higher than those in the “behaviour”, “outcomes/decisions” and “attitude” domains, because knowledge is considered a necessary precursor, but not a sufficient guarantee, of changes in practice and outcomes. Secondly, we hypothesized that the “outcomes/decisions” domain would be more strongly correlated with the other 3 domains, since it focuses on how EBP influences the decision-making process. Thirdly, we hypothesized that EBP-KABQ subscale scores would be correlated with corresponding EBP activities assessed by relevant open-ended questions. For example, the frequency with which clinicians search for evidence should be correlated with the “behaviour” subtotal to a greater extent than with other domains such as “knowledge” or “EBP outcomes/decisions”. Finally, we hypothesized that the following demographic variables would be associated with the total EBP-KABQ score in multivariate modeling: age, highest level of education, and possession of advanced clinical training, since these have been suggested as influences in the EBP literature. Details of all construct validity testing and a priori hypotheses are provided in the Results section.
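The correlation-ordering logic of hypothesis 2 can be sketched with synthetic subscale totals. The data below are purely illustrative (constructed so that "outcomes/decisions" co-varies with "knowledge"); they are not the study data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 673  # sample size reported in the study

# Synthetic subscale totals, illustrative only:
# "outcomes" is built to co-vary with "knowledge"; "behaviour" is independent.
knowledge = rng.normal(30, 4, n)
outcomes = 0.6 * knowledge + rng.normal(12, 3, n)
behaviour = rng.normal(20, 5, n)

# Pairwise correlation matrix of the three subscale totals
r = np.corrcoef(np.vstack([knowledge, outcomes, behaviour]))

# Under a hypothesis like hypothesis 2, the constructed association
# should show up as a larger off-diagonal correlation.
knowledge_outcomes = r[0, 1]
knowledge_behaviour = r[0, 2]
```

In the actual analysis, each observed subscale correlation would be compared against the a priori ordering rather than against synthetic data.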

Structural validity

Confirmatory factor analysis (CFA, maximum likelihood estimation) was conducted to examine our proposed 4-domain model. The four conceptual domains of EBP (knowledge, attitudes, behaviour and outcomes/decisions) were tested as second-order factors (latent variables) based on the originally defined conceptual framework. We evaluated model fit with a number of goodness-of-fit statistics, including the Root Mean Square Error of Approximation (RMSEA) <0.06 (ideal) and <0.08 (acceptable), the comparative fit index (CFI) ≥0.90–0.95 (acceptable), the Tucker Lewis Index (TLI) ≥0.90–0.95 (acceptable) and the Chi-square test (P > 0.05, acceptable) [12]-[15]. We considered RMSEA, CFI and TLI the primary statistics because the Chi-square test is sensitive to large sample sizes (n > 300) [12]. We also examined modification indices to identify the potential to improve model fit, and modified our model when indicated by both theoretical and statistical findings [16]. We considered standardized coefficients (i.e., factor loadings) ≥0.30 (p < 0.05) as ‘representing’ a hypothesized dimension [17].
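The fit indices named above are all functions of the model and null-model chi-square statistics. A sketch of the standard formulas, as given in the SEM literature cited above; the null-model values used in any example would come from the fitted independence model, and the ones in the usage note below are placeholders.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Comparative fit index relative to the independence (null) model."""
    d = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1.0 - d / max(d, d_null)

def tli(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Tucker-Lewis index (non-normed fit index)."""
    ratio = chi2_null / df_null
    return (ratio - chi2 / df) / (ratio - 1.0)
```

For example, with placeholder null-model values `chi2_null = 1000, df_null = 60`, a target model with `chi2 = 100, df = 50` yields CFI ≈ 0.947 and TLI ≈ 0.936, both near the acceptability threshold.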

All analyses except the CFA were conducted in SAS (version 9.3, SAS Institute Inc, Cary, NC, USA). We used IBM SPSS Amos (version 20) for the CFA.


Results

Sample characteristics

In total, 673 health professionals completed the EBP-KABQ questionnaire. Demographic characteristics are presented in Table 1. Half of the participants were aged 45 or younger. Nearly half of the clinicians were OTs or PTs, while one quarter were nurses and one fifth were physicians. One quarter of the sample had more than 5 years of clinical training, and the mean time in clinical practice was almost 18 years. Most participants practiced in an urban setting, while 15% were in a rural practice area.

Table 1 Characteristics of 673 participants of EBP-KABQ study

Scaling properties (internal consistency and floor/ceiling effects)

Overall, the EBP-KABQ scale achieved acceptable internal consistency (Cronbach’s alpha = 0.85), although the “knowledge” subscale still showed only marginally acceptable internal consistency (Cronbach’s alpha = 0.66) after removal of item 3. However, this was an improvement over the original 6-item “knowledge” subscale (Cronbach’s alpha = 0.56), supporting the decision to remove item 3 (“Clinical trials and observational methods are equally valid in establishing treatment effectiveness”).
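The alpha improvement after dropping a misfitting item is the classic "alpha if item deleted" diagnostic. A minimal sketch on toy data (not the study data), where one noisy column plays the role of a problematic item:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def alpha_if_deleted(items: np.ndarray) -> list:
    """Recompute alpha with each item dropped in turn; a large jump for
    one item flags it as degrading internal consistency."""
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]

# Toy data: two consistent items plus one noisy item (column index 2)
data = np.array([[1, 1, 5],
                 [2, 2, 1],
                 [3, 3, 4],
                 [4, 4, 2],
                 [5, 5, 3]], dtype=float)
alphas = alpha_if_deleted(data)
# Dropping the noisy third item yields the largest alpha.
```

In the study's case, removing item 3 from the "knowledge" subscale raised alpha from 0.56 to 0.66 in exactly this fashion.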

Table 2 presents a summary of the item-level properties of the EBP-KABQ. The mean and median total scores of the EBP-KABQ scale were 117.93 (SD: 15.10) and 118 respectively, with no floor/ceiling effects detected. The mean scores of the four subscales ranged from 11.22 to 64.58. Similarly, no obvious floor/ceiling effects were observed in any of the four subscales, although some individual items, particularly in “knowledge”, presented a ceiling effect.

Table 2 Descriptive statistics of the EBP-KABQ scale, scaling properties and internal consistency (n = 673)

Construct validity

Details of the construct validity testing and a priori hypotheses are provided in Table 3. As expected, the mean item score in “knowledge” was 5.91, significantly higher than in the other domains (p < 0.05). Our hypotheses were also supported in that the correlation coefficients between “outcomes/decisions” and “knowledge”, “behaviour” and “attitude” were 0.54, 0.40 and 0.57 respectively, higher than the correlations observed between the other subscales. Construct validity was further supported by a significant relationship between the frequency of searching reported by clinicians and the “behaviour” score, with correlation coefficients ranging from 0.32 to 0.41 (hypothesis 3). Regression analyses supported our a priori hypothesis that health professionals who had higher levels of education (β = 4.63, P < 0.01), more years in clinical training (β = 2.36, P < 0.01) and advanced clinical training (β = 4.37, P < 0.01) were more likely to use EBP (Table 4). Although younger age was related to EBP practice in the anticipated direction, it did not reach statistical significance (β = −0.32, P = 0.06).

Table 3 Results of construct validity against a series of theoretical constructs
Table 4 Unadjusted and adjusted linear regression coefficients for EBP-KABQ total score

Structural validity

The initial second-order model demonstrated poor fit (χ2 = 1838.24, df = 269, P < 0.001, CFI = 0.73, TLI = 0.70, RMSEA = 0.093). Modification indices suggested overall model fit would be improved by adding correlations between six pairs of error terms (items 4 & 5 within “knowledge”, 12 & 13 in “application”, and 21 & 24, 23 & 31, 27 & 30, and 31 & 32 in “attitude”). After this modification, the statistical fit of the model improved to χ2 = 1205.20, df = 312, P < 0.001, CFI = 0.86, TLI = 0.84, RMSEA = 0.065. Although the overall fit improved, the fit indices, especially CFI and TLI, were still inadequate. We observed that the factor loading (β = 0.05) of item 3 (“Clinical trials and observational methods are equally valid in establishing treatment effectiveness”) was significantly lower than those of the other five items on the knowledge dimension. After removing this item from the scale, the goodness-of-fit statistics improved to χ2 = 1056.65, df = 287, P < 0.001, CFI = 0.89, TLI = 0.86, RMSEA = 0.06 (Figure 1), very close to our a priori thresholds (CFI/TLI ≥ 0.90, RMSEA < 0.08).
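As a sanity check, the reported RMSEA values for all three models can be reproduced from the χ2 statistics and degrees of freedom above, with n = 673, using the standard formula.

```python
import math

def rmsea(chi2: float, df: int, n: int = 673) -> float:
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# (chi2, df) pairs reported for the three successive models
models = {
    "initial model":     (1838.24, 269),
    "correlated errors": (1205.20, 312),
    "item 3 removed":    (1056.65, 287),
}
values = {name: round(rmsea(chi2, df), 3) for name, (chi2, df) in models.items()}
# values → {'initial model': 0.093, 'correlated errors': 0.065, 'item 3 removed': 0.063}
```

The recomputed values (0.093, 0.065, 0.063) match the reported 0.093, 0.065 and 0.06 (the last rounded to two decimals).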

Figure 1

Standardized parameter estimates for the refined EBP-KABQ factor structure model. Rectangles represent the scale items and ellipses represent the proposed factor constructs. Values on the single-headed arrows leading from the factors are standardized factor loadings. Values on the curved double-headed arrows between rectangles are correlations between error terms. Values on the curved double-headed arrows between ellipses are correlations between latent variables.


Discussion

This study provided support for the use of the modified EBP-KABQ questionnaire to understand EBP knowledge, attitudes, behaviour and outcomes/decisions in a variety of healthcare professionals. We confirmed that the 26 ordinal items in the modified EBP-KABQ exhibit a four-domain construct consistent with the proposed four aspects of EBP. The scale was modified because we needed to change wording to make it more broadly applicable to different disciplines, since the original version targeted medical students. We also made changes based on our experiences in pilot testing the measure, since an expert committee and pilot users found some items redundant or difficult to understand. Our work builds on that of the developers, who targeted medical trainees, by providing a more broadly applicable and validated version. The newly proposed subscale construct of “outcomes/decisions” contains the items previously termed “future use” in the original scale. “Outcomes/decisions” more accurately reflects the item content and the targeting of the EBP-KABQ: whereas the original instrument focused on trainees, who might be responding about future use, experienced clinicians report how they use EBP in current clinical decision-making and whether they attribute better outcomes to their evidence-based decisions. This domain is an important aspect of self-reported EBP since it focuses on the impact on practice and outcomes. We found the “outcomes/decisions” domain was moderately correlated with the other three domains, suggesting it plays a role in the perception of EBP. The shorter measure has improved measurement characteristics, retains the conceptual domains, and may save administration time.

We found the EBP-KABQ scale demonstrates promising psychometric properties when measuring EBP in practicing health professionals: our analysis supported the hypotheses posed for construct validity, and we found appropriate scaling properties. The overall Cronbach’s alpha (0.85) was superior to that of the original KAB scale (0.75), which may be attributed to the deletion of problematic items.

The correlation between the knowledge and attitude/application domains was relatively weak, suggesting that these are relatively distinct domains. One explanation for this low correlation may be that the increased focus on EBP in entry-level and post-professional education has had more impact on knowledge than on attitudes toward and application of EBP [18]. However, measurement error may also have contributed. We observed lower internal consistency of the “knowledge” domain compared to the other subscales and to the original KAB [7]. The low internal consistency suggests that the six items within the “knowledge” construct were not adequately correlated. As item 3 (“Clinical trials and observational methods are equally valid in establishing treatment effectiveness”) demonstrated a low factor loading on the “knowledge” domain, we questioned the content validity of this item. One explanation for this misfit could be that clinicians confused the words “observational study” with “clinical observation”. However, we suspect that controversy over the “level of evidence” or “quality” of observational studies [19],[20] may have contributed to misfit on this item. In fact, more recent trends in evidence rating have acknowledged that large observational studies can offer high-quality evidence [21]. Respondents may value large observational studies more than small trials and thus not endorse this item despite strong knowledge of EBP. Since this item does not appear to reflect the “knowledge” domain, and did not fit in the CFA, we proposed its removal. We suggest caution when using the “knowledge” subscale on its own to evaluate EBP knowledge, as further investigation is warranted to improve this subscale.

We found items in EBP knowledge skewed to the high extreme, whereas the other subscales did not demonstrate this. As evidence-based practice has become accepted around the world, it is now commonly integrated into the clinical training of many professionals [22]. Hence, knowledge about what evidence-based practice is has become prevalent over time [9]. Our finding may be explained by the fact that traditional evidence-based training focuses on providing knowledge to help practitioners enhance their techniques and skills when searching for and appraising evidence [23]-[27], but less consistently focuses on implementation behaviours for integrating EBP into daily clinical activities or on resolving attitudinal barriers towards EBP [28]-[30]. For instance, clinicians may enhance their knowledge of methods to find and appraise evidence, including the importance of systematic reviews in the evidence-based practice paradigm, but not be willing or able to incorporate this into their day-to-day clinical decision-making. Continuing medical education events often focus on providing content knowledge rather than active approaches, although the latter are more effective in promoting behaviour change [31]. This may contribute to the findings observed in this study.

We found several factors were associated with better uptake of EBP. Clinicians with a higher level of education, more years of training, completion of advanced clinical training, and those practicing in rural areas reported a greater willingness to implement EBP in their daily practice. Our findings were consistent with other studies [32]-[34] that also found health professionals with a higher level of education were more willing to adopt evidence-based practice. On the other hand, our finding that age did not influence EBP is in contrast to literature [32],[34] showing that recent graduates are more likely to accept EBP than older clinicians. Our finding was narrowly non-significant (P = 0.06), suggesting a small effect of age that did not reach significance. However, age may become less important over time as EBP spreads through post-professional training.

Our finding that clinicians who practice in rural areas are more amenable to EBP was unexpected. Several factors may explain it. First, clinicians in rural areas may be more likely to seek evidence because they have fewer colleagues in their work environment with whom to discuss clinical issues when questions emerge in day-to-day practice. As a consequence, they may be more accustomed to going to the Internet to look for online evidence as a medical resource. Secondly, geography is no longer a barrier to acquiring evidence-based education. McColl [35] reported that only 16% of general practitioners in England had received formal education in literature search techniques. Clinicians in rural areas may therefore have gained skills in EBP during their professional training, or through other avenues, and be motivated to use these skills to answer their clinical questions.

Our study has some limitations. While it was a strength that we had different professions and a geographically diverse sample, we were unable to explore how contextual factors contributed to our findings. Local differences in EBP training, culture and language among participants were not captured in our data collection, so we could not test for the influence of many potential covariates and limited covariate testing to factors suggested as important in the literature. However, a broader sample improves the generalizability of our findings. Since the survey was only offered in English, our findings may not represent contexts where English is not a common language. A further consideration is that the data were self-reported. We have no external criterion to examine whether self-reported evidence-based practice behaviours are consistent with actual practice. The impact of EBP decisions on patient outcomes may be overestimated if clinicians overestimate their ability to improve outcomes [36]. Studies of EBP that measure patient outcomes by patient report or objective measures are preferable indicators of the impact of EBP, but such outcomes can be challenging to measure [37],[38]. We had to make decisions about the deletion of items based on expert review and statistical performance. Studies of the reasons for poor item performance that include qualitative techniques such as cognitive interviewing may have identified ways to reform problematic items or captured new concepts; however, since our goal was to stay true to the original KABQ where possible, our approach was reasonable. Finally, since our sample was derived from clinicians interested in pain, it may not reflect all clinical contexts. That said, pain is the most common patient complaint and one relevant across different professions, so it represented an ideal context to test the EBP-KABQ across professions and settings.


Conclusions

This study provides evidence, in a large sample of experienced clinicians from a range of professions interested in pain management, that the EBP-KABQ can be used to assess four domains of EBP: knowledge, attitudes, behaviour and outcomes/decisions.

Additional files



Abbreviations

CFA: Confirmatory factor analysis
CFI: Comparative fit index
EBP: Evidence-based practice
KAB: Knowledge, attitudes, behaviour
RMSEA: Root mean square error of approximation
TLI: Tucker Lewis index


References

1. Haynes RB, Devereaux PJ, Guyatt GH: Clinical expertise in the era of evidence-based medicine and patient choice. Evid Based Med. 2002, 7(2): 36-38. doi:10.1136/ebm.7.2.36.
2. Titler MG: The evidence for evidence-based practice implementation. In: Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Volume 1. 2008, Agency for Healthcare Research and Quality, Rockville, MD, 113-161.
3. Greiner A, Knebel E, Institute of Medicine, Committee on the Health Professions Education Summit: Health Professions Education: A Bridge to Quality. 2003, National Academy Press, Washington, DC.
4. Association of American Medical Colleges (Ed): Contemporary Issues in Medicine, II: Medical Informatics and Population Health. 1998, AAMC, Washington, DC.
5. Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, Whelan C, Green M: Instruments for evaluating education in evidence-based practice. JAMA. 2006, 296(9): 1116-1127. doi:10.1001/jama.296.9.1116.
6. Aarons GA, Glisson C, Hoagwood K, Kelleher K, Landsverk J, Cafri G: Psychometric properties and US national norms of the evidence-based practice attitude scale (EBPAS). Psychol Assess. 2010, 22(2): 356. doi:10.1037/a0019188.
7. Johnston JM, Leung GM, Fielding R, Tin KY, Ho LM: The development and validation of a knowledge, attitude and behaviour questionnaire to assess undergraduate evidence-based practice teaching and learning. Med Educ. 2003, 37(11): 992-1000. doi:10.1046/j.1365-2923.2003.01678.x.
8. MacDermid JC, Law M, Buckley N, Haynes RB: “Push” versus “Pull” for mobilizing pain evidence into practice across different health professions: a protocol for a randomized trial. Implement Sci. 2012, 7: 115. doi:10.1186/1748-5908-7-115.
9. Hughes RG, Titler MG: The Evidence for Evidence-Based Practice Implementation. 2008.
10. Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, Bouter LM, de Vet HC: Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007, 60(1): 34-42. doi:10.1016/j.jclinepi.2006.03.012.
11. Streiner DL, Norman GR: Health Measurement Scales: A Practical Guide to Their Development and Use. 2008, Oxford University Press, USA.
12. Kline RB: Principles and Practice of Structural Equation Modeling. 2010, Guilford Press.
13. Norman GR, Streiner DL: Biostatistics: The Bare Essentials. 2007, PMPH, USA.
14. Hu L, Bentler PM: Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods. 1998, 3(4): 424. doi:10.1037/1082-989X.3.4.424.
15. Hu L, Bentler PM: Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999, 6(1): 1-55. doi:10.1080/10705519909540118.
16. Jöreskog KG: Testing structural equation models. Sage Focus Editions. 1993, 154: 294.
17. Brown TA: Confirmatory Factor Analysis for Applied Research. 2006, Guilford Press.
18. Coomarasamy A, Khan KS: What is the evidence that postgraduate teaching in evidence based medicine changes anything? A systematic review. BMJ. 2004, 329(7473): 1017. doi:10.1136/bmj.329.7473.1017.
19. Concato J, Horwitz RI: Beyond randomised versus observational studies. Lancet. 2004, 363(9422): 1660-1661. doi:10.1016/S0140-6736(04)16285-5.
20. Concato J: Observational versus experimental studies: what’s the evidence for a hierarchy? NeuroRx. 2004, 1(3): 341-347. doi:10.1602/neurorx.1.3.341.
21. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ, GRADE Working Group: GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008, 336(7650): 924-926. doi:10.1136/bmj.39489.470347.AD.
22. Walshe K, Rundall TG: Evidence-based management: from theory to practice in health care. Milbank Q. 2001, 79(3): 429-457. doi:10.1111/1468-0009.00214.
23. Dirschl DR, Tornetta P, Bhandari M: Designing, conducting, and evaluating journal clubs in orthopaedic surgery. Clin Orthop Relat Res. 2003, 413: 146-157. doi:10.1097/01.blo.0000081203.51121.25.
24. Fliegel JE, Frohna JG, Mangrulkar RS: A computer-based OSCE station to measure competence in evidence-based medicine skills in medical students. Acad Med. 2002, 77(11): 1157-1158. doi:10.1097/00001888-200211000-00022.
25. Maher CG, Sherrington C, Elkins M, Herbert RD, Moseley AM: Challenges for evidence-based physical therapy: accessing and interpreting high-quality evidence on therapy. Phys Ther. 2004, 84(7): 644-654.
26. Ely JW, Osheroff JA, Ebell MH, Chambliss ML, Vinson DC, Stevermer JJ, Pifer EA: Obstacles to answering doctors’ questions about patient care with evidence: qualitative study. BMJ. 2002, 324(7339): 710. doi:10.1136/bmj.324.7339.710.
27. McCluskey A: Occupational therapists report a low level of knowledge, skill and involvement in evidence-based practice. Aust Occup Ther J. 2003, 50(1): 3-12. doi:10.1046/j.1440-1630.2003.00303.x.
28. Taylor RS, Reeves BC, Ewings PE, Taylor RJ: Critical appraisal skills training for health care professionals: a randomized controlled trial [ISRCTN46272378]. BMC Med Educ. 2004, 4(1): 30. doi:10.1186/1472-6920-4-30.
29. Coomarasamy A, Taylor R, Khan K: A systematic review of postgraduate teaching in evidence-based medicine and critical appraisal. Med Teach. 2003, 25(1): 77-81. doi:10.1080/0142159021000061468.
30. McCluskey A, Lovarini M: Providing education on evidence-based practice improved knowledge but did not change behaviour: a before and after study. BMC Med Educ. 2005, 5(1): 40. doi:10.1186/1472-6920-5-40.
31. Davis DA, Thomson MA, Oxman AD, Haynes RB: Evidence for the effectiveness of CME. A review of 50 randomized controlled trials. JAMA. 1992, 268(9): 1111-1117. doi:10.1001/jama.1992.03490090053014.
32. Parrish DE, Rubin A: Social workers’ orientations toward the evidence-based practice process: a comparison with psychologists and licensed marriage and family therapists. Soc Work. 2012, 57(3): 201-210. doi:10.1093/sw/sws016.
33. Salbach NM, Jaglal SB, Williams JI: Reliability and validity of the evidence-based practice confidence (EPIC) scale. J Contin Educ Health Prof. 2013, 33(1): 33-40. doi:10.1002/chp.21164.
34. Simpson PM, Bendall JC, Patterson J, Middleton PM: Beliefs and expectations of paramedics towards evidence-based practice and research. Int J Evid Based Healthc. 2012, 10(3): 197-203. doi:10.1111/j.1744-1609.2012.00273.x.
35. McColl A, Smith H, White P, Field J: General practitioners’ perceptions of the route to evidence based medicine: a questionnaire survey. BMJ. 1998, 316(7128): 361-365. doi:10.1136/bmj.316.7128.361.
36. Covell DG, Uman GC, Manning PR: Information needs in office practice: are they being met? Ann Intern Med. 1985, 103(4): 596-599. doi:10.7326/0003-4819-103-4-596.
37. Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003, 362(9391): 1225-1230. doi:10.1016/S0140-6736(03)14546-1.
38. Rosenberg W, Donald A: Evidence based medicine: an approach to clinical problem-solving. BMJ. 1995, 310(6987): 1122. doi:10.1136/bmj.310.6987.1122.



Acknowledgements

The authors thank Margaret Lomotan for study coordination.

Author information



Corresponding author

Correspondence to Qiyun Shi.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JCM, ML and RBH conceived the study. QS and JCM designed the study. QS created the analytic model with contributions from JCM and BC. QS undertook the statistical analysis. QS contributed to the writing of the first draft of the manuscript. All of the authors contributed to and have approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Shi, Q., Chesworth, B.M., Law, M. et al. A modified evidence-based practice- knowledge, attitudes, behaviour and decisions/outcomes questionnaire is valid across multiple professions involved in pain management. BMC Med Educ 14, 263 (2014).

