Sicily statement on classification and development of evidence-based practice learning assessment tools
BMC Medical Education volume 11, Article number: 78 (2011)
Abstract
Background
Teaching the steps of evidence-based practice (EBP) has become standard curriculum for health professions at both student and professional levels. Determining the best methods for evaluating EBP learning is hampered by a dearth of valid and practical assessment tools and by the absence of guidelines for classifying the purpose of those that exist. This statement, conceived and developed by delegates of the Fifth International Conference of Evidence-Based Health Care Teachers and Developers, aims to provide guidance for purposeful classification and development of tools to assess EBP learning.
Discussion
This paper identifies key principles for designing EBP learning assessment tools, recommends a common taxonomy for new and existing tools, and presents the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework for classifying such tools. Recommendations are provided for developers of EBP learning assessments and priorities are suggested for the types of assessments that are needed. Examples place existing EBP assessments into the CREATE framework to demonstrate how a common taxonomy might facilitate purposeful development and use of EBP learning assessment tools.
Summary
The widespread adoption of EBP into professional education requires valid and reliable measures of learning, yet few tools with established psychometric properties exist. This international consensus statement strives to provide direction for developers of new EBP learning assessment tools and a framework for classifying the purposes of such tools.
Background
"No single assessment method can provide all the data required for judgment of anything so complex as the delivery of professional services by a successful physician." [1]
Evidence-based practice (EBP), the integration and implementation of best available evidence with clinical expertise and patients' values and circumstances [2], is a foundation for healthcare education across disciplines and international borders and has become an essential requirement for certification and re-certification in many health professions. Assessing EBP learning is hampered, however, by a relative dearth of validated and practical assessment tools [3]. Although the most recent systematic review [4] identified 104 unique EBP assessment tools, the majority of these tools have not been validated and address only limited constructs of EBP [4–6].
The aim of this consensus statement is to provide guidance for purposeful classification and development of EBP assessment tools. It highlights principles to be considered during tool development, provides a framework for classifying EBP assessment tools, and identifies the types of tools that are needed to promote more consistent evaluation of EBP teaching outcomes. This consensus statement also proposes principles and priorities for future efforts in tool development. It does not evaluate the effectiveness of different educational approaches in promoting evidence-based behaviour change and quality improvement.
Methods
This statement was conceived at the final plenary session of the 5th International Conference of Evidence-Based Health Care (EBHC) Teachers and Developers, held in Sicily in October 2009 (http://www.ebhc.org). The initial structure of the statement and manuscript was developed following two conference round table discussions (4 hours) on how to classify and develop EBP learning assessment tools. Each author selected a section of the manuscript to research and develop, and their submissions were organized to focus the paper. An iterative process of shaping occurred through 4 conference calls and 7 draft reviews of the manuscript. The authors solicited input on the statement from non-author EBHC conference delegates in 2 phases - first from the conference steering committee (n = 12) and second from conference delegates (n = 66). Seventeen non-author conference attendees (22%) responded to the survey. Responses were incorporated into the final statement, which was approved by author consensus. Authors and survey respondents constitute 25 individuals from 12 countries and 7 healthcare professions.
Discussion
Part I: Principles to consider when developing assessment tools
Categories of educational assessment
Educators can assess different dimensions of EBP learning, including reaction to initial exposure to EBP, knowledge attainment, or the ability to use EBP skills to improve patient care. The challenge of assessing healthcare professionals' learning is not unique to EBP. Numerous educators have contributed to defining the complex nature of assessing professional competencies like EBP. Table 1 illustrates our proposal for categorizing assessment of EBP educational outcomes.
The proposed model is based upon work by Freeth et al. [7], who used their systematic review of interprofessional learning in healthcare to develop categories specific to assessment of healthcare education, based upon those originally proposed in Kirkpatrick's Hierarchy of Levels of Evaluation [8]. We believe that these categories are well suited for assessing EBP learning because they allow educators to classify the impact of an EBP educational intervention from the most proximal phenomenon (the learners' experiences) to the most distal (patient care outcomes).
Linking assessment to learning aims and learner audiences
Tools for assessing the effectiveness of teaching EBP need to reflect the aims of the curriculum. Learning aims will ideally be matched to the needs and characteristics of the learner audience. Straus et al. [9] classified three unique EBP learner aims: to 'replicate' the EBP of others, to 'use' evidence summaries for EBP, and to 'do' the more time-intensive 5 steps of EBP defined by Dawes et al. in the original Sicily Statement [2]. Students, for example, may need a comprehensive grounding in the 5 steps of EBP, whereas health professionals who manage services may require skills in using evidence summaries. Educators need to match assessment tools to the unique learning aims of different learner groups. For example, a skills assessment of EBP 'users' may need to focus on ability to find and appropriately apply rigorously pre-appraised evidence. Conversely, a skills assessment of EBP 'doers' may need to focus on ability to find, critically appraise, and apply primary research [9].
The context and motivations for teaching, learning, and using EBP also need to be considered during assessment. Ability to master EBP skills can be influenced by contextual elements such as personal beliefs, organizational barriers, variations in learning styles, and prior exposure to EBP. Motivations for using EBP also influence assessment. Practitioners learning to apply EBP in clinical practice may need to be assessed differently than undergraduate students, administrators, payers, and policy makers [10, 11]. Likewise, assessment of EBP learning in the context of interprofessional education models may require special consideration [12]. Thus, developers of EBP assessment tools are encouraged to explicitly identify the type of learner and learning aims that a tool is designed to assess.
Objective of the assessment
Broadly speaking, the objective of an assessment can be one of two types - formative or summative. Formative assessment provides learners and teachers with information about competency development concurrent with the learning process, and can be used to influence the educational process and facilitate competence in real time. Summative assessments are used to establish competence or qualification for advancement. The potential high-stakes nature of summative assessments demands a greater degree of psychometric rigor compared to formative assessments [13]. EBP assessment developers should clearly articulate the objectives of EBP assessment tools to signpost their utility for different learners and learning aims.
Part II: Framework for describing EBP learning assessment tools
We propose the Classification Rubric for EBP Assessment Tools in Education (CREATE; Figure 1) for classifying EBP learner assessment tools. Using this framework, the nature of an assessment can be characterized with regard to the 5-step EBP model, type(s) and level of educational assessment specific to EBP, audience characteristics, and learning and assessment aims. In contrast to Table 1, the assessment categories are listed in reverse order to illustrate how one category may build on the next from most simple (Reaction to the Educational Experience) to most complex (Benefit to Patients). The type of assessment generally used for each category is also described. Sample questions for each element of the CREATE framework are provided in Figure 2. To facilitate consistent use of the CREATE tool, descriptions, examples when available, and discussion of each level of assessment follow.
Assessment Category
Reaction to the Educational Experience
In the CREATE framework, Reaction to the Educational Experience refers to learners' perspectives about the learning experience, including structural aspects (e.g. organization, presentation, content, teaching methods, materials, quality of instruction) and less tangible aspects such as support for learning [7]. These aspects represent potential covariates for the efficacy of an educational intervention [14], providing important information for educators and course designers although they are not direct measures of learning. Assessment of learners' reaction to an educational intervention is common in practice. For example, the Student Instructional Report II is a generic university-level teaching assessment developed by the Educational Testing Service that assesses learner reaction to educational experiences [15]. We were not able to identify formally validated tools for this level of assessment specific to EBP learning. Learner reaction to an educational experience can be assessed through surveys that use questions appropriate to teaching different EBP steps, such as:
- Did the instructor's teaching style enhance your enthusiasm for asking questions during ward rounds? (Ask)
- Was the lecture on literature searching at an appropriate level for your learning needs? (Search)
- Were the critical appraisal checklists understandable? (Appraise)
- Were the patient case presentations informative? (Integrate)
Attitudes
In the CREATE framework, Attitudes refers to the values ascribed by the learner to the importance and usefulness of EBP to inform clinical decision-making. Attitudes are strong predictors of future behaviour [16] and there is emerging evidence that learners' beliefs in the positive benefits of practising EBP are related to the degree to which they implement EBP in their work setting [17]. An example of an attitudes assessment tool is the Evidence-Based Practice Attitude Scale (EBPAS-50), which consists of 50 questions that assess attitudes toward EBP and has been validated among mental healthcare and social service providers [18, 19]. Its items are answered on a Likert scale ranging from '0' (not at all) to '4' (to a very great extent). Respondents rate statements such as:
- I like to use new types of therapy/interventions to help my clients.
- I know better than academic researchers how to care for my clients.
- I am willing to use new and different types of therapy/interventions developed by researchers.
- Research based treatments/interventions are not clinically relevant.
When assessing attitudes about EBP, it is important to remember that attitudes are hypothesized to be modified by the assessment process [20]. Any tool designed to assess EBP attitudes must consider the manner in which the question is framed. The easiest method of assessing attitudes about EBP may be a written questionnaire. However, questionnaires may cause individuals to over-analyse why they hold such attitudes toward the object, thereby distorting their actual attitudes [21]. A more rigorous approach would be to adopt a qualitative methodology during tool development to identify themes on which to base questions, or to triangulate survey data with actual use of an activity in practice [22, 23].
Self-Efficacy
Within the CREATE framework, Self-Efficacy refers to people's judgments regarding their ability to perform a certain activity [24]. For example, an individual's confidence in their ability to search for evidence may be associated with their likelihood to engage in searching [25]. The Evidence-Based Beliefs Scale (EBBS) [26] consists of 16 items that assess confidence in individuals' ability to use EBP (e.g. "I am sure that I can implement EBP") and their beliefs about EBP (e.g. "I believe that EBP results in the best clinical care for patients"). The EBBS demonstrated strong psychometric properties among a large cohort of nurses [17]. Likewise, face and content validity have been reported for the Evidence-based Practice Confidence (EPIC) scale among a variety of healthcare professionals [27].
Knowledge
Within the CREATE framework, Knowledge refers to learners' retention of facts and concepts about EBP. Hence, assessments of EBP knowledge might assess a learner's ability to define EBP concepts, list the basic principles of EBP, or describe levels of evidence. Knowledge assessment questions might ask learners to identify the most appropriate study design to answer a clinical question or to define Number Needed to Treat. Paper and pencil tests lend themselves well to this level of cognitive assessment. Examples of current EBP knowledge assessment tools are described below.
Skills
Within the CREATE framework, Skills refer to the application of knowledge, ideally in a practical setting [7]. Assessment of skill would require that learners 'do' a task associated with EBP, such as conduct a search, use a critical appraisal tool to summarize study quality, or calculate Number Needed to Treat. Tools can assess different dimensions of skills, such as the correct application, thoroughness of the process, or the efficiency with which a learner can complete some or all of the processes.
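As an illustration of the arithmetic such a skills item might ask of learners, the Number Needed to Treat follows directly from the absolute risk reduction; the event rates below are hypothetical and serve only to demonstrate the calculation.

\[
\mathrm{ARR} = \mathrm{CER} - \mathrm{EER} = 0.20 - 0.15 = 0.05, \qquad \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.05} = 20
\]

Here CER and EER denote the event rates in the control and experimental groups, so in this hypothetical case 20 patients would need to be treated to prevent one additional adverse event.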
To our knowledge there are two validated instruments that assess a combination of EBP knowledge and skills - the Berlin Questionnaire [28] and the Fresno Test [29]. Both tests ask learners to recall knowledge and describe how they would apply EBP skills in the context of clinical scenarios.
Behaviour as Part of Patient Care
Within the CREATE framework, Behaviour refers to what learners actually do in practice. It includes all the processes that a clinician would use in the application of EBP, such as assessing patient circumstances, values, preferences, and goals, along with identifying the clinician's own competence relative to the patient's needs in order to determine the focus of an answerable question. EBP-congruent behaviour is essential to translation of EBP-congruent attitudes, knowledge, and skills into benefits for patients and, for assessment, is measured by some form of activity monitoring. The monitored behaviours need to reflect the learning aims and learner audience. Thus, it may be more appropriate for busy front-line clinicians to be monitored on their use of evidence summaries rather than their frequency of searching and appraising primary research [30].
Assessment of EBP behaviour can help to 'lift the lid' on what learners take for granted, and expose the differences between their espoused theories (how they would consciously describe what they do) and their theories in use (what they actually do) [31]. When used for formative purposes, behaviour assessments may help learners identify their learning needs, and help teachers evaluate how well their curriculum equips learners to use EBP in patient care.
Assessing behaviour is not straightforward. Importantly, there may be a Hawthorne effect - the very act of measuring may affect behaviours. Some researchers have assessed behaviour by electronically capturing the searching behaviour of learners [32]. Such approaches, although potentially useful in themselves, cannot fully assess the EBP-congruence of a learner's behaviour. For example, they cannot identify clinical questions that were not pursued, or even recognised, and they may not capture what was done with information that was found.
Melnyk and Fineout-Overholt developed and validated the EBP Implementation Scale, which assesses learners' self-report of attempts to implement EBP in the workplace [26]. Although Shaneyfelt et al. [4] note the potential biases in retrospective self-reporting of behaviours, rigorous critical reflection may address this. Learning portfolios provide a potential strategy for reflection about and evaluation of EBP implementation [33, 34], but can be time consuming to complete and assess, and require additional skills in reflective practice. Portfolio use in summative assessment, especially 'high-stakes' assessments, is currently open to question and research is needed to develop these and/or alternative tools [4]. The broader science and theory of behavior change may provide useful alternatives for measuring changes in EBP behaviors [35].
The lines between assessment of knowledge, skills, and behavior can be difficult to discern. Table 2 delineates these three elements of the CREATE framework to facilitate consistent classification of current and future assessment tools.
Benefit to Patients
Within the CREATE framework, Benefit to Patients refers to the impact of EBP educational interventions on the health of patients and communities. The ultimate goal of EBP is to improve care outcomes for patients within the context of complex healthcare systems. Hence, there is a need to assess the impact of EBP education (generally for healthcare providers) on the benefit to patients [9, 36]. Measuring benefit to patients as a result of EBP learning is complex because of the many other variables that influence patient outcomes. In many cases assessment would occur at the institutional level. For example, if all of the care providers on a stroke rehabilitation unit learned how to integrate a clinical practice guideline into their care, would patients on that unit experience better outcomes? This question is intertwined with the much broader issue of how healthcare is delivered. Nevertheless, we propose that it is an important concept to consider within the narrower construct of the outcomes of EBP learning. When a healthcare professional learns to use EBP, we expect that he or she will identify more efficacious care behaviours and ultimately achieve better patient outcomes.
To measure the benefit of EBP for patients, tool developers must identify endpoints in patient care that can be improved through application of EBP. Appropriate endpoints may be different depending on the perspective taken. An individual patient's perspective may be different from the healthcare provider's perspective, and that may be different from a group or an institution's focus on appropriate care endpoints [37]. Measures of individual patient outcomes may include change in a patient's disease state, impairments, functional or social status; their satisfaction with service delivery; or the cost incurred to receive services. Benefit to patients from the healthcare providers' perspective may include change in diagnostic status, functional status, or quality of life. Institutional outcomes may focus on comparisons of service costs with and without EBP [38], and patient outcomes within diagnostic groupings following implementation of EBP recommendations [39]. Potential endpoints are not specific to steps in the EBP process but rather to patient outcomes, so the CREATE framework does not delineate between the 5 steps for this level of assessment.
Direct measures of patient health outcomes can be derived from clinical documentation to evaluate the impact of EBP approaches. For example, use of EBP approaches is associated with improved outcomes for patients with neck pain [40]; length of stay, overall costs of care, and readmission rates for children with asthma [41]; and the likelihood to prescribe evidence-based interventions in a general hospital [42]. Direct surveys could be used to assess the impact of EBP-based services on patient perceptions about their functional outcomes, health status or satisfaction with services [43]. Care must be taken to avoid wrongful identification of 'best outcomes' based on settings that are easier to study (such as controlled research settings), rather than outcomes with greater ecological validity (such as whole communities) [44]. Additionally, patient care outcomes may need to be measured in conjunction with measures of clinician EBP behaviours to ensure that outcomes can be linked to EBP processes as opposed to other variables that impact patient outcomes.
Instruments that Combine Categories
Several tools address more than one category of EBP assessment in a single instrument. The Evidence Based Practice Questionnaire (EBPQ) assesses attitudes, knowledge, and implementation, and has been validated with nurses [45] and social workers [46]. Additionally, the Knowledge, Attitudes, Access and Confidence Evaluation (KACE) has demonstrated adequate discriminative validity, responsiveness to training, and test-retest reliability with dental students and faculty [47].
Putting the CREATE Framework to Use
Many health professions are grappling with the need to assess the effectiveness of education and to validate the usefulness of EBP approaches. As more tools are created, a mechanism for classifying their singular or multiple purposes becomes increasingly important. The elements of EBP learning assessed by outcome measures referenced in this manuscript are illustrated in the CREATE framework (Figure 3) to demonstrate how the classification process might help developers identify gaps and help teachers select the best available tools. Tests placed within the CREATE model will need to be weighed in the context of how they are to be used, including who the learners are, the intent of the evaluation process, and the environmental contexts in which learning and assessment take place. The CREATE framework is not a model of EBP, but rather a tool to classify the intent of EBP educational assessments.
Part III: Recommendations for EBP assessment tool development
There is a substantial need for development of EBP assessment tools across the categories outlined in this paper: reaction to the educational experience, attitudes, self-efficacy, knowledge, skills, behaviour, and benefit to patients. As noted earlier, assessment tools need to be valid and practical for use by educators and researchers. Validation across learner characteristics (e.g. students vs. clinicians, nurses vs. physicians, users vs. doers) is most useful for broad adoption within EBP education, but as a minimum, tools should identify the type(s) of learner(s) for which they are validated. Guidance for appropriate study design to establish outcome measure validity is beyond the scope of this statement; however, many quality references are available [48–50].
Based upon author recommendations and feedback from Sicily 2009 delegates, we propose 4 general recommendations for developers of new EBP learning assessment tools:
1. Use the CREATE framework to classify new tools with regard to EBP steps assessed, assessment category (or categories) addressed, and the audience characteristics and assessment aim for which the tool is intended and/or validated.
2. Clearly state the foundational principles of learning and assessment upon which a new assessment tool is developed.
3. Clearly state how the design of a new tool is linked to the learning aims it is intended to measure.
4. Develop, validate, and use a standardized method for translation of tools into new languages.
Beyond these overarching recommendations, there is need for development of EBP learning assessment tools in each assessment category in the CREATE model:
Reaction to the Educational Experience:
a) A common framework and standardized questions are needed to assess learners' reactions to EBP educational interventions. A standardized assessment would allow reaction to be compared across interventions.
Attitudes and Self-Efficacy:
a) There is a need to build upon existing tools (e.g., EBPAS [19], EBBS [26], EPIC [27], EBPQ [45], KACE [47]) to facilitate measurement of self-reported attitudes, beliefs, and self-efficacy across different learner populations and educational settings.
b) There is a need for reliable qualitative methods to assess EBP attitudes and self-efficacy that can be compared across studies.
Knowledge and Skills:
a) Developers are encouraged to continue psychometric testing of the Fresno Test [29] and Berlin Questionnaire [28] to establish sensitivity to change over time and minimum 'competency' performance for different learner populations and educational settings.
b) The Berlin and Fresno assessments emphasize searching and critical appraisal skills for primary research evidence. Assessments for learners who require different skills are needed (e.g. practitioners who rely primarily on evidence summaries need to be assessed on their knowledge of how to appraise, and their skill in applying, evidence summaries and clinical guidelines).
c) Further investigation is warranted to ascertain learners' ability to obtain and integrate patient values and perspectives in the context of EBP.
d) Assessments that address the performance of EBP skills across clinical environments are needed, including assessment through observation.
Behaviour:
a) Generic self-monitoring tools are needed that measure clinician use of EBP processes in clinical decision-making including, but not limited to: frequency of performing each EBP step, resources used, patient involvement in evidence-based decision-making, frequency of change in clinical management due to newly found evidence, and rate of positive vs. negative outcomes associated with EBP use.
b) Valid, practicable methods are needed for monitoring learners' EBP behaviours that can be used for both formative and summative purposes, particularly 'high stakes' assessments.
Benefit to patients:
a) Tools are needed that measure patient outcomes concurrently with the application of evidence-based approaches to care, so that the impact of EBP behaviours on those outcomes can be determined.
b) Appropriate qualitative methodologies are needed to determine important outcomes from patients' perspectives with regard to EBP that can be used in diverse healthcare settings.
Finally, within the context of using EBP learning assessment tools in research studies, benefit may be gained from:
1. Using a common set of outcome tools and adopting the operational terms presented in this paper to allow comparison across studies.
2. Including a measure of learners' reaction to the intervention, as this may impact effectiveness in other outcome categories.
3. Developing methodologies for assessing the efficacy of interventions designed to target different elements of EBP as defined by the CREATE framework.
4. Assessing the correlation between the assessment categories outlined in the CREATE framework - that is, do lower order objectives such as attitudes and self-efficacy relate to knowledge and skills, do knowledge and skills relate to behaviour, and so on?
Summary
Evidence-based practice education has spread across professions and clinical settings; however, the ability to measure the impact of EBP educational experiences or programs is limited to a few validated tests that do not measure across all levels of learning or steps of EBP. The CREATE framework was developed to provide a classification system for new and existing tools, to help tool developers focus the intent of their tools, and to provide unifying operational definitions to facilitate a common language in EBP learning assessment. We anticipate that use of CREATE to classify EBP learning assessment tools will provide teachers and researchers with an effective method for identifying the best available tools for their needs.
We have outlined priorities for EBP assessment tool development generated through an international consensus process and based on the CREATE framework. We hope that consideration of these recommendations will facilitate needed innovations in EBP assessment tool development.
References
Miller RA: Why the standard view is standard - people, not machines, understand patients' problems. Journal of Medicine and Philosophy. 1990, 15 (6): 581-591.
Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, Porzsolt F, Burls A, Osborne J: Sicily statement on evidence-based practice. BMC Med Educ. 2005, 5 (1): 1-10.1186/1472-6920-5-1.
Flores-Mateo G, Argimon JM: Evidence based practice in postgraduate healthcare education: A systematic review. BMC Health Serv Res. 2007, 7: 119-10.1186/1472-6963-7-119.
Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, Whelan C, Green M: Instruments for evaluating education in evidence-based practice - A systematic review. JAMA. 2006, 296 (9): 1116-1127. 10.1001/jama.296.9.1116.
Ilic D: Teaching Evidence-based Practice: Perspectives from the Undergraduate and Post-graduate Viewpoint. Annals Academy of Medicine Singapore. 2009, 38 (6): 559-563.
Greenhalgh T: How to read a paper: the basics of evidence-based medicine. 2006, Malden, Mass.: BMJ Books/Blackwell Pub, 3
Freeth D, Hammick M, Koppel I, Reeves S, Barr H: A critical review of evaluations of interprofessional education. 2002, London: LTSN HS & P
Kirkpatrick D: Evaluation of Training. In: Training and Development Handbook. Edited by: Craig RL, Bittel LR. 1967, New York: McGraw-Hill.
Straus SE, Green ML, Bell DS, Badgett R, Davis D, Gerrity M, Ortiz E, Shaneyfelt TM, Whelan C, Mangrulkar R: Evaluating the teaching of evidence based medicine: conceptual framework. BMJ. 2004, 329 (7473): 1029-1032. 10.1136/bmj.329.7473.1029.
Drescher U, Warren F, Norton K: Towards evidence-based practice in medical training: making evaluations more meaningful. Med Educ. 2004, 38 (12): 1288-1294. 10.1111/j.1365-2929.2004.02021.x.
Larkin GL, Hamann CJ, Monico EP, Degutis L, Schuur J, Kantor W, Graffeo CS: Knowledge translation at the macro level: legal and ethical considerations. Acad Emerg Med. 2007, 14 (11): 1042-1046.
Bridges DR, Davidson RA, Odegard PS, Maki IV, Tomkowiak J: Interprofessional collaboration: three best practice models of interprofessional education. Med Educ Online. 2011, 16.
Epstein RM: Medical education - Assessment in medical education. N Engl J Med. 2007, 356 (4): 387-396. 10.1056/NEJMra054784.
Schilling K, Wiecha J, Polineni D, Khalil S: An interactive web-based curriculum on evidence-based medicine: Design and effectiveness. 2006, 126-132.
Centra J: The Development of the Student Instructional Report II. 2005, Educational Testing Service: Higher Education Assessment, 56.
Ajzen I: The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes. 1991, 50: 179-211. 10.1016/0749-5978(91)90020-T.
Melnyk BM, Fineout-Overholt E, Feinstein NF, Sadler LS, Green-Hernandez C: Nurse practitioner educators' perceived knowledge, beliefs, and teaching strategies regarding evidence-based practice: Implications for accelerating the integration of evidence-based practice into graduate programs. Journal of Professional Nursing. 2008, 24 (1): 7-13. 10.1016/j.profnurs.2007.06.023.
Aarons GA, Cafri G, Lugo L, Sawitzky A: Expanding the Domains of Attitudes Towards Evidence-Based Practice: The Evidence Based Practice Attitude Scale-50. Adm Policy Ment Health. 2010, 1-10.
Aarons GA: Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale (EBPAS). Ment Health Serv Res. 2004, 6 (2): 61-74.
Ajzen I, Sexton J: Depth of processing, belief congruence, and attitude-behavior correspondence. In Dual-process theories in social psychology. 1999, New York: Guilford
Levine GM, Halberstadt JB, Goldstone RL: Reasoning and the weighting of attributes in attitude judgments. Journal of Personality and Social Psychology. 1996, 70 (2): 230-240.
Schifferdecker KE, Reed VA: Using mixed methods research in medical education: basic guidelines for researchers. Med Educ. 2009, 43 (7): 637-644.
Nastasi BK, Schensul SL: Contributions of qualitative research to the validity of intervention research. Journal of School Psychology. 2005, 43 (3): 177-195. 10.1016/j.jsp.2005.04.003.
Bandura A: Self-Efficacy - Toward a unifying theory of behavioral change. Psychological Review. 1977, 84 (2): 191-215.
Salbach NM, Guilcher SJT, Jaglal SB, Davis DA: Factors influencing information seeking by physical therapists providing stroke management. Phys Ther. 2009, 89 (10): 1039-1050. 10.2522/ptj.20090081.
Melnyk BM, Fineout-Overholt E, Mays MZ: The evidence-based practice beliefs and implementation scales: psychometric properties of two new instruments. Worldviews Evid Based Nurs. 2008, 5 (4): 208-216. 10.1111/j.1741-6787.2008.00126.x.
Salbach NM, Jaglal SB: Creation and validation of the evidence-based practice confidence scale for health care professionals. Journal of Evaluation in Clinical Practice. 2010, 17 (4): 794-800.
Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R: Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ (Clinical research ed). 2002, 325 (7376): 1338-10.1136/bmj.325.7376.1338.
Ramos KD, Schafer S, Tracz SM: Validation of the Fresno test of competence in evidence based medicine. BMJ (Clinical research ed). 2003, 326 (7384): 319-10.1136/bmj.326.7384.319.
DiCenso A, Bayley L, Haynes RB: Accessing preappraised evidence: fine-tuning the 5S model into a 6S model. Ann Intern Med. 2009, 151 (6): JC3-JC2.
Argyris C, Schön DA: Theory in practice: increasing professional effectiveness. 1974, San Francisco: Jossey-Bass Publishers, 1
Cabell CH, Schardt C, Sanders L, Corey GR, Keitz SA: Resident utilization of information technology. J Gen Intern Med. 2001, 16 (12): 838-844. 10.1046/j.1525-1497.2001.10239.x.
Fung MFK, Walker M, Fung KFK, Temple L, Lajoie F, Bellemare G, Bryson SC: An internet-based learning portfolio in resident education: the KOALA (TM) multicentre programme. Med Educ. 2000, 34 (6): 474-479.
Crowley SD, Owens TA, Schardt CM, Wardell SI, Peterson J, Garrison S, Keitz SA: A Web-based compendium of clinical questions and medical evidence to educate internal medicine residents. Academic Medicine. 2003, 78 (3): 270-274. 10.1097/00001888-200303000-00007.
Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A, Psychological Theory Group: Making psychological theory useful for implementing evidence based practice: a consensus approach. Quality & Safety in Health Care. 2005, 14 (1): 26-33.
Nabulsi M, Harris J, Letelier L, Ramos K, Hopayian K, Parkin C, Porzsolt F, Sestini P, Slavin M, Summerskill W: Effectiveness of education in evidence-based healthcare: the current state of outcome assessments and a framework for future evaluations. International Journal of Evidence-Based Healthcare. 2007, 5 (4): 468-476. 10.1111/j.1479-6988.2007.00084.x.
Kaplan S: Outcome Measurement and Management: First Steps for the Practicing Clinician. 2007, Philadelphia: F.A. Davis.
Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, Whitty P, Eccles MP, Matowe L, Shirran L, et al: Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004, 8 (6): 1-72.
Gawlinski A: Evidence-based practice changes: Measuring the outcome. AACN Advanced Critical Care. 2007, 18 (3): 320-322. 10.1097/01.AACN.0000284433.76028.9d.
Fritz JM, Brennan GP: Preliminary examination of a proposed treatment-based classification system for patients receiving physical therapy interventions for neck pain. Phys Ther. 2007, 87 (5): 513-524.
Wazeka A, Valacer DJ, Cooper M, Caplan DW, DiMaio M: Impact of a pediatric asthma clinical pathway on hospital cost and length of stay. Pediatric Pulmonology. 2001, 32 (3): 211-216. 10.1002/ppul.1110.
Straus SE, Ball C, Balcombe N, Sheldon J, McAlister FA: Teaching evidence-based medicine skills can change practice in a community hospital. Journal of General Internal Medicine. 2005, 20 (4): 340-343. 10.1111/j.1525-1497.2005.04045.x.
Kane RL, Maciejewski M, Finch M: The relationship of patient satisfaction with care and clinical outcomes. Medical Care. 1997, 35 (7): 714-730. 10.1097/00005650-199707000-00005.
Duignan P: Implications of an exclusive focus on impact evaluation in 'what works' evidence-based practice systems. Outcomes Theory Knowledge Base Article 223. 2009
Upton D, Upton P: Development of an evidence-based practice questionnaire for nurses. J Adv Nurs. 2006, 53 (4): 454.
Rice K, Hwang J, Abrefa-Gyan T, Powell K: Evidence-Based Practice Questionnaire: A confirmatory factor analysis in a social work sample. Advances in Social Work. 2010, 11 (2): 158-173.
Hendricson WD, Rugh JD, Hatch JP, Stark DL, Deahl T, Wallmann ER: Validation of an instrument to assess evidence-based practice knowledge, attitudes, access and confidence in the dental environment. J Dent Educ. 2011, 75 (2): 131-144.
Terwee CB, Bot SDM, de Boer MR, van der Windt DAWM, Knol DL, Dekker J, Bouter LA, de Vet HCW: Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007, 60 (1): 34-42.
Portney LG, Watkins MP: Foundations of clinical research: applications to practice. 2009, Upper Saddle River, N.J.: Pearson/Prentice Hall, 3
Donabedian A: The quality of care - how can it be assessed?. JAMA. 1988, 260 (12): 1743-1748. 10.1001/jama.260.12.1743.
Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/11/78/prepub
Acknowledgements
The authors would like to acknowledge Nino Cartabellotta, President of the GIMBE Foundation and his team for hosting the 5th International Conference of Evidence-Based Health Care (EBHC) Teachers and Developers and for coordinating dissemination of the manuscript to delegates for feedback; Janet Martin BScPhm, PharmD, Chair of the 5th EBHC Conference International Steering Committee for supporting our work during and after the conference; Nina Rydland Olsen, Physiotherapist, MSc who was a member of the theme group at the conference and contributed to the original outline; and all of the 5th EBHC Conference delegates who gave us thoughtful and important feedback on the manuscript.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
JKT led development of the manuscript including collection and analysis of delegate feedback. SLK developed the CREATE framework. All authors were involved in drafting the manuscript and in critically revising both the text and the CREATE framework for important intellectual content. All authors read and approved the final manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Tilson, J.K., Kaplan, S.L., Harris, J.L. et al. Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ 11, 78 (2011). https://doi.org/10.1186/1472-6920-11-78