Part III: Recommendations for EBP assessment tool development
There is a substantial need for development of EBP assessment tools across the categories outlined in this paper: reaction to the educational experience, attitudes, self-efficacy, knowledge, skills, behaviour, and benefit to patients. As noted earlier, assessment tools need to be valid and practical for use by educators and researchers. Validation across learner characteristics (e.g. students vs. clinicians, nurses vs. physicians, users vs. doers) is most useful for broad adoption within EBP education; at a minimum, tools should identify the type(s) of learner(s) for which they are validated. Guidance on appropriate study design for establishing outcome measure validity is beyond the scope of this statement; however, many quality references are available [48–50].
Based upon author recommendations and feedback from Sicily 2009 delegates, we propose four general recommendations for developers of new EBP learning assessment tools:
Use the CREATE framework to classify new tools with regard to EBP steps assessed, assessment category (or categories) addressed, and the audience characteristics and assessment aim for which the tool is intended and/or validated.
Clearly state the foundational principles of learning and assessment upon which a new assessment tool is developed.
Clearly state how the design of a new tool is linked to the learning aims it is intended to measure.
Develop, validate, and use a standardized method for translation of tools into new languages.
Beyond these overarching recommendations, there is need for development of EBP learning assessment tools in each assessment category in the CREATE model:
Reaction to the Educational Experience:
a) A common framework and standardized questions are needed to assess learners' reactions to EBP educational interventions. A standardized assessment would allow reaction to be compared across interventions.
Attitudes and Self-Efficacy:
a) There is a need to build upon existing tools (e.g., EBPAS, EBBS, EPIC, EBPQ, KACE) to facilitate measurement of self-reported attitudes, beliefs, and self-efficacy across different learner populations and educational settings.
b) There is a need for reliable qualitative methods to assess EBP attitudes and self-efficacy that can be compared across studies.
Knowledge and Skills:
a) Developers are encouraged to continue psychometric testing of the Fresno Test and Berlin Test to establish sensitivity to change over time and minimum 'competency' performance thresholds for different learner populations and educational settings.
b) The Berlin and Fresno assessments emphasize searching and critical appraisal skills for primary research evidence. Assessments are needed for learners who require different skills (e.g. practitioners who rely primarily on evidence summaries need to be assessed on their knowledge of, and skill in, appraising and applying evidence summaries and clinical guidelines).
c) Further investigation is warranted to ascertain learners' ability to obtain and integrate patient values and perspectives in the context of EBP.
d) Assessments that address the performance of EBP skills across clinical environments are needed, including assessment through observation.
Behaviour:
a) Generic self-monitoring tools are needed that measure clinicians' use of EBP processes in clinical decision-making, including but not limited to: frequency of performing each EBP step, resources used, patient involvement in evidence-based decision-making, frequency of change in clinical management due to newly found evidence, and the rate of positive vs. negative outcomes associated with EBP use.
b) Valid, practicable methods are needed for monitoring learners' EBP behaviours that can be used for both formative and summative purposes, particularly 'high stakes' assessments.
Benefit to Patients:
a) Tools are needed that measure patient outcomes concurrently with the application of evidence-based approaches to care, in order to inform understanding of the impact of EBP behaviours on patient outcomes.
b) Appropriate qualitative methodologies are needed to determine important outcomes from patients' perspectives with regard to EBP that can be used in diverse healthcare settings.
Finally, within the context of using EBP learning assessment tools in research studies, benefit may be gained from:
1. Using a common set of outcome tools and adopting the operational terms presented in this paper to allow comparison across studies.
2. Including a measure of learners' reaction to the intervention as this may impact effectiveness in other outcome categories.
3. Developing methodologies for assessing the efficacy of interventions designed to target different elements of EBP as defined by the CREATE framework.
4. Assessing the correlation between the assessment categories outlined in the CREATE framework. That is, do lower-order objectives such as attitudes and self-efficacy relate to knowledge and skill? Do knowledge and skill relate to behaviour, and so on?