  • Research article
  • Open access

Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET)

Abstract

Background

The majority of reporting guidelines assist researchers to report consistent information concerning study design; however, they contain limited information for describing study interventions. Using a three-stage development process, the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET) checklist and accompanying explanatory paper were developed to provide guidance for the reporting of educational interventions for evidence-based practice (EBP). The aim of this study was to complete the final development of the GREET checklist, incorporating psychometric testing to determine inter-rater reliability and criterion validity.

Methods

The final development of the GREET checklist incorporated the results of a prior systematic review and Delphi survey. Thirty-nine items, including all items from the prior systematic review, were proposed for inclusion in the GREET checklist. These 39 items were considered over a series of consensus discussions to determine which items to include in the GREET checklist. The GREET checklist and explanatory paper were then developed and underwent psychometric testing with tertiary health professional students, who evaluated the completeness of the reporting in a published study using the GREET checklist. For each GREET checklist item, the consistency of agreement (%), both between participants and with the consensus criterion reference measure, was calculated. Criterion validity and inter-rater reliability were analysed using intra-class correlation coefficients (ICC).

Results

Three consensus discussions were undertaken, with 14 items identified for inclusion in the GREET checklist. Following further expert review by the Delphi panelists, three items were added and minor wording changes were completed, resulting in 17 checklist items. Psychometric testing for the updated GREET checklist was completed by 31 participants (n = 11 undergraduate, n = 20 postgraduate). The consistency of agreement between the participant ratings for completeness of reporting and the consensus criterion ratings ranged from 19 % for item 4 Steps of EBP to 94 % for item 16 Planned delivery. The overall consistency of agreement for criterion validity (ICC 0.73) and inter-rater reliability (ICC 0.96) was good to almost perfect.

Conclusion

The final GREET checklist comprises 17 items which are recommended for reporting EBP educational interventions. Further validation of the GREET checklist with experts in EBP research and education is recommended.


Background

The underlying basis of educational interventions is to increase learners’ competence and skills in a specific content area and to promote lifelong learning [1]. Evidence-based practice (EBP) is a decision making paradigm in health care that integrates the patient’s perspective, practitioner expertise and the best available research evidence [2]. Education in the principles and practice of EBP is widely accepted as a core component of professional education for healthcare professionals [3, 4]. However, the most effective teaching strategies for promoting the effective use of EBP in practice are uncertain [1]. Inconsistent reporting of interventions used in EBP educational research is a significant barrier to identifying the most effective teaching strategies [1, 5–7]. Many studies investigating EBP educational interventions provide insufficient details about the educational intervention, limiting interpretation, synthesis in secondary research, and replication [1].

In 1994, in an attempt to address the ‘wide chasm’ between what a randomized controlled trial (RCT) should report and what is actually reported, the Standards of Reporting Trials Group developed a ‘proposal for the structured reporting of RCTs’ [8]. This proposal later became the CONSORT statement (CONsolidated Standards Of Reporting Trials), which is one of the earliest and best-established reporting guidelines [9]. The CONSORT statement led the way for the development of a multitude of reporting guidelines in the form of checklists, flow diagrams and explicit instructional papers providing guidance for authors reporting a variety of research designs [10, 11].

Over the past two decades, reporting guidelines for study designs have assisted researchers, authors and reviewers in providing consistent and explicit information concerning study design. However, limited information is available in these design-specific reporting guidelines for describing details of the interventions within studies [12]. Educational interventions are complex and it is not always possible or appropriate for an educational intervention to follow a strict formula, as a pharmaceutical intervention can [12, 13]. As educational interventions frequently require modifications to ensure that they meet the needs of the learner, detailed reporting of the intervention is vital to enable replication [12, 13].

To date, there are five reporting guidelines listed on the EQUATOR network that are specifically focused on describing educational interventions [14–18], but there are no guidelines specifically for EBP educational interventions. Therefore, we developed a reporting guideline to guide educators and researchers reporting educational interventions designed to develop EBP learning [19]. This guideline is called the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET), and comprises a checklist and an accompanying explanation and elaboration (E&E) paper.

The first stages of the development procedure for the GREET checklist and E&E paper have been published elsewhere [1921]. Preliminary testing is recommended as part of the development process for a reporting guideline [10]. Therefore, the aim of this study was to describe the final development for the GREET checklist and E&E paper, incorporating psychometric testing to determine inter-rater reliability and criterion validity.

Methods

Development of the GREET checklist and the E&E paper

Development of the GREET checklist was prospectively planned to follow the Guidance for Developers of Health Research Reporting Guidelines [10]. The original protocol for development of the GREET checklist incorporated three broad stages [19]: 1) systematic review [19]; 2) consensus processes including a Delphi survey [20]; and 3) development and pilot testing for the GREET checklist and the E&E paper [19]. At the time of development of the GREET checklist, a reporting guideline was being developed as a generic guide for describing interventions, the ‘Template for Intervention Description and Replication’ (TIDieR) [12]. In order to ensure consistency between these reporting guidelines, the TIDieR framework was adopted as a starting point for the GREET checklist.

Several teams were involved in the development of the GREET checklist. The research team consisted of a doctoral panel and an expert panel. The doctoral panel comprised the principal investigator (AP), who undertook this research as part of a Doctor of Philosophy in Health Science (PhD), and the supervisory team (MTW, LKL, MPM). The expert panel comprised five members who were invited for their prior knowledge and experience in EBP educational theory and research, development of reporting guidelines and the dissemination of scientific information (JG, PG, MH, DM, JKT). As part of Stage 2 of development of the GREET checklist, international experts in EBP and research reporting participated in the Delphi survey (Delphi panelists) [21].

The first stage in the development process for the GREET checklist comprised a systematic review which identified 25 items relevant for reporting an EBP educational intervention (Fig. 1) [20]. To ensure all 25 items identified in the systematic review were included in the next stage of the development process (the four-round Delphi survey), cross-checking of these 25 items was completed at the end of the second round of the Delphi survey [21]. Six items identified in the systematic review that were not included in the Delphi list at the completion of the second round were added as ‘additional’ items for the third round of the Delphi. Hence, rounds 3 and 4 included items derived from Delphi participants and items derived from the systematic review. At the completion of the four-round Delphi survey, 39 items were nominated for consideration for describing an educational intervention for developing foundation knowledge and skills of EBP [21].

Fig. 1 Summary of the three development stages for the GREET checklist, including those completed prior to the psychometric testing

To determine the final inclusion of items for the GREET checklist, further consensus activities were undertaken. These comprised a series of consensus meetings via international teleconference with the research team. Three international teleconferences were required to attain a consensus decision (Table 1).

Table 1 Overview of the three international consensus teleconferences

Over the course of the international consensus teleconferences, all 39 intervention items arising from the Delphi survey were reviewed, with consensus agreement (majority vote) to retain 26 items (67 %) deemed essential for reporting and to omit 13 non-essential items (33 %). Following the agreed refinement of the 26 retained items, 14 items were included in the first draft of the GREET checklist (Fig. 1).

The first draft of the GREET checklist underwent further review by seven Delphi panelists who had previously indicated their willingness to provide feedback. As a result of this review, three additional checklist items were included (‘Incentives’; ‘Learner time spent in face-to-face contact with instructor/self-directed learning activities’; ‘Extent to which the intervention was delivered as scheduled’). Therefore, the penultimate GREET checklist comprised 17 items recommended for reporting educational interventions for EBP.

The GREET checklist was intended to be used in conjunction with an E&E paper to provide guidance and instructions for users. Following completion of the penultimate GREET checklist, the E&E paper was developed using a standard framework for each item; this included an explanation for each GREET checklist item along with the relevance of the item for reporting, verbatim examples of explicit reporting from previously published studies and a troubleshooting section to assist in resolving uncertainties or potentially confusing issues.

Inter-rater reliability and criterion validity

Design

An observational cross-sectional design was used to assess inter-rater reliability and criterion validity of the GREET checklist among readers of an EBP educational intervention study.

Participants

A sample of tertiary health professional students in the final years of their programs (physiotherapy, podiatry, medical radiation, human movement, occupational therapy, population health, dietetics) and postgraduate health research students (PhD) in the Division of Health Sciences at the University of South Australia were recruited towards the end of a semester to avoid conflicts with academic or final examination commitments. Participants were provided with an AUD30 gift card as compensation for their time. There was a mix of participants with and without prior experience in EBP education or reporting guidelines.

Procedure

Participants were invited to read a published study and to indicate whether and where each of the items in the GREET checklist was reported. The E&E paper was provided to participants for clarification of items as needed.

Test study identification

To identify an appropriate research study for participants to review, the search strategy from our previous systematic review [20] was re-run on 22nd November 2013 to identify papers published since the original search was undertaken. All recent studies meeting the original eligibility criteria were reviewed by the doctoral panel for relevance and reporting using the draft GREET checklist. The test study [22] was selected by consensus agreement of the doctoral panel as the most appropriate, as it included a wide range in the completeness and level of detail of the reporting.

To enable comparison of the ratings of completeness of reporting provided by participants, a criterion reference measure was developed. All members of the doctoral panel (AP, LKL, MPM, MTW) independently evaluated the test study [22] using the GREET checklist and the E&E paper, and then met to discuss each of the ratings assigned for the items in the GREET checklist. The final ratings assigned to each item in the GREET checklist for the criterion reference measure were determined by consensus agreement of the doctoral panel.

Validation process

The testing process used an experiential learning approach [23] and consisted of two parts. Participants were required: 1) to use the GREET checklist (with or without the E&E paper) to review the test study [22] and to indicate whether each checklist item was reported, using the possible responses (1) Yes, fully reported; (2) Yes, partially reported; or (3) No, not reported or not clear; and 2) to provide comment on, and rate the ease of use of, each item. Participants were also invited to provide feedback regarding the wording and layout of individual items and of the overall GREET checklist and E&E paper. Comments on participants’ experience of the validity-testing process were also sought.

All testing was undertaken in small groups (1–7 participants) and supervised by the principal investigator (AP), with each participant completing the process independently on a computer. This ensured that any questions or problems encountered during the testing process could be noted and addressed appropriately. At the commencement of each two-hour session, a standardised overview of the procedure was presented and participants were provided with hard copies of the GREET checklist, the E&E paper and the test study [22].

Data collection tool

Data collection was undertaken using an online instrument (SurveyMonkey®) which was pilot tested by three members of the doctoral team (MTW, MPM, LKL) prior to the psychometric testing. In section 1, participants were invited to provide basic demographic data including their age, gender, study discipline, previous exposure to education in EBP and experience using reporting guidelines.

In section 2, for each item in the GREET checklist, participants were asked to indicate whether they perceived the item to be reported in the test study (Yes, fully reported; Yes, partially reported; No, not reported or not clear). If the item was reported, participants were invited to extract and document verbatim information that was relevant to the specific item. For each checklist item, participants were requested to indicate whether they used the GREET checklist alone or in conjunction with the E&E paper when making a decision about whether information was reported for that item in the test study. Participants were then asked to provide a Yes/No rating for the ease of use related to layout and wording of the specific item in the GREET checklist and in the E&E paper, and space was provided for suggested re-wording. In section 3, participants rated their experience using the GREET checklist and E&E paper on a 5-point Likert scale ranging from 1: poor to 5: excellent. Space was provided for any further comments.

Data analysis

Data from all completed surveys were downloaded to a spreadsheet (Excel, version 14, Microsoft, 2010). Demographic data were collated and summarised. Participant responses for completeness of reporting for each of the GREET checklist items were allocated to one of three categories: (1) Yes, fully reported; (2) Yes, partially reported; and (3) No, not reported or not clear. These ratings were summarised descriptively according to agreement between participants and with the consensus criterion standard. The level of agreement was specified as “agreement” where there was exact agreement between the participant rating and the consensus criterion rating (both ratings in the same category), “partial agreement” (one category of difference between the participant rating and the consensus criterion rating) or “no agreement” (two categories of difference between the participant rating and the consensus criterion rating). Percentage agreement for these categories was calculated. The consistency of agreement between the participants’ ratings of completeness of reporting and the consensus criterion standard (criterion validity), and the agreement in ratings of completeness of reporting between participants (inter-rater reliability), were analysed using intra-class correlation coefficients (two-way mixed, consistency, average-measures ICC; IBM SPSS Statistics 21). The ICC coefficients were interpreted using the recommendations of Landis and Koch ([24], p. 165), where the level of agreement indicated by the ICC was: less than zero = poor, 0 to 0.2 = slight, 0.21 to 0.4 = fair, 0.41 to 0.6 = moderate, 0.61 to 0.8 = strong and greater than 0.8 = almost perfect agreement.
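To make this analysis concrete, the sketch below computes an ICC of the type reported here (two-way mixed effects, consistency, average measures; ICC(3,k) in the Shrout and Fleiss taxonomy) and applies the category-distance agreement scheme and the Landis and Koch benchmarks described above. It is a minimal Python sketch, not the SPSS procedure used in the study; the ratings matrix, number of raters and all values are hypothetical.

```python
import numpy as np

def agreement_level(participant: int, criterion: int) -> str:
    """Agreement with the consensus criterion rating; ratings are coded
    1 = fully reported, 2 = partially reported, 3 = not reported/not clear."""
    gap = abs(participant - criterion)
    return {0: "agreement", 1: "partial agreement", 2: "no agreement"}[gap]

def interpret_icc(icc: float) -> str:
    """Landis and Koch [24] benchmarks, using the wording of this paper."""
    if icc < 0:
        return "poor"
    if icc <= 0.2:
        return "slight"
    if icc <= 0.4:
        return "fair"
    if icc <= 0.6:
        return "moderate"
    if icc <= 0.8:
        return "strong"
    return "almost perfect"

def icc_3k(ratings: np.ndarray) -> float:
    """ICC(3,k): two-way mixed effects, consistency, average measures.
    ratings is an (items x raters) matrix of numeric scores."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    # Two-way ANOVA sums of squares (items = rows, raters = columns)
    ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()
    ss_error = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / ms_rows

# Hypothetical data: 17 checklist items rated by 5 raters on the 3-point scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 4, size=(17, 1))                  # underlying item ratings
ratings = np.clip(base + rng.integers(0, 2, size=(17, 5)), 1, 3).astype(float)

icc = icc_3k(ratings)
print(f"ICC(3,k) = {icc:.2f} ({interpret_icc(icc)})")
print(agreement_level(participant=1, criterion=3))       # -> "no agreement"
```

Criterion validity corresponds to comparing each participant's column of ratings against the consensus criterion column; inter-rater reliability corresponds to running the same ICC across the participants' columns alone.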

The ratings for the ease of use related to layout and wording of specific items in the GREET checklist and the E&E paper were summarised descriptively. Chi square tests (χ2) were undertaken to analyse differences in ratings between participants with and without previous experience of EBP training or exposure to reporting guidelines. Statistical significance was set at p < 0.05.
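A minimal sketch of this subgroup comparison, assuming the ratings are cross-tabulated against prior experience: the study used SPSS, so scipy.stats.chi2_contingency here is a stand-in, and the counts below are invented for illustration rather than the study data.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = prior EBP training (yes / no),
# columns = overall ease-of-use rating (Fair, Good, Very Good, Excellent).
observed = [
    [1, 8, 8, 3],   # prior EBP training
    [1, 4, 4, 1],   # no prior EBP training
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # significant if p < 0.05
```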

Results

Participants

Testing of the GREET checklist and the E&E paper was completed by 31 participants (n = 11 undergraduate, n = 20 postgraduate) during nine two-hour sessions. Participant demographic data are shown in Table 2.

Table 2 Participant demographic information

Criterion validity and inter-rater reliability

All participants rated the completeness of reporting for each of the GREET checklist items using the test study (i.e. no missing data) (Additional file 1). The consistency of agreement of participants’ ratings for the completeness of reporting of each item in the GREET checklist with the consensus criterion ratings is presented in Table 3.

Table 3 Summary of consistency of agreement between participants’ ratings for completeness of the test study’s reporting for items in the GREET checklist and the consensus criterion ratings

The consistency of agreement between the participant ratings for completeness of reporting and the consensus criterion ratings (where the participants’ ratings and the consensus criterion ratings were exactly the same) ranged from 19 % for item 4 Steps of EBP to 94 % for item 16 Planned delivery. Four items showed the greatest agreement between the participants’ ratings for completeness of reporting and the consensus criterion ratings: 16 Planned delivery (94 %), 14 Modifications (84 %), 17 Actual schedule (81 %) and 8 Instructors (81 %) (Table 3). The items with the greatest difference between the participants’ ratings for completeness of reporting and the consensus criterion ratings (two categories of difference in the ratings) were 4 Steps of EBP (23 %), 10 Environment (23 %) and 13 Adaptations (20 %), with no agreement between 20 and 23 % of participants’ ratings and the consensus criterion ratings (Table 3).

Overall, consistency of agreement between participants’ ratings of completeness of reporting and the consensus criterion ratings was strong (criterion validity ICC 0.73, 95 % CI 0.51–0.88, p < 0.0001). There was almost perfect consistency of agreement within the participant group for ratings of completeness (inter-rater reliability) (ICC 0.96, 95 % CI 0.93–0.98, p < 0.0001) (Table 3).

Participant ratings for the wording of items in the GREET checklist

The majority of participants (97 to 100 %) provided a yes/no response to the question “Did you find the wording and layout easy to use for this item in the GREET checklist?” Six items (7 Incentives, 8 Instructors, 10 Environment, 11 Schedule, 12 Face to face time, 14 Modifications) were rated by all participants as “yes”, the wording and layout was easy to use. Item 1 Intervention achieved the lowest number of “yes” ratings (70 % of participants).

An overall rating for the layout and ease of use of the GREET checklist was provided by the majority of participants (n = 30, 97 %). The ratings, on a 5-point scale from poor to excellent, were positive, with participants rating the overall layout and ease of use as Poor (n = 0), Fair (n = 2, 7 %), Good (n = 12, 40 %), Very Good (n = 12, 40 %), and Excellent (n = 4, 13 %).

Evaluation of the E&E paper

The E&E paper was used inconsistently, with participants indicating that they referred to it selectively depending upon the checklist item; less than half (48 %) of the participant group referred to the E&E paper for item 8 Instructors, compared with the majority of participants (87 %) for item 2 Theory. Participants who referred to the E&E paper rated it positively, with 75 % of participants rating all items as easy to use.

Prior experience with EBP or reporting guidelines

Participants did not rate the layout and ease of use of the GREET checklist significantly differently based on prior exposure to EBP training (χ2 = 3.41, p = 0.33) or experience with reporting guidelines (χ2 = 4.22, p = 0.24). No significant difference was found between the level of agreement with the consensus criterion ratings for any of the GREET checklist items based on prior exposure to EBP training or experience with reporting guidelines.

Participant comments

Participants provided 185 separate comments regarding the wording and layout of the items in the GREET checklist and the E&E paper. The majority of these comments (n = 105, 57 %) reinforced or justified their ratings for whether the item was reported in the study. For example: “The learning objectives were very detailed and I was unsure how much detail to provide.”

A small number of comments were provided for re-wording items in the GREET checklist (n = 15, 16 %) and the E&E paper (n = 14, 16 %). For example: “I think the BRIEF NAME/TITLE, initially makes me refer to the title of the study. However, the GREET checklist description to provide the educational intervention requires further reading from the article. I would think INTERVENTION is a better heading.”

Outcomes for the GREET checklist

The GREET checklist and the E&E paper were updated based on participant ratings for whether the wording and layout were easy to use, the agreement of the participants’ ratings for completeness of reporting with the consensus criterion and between raters, and the comments and suggestions provided by the participants. Wording changes were made to eight items: 1 Title (changed to Intervention), 3 Learning Objectives (added all groups involved), 4 Steps of EBP (changed to EBP content), 6 Learning Strategies (changed to Educational Strategies), 13 Adaptations (reworded to Planned changes), 16 Planned delivery (further information added to describe materials and educational strategies) and 17 Actual Schedule (actual schedule removed from heading and further information provided regarding the planned schedule of the intervention) (Table 4).

Table 4 The GREET checklist 2016a

The final, complete version of the E&E paper is provided as an online supplement (Additional file 2).

Limitations

The original plan for testing the GREET checklist and the E&E paper was to have Delphi panelists trial both documents during the writing phase for a manuscript of a recent educational intervention for EBP. In the final round of the Delphi survey, participants were asked ‘Would you be interested in reviewing the draft of the reporting guideline and associated document?’ and ‘If you are currently undertaking an EBP educational strategy and plan to submit this for publication, would you be willing to pilot test the draft guideline?’ None of the Delphi participants accepted this invitation. As such, the psychometric testing of the GREET checklist has several limitations. Firstly, this validation study tested appraisal of reporting rather than use of the checklist to guide the reporting of a paper. Rather than researchers and educators experienced in EBP, the study sample included a range of non-expert users with varying experience and exposure to EBP education and reporting guidelines. Secondly, the sample size was small and participants were recruited from one tertiary institution. Although a variety of health professions were represented, there were no medical or nursing students, and these groups are widely represented in authorship of studies investigating educational interventions for EBP. Thirdly, the role of the criterion standard was to provide a reference point against which to compare the ratings provided by the participants. However, as the criterion ratings were based on consensus, they reflected the opinion of the doctoral panel rather than a set of ‘correct’ ratings. Finally, inclusion of the response category “No, not reported or not clear” created an ambiguous response option, similar to a “neutral” or “neither agree nor disagree” option. This category may have been used by respondents to indicate a range of situations (lack of clarity in the question, inability to confidently choose a different response option, or inadequate knowledge of the question content).

Discussion

This study aimed to complete the final development for the GREET checklist, incorporating provisional psychometric testing to determine inter-rater reliability and criterion validity.

Although the consistency of agreement, both between the participants and between the participants and the consensus criterion standard, was good to almost perfect, the consistency of agreement in the ratings between participants (ICC 0.96) was considerably higher than between participants and the consensus criterion ratings (ICC 0.73) [24]. It is possible that the lower overall consistency of agreement for criterion validity resulted from differences in expertise and experience between the participants and the doctoral team responsible for the consensus criterion ratings. Collectively, the doctoral team were experienced in EBP education, terminology and study appraisal, whereas more than one quarter of the participants had no previous experience or training in EBP. For all participants, this was their first exposure to the GREET checklist, and for almost half of the participants their first exposure to a reporting guideline. The differences in expertise and experience with EBP and reporting guidelines may also explain the difference between the participants’ ratings for completeness of reporting and the consensus criterion ratings for items 4 Steps of EBP and 5 Materials, where almost one quarter of participant ratings had no agreement with the consensus criterion ratings.

The purpose of the GREET checklist and the E&E paper is to provide specific guidance for the reporting of EBP educational interventions, rather than to replicate reporting guidelines for specific research designs. As such, the GREET checklist and the E&E paper were designed to be used in conjunction with an appropriate existing reporting guideline for the study design.

While it may seem burdensome to add further reporting guidelines for interventions to the already long list of items required in current guidelines for study design, the consistent and transparent reporting of interventions in primary educational research studies is just as important as the reporting of study design. For educators applying research in their teaching, and for consumers of research, reporting guidelines can provide structure for interpreting the relevance of information and a method of identifying possible biases in the reporting of interventions.

Conclusion

The GREET checklist is a reporting guideline designed to provide a framework for the consistent and transparent reporting for educational interventions for EBP. Used together with the E&E paper, developed to enhance its use and understanding, the GREET checklist could further facilitate development of an evidence-base for EBP education. Further targeted, user-specific, review and validation of the GREET checklist with experts in EBP research and education is recommended.

Abbreviations

AP: Anna Phillips

CONSORT: CONsolidated Standards Of Reporting Trials

DM: David Moher

E&E: Explanation and elaboration paper

EBP: Evidence-based practice

GREET: Guideline for Reporting Evidence-based practice Educational interventions and Teaching

ICC: Intra-class correlation coefficient

JG: James Galipeau

JKT: Julie K Tilson

LKL: Lucy K Lewis

MPM: Maureen P McEvoy

MTW: Marie T Williams

MH: Marilyn Hammick

PhD: Doctor of Philosophy

RCT: Randomized controlled trial

TIDieR: Template for Intervention Description and Replication

References

  1. Ilic D, Maloney S. Methods of teaching medical trainees evidence-based medicine: a systematic review. Med Educ. 2014;48(2):124–35.

  2. Rosenberg WMC, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71–2.

  3. Wong SC, McEvoy MP, Wiles LK, Lewis LK. Magnitude of change in outcomes following entry-level evidence-based practice training: a systematic review. Int J Med Educ. 2013;4:107–14.

  4. Tilson J, Mickan S. Promoting physical therapists’ use of research evidence to inform clinical practice: I: theoretical foundation, evidence, and description of the PEAK program. BMC Med Educ. 2014;14(1):125.

  5. Coomarasamy A, Taylor R, Khan K. A systematic review of postgraduate teaching in evidence-based medicine and critical appraisal. Med Teach. 2003;25(1):77–81.

  6. Young T, Rohwer A, Volmink J, Clarke M. What are the effects of teaching evidence-based health care (EBHC)? Overview of systematic reviews. PLoS ONE. 2014;9(1):e86706.

  7. Maggio LA, Tannery NH, Chen HC, ten Cate O, O’Brien B. Evidence-based medicine training in undergraduate medical education: a review and critique of the literature published 2006–2011. Acad Med. 2013;88(7):1022–8.

  8. The Standards of Reporting Trials Group. A proposal for structured reporting of randomized controlled trials. JAMA. 1994;272(24):1926–31.

  9. Begg CB, Cho MK, Eastwood S, Horton R, Moher D, Olkin I, Rennie D, Schulz KF, Simel DL, Stroup DF. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA. 1996;276:637–9.

  10. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

  11. The EQUATOR network. http://www.equator-network.org/. Accessed 1 Aug 2016.

  12. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, Altman DG, Barbour V, Macdonald H, Johnston M, Lamb SE, Dixon-Woods M, McCulloch P, Wyatt JC, Chan A, Michie S. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

  13. Olson CA, Bakken LL. Evaluations of educational interventions: getting them published and increasing their impact. J Contin Educ Health Prof. 2013;33(2):77–80.

  14. Haidet P, Levine RE, Parmelee DX, Crow S, Kennedy F, Kelly PA, Perkowski L, Michaelsen L, Richards BF. Perspective: guidelines for reporting team-based learning activities in the medical and health sciences education literature. Acad Med. 2012;87(3):292–9.

  15. Howley L, Szauter K, Perkowski L, Clifton M, McNaughton N. Association of Standardized Patient Educators (ASPE): quality of standardised patient research reports in the medical education literature: review and recommendations. Med Educ. 2008;42(4):350–8.

  16. Patricio M, Juliao M, Fareleira F, Young M, Norman G, Vaz Carneiro A. A comprehensive checklist for reporting the use of OSCEs. Med Teach. 2009;31(2):112–24.

  17. Stiles CR, Biondo PD, Cummings G, Hagen NA. Clinical trials focusing on cancer pain educational interventions: core components to include during planning and reporting. J Pain Symptom Manage. 2010;40(2):301–8.

  18. Borek AJ, Abraham C, Smith JR, Greaves CJ, Tarrant M. A checklist to improve reporting of group-based behaviour-change interventions. BMC Public Health. 2015;15(1):963.

  19. Phillips AC, Lewis LK, McEvoy MP, Galipeau J, Glasziou P, Hammick M, Moher D, Tilson J, Williams MT. Protocol for development of the guideline for reporting evidence based practice educational interventions and teaching (GREET) statement. BMC Med Educ. 2013;13:9.

  20. Phillips AC, Lewis LK, McEvoy MP, Galipeau J, Glasziou P, Hammick M, Moher D, Tilson JK, Williams MT. A systematic review of how studies describe educational interventions for evidence-based practice: stage 1 of the development of a reporting guideline. BMC Med Educ. 2014;14:152.

  21. Phillips AC, Lewis LK, McEvoy MP, Galipeau J, Glasziou P, Hammick M, Moher D, Tilson JK, Williams MT. A Delphi survey to determine how educational interventions for evidence-based practice should be reported: stage 2 of the development of a reporting guideline. BMC Med Educ. 2014;14:159.

  22. Sanchez-Mendiola M, Kieffer-Escobar LF, Marin-Beltran S, Downing SM, Schwartz A. Teaching of evidence-based medicine to medical students in Mexico: a randomized controlled trial. BMC Med Educ. 2012;12:107.

  23. Kolb D. Experiential learning: experience as the source of learning and development. Englewood Cliffs, New Jersey: Prentice Hall; 1984.

  24. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.


Acknowledgements

The authors wish to thank Dr Marilyn Hammick for her time, effort and expertise during the development of the GREET checklist and everyone who participated in the GREET checklist testing process.

Funding

Dr Anna Phillips was supported by an Australian Postgraduate Award scholarship for her doctoral program of research and a University of South Australia International Student Mobility Grant ($5000).

Availability of data and materials

The dataset supporting the conclusions of this article is included within Additional file 1.

Authors’ contributions

AP planned and carried out the psychometric testing, completed the analyses and drafting of the manuscript. LKL, MPM and MTW contributed to the planning stages for the psychometric testing, and the entire validity testing process including the analyses and writing of the manuscript. JG, DM, PG and JKT made substantial contributions to the consensus discussions and the development of the GREET checklist and the E&E paper and contributed extensively to the drafting and critical revision of the manuscript. All authors read and approved the final manuscript.

Authors’ information

AP is a Lecturer and Clinical Educator in the physiotherapy program at the University of South Australia.

LKL is a Senior Lecturer in Physiotherapy at Flinders University, and an Adjunct Senior Research Fellow at the University of South Australia.

MPM is a Lecturer, School of Health Sciences and a member of the International Centre for Allied Health Evidence (iCAHE), Sansom Institute for Health Research, University of South Australia, Adelaide, Australia.

JG is a Senior Research Associate, Ottawa Hospital Research Institute, The Ottawa Hospital, Centre for Practice-Changing Research (CPCR), Ontario, Canada.

PG is the Director, Centre for Research in Evidence-Based Practice (CREBP), Bond University, Queensland, Australia.

DM is a Senior Scientist, Clinical Epidemiology Program, Ottawa Hospital Research Institute, The Ottawa Hospital, Centre for Practice-Changing Research (CPCR), Ontario, Canada.

JKT is an Associate Professor, University of Southern California Division of Biokinesiology and Physical Therapy, Los Angeles, USA.

MTW is an Associate Professor, Associate Head: Research, School of Health Sciences and a member of the Alliance for Research in Exercise, Nutrition and Activity (ARENA), Sansom Institute for Health Research, University of South Australia, Adelaide, Australia.

Competing interests

Dr Moher is supported by a University Research Chair. Dr Moher is a member of the EQUATOR network executive committee.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Ethical approval was obtained from the University of South Australia Human Research Ethics Committee (protocol nos. 25590 and 31938). All participants provided written informed consent.


Corresponding author

Correspondence to Anna C. Phillips.

Additional files

Additional file 1:

Dataset for psychometric testing of the GREET checklist. Complete dataset with results of participant responses (n = 31) for completeness of reporting for each item in the GREET checklist, ratings for whether the GREET checklist and E&E paper were easy to use and all comments provided by participants for the psychometric testing of the GREET checklist. (XLSX 82 kb)

Additional file 2:

2016 Explanation and elaboration paper for the GREET checklist. An explanation and elaboration document developed to be used in conjunction with the GREET checklist to enhance the use and understanding for the information items in the GREET checklist. (DOCX 130 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Phillips, A.C., Lewis, L.K., McEvoy, M.P. et al. Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET). BMC Med Educ 16, 237 (2016). https://doi.org/10.1186/s12909-016-0759-1
