REAPing the benefits: development and use of a structured evaluation framework to codify learning resources for Global Health professionals

Abstract

Background

The learning opportunities for global health professionals have expanded rapidly in recent years. The diverse array of learners and wide range in course quality underscore the need for an improved course vetting process to better match learners with appropriate learning opportunities.

Methods

We developed a framework to assess overall course quality by determining performance across four defined domains: Relevance, Engagement, Access, and Pedagogy (REAP). We applied this framework across a learning catalogue developed for participants enrolled in the Sustaining Technical and Analytic Resources (STAR) project, a global health leadership training program.

Results

The STAR learning activities database included a total of 382 courses, workshops, and web-based resources which fulfilled 531 competencies across three levels: core, content, and skill. Relevance: The majority of activities were at an understanding or practicing level across all competency domains (486/531, 91.5%). Engagement: Many activities lacked any peer engagement (202/531, 38.0%) and had limited to no faculty engagement (260/531, 49.0%). Access: The plurality of courses across competencies were offered on demand (227/531, 42.7%) and were highly flexible in pace (240/531, 45.2%). Pedagogy: Of the activities that included an assessment, most matched activity learning objectives (217/531, 40.9%).

Conclusions

Through applying REAP to the STAR project learning catalogue, we found many online activities lacked meaningful engagement with faculty and peers. Further development of structured online activities that provide learners with flexibility in access, a range of levels of advancement for content, and opportunities to engage and apply learning is needed for the field of global health.

Background

A commitment to promoting societal health is perhaps one of the few unifying features of the global health and public health workforces [1, 2]. Those who work in the field approach their workplace challenges from different perspectives: as generalists and specialists; as academicians and field-based practitioners; as citizens of low- and middle-income countries (LMICs) and high-income countries; as advocates and clinicians [2,3,4,5,6]. Professionals at all stages of their career can benefit from opportunities to further develop and expand their skills and knowledge. The challenge is finding the right opportunities that best meet both their goals and professional needs. Moreover, given the plethora of learning options available, it is unclear how one should evaluate opportunities for suitability (i.e., appropriate content) versus desired format (e.g. in-person versus online, or a hybrid of both) and engagement level (e.g. interactive workshops versus asynchronous courses) [7, 8].

Recent years have seen an exponential rise in the availability of massive open online courses (MOOCs) and a rapid expansion in content variety and the number of institutions offering global health educational opportunities. Making online courses relevant, impactful, and desirable to learners has required drawing insights from both learning theory and learner demand [9, 10]. However, despite improvements, increased attention is still needed to appropriately design curricula that meet the skills and values sought by the diverse array of learners while still being actively engaging [10, 11]. The differing values of global health learners, combined with the continually expanding array of online resources, underscore the need for a systematic approach to catalogue the myriad learning opportunities available for global health professionals, many of whom are busy and have “numerous responsibilities that compete for their time” [12].

In recent years, as learning theories have been expanded to incorporate online education, a variety of course instruction rubrics have emerged as tools to help guide the development of new courses and better evaluate existing ones. Most of these rubrics have standards or indicators covering a general course overview and information, course design, learning objectives, accessibility, learner support, interaction and engagement, assessment, and technology (see Annex 1 for a summary of the topics covered by several commonly used rubrics) [13,14,15,16]. These rubrics share many features but also differ, for example in the degree to which they focus on communication between students and faculty and in how outcomes are defined (e.g. baseline, effective, and exemplary results); most were designed for specific kinds of online learning activities and/or target audiences. We did not find a multi-purpose, vetted strategy that could be applied to the wide range of courses that global health professionals use to build their skills. We therefore drew on the Quality Matters (QM) rubrics, which have been tested and are widely utilized for online courses and which outline eight general standards, each with specific indicators, as the foundation for our tool [13].

The Sustaining Technical and Analytic Resources (STAR) project is a global health training program for both junior and experienced global health professionals funded by the United States Agency for International Development (USAID). The STAR curriculum and the cohorts are described in greater detail elsewhere [17]. Participants vary widely in their professional foci and backgrounds, with many focused on infectious disease programs (specifically tuberculosis or HIV/AIDS) and working across a variety of technical areas (for example, monitoring, evaluation, and learning (MEL) versus supply chain management). Participants provide technical support across a breadth of global health programs and are also provided with protected time and resources for leadership training and professional development. To facilitate identifying appropriate learning opportunities to match the needs of each unique participant, the STAR project built a database to catalogue hundreds of learning activities across a diverse array of topic areas. We soon realized that the process of identifying, vetting, and assigning appropriate learning activities for a diverse pool of professionals was a more general obstacle for the field of global health to overcome.

This paper describes our experience developing and applying a framework to evaluate quality indicators for use by the STAR project. We document the initial process of sourcing and reviewing learning opportunities to develop our database of vetted learning activities and some key findings that can inform how other global health educational programs review existing curricula for potential use by their learners.

Methods

Background – STAR learning approach

STAR participants are provided with an individualized learning plan (ILP) that focuses on their individual development goals [17]. To guide the baseline competency assessments, the development of ILPs, and the organization of our database, a competency framework was developed for STAR (see Fig. 1) [17]. This framework included “core” competencies, in which all participants were expected to demonstrate a minimum level of expertise by the end of the fellowship, as well as elective skill and content competencies. Within each competency domain, a set of five milestones was defined, representing demonstrable knowledge or skills ranging from a basic level of inquiry to advanced mastery.

Fig. 1 STAR Global Health Competencies Framework

Development of the REAP tool

We conducted an online review using Google, Google Scholar, PubMed, and the Education Resources Information Center (ERIC) for tools developed and implemented to assess course quality, particularly for online courses. Although STAR participants may also complete in-person courses, the rubrics we reviewed primarily addressed online and blended courses, as these were most likely to be the best fit for the majority of STAR participants and were where we anticipated the most variety in course quality. Our goal was to design a framework that would, first, be broadly applicable to the learning activities that the program was utilizing and developing and, second, provide a systematic approach to codifying the activities in our database to meet the needs and preferences of learners. Based on our review of the literature, and using the thoroughly tested and widely applied Quality Matters rubrics as a foundation [13], the instructional design leads at the STAR project developed a tailored and systematic framework for codifying and vetting learning activities for STAR: the Relevance, Engagement, Access, and Pedagogy (REAP) tool (Fig. 2). The Relevance domain of REAP focuses on how well the content covered in a course or other activity aligns with STAR’s competency framework and milestone levels [17]. The Engagement domain focuses on the extent to which learners have opportunities to engage with each other and with course faculty. Within the Access domain, we capture key variables related to the format (online or onsite), pace, and flexibility of the learning activity. Finally, the Pedagogy domain captures elements of course credibility and assessment approaches.

Fig. 2 Key Variables Captured Within Each REAP Domain
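To make the REAP variables concrete, the sketch below shows one possible way to represent a single database entry. This is only an illustration inferred from the variables named in Fig. 2 and reported in the Results tables; the class and field names (ReapEntry, milestone_level, start_flexibility, and so on) are hypothetical and do not reflect the STAR project's actual database schema.

    # Illustrative sketch only: field names and value sets are inferred from the
    # REAP domains described in the text, not taken from the STAR database itself.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional


    class EngagementLevel(Enum):
        # Engagement levels reported in the Results (none/limited/average/high).
        NONE = "none"
        LIMITED = "limited"
        AVERAGE = "average"
        HIGH = "high"


    @dataclass
    class ReapEntry:
        """One learning activity codified across the four REAP domains."""
        title: str
        activity_type: str                       # workshop, course, or web-based resource
        # Relevance: STAR competencies fulfilled and the milestone level addressed
        competencies: List[str] = field(default_factory=list)  # e.g. "core:...", "skill:..."
        milestone_level: str = "understanding"   # inquiring, understanding, practicing, leading
        # Engagement: opportunities to interact with peers and faculty
        peer_engagement: EngagementLevel = EngagementLevel.NONE
        faculty_engagement: EngagementLevel = EngagementLevel.NONE
        # Access: format, start-time flexibility, and pace
        activity_format: str = "online only"     # online only, workshop/short training, in person
        start_flexibility: str = "on demand"     # infrequent, frequent, on demand
        pace_flexibility: str = "highly flexible"
        # Pedagogy: syllabus, instructor credibility, and assessment alignment
        has_syllabus: Optional[bool] = None
        instructor_credible: Optional[bool] = None
        assessment_aligned: Optional[bool] = None
        # Reviewer notes documenting decisions, caveats, and per-domain fit summaries
        notes: str = ""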

Given the wide range of backgrounds, levels of experience, and work contexts of global health and public health professionals, no single combination of course characteristics (e.g. online, self-paced, and with limited engagement with peers) will universally be the most desirable or appropriate for all participants. The most useful approach, therefore, was to establish a standard, qualitatively oriented review process that could determine whether a course would meet the needs of a specific participant. Particularly for STAR’s diverse range of learners, we aimed not to determine definitively whether a course was “good” or “bad”, but rather to examine the fit of a particular activity with individual learner preferences and needs across a set of variables. To ensure consistency and defensibility of the reviews of each activity, we undertook an intensive and iterative approach entailing regular meetings to discuss database entries and included notes sections to provide explanations and document decisions during data entry, in order to: 1) build shared understanding among all reviewers and 2) provide detailed notes ensuring transparency and thick descriptions [18,19,20] for the decisions and any caveats related to each entry in the database. We also included a summary measure under each category of how good a fit a particular learning activity was for STAR participants (for example, how well the Relevance variables for a particular activity aligned with the needs of STAR participants).

Implementation of the REAP tool

We utilized the REAP tool to vet learning activities, including courses, workshops, and other activities, as they were entered into our learning activities database. Activities added to the database were identified through 1) an initial search for activities offered by STAR project partners and particularly focused on STAR core competencies [17] and 2) the specific areas of work and learning needs of STAR participants as they were onboarded. Each learning activity was reviewed by at least one STAR staff member and as much information as possible related to each REAP domain was added. The learning activity database is searchable by keyword as well as by select REAP indicators to locate opportunities that are the best fit for particular participants. Participant evaluations of learning activities are also accessible in the database so that participant satisfaction can be incorporated in the overall activity assessment.
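The paper does not describe the search functionality in detail, so the following is only a minimal sketch of the kind of matching described above (keyword search plus filters on selected REAP indicators), reusing the hypothetical ReapEntry and EngagementLevel classes from the earlier sketch.

    # Minimal sketch of keyword plus REAP-indicator filtering; the real STAR
    # database search is not described in the paper, so this is an assumption.
    def find_activities(entries, keyword=None, min_faculty_engagement=None,
                        formats=None, on_demand_only=False):
        # Rank engagement levels so a minimum threshold can be applied.
        rank = {EngagementLevel.NONE: 0, EngagementLevel.LIMITED: 1,
                EngagementLevel.AVERAGE: 2, EngagementLevel.HIGH: 3}
        results = []
        for e in entries:
            searchable = " ".join([e.title, e.notes] + e.competencies).lower()
            if keyword and keyword.lower() not in searchable:
                continue
            if min_faculty_engagement is not None and \
                    rank[e.faculty_engagement] < rank[min_faculty_engagement]:
                continue
            if formats and e.activity_format not in formats:
                continue
            if on_demand_only and e.start_flexibility != "on demand":
                continue
            results.append(e)
        return results

    # Hypothetical query: on-demand, online-only activities with at least average
    # faculty engagement that mention supply chain management.
    # matches = find_activities(db, keyword="supply chain",
    #                           min_faculty_engagement=EngagementLevel.AVERAGE,
    #                           formats={"online only"}, on_demand_only=True)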

Based on our experience using the tool, we made revisions to improve its clarity and efficiency as we went along. These revisions did not change the content of the rubric itself but aimed to improve usability. Two key changes were made: first, we added summary measures for each category to provide an overall assessment of the fit of an activity in that category (e.g. Relevance) for the majority of STAR participants; second, we streamlined the number of open-text response variables to make the tool faster to use and to better standardize our data.

Contents of the REAP-vetted learning activities database

The learning database is live; it was designed for STAR staff use in April 2019 and is added to continually. It contains a combination of workshops, courses (online and in-person), and web-based resources such as websites and grey literature (materials that are not controlled by commercial publishers), including reports and manuals. Of note, the contents of the database reflect the learning needs of STAR participants and are not meant to be representative of the overall learning opportunities for global health and public health professionals.

Data analysis

Data for this paper were pulled from the active STAR database on June 16, 2020. Data from the database were analyzed using Stata v.15 (StataCorp LLC, College Station, TX, USA). We chose to present findings disaggregated by STAR competency categories as a consistent way to analyze the REAP domains because of the centrality of these competencies to the design of STAR’s learning program and because we anticipated that some of the REAP variables would differ based on the kind of content that a learning activity focused on. A descriptive statistical approach was used to provide an overview of the learning activity database, characteristics of learning activities across each of the REAP domains, and the fit of each learning activity for STAR.
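As a sketch of the disaggregation logic described above: the analysis itself was performed in Stata, and the column names below are hypothetical rather than the schema of Additional file 1, but the example illustrates how an activity fulfilling multiple competencies is counted once per competency (which is how 382 activities yield 531 competency-level entries) and how a REAP variable is then summarized by competency category.

    # Illustrative only: the paper's analysis used Stata; pandas is used here just
    # to show the counting logic, and the column names are assumptions.
    import pandas as pd

    activities = pd.DataFrame({
        "title": ["Course A", "Workshop B", "Resource C"],
        "competencies": [["core:leadership", "skill:data analysis"],
                         ["content:TB"], ["skill:writing"]],
        "peer_engagement": ["high", "none", "limited"],
    })

    # One row per (activity, competency) pair, so an activity fulfilling two
    # competencies appears twice in the competency-level tabulations.
    long = activities.explode("competencies")
    long["category"] = long["competencies"].str.split(":").str[0]  # core / content / skill

    # Counts and row percentages of a REAP variable by competency category,
    # mirroring the layout of Tables 2, 3, 4 and 5.
    counts = long.groupby(["category", "peer_engagement"]).size().unstack(fill_value=0)
    percentages = counts.div(counts.sum(axis=1), axis=0).mul(100).round(1)
    print(counts)
    print(percentages)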

Statement of IRB approval

Ethical approval was sought and received from the institutional review boards of the Public Health Institute (IRB #I19–022) and the Johns Hopkins Bloomberg School of Public Health (IRB00011259). Written consent was sought from all participants.

Results

A total of 382 activities were evaluated with the REAP tool, of which 40 were workshops (10.5%), 280 were courses (73.3%), and 62 were web-based resources (16.2%). Workshops ranged from summer intensive workshops on qualitative research to management problem-solving sessions. Courses included academic courses, courses offered by USAID and partners, and trainings for private companies on a range of topics from gender equity to communications skills to languages. Web-based resources included self-paced training modules on tuberculosis from the Centers for Disease Control, a range of infectious disease resources including webinars and resource pages, and toolkits on scientific writing and data analysis. The full dataset utilized for our analysis of the database can be accessed in Additional file 1.

Table 1 further describes the characteristics of the included learning activities. The majority of learning activities were offered online (60% of workshops, 71.4% of courses, and 100% of web-based resources), and the majority of activities (85.1%) fulfilled 1–2 competencies. Most learning activities were based in the Pan American Health Organization (PAHO) region (84.8%) or the European Regional Office (EURO) region (9.2%), though some courses were also based in African Regional Office (AFRO) countries (3.4%). Cost varied widely, with a large set of free courses (47.1%) but also a substantial set of courses costing over $1000 (22.5%).

Table 1 Characteristics of STAR learning database activities

Each learning activity was further assessed across the four domains of the REAP tool. Results are displayed in Tables 2, 3, 4 and 5, with key variables for each domain broken down by the category of competency (core, content, or skill-based competency). Some activities fulfilled multiple competencies and thus may be featured multiple times.

Table 2 REAP Relevance components by STAR competency category
Table 3 REAP Engagement components by STAR competency category
Table 4 REAP Access components by STAR competency category
Table 5 Pedagogy components of REAP by STAR competency category

Relevance

The relevance findings, summarized in Table 2, describe how the learning activities in the database were distributed across the three categories of STAR competencies (core, content, and skill) and by milestone level. Most courses across all competencies were either at the understanding (195/531, 36.7%) or practicing level (291/531, 54.8%). Very few courses were at the inquiring (20/531, 3.8%) or leading level (2/531, 0.4%). Many activities were identified as highly relevant if they addressed core competency domains and were geared towards understanding and practicing levels, as these courses were relevant to a substantial proportion of STAR participants seeking to meet the minimum requirements of the STAR experience and/or to enhance these core skill areas. For example, courses on gender and health equity as well as language learning opportunities were identified as relevant for the STAR audience.

Engagement

Engagement was a REAP domain that STAR participants prioritized, with general preferences for higher levels of engagement with faculty and peers; results are presented in Table 3. Among the core competency-related activities, many had a high level of faculty engagement (83/195, 42.6%), but nearly as many had either no direct faculty engagement (36/195, 18.5%) or limited engagement (41/195, 21.0%). Faculty engagement levels also varied for the content and skill competencies, though skill competencies had a number of activities with average (53/241, 22.0%) and high levels of engagement (50/241, 20.7%). High engagement activities tended to be workshops and consultative meetings and groups with a focus on peer dialogue and problem solving, such as a Technical Consultation on Expanding Contraceptive Method Choice that was included in our database. Low engagement courses included self-paced online courses such as a Coursera course on supply chain management.

For peer engagement, many activities across all competency categories had no engagement (core: 60/195, 30.8%; content: 45/95, 47.4%; skill: 97/241, 40.2%). For activities covering core competencies, the largest proportion of activities were characterized as having high levels of engagement (66/195, 33.8%). For content competency-related activities, the second largest set of courses had an average amount of peer engagement (20/95, 21.1%). Finally, the skill competency activities were fairly equally distributed across limited (41/241, 17.0%), average (44/241, 18.3%), and high (45/241, 18.7%) levels of engagement.

Access

Access variables include the in-person requirements of an activity and the flexibility in start time and pace, as presented in Table 4. Across all categories of competencies, the largest proportion of courses was offered online only (core: 110/195, 56.4%; content: 76/95, 80.0%; skill: 173/241, 71.8%). Core competency-related activities also had a large proportion (63/195, 32.3%) offered in workshop or short training formats, while the content and skill competency-related activities had a small, but still noteworthy, number of these workshop format activities (content: 14/95, 14.7%; skill: 53/241, 22.0%). Many language offerings, such as immersion programs for foreign languages, require in-person participation, whereas many courses have online options available.

For core competency activities, the flexibility in start time varied and was spread fairly evenly across infrequent starts (once or twice a year) (47/195, 24.1%), frequent starts (53/195, 27.2%), and on demand courses (available whenever learners signed up) (58/195, 28.7%). Skill competencies followed a similar distribution, but with more courses (106/241, 44.0%) offered on demand. Content-related activities were more commonly available on demand (63/95, 66.3%).

Across competencies, the most common amount of flexibility in terms of the pace at which a participant could complete the course was “highly flexible” (core: 68/195, 34.9%; content: 66/95, 69.5%; skill: 106/241, 44.0%).

Pedagogy

Table 5 includes a set of variables related to the pedagogy of each activity. For this domain, we evaluated the availability of a syllabus, the credibility of the instructor, the applied learning aspects, and the use of assessment as a learning tool. Activities were split between those that did and did not have formal syllabi: core competency-related activities provided a syllabus (93/195, 47.7%) more often than not (81/195, 41.5%), and more skill activities had a syllabus (133/241, 55.2%) than did not (80/241, 33.2%). The vast majority of instructors for all categories of activities were considered to be credible (core: 164/195, 84.1%; content: 83/95, 87.4%; skill: 203/241, 84.2%).

The amount of application of knowledge and concepts in an activity was also a priority for STAR, with many participants valuing higher levels of applicability to both real-world examples and their work. The amount of applicability found within activities varied, but for many activities there was not enough information available about this aspect (core: 45/195, 23.1%; content: 22/95, 23.2%; skill: 53/241, 22.0%). An example of a highly applied activity was a course on graphic design and digital presentations, whereas other courses, such as those on becoming a better teacher or core principles of communications, received scores indicating lower or no direct application to participants’ work.

In terms of the inclusion and alignment of learner assessment within activities, both the core (88/195, 45.1%) and skill (105/241, 43.6%) competency-related activities had substantial proportions that included assessments that were highly aligned with activity learning objectives and topics. However, all categories of competencies had almost one third of activities for which the inclusion of any assessments was unknown (core: 56/195, 28.7%; content: 31/95, 32.6%; skill: 73/241, 30.3%).

Discussion

The REAP tool presents a structured approach to codify learning activities in a systematic way. We were able to successfully apply the REAP tool to the STAR learning activities database and found it useful for identifying potential gaps and specific activities to meet the learning needs of STAR participants. Our database, due to the needs of our participants, boasts a high volume of online learning resources. As such, we found a large number of activities that had a flexible pace and were easily accessible. This focus on online courses also meant that many of the activities were at an introductory level, syllabi were often not available, and a large number provided limited faculty or peer engagement opportunities.

The REAP tool adds to the available strategies designed to help educators and trainees sort through the plethora of available learning opportunities. As online education has expanded, educators have developed principles to guide the development of online learning. Effective online learning includes interactive and collaborative learning through synchronous discussions and reflection through asynchronous tools. Within these activities, instructors can create a safe and educational learning environment for learners by encouraging the development of critical thinking, monitoring discussion fora, and providing guidelines to ensure that course content and discussions are grounded in factual information [10]. In addition, online education needs to be both adaptable to the needs of diverse learners, through varied formats for delivering content, and flexible, allowing learners to navigate content at their own pace [9]. While these criteria are not novel, our analysis showed that many of the activities available to our participants lacked an emphasis on meaningful engagement (which was identified as a high priority by many STAR participants) and adopted a more passive approach to content delivery. These activities were therefore found to be less ideal for the majority of STAR participants.

A central consideration in course design and evaluation is understanding the components students value. For example, young professional students and “nontraditional” (e.g. mid-career and executive-level professional) students have been found to place similar importance on how a course is assessed, but nontraditional students place greater importance on how well a course is designed, especially with regard to course expectations and instructions for accessing resources and support [12]. Two past evaluations have found that services are needed to support learners based on their individual needs and that most students valued peer interaction and instructor feedback [21, 22]. Our experience using REAP to identify learning activities for participants led us to recognize that there is still a lack of online courses that take full advantage of the pedagogical and technological resources available to engage learners; this gap needs to be addressed in order to meet the needs presented in the literature as well as the priorities voiced by STAR participants.

While this paper provides a practical example of how a structured evaluation tool can be used to codify learning activities, it is not without limitations. Principally, the activities in the STAR database do not represent the plethora of global health resources available, but rather a subset of activities that were chosen for STAR participants based on their individual goals and work needs. As such, this paper is not intended to be a comprehensive evaluation of global health learning opportunities, but rather a case example presenting how an innovative structured evaluation approach to identifying appropriate learning opportunities for diverse learners can be implemented as part of a global health training program. An additional limitation is that, while we integrated an intensive qualitative approach to ensuring the rigor and consistency of the content in this database, additional quantitative validity checks were beyond the scope of this project and thus present an important opportunity for further research.

Conclusion

This paper provides a description of our development and use of the REAP framework as well as the results of our initial experience utilizing it in the STAR project. While the REAP tool does not serve as a formal assessment tool to determine which courses are “better” or “worse” (and thus cannot provide an approach to ranking activities), as a project team we found it useful for rapidly identifying the core characteristics of an activity in a systematic, standardized way and matching these with participants who required particular content, interaction with faculty and peers, flexibility in delivery mode, and applicability to their job and career goals. Such tools will continue to become more valuable as the quantity and diversity of global health learning activities continues to expand. We hope that our experience developing and using REAP within the context of the STAR project can be a valuable example for other global health training programs to adapt and learn from.

Availability of data and materials

The data utilized for this analysis are available as Additional file 1 to this paper, in accordance with the United States Agency for International Development (USAID)'s Public Access Plan.

Abbreviations

AFRO:

African Regional Office

ERIC:

Education Resources Information Center

EURO:

European Regional Office

ILP:

Individualized learning plan

LMIC:

Low- and Middle-Income Countries

MEL:

Monitoring, evaluation, and learning

MOOC:

Massive Open Online Courses

PAHO:

Pan American Health Organization

QM:

Quality Matters

REAP:

Relevance, Engagement, Access, and Pedagogy

STAR:

Sustaining Technical and Analytic Resources

USAID:

United States Agency for International Development

References

  1. Koplan JP, Bond TC, Merson MH, Reddy KS, Rodriguez MH, Sewankambo NK, et al. Towards a common definition of global health. Lancet. 2009;373(9679):1993–5. https://doi.org/10.1016/S0140-6736(09)60332-9.

  2. Jogerst K, Callender B, Adams V, Evert J, Fields E, Hall T, et al. Identifying interprofessional global health competencies for 21st-century health professionals. Ann Glob Health. 2015;81(2):239–47. https://doi.org/10.1016/j.aogh.2015.03.006.

  3. Abayomi A, Gevao S, Conton B, Deblasio P, Katz R. African civil society initiatives to drive a biobanking, biosecurity and infrastructure development agenda in the wake of the west African Ebola outbreak. Pan Afr Med J. 2016;24:270.

  4. Cole DC, Davison C, Hanson L, Jackson SF, Page A, Lencuch R, et al. Being global in public health practice and research: complementary competencies are needed. Can J Public Health. 2011;102(5):394–7. https://doi.org/10.1007/BF03404183.

  5. Cherniak W, Nezami E, Eichbaum Q, Evert J, Doobay-Persaud A, Rudy S, et al. Employment Opportunities and Experiences among Recent Master's-Level Global Health Graduates. Ann Glob Health. 2019;85(1):1-9.

  6. Nakanjako D, Namagala E, Semeere A, Kigozi J, Sempa J, Ddamulira JB, et al. Global health leadership training in resource-limited settings: a collaborative approach by academic institutions and local health care programs in Uganda. Hum Resour Health. 2015;13(1):87. https://doi.org/10.1186/s12960-015-0087-2.

  7. Madhok R, Frank E, Heller R. Building public health capacity through online global learning. Open Praxis. 2018;10(1):91–7. https://doi.org/10.5944/openpraxis.10.1.746.

  8. Douglass K, Jaquet G, Hayward A, Dreifuss B, Tupesis J. Development of a Global Health milestones tool for learners in emergency medicine: a pilot project. Soc Acad Emerg Med. 2017;1(4):269-79.

  9. Aragon SR, Johnson SD. An Instructional Strategy Framework for Online Learning Environments. Proc Acad Human Res Dev. 2002:1022–9.

  10. Huang H-M. Toward constructivism for adult learners in online learning environments. Br J Educ Technol. 2002;33(1):27–37. https://doi.org/10.1111/1467-8535.00236.

  11. Merrill MD. First principles of instruction: identifying and designing effective, efficient and engaging instruction. Hoboken: Wiley; 2013.

  12. Hixon E, Barczyk C, Ralston-Berg P, Buckenmeyer J. Online course quality: what do nontraditional students value? Online J Dist Learn Adm. 2016;XIX(4). https://www.westga.edu/~distance/ojdla/winter194/hixon_barczyk_ralston-berg_buckenmeyer194.html.

  13. MarylandOnline. Quality Matters Rubrics and Standards. Annapolis: MarylandOnline; 2016–2020.

  14. California State University at Chico. Exemplary Online Instruction. Chico: California State University; 2019.

  15. California State University, Chico. Rubric for Online Instruction. 2009. https://www.csuchico.edu/eoi/_assets/documents/rubric.pdf.

  16. CEHD's Digital Education and Innovation Team. The Check: A Guide to Online Course Design. Minneapolis: University of Minnesota; 2017.

  17. Hansoti B, Schleiff M, Akridge A, Dolive C, Gordon A, Rodriguez D, et al. Developing a high-impact learning program for Global Health professionals: the STAR project. J Pedagogy Health Promot. 2020;6(1):23–30. https://doi.org/10.1177/2373379919898484.

  18. Cook DA, Kuper A, Hatala R, Ginsburg S. When assessment data are words: validity evidence for qualitative educational assessments. Acad Med. 2016;91(10):1359–69. https://doi.org/10.1097/ACM.0000000000001175.

  19. Keeley T, Al-Janabi H, Lorgelly P, Coast J. A qualitative assessment of the content validity of the ICECAP-A and EQ-5D-5L and their appropriateness for use in health research. PLoS One. 2013;8(12):e85287. https://doi.org/10.1371/journal.pone.0085287.

  20. FitzPatrick B. Validity in qualitative health education research. Curr Pharm Teach Learn. 2019;11(2):211–7. https://doi.org/10.1016/j.cptl.2018.11.014.

  21. Ehlers U. Quality e-learning from a Learner's perspective. Essen: Des jalons dans la formation à distance (Milestones in Distance Education); 2004.

  22. Young A, Norgard C. Assessing the quality of online courses from the Students' perspective. Internet High Educ. 2006;9(2):107–15. https://doi.org/10.1016/j.iheduc.2006.03.001.

Acknowledgements

The co-authors appreciate the support from the Sustaining Technical and Analytic Resources (STAR) project to undertake the work to develop the REAP tool and the database of learning activities leading to the data utilized in this paper. Further, we wish to thank STAR staff, including Melanie Atwell and Ashlee Walker, for their input on the STAR participant process and for developing figures utilized in the paper, respectively.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The STAR project is supported through Cooperative Agreement No. 7200AA18CA00001 by the United States Agency for International Development (USAID).

Author information

Authors and Affiliations

Authors

Contributions

MS and BH conceived the study design. MS, BH, LJ, and CD developed the tools and database utilized in this paper and drafted portions of the manuscript’s content. EH and AM conducted the analysis. All authors had a role in the synthesis and review of this manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Meike Schleiff.

Ethics declarations

Ethics approval and consent to participate

All STAR participants represented in this study agreed to have their data included in our analysis. Ethical approval was sought and received from the institutional review boards of the Public Health Institute (IRB #I19–022) and the Johns Hopkins Bloomberg School of Public Health (IRB00011259).

Consent for publication

Not applicable.

Competing interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Annex 1

Table 6 Summary of key standards of commonly used course assessment rubrics

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Schleiff, M., Hahn, E., Dolive, C. et al. REAPing the benefits: development and use of a structured evaluation framework to codify learning resources for Global Health professionals. BMC Med Educ 21, 374 (2021). https://doi.org/10.1186/s12909-021-02805-6
