Learner feedback and educational outcomes with an internet-based ambulatory curriculum: a qualitative and quantitative analysis
© Sisson et al; licensee BioMed Central Ltd. 2012
Received: 31 January 2012
Accepted: 12 July 2012
Published: 12 July 2012
Online medical education curricula offer new tools to teach and evaluate learners. The effect on educational outcomes of using learner feedback to guide curricular revision for online learning is unknown.
In this study, qualitative analysis of learner feedback gathered from an online curriculum was used to identify themes of learner feedback, and changes to the online curriculum in response to this feedback were tracked. Learner satisfaction and knowledge gains were then compared from before and after implementation of learner feedback.
37,755 learners from 122 internal medicine residency training programs were studied, including 9437 postgraduate year (PGY)1 residents (24.4 % of learners), 9864 PGY2 residents (25.5 %), 9653 PGY3 residents (25.0 %), and 6605 attending physicians (17.0 %). Qualitative analysis of learner feedback on how to improve the curriculum showed that learners commented most on the overall quality of the educational content, followed by specific comments on the content. When learner feedback was incorporated into curricular revision, learner satisfaction with the instructive value of the curriculum (1 = not instructive; 5 = highly instructive) increased from 3.8 to 4.1 (p < 0.001), and knowledge gains (i.e., post test scores minus pretest scores) increased from 17.0 % to 20.2 % (p < 0.001).
Learners give more feedback on the factual content of a curriculum than on other areas such as interactivity or website design. Incorporating learner feedback into curricular revision was associated with improved educational outcomes. Online curricula should be designed to include a mechanism for learner feedback and that feedback should be used for future curricular revision.
Keywords: Online education, Curriculum development, Feedback, Learner satisfaction
One of the first steps in curriculum development is a needs assessment of targeted learners [1, 2]. Kern et al. state that a curriculum that does not address the needs of its learners risks being inefficient or ineffective. The role of a needs assessment does not vanish once a curriculum has been implemented or moved online. However, many online curricula have been developed without attention to the principles of curriculum development [3, 4]. This is unfortunate, because the Internet adds the capability of efficiently performing recurrent needs assessments of its learners. By incorporating such assessment into the design of online curricula, educators can easily determine whether the needs of learners have been met and gauge program effectiveness. This information then serves as the needs assessment for the next round of curricular revision. Many online curricula, however, do not include outcomes assessment and risk becoming out of date [4, 6]. Since a curriculum’s goals and objectives will likely evolve over time as learning needs change, these cycles of outcomes assessment, needs assessment, and curriculum revision become important to the long-term success of a curriculum.
In addition to enhanced capabilities with outcomes assessment, the Internet offers tools that can be used when educating physicians. As Cook and Dupras state: “The most effective websites creatively integrate content with the power and flexibility of the Web to enhance learning rather than merely replicate traditional methods.” Interactivity is one tool available to Web educators that cannot be replicated in a textbook [5, 8, 9]. The Internet also allows for the incorporation of multimedia into content, including audio narration, animation, and video clips [7, 10, 11]. Hyperlinks can be used to augment content, directing learners to source material or additional resources. The impersonality of online training can be minimized by allowing for communication with instructors or other learners [5, 7, 9, 11, 12].
With all that is possible with online education, much has been written about what online learners prefer. Atreja et al. showed that the best predictors of satisfaction among the web-based learners they studied were the appeal of the website design and the ability of the course to improve subject understanding. They also found that the ability to communicate with instructors and other learners was associated with greater satisfaction. Others have shown that learner satisfaction with web-based education increases when an online curriculum is interactive, with a user-friendly interface that is easy to navigate. Still others have shown that the quality of the online educational content is most important to learners. Many of the features made possible by online learning increase learner satisfaction, but at the expense of an increased time commitment among learners.
Internet-based educational programs should be designed and studied to determine those features that learners prefer, ideally incorporating this information into a needs assessment for curricular revision to improve educational outcomes [2, 11, 14, 15]. Heeding that call, we describe here learner feedback on a widely-used online curriculum on ambulatory care and evaluate associated changes in learner satisfaction and educational outcomes.
Study data were generated by the Internal Medicine Curriculum on the Johns Hopkins Internet Learning Center, an educational website established in 2002 and offered to internal medicine residency training programs that subscribe for an annual fee. Residents and faculty at subscribing programs can then access educational content, which is divided into topic-specific training modules structured in a pretest-didactics-post test format. Learners comment on the website and curriculum on general message boards contained on the Internet Learning Center website. Starting in the 2003/04 academic year, learners were required to rate the instructive value of each module on a Likert scale (1 = not instructive; 5 = highly instructive). Starting in the 2005/06 academic year, learners were also required to give feedback on each training module by answering the following in free text: “Please tell us how to improve this module”. Learners could enter more than one comment or even nonsensical comments. For this study, feedback data (i.e., module instructive value ratings and free text comments) were analyzed from the 2003/04 through the 2007/08 academic years for the 19 training modules that were part of the curriculum in each of those years. For feedback comments, 1000 comments were randomly selected from each of academic years 2005/06, 2006/07, and 2007/08. Likert ratings on instructive value from all learners who completed any of the 19 training modules were included for study. The study was approved by the Johns Hopkins Institutional Review Board.
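The per-year random sampling of comments described above can be sketched in a few lines. This is only an illustration; the function and variable names are ours, since the paper does not publish its analysis code.

```python
import random

def sample_comments(comments_by_year, n=1000, seed=42):
    """Draw a fixed-size random sample of feedback comments per academic year.

    comments_by_year: dict mapping academic year -> list of comment strings.
    Sampling is without replacement, mirroring the study's selection of
    1000 comments from each year's pool.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return {year: rng.sample(comments, n)
            for year, comments in comments_by_year.items()}

# Toy data: three academic years with 5000 synthetic comments each
toy = {y: [f"comment-{y}-{i}" for i in range(5000)]
       for y in ("2005/06", "2006/07", "2007/08")}
sample = sample_comments(toy, n=1000)
print({y: len(c) for y, c in sample.items()})
```

A fixed random seed is worth noting in any such protocol, since it lets the sampled subset be reconstructed for audit.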
Three team members (SS, DR, and MH) read all 3000 feedback comments to identify main themes. These themes were used to develop a coding sheet that classified comments on a goodness/badness, quality/quantity scale and by the aspect of the curriculum (e.g., questions, content, or website function) each comment addressed. The goodness/badness, quality/quantity scale categorized comments as “bad”, “good”, “both good and bad”, “neither good nor bad”, “too short/too few”, “too long/too many”, “quantity just right”, or “additional requested features”. On the curricular aspect/website function scale, comments were categorized as no meaningful response, overall quality of the module, factual content of module, content organization, module images, module length, question quality, question quantity, or website function. The same three team members then re-read each feedback comment and coded it on each of the two scales. Unanimity among all three coders was required; when unanimity was not present, feedback comments were discussed until consensus was reached.
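A minimal sketch of this consensus-coding workflow follows, assuming hypothetical helper names and toy code labels; it flags any comment whose three ratings are not unanimous for discussion, then tallies the final codes.

```python
from collections import Counter

def needs_discussion(codes):
    """Return True when the three raters' codes for a comment are not unanimous."""
    return len(set(codes)) != 1

def theme_frequencies(final_codes):
    """Tally consensus codes across all comments for the frequency analysis."""
    return Counter(final_codes)

# Toy example: two comments, each coded by three raters
ratings = [("good", "good", "good"), ("bad", "good", "bad")]
flags = [needs_discussion(r) for r in ratings]
print(flags)  # first comment is unanimous, second needs discussion
```

Requiring unanimity (rather than majority vote) before accepting a code is the stricter of the two common approaches, and matches the process the authors describe.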
Average change in scores was calculated by subtracting aggregate pretest scores from aggregate post test scores for each module. Reliability testing was done on all pretests and post tests by performing item discrimination on each item and calculating Cronbach’s alpha for each test. Content of pretests and post tests underwent face validity testing, and test results were evaluated by training year to assess construct validity. Reliability and validity testing on this curriculum has been reported elsewhere. These score changes were then weighted by the number of learners for each module in a given academic year, and the average change in scores was determined across all modules in that year. We used frequency analyses to describe the respondent population and comment themes, Student’s t-test to compare average module ratings, and one-way ANOVA to compare average change in module score. Tests of significance were two-tailed, with an alpha level of 0.05. We performed analyses using SPSS, version 18.0 (Chicago, IL).
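The learner-weighted knowledge-gain calculation described above can be expressed compactly. This is a sketch with illustrative names and toy numbers, not the authors' SPSS procedure.

```python
def weighted_knowledge_gain(modules):
    """Learner-weighted average knowledge gain for one academic year.

    modules: list of (pretest_mean, posttest_mean, n_learners) tuples,
    one per training module. Each module's gain (post test minus pretest)
    is weighted by the number of learners who completed it.
    """
    total_n = sum(n for _, _, n in modules)
    return sum((post - pre) * n for pre, post, n in modules) / total_n

# Toy data for one academic year: two modules
year = [(60.0, 75.0, 200), (55.0, 77.0, 100)]
print(round(weighted_knowledge_gain(year), 1))  # (15*200 + 22*100)/300 = 17.3
```

Weighting by learner count keeps a lightly used module from pulling the yearly average as hard as a heavily used one, which is the point of the weighting step in the text.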
Table 1. Description of learners, by year of training: all learners N (%) and learners included in the qualitative feedback analysis N (%)
Website capability to gather qualitative feedback from learners was added in the 2005/06 academic year, and was a mandatory component of module completion. For the purposes of this study, a subset of 1000 comments was gathered from each of the first three academic years that qualitative feedback was available. The training year or attending status of learners used in the random sample of 3000 feedback comments is shown in Table 1.
Overall quality of module
· Awesome module
· A very good learning module
· Proofread it; there are numerous grammatical errors throughout the module
· Found this module to be minimally helpful
· Some mistakes
· Good review on diabetes treatment
· Good review of cancer screening
· Would like to hear more about recent studies
· Include trade names with generics to make it easier to comprehend
· The use of bisphosphonates was not entirely clear
· Liked the diagrams
· This module was done very well in a stepwise manner
· Liked the format of the clinical cases as well as the pretest/post test organization
· More printable tables
· More charts
· Summarize “red flags” of back pain somewhere in the module
· This module could be improved by having pictures
· More pictures
· Pictures could have been included (e.g., tonsillar exudates, local GABHS complications, scarlet fever rash)
· Very good set of questions
· Good questions
· Very complete in its questions
· The pretest questions were poorly worded
· Very poor questions; answers do not correspond with information given in the module
· You can add more clinical settings to the stem of the questions
· Increase practice questions
· Ask more questions
· Less questions
· Make explanations more concise
· Too long
· Was just too lengthy
· Good organized feedback
· Easy to use
· I like that we can download summaries in PDF
· The case answer was different than the correct answer
· I wasn’t receiving credit for correct questions during the cases
· The “pop up” had the wrong answer
Additional features requested
· Try to time the modules
· I think you should give an idea of how many questions are in a section
· Brief reading material before the start of the module
Learner comments on module questions most commonly pertained to the number of questions (usually asking for more of them) followed by complaints about the quality of the questions. For those learners commenting on website function, the majority requested additional website features, including a timer function and a navigation map for each module. The percentage of comments on topics other than the overall quality or content of the modules was low.
Response to feedback
Summary of curricular and website changes
Changes (year added)
· Increased number of tables (all years)
· Increased number of figures/algorithms (all years)
· Increased number of images (all years)
· Increased number of questions on pre/post tests (all years)
· Added explanatory text on correct/incorrect answers on pretest/post test (2007)
· Added recap statements at end of didactic sections (all years)
· Added navigation map to each module (2007)
· Changed didactic section from program control to learner control (2007)
· Added video content capability (2005)
· Added 1-page printable summaries for completed modules (2006)
Changes in education outcomes
Using qualitative feedback from learners as a needs assessment to guide curricular revision and website design is associated with greater learner satisfaction and larger gains in knowledge. Qualitative learner feedback on desired improvements centers mostly on the general quality of the module rather than any specific component of the content or website function. When making specific comments, the greatest number of comments is on the educational content of the curriculum, particularly its factual elements. A small proportion of learners request reorganization of content, typically to include more tables, charts, figures, and images. Those who comment on practice questions typically want more of them. A small number of learners comment on website function, usually to request additional features.
While others have commented that taking bad educational content and putting it online doesn’t improve learner satisfaction, we found that responding to learner feedback on content that is generally rated as good and reorganizing that content (i.e., adding tables, figures, and images) and changing website design/function in response to specific feedback improved learner satisfaction. What we found demonstrates the value of recurrent needs assessment in curricular revision. Needs assessment should be done when a curriculum is first developed, but since learners might not fully know their needs until exposed to subject matter, and learner needs evolve, recurrent needs assessment should be built into online curricular design [1, 2, 4, 18]. Over time, our curricular content became more visual (through the addition of tables, figures, and images) and more interactive (through the addition of more questions and greater opportunity for feedback). With these changes, satisfaction among our learners improved. Getting learners to repeatedly access an online curriculum requires that they be satisfied with it [8, 18]. Our website design, including a strong evaluation component, and our process of using this information as a recurrent needs assessment, is one way to increase the chances of success of an online curriculum.
While placing a curriculum online offers additional tools to enhance content (i.e., interactivity, hyperlinks, audio and video), our results confirm what others have shown: the quality of educational content is what matters most to learners [7, 12, 13]. The majority of learner comments on the curriculum were related to the overall quality of the modules and their educational content, suggesting that the educational quality of the content was most important to learners. Learner satisfaction is also strongly associated with knowledge gains. In a meta-analysis of studies of predictors of learner satisfaction done before the Internet was widely used, learner satisfaction with a curriculum was determined most by how much learners felt they got out of the educational content (i.e., “teaching effectiveness”). In our study, we used the average change in scores on a module (i.e., post test score minus pretest score) as a measure of teaching effectiveness. We found that by responding to learner feedback, the teaching effectiveness of our modules improved. Since teaching effectiveness is such a driver of learner satisfaction, it is not surprising that our results showed similar findings in teaching effectiveness and learner satisfaction. Although we are unable to determine which changes (i.e., website functionality, content organization, etc.) contributed to these improvements, it may not matter. The more important finding is that the process of incorporating learner feedback into curricular revision is associated with improvements in both learner satisfaction and knowledge outcomes.
It is our opinion that online curricula should be designed to include powerful evaluation tools (including learner satisfaction as described here) to assess the reliability and validity of education outcomes. We have continually leveraged the capabilities of being online to increase the evaluation component of our curriculum. Assessment tools (i.e., pretests and post tests) were expanded, and item discrimination and Cronbach’s alpha calculations were added to these instruments. Group performance measures, subgroup (i.e., training year; training program) performance measures, and individual performance measures (including standard scores) were added or expanded. This evaluation component allowed us not only to perform a continuous needs assessment of our learners, but also to serve the needs of the residency program directors that chose to implement our curriculum. The Accreditation Council for Graduate Medical Education (ACGME) requires that program directors demonstrate the educational outcomes of their curricula, which our curriculum provides at both the group and individual level. Use of our curriculum has grown from 24 internal medicine residency programs in 2002 to over 160 programs in 2011. As others have pointed out, tracking evaluation outcomes and learner feedback consumes resources, as does the resultant curricular revision and implementation of website improvements, but these costs are minimized by sharing resources among users [4, 5]. Developing a strong evaluation component to online curricula allows educators to advance the science of education and answer the call to increase evaluative research on education outcomes and those features that improve them [15, 21, 22].
Our study has several limitations. Individual learners changed from year to year, and so changes in learner satisfaction and knowledge outcomes may have been due to learner characteristics (e.g., post-graduate year, program type) rather than the curriculum. Since several changes to curricular content and website design were made in any given year, we were unable to measure the impact of any single change on learner satisfaction or knowledge outcomes. Over the years of study, online education resources expanded greatly, and learners may in general have become more satisfied with online didactics independent of our changes. Also over the years of study, clinician educators who wrote the educational content may have improved their writing skills, contributing to improvement in the educational content of the curriculum.
We found that when asked to give feedback on how to improve an online curriculum, learners commented most on its educational content rather than its interactivity or website design. We also found that a process of recurrent needs assessment followed by curricular revision is associated with improved learner satisfaction and larger gains in knowledge. Online educational curricula should be designed to include evaluative tools that measure educational outcomes so that those outcomes can be studied and improved.
ACGME: Accreditation Council for Graduate Medical Education.
- Kern DE, Thomas PA, Hughes MT: Curriculum development for medical education: a six-step approach. 2009, Johns Hopkins University Press, Baltimore
- Lieff SJ: Evolving curriculum design: a novel framework for continuous, timely, and relevant curriculum adaptation in faculty development. Acad Med. 2009, 84: 127-134. 10.1097/ACM.0b013e3181900f4b
- Harden RM: A new vision for distance learning and continuing medical education. J Cont Educ Health Prof. 2005, 25: 43-51. 10.1002/chp.8
- Sisson SD, Hill-Briggs F, Levine D: How to improve medical education website design. BMC Med Educ. 2010, 10: 30. 10.1186/1472-6920-10-30
- Ruiz JG, Mintzer MJ, Leipzig RM: The impact of e-learning in medical education. Acad Med. 2006, 81: 207-212. 10.1097/00001888-200603000-00002
- Candler CS, Uijdehaage SHJ, Dennis SE: Introducing HEAL: the health education assets library. Acad Med. 2003, 78: 249-253. 10.1097/00001888-200303000-00002
- Cook DA, Dupras DM: A practical guide to developing effective web-based learning. J Gen Intern Med. 2004, 19: 698-707. 10.1111/j.1525-1497.2004.30029.x
- Wong G, Greenhalgh T, Pawson R: Internet-based medical education: a realist review of what works, for whom, and in what circumstances. BMC Med Educ. 2010, 10: 12. 10.1186/1472-6920-10-12
- Chumley-Jones HS, Bobbie A, Alford CL: Web-based learning: sound educational method or hype? A review of the evaluation literature. Acad Med. 2002, 77 (suppl): S86-S93. 10.1097/00001888-200210001-00028
- Cook DA, Levinson AJ, Garside S: Time and learning efficiency in internet-based learning: a systematic review and meta-analysis. Adv in Health Sci Educ. 2010, 15: 755-770. 10.1007/s10459-010-9231-x
- Curran VR, Fleet L: A review of evaluation outcomes of web-based continuing medical education. Med Educ. 2005, 39: 561-567. 10.1111/j.1365-2929.2005.02173.x
- Atreja A, Mehta NB, Jain AK, Harris CM, Ishwaran H, Avital M, Fishleder AJ: Satisfaction with web-based training in an integrated healthcare delivery network: Do age, education, computer skills and attitudes matter? BMC Med Educ. 2008, 8: 48. 10.1186/1472-6920-8-48
- Sargeant J, Curran V, Jarvis-Selinger S, Ferrier S, Allen M, Kirby F, Ho K: Interactive on-line continuing medical education: Physicians’ perceptions and experiences. J Contin Med Educ Health Prof. 2004, 24: 227-236. 10.1002/chp.1340240406
- Kassebaum DG: The measurement of outcomes in the assessment of educational program effectiveness. Acad Med. 1990, 65 (5): 293-296. 10.1097/00001888-199005000-00003
- Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin MJ, Montori VM: Internet-based learning in the health professions. JAMA. 2008, 300 (10): 1181-1196. 10.1001/jama.300.10.1181
- Levinson AJ: Where is evidence-based instructional design in medical education curriculum development? Med Educ. 2010, 44: 536-537. 10.1111/j.1365-2923.2010.03715.x
- Sisson SD, Dalal D: Internal medicine residency training on topics in ambulatory care: a status report. Am Jour Med. 2011, 124: 86-90. 10.1016/j.amjmed.2010.09.007
- Sisson SD, Hughes MT, Levine D, Brancati FL: Effect of an internet-based curriculum on post-graduate education: A multicenter intervention. J Gen Intern Med. 2004, 19: 503-507
- Gold JP, Begg WB, Fullerton DA, Mathisen DJ, Orringer MB, Verrier ED: Evaluation of web-based learning tools: Lessons learned from the thoracic surgery directors association curriculum project three-year experience. Ann Thorac Surg. 2005, 80: 802-810. 10.1016/j.athoracsur.2005.03.052
- Cohen PA: Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Rev Educ Res. 1981, 51 (3): 281-309
- Accreditation Council for Graduate Medical Education. ACGME Outcome Project. 2012, Accessed at http://www.acgme.org
- Sisson SD: Online instruction: time to grow. JGIM. 2002, 17: 574. 10.1046/j.1525-1497.2002.20513.x
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/12/55/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.