Our data indicate that the online formative assessments in Phase 1 of our undergraduate Medicine program have been effective in promoting student learning. Students who participated in the formative assessments were likely to achieve higher marks in EOC examinations. Better performance in each formative assessment was also consistently associated with higher marks in the corresponding EOC examination. There was a trend, although not consistent across all courses, for first-year students to derive greater benefit from the formative assessments at the commencement of their program, consistent with the notion that such assessments provide a "safety net" for novices in student-centred learning.
We believe this report provides much-needed evidence of a quantifiable effect of online formative assessments on learning. Our findings contrast with two recent studies in related settings. The first demonstrated the value of online formative quizzes in improving preparation for and participation in classes, but reported no effect on summative examination results. The second was a randomized controlled trial of online formative assessments for medical students in clinical clerkships, which found no positive effect on learning. It should be noted that the latter investigation, in contrast to our study, failed to recruit a sufficient number of participating students.
Our findings also contrast with reports suggesting that online formative assessments utilising objective items such as multiple choice questions have no effect on student learning outcomes [23, 25], or even a negative effect on learning. The authors of the latter study asserted that multiple choice questions may be unsuitable for formative assessment because the "lures" or distractors can create "false knowledge". However, these adverse findings were based on the use of multiple choice questions without feedback. In that context, it is not surprising that incorrect answers could be "learned".
Importantly, our results substantially extend the observations of Krasne et al., who found that untimed "open-book" formative assessments were good predictors of performance in a summative examination, possibly because of factors such as reduced pressure compared with a conventional examination format and an emphasis on assessing higher order learning (e.g. application, evaluation, self-direction). Although our formative assessments employed multiple choice questions (as well as short-answer questions), they were in many respects similar in character to the "open-book" assessments described by Krasne and colleagues. Unlike those assessments, however, ours were integrated across disciplines, broader in scope, available for a longer period of time, and embedded throughout a program of study. These factors are likely to have increased their efficacy. Furthermore, recent data on the application of test-enhanced learning in higher education validate our use of a combination of short-answer questions and multiple choice questions with immediate feedback to promote retention of knowledge.
Student perceptions of the formative assessments were uniformly favourable, with consistent and increasingly positive evaluations in online feedback surveys. Correspondingly, there were high participation and repetition rates in the online assessments for each course. The fact that students on average completed the formative assessment for each course in less than one hour might have contributed to the assessments' popularity, because they were perceived as an efficient means of study by time-poor students.
Our systematic approach to the design, development, implementation and continual improvement of online formative assessments is likely to have played a role in students' perceptions of the assessments, as well as in their positive effect on student learning. For example, as part of a continuous improvement cycle, all items used in formative assessments were analysed with regard to difficulty, discrimination coefficient and correlation with overall assessment outcome. Items that correlated poorly were edited or eliminated from the next cycle, while concepts that proved difficult for students to comprehend were flagged for course conveners.
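To illustrate the kind of item analysis described above, the following Python sketch computes, for each item in a binary (correct/incorrect) response matrix, a difficulty index (proportion of students answering correctly) and a point-biserial discrimination index (correlation between the item score and the total score on the remaining items). The function name and data layout are our own illustrative assumptions, not the instrument actually used in the study.

```python
from statistics import mean, pstdev

def item_analysis(responses):
    """Basic item analysis for a 0/1 response matrix.

    responses: list of rows, one per student; each row is a list of
    0/1 item scores. Returns one dict per item with its difficulty
    (proportion correct) and point-biserial discrimination index
    (correlation of the item score with the rest-of-test score).
    """
    n_items = len(responses[0])
    results = []
    for j in range(n_items):
        item = [row[j] for row in responses]
        # Total score excluding this item, to avoid inflating the correlation.
        rest = [sum(row) - row[j] for row in responses]
        difficulty = mean(item)
        sd_item, sd_rest = pstdev(item), pstdev(rest)
        if sd_item == 0 or sd_rest == 0:
            disc = 0.0  # no variance: the index is undefined, treat as non-discriminating
        else:
            cov = mean(x * y for x, y in zip(item, rest)) - mean(item) * mean(rest)
            disc = cov / (sd_item * sd_rest)
        results.append({"difficulty": difficulty, "discrimination": disc})
    return results

# Example: four students, three items.
stats = item_analysis([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
```

In a continuous improvement cycle such as the one described, items with a low or negative discrimination index would be candidates for editing or removal, while very low difficulty values would flag concepts that students found hard to comprehend.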
A limitation of our study is that it cannot establish whether the relationship between participation and/or performance in online formative assessments and EOC examination marks is causal. It might be that "better", more highly motivated students were more likely to undertake the formative assessments [12, 16]. In our study, multiple attempts at each formative assessment were not associated with higher EOC examination marks, which might suggest that EOC examination performance was primarily influenced by the inherent characteristics of the students rather than by the effects of formative assessment with feedback. Nevertheless, although one reported study demonstrated no relationship between the effect of online formative assessments and overall student performance in a program as measured by grade point average, the design of our study cannot exclude such a relationship. Proving a causal relationship would require a design in which students within a cohort were randomly assigned to a "control group". This would be inequitable, because those students would be deprived of the opportunity to undertake formative assessments during the trial period.
Implications for practice and future research
The results of our study reinforce the impact of well-designed online formative assessments on learning. The highly computer-literate students at whom these assessments were targeted, a population for whom web delivery of learning materials and resources is now the default, expressed a very high level of satisfaction. This could in part reflect the graphically intensive approach we used, particularly in visual disciplines such as Anatomy and Pathology, which would be difficult to match in any other mode of delivery. Feedback surveys provided evidence that students pursued further reading and investigated linked resources, so the assessments clearly served their purpose of provoking further thought about the topics.
Thus, while the effort and expense involved in this enterprise have been considerable, the investment is clearly justifiable: the assessments had a high take-up rate and evidently contributed to better learning outcomes for students.
These findings have important implications not only for the education of junior medical students but also for the continuing education of senior medical students in clinical attachments and, especially, of junior doctors and specialist trainees. The latter two groups, who are notoriously time-poor, might be attracted to well-packaged formative assessments that they could undertake at their convenience, and might derive considerable benefit from non-threatening feedback on their knowledge and clinical decision-making.
From a research perspective, a question that remains of interest to us is whether the learning benefits of online formative assessments for junior medical students, which we have demonstrated, persist into the senior years of medical programs, particularly with respect to understanding of the biomedical sciences. It would also be of interest to determine whether our formative assessments have diagnostic value: are students who perform poorly in formative assessments at their first attempt more likely to fail EOC examinations?