Certainty rating in pre- and post-tests of study modules in an online clinical pharmacy course - A pilot study to evaluate teaching and learning

Abstract

Background

Graduate and post-graduate education for health professionals is increasingly delivered in an e-learning environment, where automated, continuous formative testing with integrated feedback can guide students’ self-assessment and learning. Asking students to rate the certainty they assign to the correctness of their answers to test questions can potentially provide deeper insights into the success of teaching, with test results informing course designers whether learning outcomes have been achieved. It may also have implications for decision making in clinical practice.

Methods

A study of pre- and post-tests for five study modules was designed to evaluate the teaching and learning within a pharmacotherapeutic course in an online postgraduate clinical pharmacy program. Certainty-based marking of multiple choice questions (MCQs) was adapted for formative pre- and post-study module testing by asking students to rate their certainty of the correctness of their MCQ answers. Paired t-tests and a coding scheme were used to analyse changes in answers and certainty between pre- and post-tests. A survey evaluated students’ experience with the novel formative testing design.

Results

Twenty-nine pharmacists enrolled in the postgraduate program participated in the study. Overall, 1315 matched pairs of MCQ answers and certainty ratings between pre- and post-module tests were available for evaluation. Most students identified correct answers in post-tests and increased their certainty compared to pre-tests. Evaluating certainty ratings in addition to the correctness of answers identified for course designers MCQs and topic areas requiring revision. A survey of students showed that assigning certainty ratings to their answers assisted in structuring and focusing their learning throughout the online study modules, facilitating identification of areas of uncertainty and gaps in their clinical knowledge.

Conclusions

Adding certainty ratings to MCQ answers seems to engage students with formative testing and feedback and focus their learning in a web-based postgraduate pharmacy course. It also offers deeper insight into the successful delivery of online course content, identifying areas for improvement of teaching and content delivery as well as test question design.

Background

Continuing professional development, graduate and postgraduate programs in health sciences and clinical education in Australia and many other countries are increasingly delivered online to accommodate the needs of adult, professional learners and address their expectations to be able to work, study and learn wherever and whenever they choose [1]. Evidence for comparable learning outcomes between online, internet-based and face-to-face course delivery has generally been established, although strategies for successful e-teaching and e-learning design and their effective implementation are still emerging [2–5]. A systematic review of internet-based learning (IBL) in health profession education identified teaching strategies with a positive impact on learning outcomes, namely interactivity, practice exercises, repetition and feedback [6].

Formative assessment is regarded as essential in providing opportunities for learners to develop self-assessment and self-regulation skills [7, 8], optimise learning [9, 10] and prepare for summative assessment [11]. It offers guidance in structuring their learning to adult, postgraduate e-learners who often study in relative isolation, asynchronously to others. Using assessment results to improve teaching practices, and assessing the assessment itself, can assist designers of e-learning for health professionals in meeting the challenge of developing courses which are student-centred, relevant and applicable to learners who bring varying priorities to their course of study [12]. Continuous monitoring and evaluation of students’ results in formative tests allows for timely adjustment of learning content and delivery, as well as assessment tasks, to optimise student learning [13, 14].

One convenient strategy for formative assessment within the virtual learning environment (VLE) is the use of tests of multiple choice questions (MCQs) at the completion of learning modules [15]. MCQs have been validated as an assessment method in health sciences and clinical education, with diligent design contributing to test reliability and validity and to the assessment of critical thinking [16–20]. The use of context-rich MCQs which test the application of clinical and therapeutic knowledge after educational activities promotes retention and application of knowledge [21].

The Postgraduate Clinical Pharmacy Programs (PCPP) at the University of Queensland (UQ), Australia, are delivered via a virtual learning platform, Blackboard® (Blackboard Inc., Washington DC, USA), and offer practicing pharmacists from Australia and other countries the opportunity to attain a postgraduate degree at Diploma or Master’s level via coursework. The program is structured into courses comprising learning modules. These modules offer a wide range of learning content and activities to accommodate practicing pharmacists’ varying professional experience and backgrounds, scopes of practice and technological expertise. Module content is designed to build on pharmacists’ varying degrees of baseline clinical skills and knowledge, engaging them in critical thinking, reflection on their practice and discussion of changes in clinical evidence and recent controversies. The program emphasises the teaching strategies delineated in Cook’s review [6], with formative assessment and feedback the focus of this evaluation.

Formative post-module MCQs have always been an integral component of the online therapeutics courses in the PCPP, encouraging self-assessment of learning and preparing students for an open-book, computer-based, end-of-course MCQ exam [7]. The exam forms one aspect of summative assessment along with performance- and practice-based assessments [22].

Adding pre-module MCQ tests to the existing post-tests integrates feeding forward and allows e-learners to self-regulate and focus their learning through the online study content, based on their pre-existing knowledge and skills. Pre- and post-module tests encourage learners to self-evaluate their baseline learning needs and their uptake of the taught content.

At the same time, evaluation of formative and summative MCQ tests indicates to the developers of learning material whether desired learning outcomes have been achieved. Psychometric analysis of MCQ test results can provide insight into whether questions are well chosen to test learning, are of an appropriate level of difficulty and reliably discriminate between good and poor performers [23]. Overall score analysis, however, provides only limited insight into whether learners knew or guessed an answer correctly, or how certain or confident they were of its correctness [24]. Confidence in, or certainty of, knowledge as well as awareness of uncertainty becomes important when knowledge needs to be applied with immediacy or in potentially high-risk situations, as is often the case in clinical practice [25, 26]. Reflection on decision making under uncertainty is a significant aspect of clinical reasoning and health professional practice and education [27]. When integrated into formative assessment, such reflection can become routine in a learner’s self-evaluation and professional development [28].

One strategy for transferring decision making under uncertainty into the teaching and learning of health professions has been the introduction of certainty (formerly known as confidence) based marking (CBM) of MCQs in formative and summative assessment [29, 30]. In addition to selecting the correct answer to an MCQ, certainty-based marking requires students to state how certain they are that their given answer is correct. Marking schemes have been designed to reward accuracy of answers and honest reporting of degrees of certainty, penalising high certainty ratings of incorrect answers and rewarding acknowledgement of uncertainty, while also maintaining a reward for correctness of answers [31, 32]. The main argument in support of CBM in summative assessment builds on its ability to distinguish students who are guessing from those who know or can deduce the correct answer.
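
To make the marking logic concrete, the sketch below implements one widely cited three-tier scheme attributed to Gardner-Medwin (marks of 1, 2 or 3 for correct answers and 0, −2 or −6 for incorrect answers at increasing certainty). The values are illustrative of the CBM literature only; they are not the rating approach used in this study, which recorded certainty without awarding or deducting marks.

```r
# Illustrative only: a widely cited three-tier CBM scheme (after Gardner-Medwin);
# the study described here recorded certainty ratings without awarding marks.
cbm_mark <- function(correct, certainty) {
  # certainty: 1 = low, 2 = mid, 3 = high
  reward  <- c(1, 2, 3)    # marks when the answer is correct
  penalty <- c(0, -2, -6)  # marks when the answer is incorrect
  if (correct) reward[certainty] else penalty[certainty]
}

cbm_mark(TRUE, 3)   #  3: high certainty, correct answer
cbm_mark(FALSE, 3)  # -6: high certainty, incorrect answer ("hazardous knowledge")
```

Under a scheme of this kind a learner maximises the expected mark only by reporting a certainty band that honestly reflects the probability of being correct, which is the property the argument above relies on.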

In formative assessment, CBM of MCQs allows learners and teachers to identify knowledge gaps and gauge certainty of knowledge or reasoning [26]. Evaluation of students’ experience with CBM in formative and summative MCQ tests suggests that it fosters deeper involvement with the tested content, encourages reflection and raises student awareness of areas of uncertainty [33, 34]. The potential for CBM to inform educators whether the learning content of their courses achieves intended learning outcomes, for example by increasing learners’ knowledge and skills in combination with increased certainty of knowledge, has been explored as a concept but has not yet been realised in practice [35].

This study aimed to pilot a novel adaptation of CBM in evaluating both e-teaching and e-learning, with the use of formative assessment pre- and post-completion of study modules, using certainty ratings of answers to MCQs instead of CBM marking schemes. Success in learning design and delivery would be signified by an overall increase of correct answers in post-tests as well as increased certainty of their correctness compared to pre-tests. Concerns would be raised if correct answers were changed to incorrect or if certainty for incorrect answers increased from pre- to post-tests.

Methods

Study design

The aims of this pilot study were to investigate the potential utility of formative pre- and post-test MCQs with certainty rating of answers within a virtual learning environment, in terms of:

  1. feedback to course designers

  2. learner experience

Sets of 10 MCQs were developed for each of five study modules of a one-year pharmacotherapeutics course in the PCPP. MCQs were either designed to encourage critical thinking and clinical reasoning, using case scenarios and complex answer options from which the most or least appropriate had to be chosen, or they asked about pharmacotherapeutic and clinical knowledge relevant to, or contentious in, clinical pharmacy practice.

To address the first research question, in addition to answering the MCQs students were asked to rate their certainty of having identified the correct answer for each MCQ on a four-point Likert scale. Identical sets of certainty-rated multiple choice questions (CRMCQs) were administered in pre-module tests at the start of each of the five learning modules and in post-tests at completion, both available for a limited time period. The questions covered key aspects of the respective learning content, while their number was kept low enough that students were not deterred from participating in a voluntary activity. Study modules and tests were released approximately monthly over the course year.

The assignment of certainty levels on a four-point Likert scale (no idea/uncertain/certain/very certain) was adapted from previous CBM studies, which either used three-tier Likert scales of low, mid or high certainty or four-tier Likert scales expressing certainty in percentages, with ‘very certain’ usually assigned to or understood as high or 80–90 % certainty and ‘certain’ calibrated in the range of 60–80 % [33, 35, 36]. Students were instructed to choose ‘very certain’ when they felt they were more than 90 % sure their answer was correct and ‘certain’ for less than 90 % certainty. The discrimination between ‘certain’ and ‘very certain’ was intended to facilitate observation of differences in certainty levels between pre- and post-tests for those answers where students already had a degree of certainty of correctness in pre-tests.

Automated feedback to students at completion of pre-tests identified which questions they had answered correctly or incorrectly. At the same time, students received guidance on which resources and learning materials within the study module would assist them in arriving at the correct answer, without explicitly revealing that answer. On completion of study modules, post-tests were released for a limited time period. Automated feedback now revealed the correct answer and again provided detailed information on where to locate relevant study content, e.g. which lecture, guideline or journal article linked in the module would assist them in finding the answer. Students could revisit their test results to check answers before repeating tests or for revision before the summative, end-of-course MCQ test. Because test availability was time-limited, and because of the layout of Blackboard®, students had to actively seek out a different section of the course site for this purpose.
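
The sketch below summarises the feedback rules just described. It is an illustration of the logic only; in the course this behaviour was configured through Blackboard® test options rather than written as code, and all names in the sketch are hypothetical.

```r
# Illustration of the pre-/post-test feedback rules described above; the course
# configured this via Blackboard test options, not custom code.
feedback_message <- function(phase, is_correct, correct_option, resource_hint) {
  status <- if (is_correct) "Your answer is correct." else "Your answer is incorrect."
  if (phase == "pre") {
    # Pre-test: state correctness and point to module resources,
    # without revealing which option is correct.
    paste(status, "Relevant material:", resource_hint)
  } else {
    # Post-test: additionally reveal the correct option.
    paste(status, "Correct answer:", correct_option, "| Relevant material:", resource_hint)
  }
}

feedback_message("pre", FALSE, "B", "module lecture and linked guideline on this topic")
```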

To answer the second research question, at the end of the two-semester course participants were asked to complete an anonymous online survey (see Additional file 1) of eleven questions exploring their attitudes towards assigning a certainty rating to their MCQ answers. The survey was based on and adapted from similar instruments investigating students’ attitudes towards CBM and included questions on how CRMCQs affected their approach to learning and module content, using a five-point Likert scale (strongly disagree to strongly agree) [30, 34, 37].

Data collection and analysis

Overall, 1315 matched pairs of answers and certainty ratings between pre- and post-module CRMCQ tests for the five modules were downloaded from Blackboard®, and CRMCQ data were analysed in Microsoft Excel 2010 and R 3.3.1 [38]. Certainty categories were converted into numerical values (1 = no idea, 2 = uncertain, 3 = certain, 4 = very certain) and analysed using R. Paired t-tests were conducted to investigate whether certainty levels for correct answers increased between pre- and post-tests for each module. A coding scheme was designed (Table 4) to analyse and describe in more detail any changes in the answers given by individual students between pre- and post-tests, as well as in their assigned certainty ratings. Descriptive statistics were used to analyse survey responses.
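
A minimal sketch of this analysis is shown below. The file and column names, and the exact subset of answers entered into each paired t-test, are assumptions made for illustration; they do not reproduce the study’s actual Blackboard® export or R script.

```r
# Minimal sketch of the analysis described above; file and column names are
# assumptions, not the study's actual Blackboard export.
certainty_levels <- c("no idea" = 1, "uncertain" = 2, "certain" = 3, "very certain" = 4)

results <- read.csv("module1_crmcq_pairs.csv", stringsAsFactors = FALSE)

# Convert certainty categories to the numerical values 1-4
results$cert_pre  <- certainty_levels[results$certainty_pre]
results$cert_post <- certainty_levels[results$certainty_post]

# Paired t-test: did certainty increase from pre- to post-test for answers
# that were correct in the post-test?
correct_post <- results[results$correct_post == 1, ]
t.test(correct_post$cert_post, correct_post$cert_pre, paired = TRUE)
```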

Table 1 Participant characteristics

Results

Analysis of certainty-rated multiple choice questions

Of the 39 students who completed the course, 29 (74 %) provided consent for participation in the study. Not all students who consented to participate completed all pre- and post-module CRMCQs of all five evaluated study modules. Only CRMCQs from students who answered all questions of both the pre- and post-tests in a particular study module were evaluated. A median of 25 (23–28) consenting students answered all questions of both pre- and post-tests for the five study modules. Proportionate to course enrolment demographics, 82 % of participating students were female, and the majority worked at least part-time as hospital pharmacists with less than 5 years of professional practice. Table 1 describes the participant characteristics.

The overall results reflect favourably on module and learning design. One fifth to one third (21.7–35.2 %) of answers to CRMCQs across the five study modules (M1-M5) were changed from an incorrect to a correct answer between pre- and post-tests (codes 1–3). Students who identified the correct answer to pre-test questions usually also identified it in the post-test (28.0–44.2 %), and the majority increased or did not change their certainty of having identified the correct answer in post-tests (codes 4–6). Paired t-tests revealed that the increase in certainty levels of having identified the correct answer in the post-tests was consistent and statistically significant across all study modules (Tables 2 and 3).

Table 2 Number of answers changed from incorrect to correct and mean levels of certainty in pre- and post-tests of study modules
Table 3 Number of correct answers and mean levels of certainty in pre- and post-tests of study modules

Table 4 describes the coding scheme, which in particular assisted in analysing the less frequent changes in correctness of answers and in certainty. Percentages of each code assigned for each study module (M1-M5) and the mean across all modules are listed, with numerical values in brackets.

Table 4 Coding scheme and changes in answers and certainty between pre- and post-tests for all study modules

The same incorrect answer was chosen in both pre- and post-tests with a frequency of 4.6–16.1 %, in the majority of cases with unchanged certainty (codes 7–9).

An overall average of 13.4 % of incorrect answers in pre-tests were replaced by a different incorrect answer in the post-module tests after completion of study modules, mostly with decreased certainty (codes 10 and 11). A smaller number of answers (average 7.3 %) were changed from the correct to an incorrect one between pre- and post-tests. Eighty percent of students who chose a correct answer in a pre-test and subsequently changed to an incorrect answer in the post-test (code 12) were either uncertain or had no idea that they had chosen the correct answer in the pre-test. Generally, uncertainty was higher when incorrect answers were chosen in a post-test than when correct answers were chosen.

The design of individual MCQs and the delivery of study module content were then evaluated more specifically. Module and learning design was reviewed when individual CRMCQ results indicated that study modules may not have offered the learning needed to answer them correctly, or that MCQs could have been ambiguous or flawed in their design and did not test the actual learning adequately. This was regarded to be the case with high occurrences of a) answers not changed from incorrect to correct, b) increased or high certainty levels attached to incorrect answers in post-tests or c) correct answers in pre-tests changed to incorrect answers in post-tests.

On the other hand, d) a high proportion of correct answers with high certainty ratings in pre-tests would suggest course content was already familiar to or mastered by learners and could be removed or not tested. Applying these parameters to CRMCQ results flagged 14 of the 50 deployed MCQs for review in terms of question formulation and the delivery of the examined content, with a) occurring 8 times, b) once, c) once and d) 4 times. This led to changes in the course material and/or MCQs for the following year.
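
A sketch of how criteria a) to d) could be applied programmatically is shown below. The threshold values are assumptions chosen purely for illustration; the study does not report explicit cut-offs, and in practice the review decisions were made by the course designers.

```r
# Illustrative sketch of review criteria a)-d); thresholds are assumptions,
# as no explicit cut-offs are reported in the study.
flag_for_review <- function(q) {
  q$prop_stayed_incorrect       > 0.30 ||  # a) incorrect in both pre- and post-test
  q$prop_incorrect_certain_post > 0.20 ||  # b) high/increased certainty for incorrect post-test answers
  q$prop_correct_to_incorrect   > 0.20 ||  # c) correct pre-test answers changed to incorrect
  q$prop_correct_certain_pre    > 0.70     # d) already mastered: correct and highly certain in the pre-test
}

# Example: a question most students already answered correctly and confidently
flag_for_review(list(prop_stayed_incorrect       = 0.05,
                     prop_incorrect_certain_post = 0.02,
                     prop_correct_to_incorrect   = 0.04,
                     prop_correct_certain_pre    = 0.85))  # TRUE
```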

Analysis of student survey

Twenty-four of the 29 participants completed the anonymous online survey of eleven questions relating to their perceptions of the benefit and usefulness of assigning a certainty rating to their MCQ answers in structuring or advancing their learning. The majority of students were positive about their experience with CRMCQs, in that they agreed or strongly agreed that assigning a degree of certainty to their answers to MCQs:

  • Made them think about how certain they are of the correctness of their answer (92 %)

  • Made them aware of what they know and don't know (92 %)

  • Assisted in identifying knowledge gaps (83 %)

  • Assisted in identifying their guesses (71 %)

  • Directed their learning (67 %)

  • Was useful for revision (67 %)

  • Made them think more carefully about answers (67 %)

  • Focused the approach to the topic of study (50 %)

  • Made them think more before answering clinical questions in practice (50 %)

Students mostly disagreed or were neutral that CRMCQs:

  • Were a waste of time (96 %)

  • Limited their approach to the topic of study (79 %)

Discussion

This pilot study was conducted in a postgraduate clinical pharmacy course and designed to evaluate the addition of a certainty rating of answers to MCQs, exploring whether some of the benefits observed in the application of certainty-based marking in summative testing could be translated into formative assessment [31].

The study investigated the effects of restructuring formative e-assessment of study modules to pre- and post-test CRMCQs with sign-posting to relevant learning resources, with the aim of providing guidance to adult learners on how to structure and prioritise their approach to study in an e-learning environment. Designing MCQs that clearly stipulate expected knowledge and identifying resources for knowledge extension worked well in a post-graduate context where students draw on previous experience. Optimising the instructional system within the VLE for this purpose assisted in meeting expectations of good e-assessment and maximised the utility of feedback for student learning [39–42]. The resulting 21–35 % improvement in correct scores across five study modules aligns with similar findings in other postgraduate health profession programs [43, 44].

The addition of certainty ratings to MCQs in pre- and post-tests for each study module, along with the student survey results, afforded course designers a deeper understanding of whether students improved their knowledge and their ability to apply it. Student survey results indicated that learners regarded answering CRMCQs and the feedback in pre-module tests as a guide to study module content. CRMCQs directed their learning, seemingly realising the intent of feeding forward and creating assessment for learning [45]. CRMCQ tests with integrated feedback enabled self-assessment, assisted with revision for the final, summative MCQ test and focused, but did not limit, students’ approach to the topics of study according to their learning needs [15]. In combination with feedback on incorrectness of answers, which pointed to obvious gaps in their knowledge, CRMCQs also directed their study efforts to areas of low certainty. The increase in certainty of having chosen the correct answer, when that answer was indeed correct, was consistent and statistically significant over all study modules.

Student feedback indicates that assigning certainty ratings to MCQs added another stimulus to reflect on their knowledge and learning, raising awareness of their own uncertainty. Students described becoming more conscious of what they know or do not know and seemed to engage more before committing to an answer, which is consistent with previous evaluations of student perceptions of CBM [34, 37].

Certainty of knowledge can be regarded as a surrogate marker for the quality and applicability of knowledge in a clinical context. If a learner ‘knows’ the correct answer to a clinical problem but is not certain of it, that knowledge will not be readily applied in clinical practice, whereas someone who has great confidence in an incorrect answer and applies this ‘hazardous knowledge’ may cause inadvertent harm [26]. Interestingly, half the students agreed that as a result of taking CRMCQ tests they think more before answering clinical questions in their practice. Although an association between student reflection on learning and testing and reflective clinical practice has not been established conclusively, this finding could be interpreted as an indicator that CRMCQs enable pharmacists to become more reflective practitioners. Promoting reflection on certainty in learning and practice represents one strategy to engage clinicians in decision making under conditions of uncertainty [27, 46]. It may also assist pharmacists, who at times seem to exhibit a dislike of making decisions under uncertainty, to cognitively resolve apprehension through reflection and conscious awareness [47].

Overall, the combination of pre- and post-module CRMCQ tests achieved an e-design of formative assessment which exhibits many of the hallmarks of good assessment and feedback practice. The tests seemingly assisted in clarifying goals and standards, promoted self-assessment and reflection, provided feedback and motivation, pointed out strategies for closing knowledge gaps and, as described below, helped to shape teaching [48].

Informing teaching was an integral component of the evaluation design. Utilising certainty ratings with MCQs in pre- and post-tests gave course designers an additional gauge of whether MCQs were pitched at an appropriate level or required review. When MCQs were answered correctly by a majority of students in a pre-test it could be concluded that either the tested learning content or the question was too basic, leading to revision or removal of either in the future. But when certainty ratings for correct pre-test answers were low, the question would still provide a stimulus to learn and engage with study module content, demonstrated by the consistently higher degree of certainty in post-tests compared to pre-tests. The majority of students who identified the correct answer in post-tests increased their certainty between pre- and post-module tests (p-values <0.001), which can be regarded as a surrogate marker for deeper learning and understanding [37, 48, 49]. Between 63 and 78 % of all questions were answered correctly in post-module tests, which may have been expected given the overall complexity of the testing and learning content of the respective modules.

Changes from a correct to an incorrect answer from pre- to post-test raise potential issues of failure in the delivery of learning content or in student engagement. Analysing the certainty ratings assigned to such changes provided some assurance that it is unlikely the e-learning design confused or misled students. Most students were ‘uncertain’ or had ‘no idea’ they had chosen the correct answer in the pre-test, which indicates they were making a more or less educated guess at the time. Generally, students decreased their certainty on incorrect answers in post-tests compared to pre-tests.

A small number of students gave the same incorrect answer in both tests. In an online environment for postgraduate, clinical education, with few opportunities to question students directly compared to face-to-face or clinical teaching, this raises concerns that students may hold on to misconceptions and erroneous or outdated “knowledge”, particularly when certainty increases from pre- to post-tests. Additional undesirable outcomes would be students changing their answer from a correct to an incorrect one, or choosing a different wrong answer with increased certainty in a post-study module test. All of these outcomes occurred infrequently in this study (≤10 %). As results of both pre- and post-tests did not contribute to overall course marks, there may have been little incentive or motivation for students to check their pre-test answers before taking the post-test at a later time. The addition of post-test marks to the summative assessment of the course could result in greater motivation to integrate and apply results from pre-tests.

The pilot study results afforded a deeper insight into which study module content was delivered in a manner enabling students to apply it correctly in post-tests. Beyond looking purely at the answers to MCQs in pre- and post-tests, certainty ratings provided an enhanced understanding of whether content was already known and applied well by students at the beginning of a study module. In addition, high certainty ratings for correct answers in post-tests give course designers added certainty that teaching and learning in a study module have achieved the intended outcomes, and that correct guesses contributed minimally to the increased correctness of answers. On the other hand, increased certainty in post-tests for incorrect answers given in both pre- and post-tests flags necessary reviews of teaching, MCQ design and strategies for student engagement.

There are a number of limitations to this pilot study which impact on its external validity. Although the participation rate of 75 % in the survey is well above that of other student surveys, the small sample size and the predominance of female participants, which closely reflected the enrolment figures, make generalisation of the study results difficult. Despite the similarity to survey outcomes obtained in comparable settings, knowing the opinion of all students may have provided a more complete picture of students’ experience of CRMCQs. Some of those who did not participate in the study may have held views disparate from their peers’.

The lack of a control group of students who only answered pre- and post-module MCQs without assigning a certainty rating restricts the validity of the student survey. Although the majority of participants would have had extensive experience with MCQ testing from completing their undergraduate pharmacy degree in Australia, it remains unclear whether the perception of a positive impact on learning was generated by the addition of certainty ratings or by completing MCQs alone. In addition, the sign-posting of study module content useful in addressing gaps in knowledge on completion of the tests has not been investigated separately from the use of CRMCQs. As the pre- and post-module tests were used formatively, some students may not have spent as much effort on identifying correct answers to pre-test MCQs before taking the post-test as they might have for summative tests, particularly as the two tests were not linked together. All these factors limit the reliability of results, particularly when considering changes between pre- and post-tests.

Nevertheless, this pilot study adds a new perspective on the usefulness of CRMCQs in formative assessment in an online, postgraduate course where enrolled pharmacists start with varying degrees of knowledge and experience. The results indicate a positive impact on student learning and the potential for evaluating the effectiveness of teaching design in achieving the desired learning outcomes, starting to generate proof of the concept suggested by Gardner-Medwin that CBM adds value to formative assessment [34]. In addition, the study contributes to the literature on e-learning in pharmacy as well as on self- and e-assessment [50–52]. Insights based on its findings were used to refine teaching and assessment in the UQ postgraduate pharmacy program to optimise learning for future students.

Conclusion

Asking students to rate their certainty of the correctness of their answers to MCQs in formative assessment, and providing feedback on how to fill knowledge gaps to increase certainty, creates the potential to enhance MCQ testing by encouraging reflection, self-assessment and self-regulation by learners. Students indicated that CRMCQs had a positive impact on their learning by guiding them through online study modules and content, focusing their learning and raising awareness of areas which needed further work or skill development.

The analysis of certainty ratings in addition to the correctness of answers, along with trends in changes of answers and certainty between pre- and post-test CRMCQs deployed in an online pharmacotherapeutic, clinical course, allowed for more accurate and detailed insights into which topic areas were delivered adequately for students to gain appropriate knowledge and understanding. This pilot study also shows that certainty ratings can assist in identifying topic areas within an online course, and MCQs in pre- and post-module tests, that may require adjustment in delivery and design.

Abbreviations

CBM: Certainty-based marking

CRMCQ: Certainty-rated multiple choice question

M: Module

MCQ: Multiple choice question

PCPP: Postgraduate Clinical Pharmacy Programs

UQ: The University of Queensland

VLE: Virtual learning environment

References

  1. Pachler N, Daly C. Key issues in e-learning: research and practice. London, New York: Continuum International Publishing Group; 2011.

  2. Dupras DM, Erwin PJ, Cook DA, Garside S, Montori VM, Levinson AJ. Internet-based learning in the health professions: A meta-analysis. JAMA. 2008;300:1181–96.

  3. Salter SM, Karia A, Sanfilippo FM, Clifford RM. Effectiveness of e-learning in pharmacy education. Am J Pharm Ed. 2014;78:83.

  4. Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: sound educational method or hype? A review of the evaluation literature. Acad Med. 2002;77:S86.

  5. Wutoh R, Boren SA, Balas EA. eLearning: A review of internet-based continuing medical education. J Contin Educ Health Prof. 2004;24:20–30.

  6. Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Instructional design variations in internet-based learning for health professions education: a systematic review and meta-analysis. Acad Med. 2010;85:909–22.

  7. Macfarlane-Dick D, Nicol D. Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud High Ed. 2006;31:199–218.

  8. Sadler DR. Formative assessment and the design of instructional systems. Instr Sci. 1989;18:119–44.

  9. Black P, Wiliam D. Assessment and classroom learning. Assess Ed Princ Pol Pract. 1998;5:7–74.

  10. Al-Kadri HM, Al-Moamary MS, Roberts C, Van der Vleuten CPM. Exploring assessment factors contributing to students' study strategies: literature review. Med Teach. 2012;34:S42–50.

  11. Sambell K, McDowell L, Montgomery C. Assessment for learning in higher education. New York, NY: Routledge; 2012.

  12. Backhaus A, Schibeci R, Hosie P. A framework and checklists for evaluating online learning in higher education. Assess Eval High Educ. 2005;30:539–53.

  13. Suskie L. In: Joughin G, editor. Assessment, Learning and Judgement in Higher Education. London: Springer; 2009. p. 133–50.

  14. Boud D, Molloy E. Rethinking models of feedback for learning: the challenge of design. Assess Eval High Educ. 2012;38:698–712.

  15. Velan GM, Jones P, McNeil HP, Kumar RK. Integrated online formative assessments in the biomedical sciences for medical students: benefits for learning. BMC Med Educ. 2008;8:52.

  16. Norcini JJ, Swanson DB, Grosso LJ, Webster GD. Reliability, validity and efficiency of multiple choice question and patient management problem item formats in assessment of clinical competence. Medical Educ. 1985;19:238–47.

  17. Case SM, Swanson DB. Extended‐matching items: a practical alternative to free‐response questions. Teach Learn Med. 1993;5:107–15.

  18. Case SM, Swanson DB. Constructing written test questions for the basic and clinical sciences. 3rd ed. Philadelphia: National Board of Medical Examiners; 1998.

  19. McCoubrie P. Improving the fairness of multiple-choice questions: a literature review. Med Teach. 2004;26:709–12.

  20. Ellaway R, Masters K. AMEE guide 32: e-learning in medical education. Part 1: learning, teaching and assessment. Med Teach. 2008;30:455–73.

  21. McConnell MM, St-Onge C, Young ME. The benefits of testing for learning on later performance. Adv Health Sci Educ Theory Pract. 2015;20:305–20.

  22. Boud D. Feedback: ensuring that it leads to enhanced learning. Clin Teach. 2015;12:3–7.

  23. Heck A, Van Gastel L. Mathematics on the threshold. Int J Math Educ Sci Technol. 2006;37:925–45.

  24. Burton RF. Misinformation, partial knowledge and guessing in true/false tests. Med Educ. 2002;36:805–11.

  25. Miles A, Loughlin M, Polychronis A. Medicine and evidence: knowledge and action in clinical practice. J Eval Clin Pract. 2007;13:481–503.

  26. Dory V, Degryse J, Roex A, Vanpee D. Usable knowledge, hazardous ignorance - beyond the percentage correct score. Med Teach. 2010;32:375–80.

  27. Christensen N, Jones MA, Higgs J, Edwards I. Dimensions of clinical reasoning capability. In: Higgs J, Jones M, editors. Clinical reasoning in the health professions. Oxford: Butterworth-Heinemann; 2008. p. 101–10.

  28. Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions education: a systematic review. Adv Health Sci Educ Theory Pract. 2009;14:595–621.

  29. Gardner-Medwin A, Gahan M. Formative and summative confidence-based assessment. Proceedings of the 7th International Computer-Aided Assessment Conference. Loughborough: Loughborough University; 2003:147-155.

  30. Khanal S, Buckley T, Harnden C, et al. Effectiveness of a national approach to prescribing education for multiple disciplines. Br J Clin Pharmacol. 2013;75:756–62.

  31. Gardner-Medwin T, Curtin N. Certainty-based marking (CBM) for reflective learning and proper knowledge assessment. Proceedings of the REAP International Online Conference: Assessment Design for Learner Responsibility. Glasgow: University of Strathclyde; 2007:29-31. Available online from http://www.ucl.ac.uk/lapt/REAP_cbm.pdf. Accessed 30 Sept 2016.

  32. Gardner-Medwin AR. Confidence-based marking – towards deeper learning and better exams. In: Bryan C, Clegg K, editors. Innovative Assessment in Higher Education. London: Routledge; 2006. p. 141–9.

  33. Nix I, Wyllie A. Exploring design features to enhance computer‐based assessment: Learners' views on using a confidence‐indicator tool and computer‐based feedback. Br J Educ Technol. 2011;42:101–12.

  34. Schoendorfer N, Emmett D. Use of certainty-based marking in a second-year medical student cohort: a pilot study. Adv Med Educ Pract. 2012;3:139.

  35. Gardner-Medwin A, Curtin N. Confidence assessment in the teaching of physiology. J Physiol. 1996;494:P74-P74

  36. Gardner-Medwin A. Confidence assessment in the teaching of basic science. Res Learn Tech. 1995;3:80–5.

  37. Issroff K, Gardner-Medwin AR. Evaluation of confidence assessment within optional coursework. In: Oliver M, editor. Innovation in the Evaluation of Learning Technology. London: University of North London; 1998. p. 168–78.

  38. R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Available at https://www.R-project.org/. Accessed 18 August 2016

  39. Pachler N, Daly C, Mor Y, Mellar H. Formative e-assessment: practitioner cases. Comput Educ. 2010;54:715–21.

  40. Wiliam D. What is assessment for learning? Stud Educ Eval. 2011;37:3–14.

  41. Walker DJ, Topping K, Rodrigues S. Student reflections on formative e‐assessment: expectations and perceptions. Learn Media Technol. 2008;33:221–34.

  42. Sadler DR. Beyond feedback: developing student capability in complex appraisal. Assess Eval High Educ. 2010;35:535–50.

  43. Cook DA, Cook DA, Dupras DM. Teaching on the web: automated online instruction and assessment of residents in an acute care clinic. Medical Teach. 2004;26:599–603.

  44. Bell JA, Patel B, Malasanos T. Knowledge improvement with web-based diabetes education program: brainfood. Diabetes Technol Ther. 2006;8:444–8.

  45. Schuwirth LWT, Van der Vleuten CPM. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011;33:478–85.

  46. Sandars J. The use of reflection in medical education: AMEE guide no. 44. Med Teach. 2009;31:685–95.

  47. Cordina M, Lauri M-A, Buttigieg R, Lauri J. Personality traits of pharmacy and medical students throughout their course of studies. Pharm Pract. 2015;13(4):1-9.

  48. Nicol D. E-assessment by design: using multiple-choice tests to good effect. J Furth High Educ. 2007;31:53–64.

  49. Rogers MS, Chung T, Li A. Answering MCQs: a Study of Confidence Amongst Medical Students. Aust N Z J Obstet Gynaecol. 1992;32:133–6.

  50. Eva KW, Regehr G. Self-assessment in the health professions: a reformulation and research agenda. Acad Med. 2005;80:S46.

  51. Colthart I, Bagnall G, Evans A, Allbutt H. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME guide no. 10. Med Teach. 2008;30:124–45.

  52. Stödberg U. A research review of e-assessment. Assess Eval High Educ. 2012;37:591–604.

Acknowledgments

The authors would like to thank all participating pharmacists and Holly Ross at UQ for support with the statistical analysis.

Funding

None.

Availability of data and materials

Additional and raw data can be made available upon request to the corresponding author.

Authors’ contribution

KL conceived of the study; JB executed its design and coordination. KL performed the data collection and analysis. KL drafted and JB revised the manuscript. Both authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

The study was approved by the UQ Behavioural and Social Sciences Ethical Review Committee (clearance number: 2012001278). Written consent was granted by all participants.

Author information

Corresponding author

Correspondence to Karen Luetsch.

Additional file

Additional file 1:

Student questionnaire. (DOCX 11 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Luetsch, K., Burrows, J. Certainty rating in pre-and post-tests of study modules in an online clinical pharmacy course - A pilot study to evaluate teaching and learning. BMC Med Educ 16, 267 (2016). https://doi.org/10.1186/s12909-016-0783-1

Keywords