Does Medical Students' Preference of Test Format (Computer-based vs. Paper-based) have an Influence on Performance?
© Hochlehnert et al; licensee BioMed Central Ltd. 2011
Received: 13 July 2011
Accepted: 25 October 2011
Published: 25 October 2011
Computer-based examinations (CBE) offer higher efficiency in delivery and scoring than paper-based examinations (PBE). However, students often object to CBE and fear obtaining poorer results in this format.
The aims of this study were (1) to assess the readiness of students for a CBE vs. PBE and their objections, (2) to examine the acceptance of and satisfaction with a voluntary CBE, and (3) to compare the results of the examinations conducted in the two formats.
Fifth year medical students were introduced to an examination-player and were free to choose their format for the test. The reason behind the choice of the format as well as the satisfaction with the choice was evaluated after the test with a questionnaire. Additionally, the expected and achieved examination results were measured.
Out of 98 students, 36 voluntarily chose a CBE (37%), 62 students chose a PBE (63%). Both groups did not differ concerning sex, computer-experience, their achieved examination results of the test, and their satisfaction with the chosen format. Reasons for the students' objections against CBE include the possibility for outlines or written notices, a better overview, additional noise from the keyboard or missing habits normally present in a paper based exam. The students with the CBE tended to judge their examination to be more clear and understandable. Moreover, they saw their results to be independent of the format.
Voluntary computer-based examinations lead to equal test scores compared to a paper-based format.
The use of computer-based examinations combines content advantages (integration of other media, favourable presentation of pictures, and the possibility of other examination formats) with rapid data analysis. This promises higher efficiency with respect to implementation and evaluation [1–3]. However, students often have worries and prejudices concerning an unsatisfactory graphical user interface (GUI) of the examination software, possible technical problems with the computer, concentration problems, and additional exam stress. An increase in the number of required graded examinations during medical studies from zero to 39 has been one effect of the new medical licensing regulations in Germany. This presents a challenge to academic departments, especially those with limited teaching personnel and financial resources. To our knowledge, this is the first study comparing voluntary computer-based examinations (CBE) and paper-based examinations (PBE). The aims of this study were (1) to assess the readiness of students for a CBE vs. PBE, (2) to examine the acceptance of and satisfaction with a voluntary CBE, and (3) to compare the results of the examinations conducted in the two formats.
Table 1. Questionnaire items, rated by the computer-based examination group (n = 36) and the paper-based examination group (n = 62); group means were compared (Pr > |t|).
1) The usability of the examination was easy.
2) The examination was clear and easily understandable.
3) Additional mental effort was required due to the chosen examination format.
4) I found it useful to take the examination in the chosen format, because it increased efficacy in this situation.
5) I was anxious before the examination.
6) After a few questions, my anxiety at the beginning of the examination was gone.
7) If I had a choice, I would take future examinations more often in the chosen format.
8) Expected scoring in examination
9) CBE: If I had chosen PBE, I suppose my results would have been (better/equal/worse). PBE: If I had chosen CBE, I suppose my results would have been (better/equal/worse).
10) In how many CBEs did you participate so far?
11) It is an advantage of CBE that there is the possibility to change my answers during the examination.
12) It is important for me to have the possibility to change my answers during the examination.
13) I found it useful to take this examination in CBE-format because it prepared me for further upcoming CBEs. (only CBE)
14) The CBE itself was in total better than I expected it to be. (only CBE)
15) There were no technical problems in the CBE. (only CBE)
16) I was satisfied with the graphical user interface. (only CBE)
17) Reason for preference (open question)
18) Suggestions for improvement (open question)
19) I decline CBE in general. (only PBE)
For the statistical evaluation, t-tests with the Satterthwaite correction for unequal variances were conducted. The statistical analysis was performed with SAS 9.1.
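The test named above, a two-sample t-test with the Satterthwaite correction for unequal variances (commonly known as Welch's t-test), can be sketched in a few lines of pure Python. The ratings in the usage line are invented placeholders, not the study's data.

```python
import math

def welch_t_test(a, b):
    """Two-sample t statistic with the Satterthwaite (Welch) correction
    for unequal variances; returns (t, approximate degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sa, sb = va / na, vb / nb                       # squared standard errors
    t = (ma - mb) / math.sqrt(sa + sb)
    # Satterthwaite approximation of the degrees of freedom
    df = (sa + sb) ** 2 / (sa ** 2 / (na - 1) + sb ** 2 / (nb - 1))
    return t, df

# Hypothetical 5-point ratings for two groups (placeholders)
t, df = welch_t_test([4, 5, 4, 3, 5, 4], [4, 4, 3, 4, 2])
```

Unlike the pooled-variance t-test, this version does not assume equal group variances, which matters here because the two self-selected groups differ in size (36 vs. 62).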
Table 2. Reason for preference (frequency of the item)

Computer-based examination format (n = 26 of 36):
- Quick assessment (n = 10, 34%)
- Supporting technological advancement (n = 7, 27%)
- Support of the computer in keeping track of the number of items to consider (n = 7, 27%)
- More objective assessment (n = 2, 8%)

Paper-based examination format (n = 55 of 62):
- Possibility for sketches or written notes on the questionnaire (n = 23, 42%)
- Better overview (n = 15, 27%)
- Habit (n = 11, 20%)
- Fear of computer errors or disturbingly loud keyboards (n = 6, 11%)
According to the evaluation, both groups needed little or no additional effort to handle their format, although the CBE group reported higher mental exertion due to the format (CBE: 2.3, PBE: 1.1; p < 0.001). The CBE group perceived no influence of the format on their examination performance, whereas the PBE group perceived an added benefit for their performance (CBE: 3.0, PBE: 4.4; p < 0.001).
Dealing with the examination format was judged to be simple in both groups (CBE: 4.6 vs. PBE: 4.7, not statistically significant), and the CBE was rated as somewhat clearer and easier to understand (CBE: 4.4 vs. PBE: 4.0, p = 0.12). There were no technical problems in the CBE group (4.7), and the CBE students were very satisfied with the graphical user interface (4.7).
From the instructors' point of view, grading the computer-based examination was quicker and more efficient (0.5 hours for one instructor for 36 students vs. 60 hours for 62 students). In the CBE, grading the two open-ended questions took 45 minutes.
Primary results show a high acceptance of the computer-based examination, reflected in the 37% voluntary participation rate and in a high level of readiness to take further examinations in this format. The students of the paper-based examination showed a very high willingness to stay with this format for upcoming examinations, but also a high level of anxiety about performing worse in a computer-based format. On the one hand, this could be a consequence of being accustomed to paper-based examinations; on the other hand, it could be due to the additionally stated reasons, which should be taken seriously and addressed in future examinations as effectively as possible (see table 2).
For example, it should be possible to make written notes on the test (sketches, personal remarks, etc.), also in the computer-based examination; this was a motivating reason for 23 students who registered for the paper-based examination. On average, pre-examination anxiety was comparable in both groups. Nevertheless, the fear of PC errors or technical difficulties was reason enough for seven students to choose the paper-based examination. This anxiety could possibly be lowered by an additional introduction to the examination software or a sample examination for these students. The perceived usefulness of the paper-based examination for personal performance should be interpreted in the context of the stated reasons for choosing it: the possibility of writing short notes on the test form, or a subjectively better overview of the examination, can create the impression of higher performance. Moreover, Miller et al. found that the development of visually rich quizzes was greatly facilitated by the use of computers. While Ogilvie et al. demonstrated that students found computer-based tests less time consuming, we observed that both groups finished in nearly the same amount of time.
The slightly higher mental exertion reported in the computer-based format could be explained by the students' familiarity with paper-based examinations and their limited experience with the graphical user interface. Regular computer-based examinations in more subjects of the faculty's curriculum, using the same examination software, could change this impression by familiarizing students with computer-based examinations and lowering anxiety. In addition, the objection of some students to disturbingly loud keyboards must be taken seriously (see table 2) and, where necessary, addressed with quieter keyboards.
The examination was judged as simple, there were no technical difficulties, and the students were satisfied with the graphical user interface. Usability was rated very highly in the CBE group, especially clarity and understandability; the CBE students judged their examination to be clearer and more understandable, although this difference did not reach significance. This indicates a smooth and reliable examination procedure, in concordance with other studies [9–12]. Because the answers to the open-ended questions could be graded by a single instructor, the quality criterion of equal treatment was easier to meet, and objectivity could thus be raised.
Another essential result of this study is that the exam outcome was independent of the chosen format. In addition, the average score and the distribution of grades showed no difference. Russell & Haney showed that the test results of students accustomed to writing on computers are higher than those written by hand. Despite this, more students tended to fail the examination in the PBE group (8 vs. 2 in the CBE group; 12.9% vs. 5.6%), although this difference was not statistically significant. On the one hand, we cannot rule out a selection bias, in that academically stronger students may have preferentially chosen the CBE, compensating for possible technical disadvantages. On the other hand, the better self-assessment of the students in the CBE group is striking and may hint at a more optimistic attitude of these students towards innovation from the start.
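The pass/fail comparison above (2 of 36 failing in the CBE group vs. 8 of 62 in the PBE group) is the kind of small-count contrast typically checked with Fisher's exact test. The article does not name the test used for this particular comparison, so the following pure-Python version is our illustration, not the authors' method.

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table; returns the p-value."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # hypergeometric probability of a table with top-left cell x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Failure counts from the text, rows as (fail, pass): PBE 8/54, CBE 2/34
p = fisher_exact(8, 54, 2, 34)
```

Consistent with the article's statement, this comparison stays well above the conventional 5% significance level.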
The main advantage of the computer-based examination is a gain in efficiency and objectivity, because the automatic processing of the examination data is assumed to be less error-prone. Peterson et al. pointed out that an important step in evaluating computer-based examinations is to be sure that the exam format measures the examinee's knowledge and not their comfort or confidence with the technology. Even script concordance tests can be administered computer-based. Some studies showed that the development of a web-based assessment resulted in less administration for course organizers [11, 16]. Unfortunately, our study design did not permit a randomised trial for legal reasons, so we focused on the voluntary aspect; here we found no differences when students are free to choose the test format. Since this study was not intended to prove that CBE is equal or superior to PBE for all exam takers, further studies need to test how these results and the evaluation of a computer-based examination (mandatory or with randomised assignment) influence the acceptance, the assessment of usability, and the outcome of the examination, especially in those students who do not prefer a computer-based examination.
With respect to the variety of teaching and examination content, a computer-based examination is not only equal in its efficiency and its ability to measure academic performance, but also an instrument to examine applied knowledge and visual skills with the help of innovative question types, complex media, and/or new item structures such as key-feature questions.
Despite exam anxiety on the part of the students, 37% chose the computer-based examination format. In total, only very few students (ca. 5%) declined the computer-based option in general.
We could show that voluntary computer-based examinations lead to test scores equal to those of the paper-based format. After further improvements and after the students' objections have been addressed, a required computer-based format should put students at no disadvantage. With reliable information and proper preparation of the students for the exam via an introduction to the software, a CBE can be a good way to conduct written examinations efficiently and fairly.
AH specialised in health technology assessment with the focus on evaluation of the impact of Informatics on Medical Education. He evaluated the data and drafted the manuscript. KB took part in the development and the execution of the computer-based examination. AM participated in the design of the study and performed the statistical analysis. JJ is responsible for the medical education program at the Medical hospital and participated in the coordination of the study. All authors read and approved the final manuscript.
We acknowledge the contribution of Ilja Kalinin who was involved in revising the manuscript critically, Steffen Briem and Joern Heid (CAMPUS-Team) who supported the acquisition of data, and Prof. C. Bartram (head of the department for human genetics).
- Cantillon P, Irish B, Sales D: Using computers for assessment in medicine. BMJ. 2004, 329: 606-609. doi:10.1136/bmj.329.7466.606.
- Kreiter CD, Ferguson K, Gruppen LD: Evaluating the usefulness of computerized adaptive testing for medical in-course assessment. Acad Med. 1999, 74: 1125-1128. doi:10.1097/00001888-199910000-00016.
- Naguwa GS: An update on the USMLE performance of medical students at the John A. Burns School of Medicine and Computer-Based Testing. Hawaii Med J. 1998, 57: 646-648.
- Dorup J: Experience and attitudes towards information technology among first-year medical students in Denmark: longitudinal questionnaire survey. J Med Internet Res. 2004, 6: e10. doi:10.2196/jmir.6.1.e10.
- Bundesministerium für Gesundheit: Approbationsordnung für Ärzte vom 27. Juni 2002 [Medical Licensing Regulations]. Bundesgesetzbl. 2002, 1: 2405-2435.
- Ruderich F, Bauch M, Haag M, Heid J, Leven FJ, Singer R, Geiss HK, Jünger J, Tonshoff B: CAMPUS - A flexible, interactive system for web-based, problem-based learning in health care. Stud Health Technol Inform. 2004, 107: 921-925.
- Miller AP, Haden P, Schwartz PL, Loten EG: Pilot studies of in-course assessment for a revised medical curriculum: II. Computer-based, individual. Acad Med. 1997, 72: 1113-1115. doi:10.1097/00001888-199712000-00026.
- Ogilvie RW, Trusk TC, Blue AV: Students' attitudes towards computer testing in a basic science course. Med Educ. 1999, 33: 828-831. doi:10.1046/j.1365-2923.1999.00517.x.
- Bocij P, Greasley A: Can computer-based testing achieve quality and efficiency in assessment?. International Journal of Educational Technology. 1999, 1: 1-17.
- Fleetwood J, Vaught W, Feldman D, Gracely E, Kassutto Z, Novack D: MedEthEx Online: a computer-based learning program in medical ethics and communication skills. Teach Learn Med. 2000, 12: 96-104. doi:10.1207/S15328015TLM1202_7.
- Wheeler DW, Whittlestone KD, Smith HL, Gupta AK, Menon DK: A web-based system for teaching, assessment and examination of the undergraduate peri-operative medicine curriculum. Anaesthesia. 2003, 58: 1079-1086. doi:10.1046/j.1365-2044.2003.03405.x.
- Wolfson PJ, Veloski JJ, Robeson MR, Maxwell KS: Administration of open-ended test questions by computer in a clerkship final examination. Acad Med. 2001, 76: 835-839. doi:10.1097/00001888-200108000-00018.
- Russell M, Haney W: Testing Writing on Computers: An Experiment Comparing Student Performance on Tests Conducted via Computer and via Paper-and-Pencil. Education Policy Analysis Archives. 1997, 5: 1-20.
- Peterson MW, Gordon J, Elliott S, Kreiter C: Computer-based testing: initial report of extensive use in a medical school curriculum. Teach Learn Med. 2004, 16: 51-59. doi:10.1207/s15328015tlm1601_11.
- Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C: The Script Concordance test: a tool to assess the reflective clinician. Teach Learn Med. 2000, 12: 189-195. doi:10.1207/S15328015TLM1204_5.
- Oliver RG, Chadwick B, Jones ML, Blakytny C, Hunter B, Richmond S, Hunter ML, Whitehouse NH: A computerised method of monitoring and assessing undergraduate clinical activity. Br Dent J. 1996, 181: 279-282. doi:10.1038/sj.bdj.4809232.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/11/89/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.