Assessing basic life support skills without an instructor: is it possible?
© Mpotos et al.; licensee BioMed Central Ltd. 2012
Received: 18 December 2011
Accepted: 23 July 2012
Published: 23 July 2012
Current methods to assess Basic Life Support (BLS) skills (chest compressions and ventilations) require the presence of an instructor. This is time-consuming and introduces instructor bias. Since BLS skills testing is a routine activity, it is potentially suitable for automation. We developed a fully automated BLS testing station, without an instructor, using innovative software linked to a training manikin. The goal of our study was to investigate the feasibility of adequate testing (effectiveness) within the shortest period of time (efficiency).
As part of a randomised controlled trial investigating different compression depth training strategies, 184 medical students received an individual appointment for a retention test six months after training. An interactive Flash™ (Adobe Systems Inc., USA) user interface was developed to guide the students through the testing procedure after login, while Skills Station™ software (Laerdal Medical, Norway) automatically recorded compressions and ventilations and their duration (“time on task”). In a subgroup of 29 students, the room entrance and exit times were registered to assess efficiency. To obtain a qualitative insight into the effectiveness, students’ perceptions of the instructional organisation and of the usability of the fully automated testing station were surveyed.
During testing, data registration was incomplete for two students and one student performed compressions only. The average time on task for the remaining 181 students was three minutes (SD 0.5). In the subgroup, the average overall time spent in the testing station was 7.5 minutes (SD 1.4). Mean scores were 5.3/6 (SD 0.5, range 4.0-6.0) for instructional organisation and 5.0/6 (SD 0.61, range 3.1-6.0) for usability. Students highly appreciated the automated testing procedure.
Our automated testing station was an effective and efficient method to assess BLS skills in medical students. Instructional organisation and usability were judged to be very good. This method enables future formative assessment and certification procedures to be carried out without instructor involvement.
Keywords: Automated testing, Basic Life Support, Cardiopulmonary resuscitation, Self-directed learning
Delivery of high-quality chest compressions is the Basic Life Support (BLS) skill most likely to improve survival [1–5]. To ensure that trainees reliably achieve the learning objectives, educational interventions should be evaluated through assessment. Since BLS skills mastery decays rapidly and should not be assumed to persist for pre-defined time periods, regular skill assessment should be established to determine the need for refresher training. Current BLS testing methods require the presence of an instructor, making testing time-consuming and introducing a risk of instructor bias. Acquiring objective data from recording manikins provides more accurate information about skills mastery than instructor judgement. However, current manikin-based solutions still require an instructor to organise the testing, to manage the candidates, to present a scenario (when required) and to operate the manikin and the computer.
The goal of our study was to investigate the feasibility of adequate testing (effectiveness) within the shortest period of time (efficiency), using an automated testing procedure without an instructor.
To determine the effectiveness of the testing procedure, we surveyed the participants’ perceptions regarding the key elements in the instructional setting of the automated testing station (goals, instructions, assessment and feedback) and elements related to the setup. In the literature, the latter is labelled as "usability".
Efficiency was measured by a research collaborator who registered the overall time spent in the testing station in a subgroup of students.
During the academic year 2009–2010, as part of a randomised controlled trial investigating different compression depth training strategies in a self-learning (SL) station, 184 third-year medical students had to be assessed six months after initial training. In order to facilitate the assessment procedure, our objective was to develop a fully automated testing method without an instructor and to evaluate whether such a method could achieve adequate testing (effectiveness) within the shortest period of time (efficiency). The Ethics Committee of Ghent University Hospital approved the study on 8 December 2009 (trial registration B67020097543). Participation in the study was voluntary, and non-participation did not influence student grades. All students had received instructor-led (IL) training and testing during their first and second year of medicine.
The actual testing results are reported in the randomised controlled trial by Mpotos and colleagues. To ensure the BLS competency of every student in accordance with the resuscitation guidelines, refresher training was provided in the following year.
After this introduction, a text message was displayed asking the student to take a face shield and place it on the manikin. After clicking “continue”, the next screen informed the student that a victim had just collapsed in the room, that there was no breathing and no circulation, and that an ambulance had been called. The student was asked to kneel down next to the victim and to start resuscitation. The same screen showed an analogue clock and a digital countdown timer of three minutes [Figure 2b]. After clicking “continue”, the next screen asked the student to confirm that he or she was kneeling next to the manikin and that the face shield was properly placed on the manikin’s face. The student was then asked to click the “start test” button displayed on the screen and to perform BLS for three minutes [Figure 2c]. Because students had received automated voice feedback from the manikin during training in the SL station six months before the test, we stressed that the test would be without voice assistance. The Resusci Anne Skills Station™ automatically registered the data picked up by sensors in the manikin. The amount of time spent performing compressions and ventilations was also registered and is referred to as “time on task”. For our test, a time on task of three minutes was required. After exactly three minutes, the clock and numeric countdown turned red and an audio warning signal was played. This was immediately followed by a video message from the instructor announcing the end of the test and asking the student to clean the manikin and to leave the room [Figure 2d]. The program then automatically returned to the login screen and the performance results were stored as XML files in a database. Students did not receive feedback about their performance at the end of the test.
Information stored in the XML files
total number of compressions
average compression depth
number registered with incomplete release (≥5 mm)
number registered with hand position too low/too high/too far to the right/too far to the left
number registered with incorrect hand placement
number registered with average, adequate, insufficient and excessive rate
time-outs
total number of ventilations
number of ventilations registered with average, adequate, insufficient and excessive volume
average minute volume
number of ventilations registered with insufficient relaxation
average inspiration time
number of ventilations registered with adequate, too short, too long inspiration time
average ventilation flow rate
number of ventilations registered with adequate, too short, too long duration
number of ventilations registered with airway closed
number of cycles registered with too few compressions/ventilations, too many compressions/ventilations, enough compressions/ventilations
total hands off time
number of cycles registered with correct, too long, much too long, average hands off time
total cycles counted
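To illustrate how such stored results might be processed downstream, the sketch below parses one result file with Python’s standard `xml.etree.ElementTree` module. The element and attribute names are invented for illustration only; the actual Skills Station™ XML schema is not described in this article.

```python
import xml.etree.ElementTree as ET

# Hypothetical result file: the real schema is not published, so these
# element and attribute names are illustrative assumptions only.
SAMPLE = """<testResult student="s001">
  <compressions total="210" averageDepth="48" incompleteRelease="12"/>
  <ventilations total="12" averageMinuteVolume="5.1"/>
  <handsOffTime total="22.4"/>
</testResult>"""

def summarise(xml_text):
    """Flatten one test-result file into a summary dict."""
    root = ET.fromstring(xml_text)
    comp = root.find("compressions")
    vent = root.find("ventilations")
    return {
        "student": root.get("student"),
        "compressions": int(comp.get("total")),
        "mean_depth_mm": float(comp.get("averageDepth")),
        "ventilations": int(vent.get("total")),
        "hands_off_s": float(root.find("handsOffTime").get("total")),
    }
```

A batch of such dicts could then be aggregated for the group-level statistics reported below.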
Results are reported as means with standard deviations. With respect to the questionnaire, descriptive results (based on percentages) are graphically represented.
Students’ responses to the questionnaire were analysed through principal components analysis (PCA). Since independence of the components was not assumed, an oblique (promax) rotation was used instead of an orthogonal rotation. To determine the number of components, parallel analysis (PA) was used. PA is a statistically based method to decide on the number of components, retaining the components that account for more variance than components derived from random data, and is more appropriate than using scree plots or the eigenvalues-greater-than-one rule. Individual loadings of 0.40 or larger were used to identify components. Extracted components were examined and labelled based on the items loading on the specific component. Cronbach’s α was calculated to determine the internal consistency of the items within each component. All statistical analyses were performed using PASW® Statistics 18 for Windows (SPSS Inc., Chicago, USA). For the parallel analysis, the SPSS syntax of O’Connor was used.
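For readers who wish to reproduce the component-selection and reliability steps outside SPSS, the sketch below implements Horn’s parallel analysis and Cronbach’s α with NumPy. It is a minimal sketch of the same ideas, not the O’Connor syntax used in the study.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain components whose observed eigenvalue
    exceeds the mean eigenvalue of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))

def cronbach_alpha(items):
    """Internal consistency of a set of items (one column per item)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

With, say, 174 respondents and 20 items, `data` would be a 174 × 20 matrix of Likert scores; the retained components would then be rotated (e.g. promax) and labelled from their loadings.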
One hundred and eighty-four students were tested. During testing there was a technical failure (incomplete data registration) for two students, and one student performed compressions only. Complete data sets were obtained for the remaining 181 students. According to the automatic time registration of the system, the average time on task was three minutes (SD 0.5). Manual timing of the entrance and exit times in the subgroup of 29 students showed an average of 7.5 minutes (SD 1.4) spent in the testing station.
The questionnaire was completed by 174/184 students (response rate 94 %). The descriptive results are shown in Figure 3. None of the 20 items received a “strongly disagree” score, and only five items (16, 17, 10, 18 and 9) received a “hardly agree” score from a small number of students. Furthermore, the graph shows that for the upper 15 questions more than 80 % of the students either strongly or certainly agreed, and that for items 13, 17 and 10 more than 70 % of the students either strongly or certainly agreed. In response to item 18, asking students whether they preferred the automated testing station to an IL test, 55 % of the students either strongly or certainly agreed, 25 % agreed, 15 % somewhat agreed and 5 % hardly agreed.
Principal component analysis
Pattern matrix of the principal components analysis (promax rotation)
The instruction was clear and plain.
The instructions of the test were sufficiently clear.
During the application of the automated testing station I always knew what to do.
The computer support was relevant.
The organisation of the test was effective.
I was prompted to start the test at the right moment.
The goals of the test were sufficiently clear.
During the session, I knew clearly what I was doing.
The automated testing station was easy to use.
The organisation of the test was convenient.
During the test procedure I was sufficiently guided.
The accompanying text was very helpful.
The test was relevant to assess the goals of the course.
To me, the automated testing station is a good way to assess my skills.
I prefer the automated testing station to an instructor-led test.
The testing of my performance connected to what I had learned.
The aids used (computer, video, manikin) were appropriate.
The skills that were tested correspond to what I had learned.
The test lasted long enough to evaluate my abilities.
The test was too long and was causing fatigue in the end (R).
We developed a fully automated testing station to assess BLS skills. An interactive Flash™ module, embedded in commercially available software (Resusci Anne Skills Station™), guided students accurately through the testing procedure without instructor involvement. Although the software contained a timer to indicate the duration of the test, this does not automatically imply that rescuers performed BLS during the full three minutes. By recording the actual time on task, we could confirm that the average test duration was three minutes. An automated testing station can be used to assess large groups of trainees (e.g. for certification testing). Assuming 14 testing hours per day and an average of 7.5 minutes per student, eight students could be tested per hour and 112 subjects per day. Achieving this number with an instructor would be far more labour- and time-intensive.
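The throughput arithmetic above can be checked directly, using only the figures reported in the study:

```python
# Throughput estimate for the unattended testing station,
# using the study's figures: 7.5 min mean occupancy per student,
# 14 available testing hours per day.
minutes_per_student = 7.5
students_per_hour = 60 / minutes_per_student   # 8.0
students_per_day = students_per_hour * 14      # 112.0
```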
Testing stations could also add value as an integral part of training, since testing has been shown to have a powerful effect on retention, which may be essential to consolidate newly acquired skills. Adding a test as a final activity in a BLS course seems to have a stronger long-term learning impact than spending an equal amount of time practising the same skills [13–15]. At a theoretical level, the training of retrieval processes seems to account for the “testing effect”. In addition, requiring learners to elaborate while retrieving earlier knowledge during testing has been found to affect long-term learning [12, 14, 15]. Although these assumptions explain the testing effect in relation to declarative knowledge acquisition, they also fit the beneficial impact on the acquisition of skills, as tests also invoke retrieval and elaboration of procedural knowledge.
SWOT analysis of automated BLS skills testing

Strengths:
· Accessible (24 h/24 h)
· Objective (no instructor bias)
· Able to achieve adequate testing (effectiveness) within the shortest period of time (efficiency)

Weaknesses:
· Need for human supervision to supply disposables (wipes and lungs)
· Frequent manikin maintenance
· Technical failures (manikin, hardware or software bugs, computer problems)
· Hygiene concerns

Opportunities:
· Formative testing of large groups
· Certification procedures
· Pre- and post-testing in educational interventions
· Acceptance by the internet generation

Threats:
· Dependency on computer and internet technology
· Monopoly of technology and commercial exploitation
The PCA resulted in a two-component structure: one component focused on the quality of the instructional organisation (goals, instructions, assessment and feedback) and the other on usability. Average scores indicated that students certainly to strongly agreed that the instructional organisation was appropriate, and certainly agreed that the approach was usable. The results of this questionnaire are important for two reasons. First, they show that the automated testing station functioned properly and was adequately organised. Second, they show that students were positive about the usability of the testing station.
As suggested by Kromann and colleagues, future studies should investigate the intrinsic testing effect and the extrinsic learning effect of formative testing, informing participants about their performance and guiding them towards further skills improvement and mastery [14, 15]. Such studies could incorporate automated skills testing as a formative assessment procedure in an adaptive learning cycle with repetitive testing.
A number of limitations have to be stressed. When discussing the quality of this specific assessment setting, two aspects have to be distinguished: the quality of the assessment process and the quality of the measurement of the performance indicators. The latter is guaranteed by the intrinsic quality of the manikin sensors and by the use of existing registration software. Maintenance protocols and timely replacement of sensors, valves and springs are imperative to guarantee measurement reliability and validity. In the context of the present study, the students were familiar with training in an SL station, which may have improved the usability of the testing station. However, the automated testing situation and the specific Flash™ module were completely new to the students. Presenting the usability questionnaire six months after testing may have introduced a bias. Further research is needed to confirm these results in terms of non-inferiority compared with IL testing and of usability in other student populations.
The software prototype we used focussed only on testing the technical CPR components. Future developments could embed interactive components allowing the trainee to dial a phone number or to assess cardiac arrest by performing the right actions on-screen.
Automated testing is an effective and efficient method for assessing BLS skills in medical students and has the potential to innovate traditional resuscitation training. It lays the groundwork for scalable formative assessment and certification procedures without instructor involvement.
NICOLAS MPOTOS, MD, MSc, is a resident in emergency medicine and a PhD student at Ghent University Hospital.
BRAM DE WEVER, PhD, is a professor at the Department of Educational Studies, Ghent University.
MARTIN VALCKE, PhD, is a professor and head of the Department of Educational Studies, Ghent University.
KOENRAAD G. MONSIEURS, MD, PhD, is a professor of emergency medicine at the University of Antwerp and at the University of Ghent.
We are grateful to the management of Ghent University Hospital, to the IT department for computer support, to Charlotte Vankeirsbilck for administrative support and to all the students who participated in the study. We thank Lisa Malfait for her participation in the intro video and for her informed consent to use any image related to this video. The programming of the Flash™ module was done by Uniweb bvba (Strombeek-Bever, Belgium).
- Van Hoeyweghen RJ, Bossaert L, Mullie A, Calle P, Martens P, Buylaert WA, Delooz H: Quality and efficiency of bystander CPR. Belgian Cerebral Resuscitation Study Group. Resuscitation. 1993, 26: 47-52. 10.1016/0300-9572(93)90162-J.
- Cummins RO, Eisenberg MS: Pre-hospital cardiopulmonary resuscitation: is it effective? JAMA. 1985, 253: 2408-2412. 10.1001/jama.1985.03350400092028.
- Gallagher EJ, Lombardi G, Gennis P: Effectiveness of bystander cardiopulmonary resuscitation and survival following out-of-hospital cardiac arrest. JAMA. 1995, 274: 1922-1925. 10.1001/jama.1995.03530240032036.
- Swor RA, Jackson RE, Cynar M, Sadler E, Basse E, Boji B, Rivera-Rivera EJ, Maher A, Grubb W, Jacobson R, Dalbec DL: Bystander CPR, ventricular fibrillation, and survival in witnessed, unmonitored out-of-hospital cardiac arrest. Ann Emerg Med. 1995, 25: 780-784. 10.1016/S0196-0644(95)70207-5.
- Wik L, Steen PA, Bircher NG: Quality of bystander cardiopulmonary resuscitation influences outcome after pre-hospital cardiac arrest. Resuscitation. 1994, 28: 195-203. 10.1016/0300-9572(94)90064-7.
- Soar J, Mancini ME, Bhanji F, Billi JE, Dennett J, Finn J, Ma MH, Perkins GD, Rodgers DL, Hazinski MF, Jacobs I, Morley PT: Part 12: Education, implementation, and teams: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations. Resuscitation. 2010, 81 (Suppl 1): e288-e330.
- Lynch B, Einspruch EL, Nichol G, Aufderheide TP: Assessment of BLS skills: Optimizing use of instructor and manikin measures. Resuscitation. 2008, 76: 233-243. 10.1016/j.resuscitation.2007.07.018.
- Holzinger A: Usability engineering methods for software developers. Communications of the ACM. 2005, 48: 71-74.
- Mpotos N, Lemoyne S, Wyler B, Deschepper E, Herregods L, Calle PA, Valcke MA, Monsieurs KG: Training to deeper compression depth reduces shallow compressions after six months in a manikin model. Resuscitation. 2011, 82: 1323-1327. 10.1016/j.resuscitation.2011.06.004.
- Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ: Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods. 1999, http://dx.doi.org/10.1037/1082-989X.4.3.272.
- O’Connor BP: SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behavior Research Methods. 2000, http://dx.doi.org/10.3758/BF03200807.
- Larsen DP, Butler AC, Roediger HL: Repeated testing improves long-term retention relative to repeated study: a randomised controlled trial. Medical Education. 2009, 43: 1174-1181. 10.1111/j.1365-2923.2009.03518.x.
- Roediger HL, Karpicke JD: The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science. 2006, 1: 181-276. 10.1111/j.1745-6916.2006.00012.x.
- Kromann CB, Jensen ML, Ringsted C: The effect of testing on skills learning. Medical Education. 2009, 43: 21-27. 10.1111/j.1365-2923.2008.03245.x.
- Kromann CB, Bohnstedt C, Jensen ML, Ringsted C: The testing effect on skills learning might last 6 months. Adv Health Sci Educ Theory Pract. 2009, http://dx.doi.org/10.1007/s10459-009-9207-x.
- Sutton RM, Niles D, Meaney PA, Aplenc R, French B, Abella BS, Lengetti EL, Berg RA, Helfaer MA, Nadkarni V: Low-dose, high-frequency CPR training improves skill retention of in-hospital pediatric providers. Pediatrics. 2011, 128: e145-151. 10.1542/peds.2010-2105.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/12/58/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.