Development and validation of assessment instruments for cervical collar and spinal board placement in simulated environments for nursing students in the care of polytrauma patients
BMC Medical Education volume 24, Article number: 1080 (2024)
Abstract
Background
Multiple trauma injuries are the leading cause of death and disability in people under the age of 45 and require prompt and specialised care. However, medical and nursing education programmes do not always include specific training in emergency pre-hospital care, resulting in a lack of basic practical skills in trauma management.
Objective
To develop and validate two instruments for assessing nursing students’ competence in cervical collar and spinal board application in simulated pre-hospital emergency scenarios.
Method
This is an instrumental study that involves the development of two assessment instruments and the evaluation of their psychometric properties in a sample of 392 nursing students. Content validity was assessed using expert judgement, by calculating the content validity ratio (CVR) for each item and the scale level content validity index average (S-CVI/Ave) for the instruments. Exploratory factor analysis using the MINRES extraction method and Promax rotation was performed to analyse the performance of the items and structure of the rubrics. Internal consistency was analysed using the Omega coefficient and inter-rater agreement was assessed using Cohen’s Kappa coefficient.
Results
Initially, two rubrics were obtained: one with six items for cervical collar placement (S-CVI/Ave = 0.86) and one with nine items for spinal board placement (S-CVI/Ave = 0.81). Both had a single-factor structure, with all items having factor loadings greater than 0.34 for the cervical collar rubric and 0.56 for the spinal board rubric, except for item 2 of the cervical collar rubric (λ = 0.24), which was subsequently removed. The final cervical collar rubric (five items) had an overall internal consistency of 0.84 and the spinal board rubric had an overall internal consistency of 0.90, calculated using the Omega statistic. The weighted Kappa coefficient for each item ranged from acceptable (0.32) to substantial (0.79). These results show that we have successfully developed two sufficiently valid instruments to assess the immobilisation competencies proposed in the objective of the study.
Conclusion
Whilst further research is needed to fully establish their psychometric properties, these instruments offer a valuable starting point for evaluating nursing students' competence in cervical collar and spinal board application in simulated pre-hospital scenarios.
Introduction
With the increasing complexity of health care, the education system is having to respond to the need to equip health sciences students with advanced clinical skills [1] to ensure patient safety [2]. A clear example of this need is the initial management of patients with multiple trauma injuries. Such cases are increasingly common [3, 4] and account for approximately 25% of hospital admissions [4]. They are the leading cause of death, accounting for 4.4% of global mortality, and one of the leading causes of death among the population under 45 years of age in the European context [5, 6].
This scenario calls for prompt and specialised care from healthcare professionals. Whilst pre-hospital emergency teams are not equivalent across all countries, in Spain, according to legislation, these teams are mandatorily comprised of a nurse and an emergency medical technician, and when necessitated by the severity of the situation, a physician. These professionals are responsible for delivering initial medical intervention [7]. However, medical and nursing education programmes are not standardised and do not always include specific training in pre-hospital emergencies [8, 9]. Research has revealed that both medical students [9] and professionals [10, 11] lack basic practical skills in trauma management, although professionals report high levels of self-efficacy [12]. Furthermore, as pre-hospital treatment of critically injured patients is not routine for most emergency teams, periodic training of healthcare professionals in concepts and emergency interventions is needed [13].
Professionals often enhance their skills through postgraduate and specialisation courses [14, 15], which typically follow the principles of Pre-hospital Trauma Life Support (PHTLS) [16]. This is universally recognised as the gold standard in trauma management, although it is often adapted to specific contexts [17,18,19]. These best practice standards mean that every patient can be assessed and treated using a common language, especially in high-intensity emergency situations [20].
In recent years, specialist courses have adopted a broader, more multidisciplinary and collaborative approach, expanding their teaching base to include clinical physicians and nurses with experience in trauma care but no previous teaching experience. This has led to concerns about whether or not the highest standards of education are being maintained [21, 22]. While there is a need to tailor the content of these courses to different contexts and resources, the international practice guidelines remain clear [20]. However, there is no consensus on the best pedagogical approaches, assessment methods or certification requirements [21, 22].
In terms of pedagogical approaches, most of these courses involve hands-on skills training using clinical simulation, a widely used method for both professionals [2, 23] and health sciences students [24]. Simulation has been shown to be effective in replicating real-life scenarios without putting patients at risk [25].
However, the evaluation of competence acquisition is an area that is less well developed, often marked by variability and subjectivity in assessors’ scores [10, 25]. This highlights the need for systems that objectively evaluate trauma management skills using valid assessment instruments [26, 27]. In this context, some studies have been published on the development and validation of checklists for assessing the overall performance of emergency teams in trauma care (such as the PERFECT checklist) [28]. These checklists do not contain any specific criteria for the evaluation of the procedures in question. Validated instruments are also available for the assessment of certain procedures such as the application of tourniquets, emergency dressings and topical haemostatic agents [29]. However, there are no published instruments available for assessing the application of other procedures, such as cervical collars and spinal boards. Although recent literature has increasingly documented the disadvantages of spinal immobilisation [30, 31] and called for more precise guidelines to aid decision making regarding its use [32], rational, non-routine spinal immobilisation remains necessary for some patients with severe injuries [33,34,35].
The aim of this study was therefore to develop and validate instruments for assessing the competence of nursing students in cervical collar and spinal board application in a simulated pre-hospital emergency setting. To achieve this aim, an instrumental study design was chosen. This approach allows for systematic creation of the instruments and rigorous testing of their psychometric properties.
Method
Design
An instrumental study was conducted to develop and evaluate the psychometric properties of two assessment instruments designed for two specific procedures in simulated pre-hospital care settings for polytrauma patients: 1) cervical collar application and 2) spinal board application. We adapted the methodology proposed by Slavec and Drnovšek [36] to suit our specific context and research objectives. In the initial phase, we carried out a comprehensive review of relevant measurement instruments and related articles in order to generate items for questionnaire development. The second phase involved assessing content validity, while the third phase focused on evaluating the instrument’s psychometric properties (Fig. 1).
Participants
During the second phase, we used a convenience sample consisting of 11 experts in pre-hospital emergency care. In this phase, the inclusion criterion was being a Spanish physician or nurse with more than five years of experience in pre-hospital emergency care settings. In the third phase, we used a convenience sample of nursing students. In this phase, the inclusion criterion was being enrolled in the subject “Nursing Care in Specialist Units” in the fourth year of the Nursing Degree at the University of Alicante during the academic years 2022/23 and 2023/24.
Sample size
In consideration of the latest guidelines in the field of psychometrics developed by Ferrando et al. [37] and Lloret-Segura et al. [38], a sample size of at least 200 participants is recommended, even under optimal conditions.
Variables and instruments
For descriptive purposes, we examined the following sociodemographic and academic variables, which could a priori influence students' competence levels and which were collected using an ad hoc questionnaire: age (as a continuous variable), gender (male/female), previous work experience in the field of emergency medicine (yes/no), previous experience in high-fidelity simulation (HFS) (yes/no), type of training simulator used in previous HFS (none, manikins, hybrid patients, standardised patients, or a combination of these) and the course in which training in HFS was received (nursing degree, other degree, non-university training).
Questionnaire development and evaluation
Phase 1: Item generation
Various approaches can be used to generate items: 1) deductive, 2) inductive or 3) a combination of the two. For our study, we used a combination of the two methods, which is recommended when there is limited evidence of similar or related instruments for the construct in question [39, 40]. The research team conducted an exhaustive search for relevant instruments and related articles in a number of databases, including Medline, CINAHL, Web of Science, Scopus and Cuiden, as well as in specialist journals on emergency medicine (see supplementary material 1). We also reviewed the technical specifications of various commercially available devices. This process was complemented by input from two nurses with over five years’ experience in pre-hospital emergency care [41], which corroborated the relevance of the initial draft of items.
Phase 2: Content validity
Once the initial sets of items for the assessment rubrics had been created, we carried out a content analysis with the help of experts using the modified Delphi method [42] (Fig. 2). Eleven experts in the field of pre-hospital emergency care, of whom 54.55% were male, participated in the content validity process. They included five physicians and six active emergency nurses working in services in three Spanish autonomous communities (the Community of Valencia, Castile-La Mancha and the Community of Madrid). In the first phase of the content validity process, these experts were asked to rate each item for relevance, sufficiency, coherence and clarity [43, 44] and to assign a score from 1 (minimum importance) to 10 (maximum importance). In the second phase, the experts had the opportunity to modify the wording or suggest changes. At the end of each rubric there was a blank section where new items could be added if deemed necessary. Several rounds were undertaken until the experts reached a high level of consensus [45]. After each round, they were given feedback on the information that had been incorporated and any changes that had been made [42]. In the third phase, we calculated the Content Validity Ratio (CVR) and the Content Validity Index (CVI). For the CVR, the measurement model used for individual scale items was based on Lawshe's model [46]. The CVR measures the essentiality of an item and varies between −1 and 1, with a higher score indicating greater agreement among panel members. The CVR was determined by classifying each item's score into one of three categories: 1) the item was essential for assessing the construct (scores of 7–10), 2) the item was useful but dispensable (scores of 4–6), and 3) the item was deemed unnecessary (scores of 1–3) [45].
The formula for the CVR is CVR = (Ne – N/2)/(N/2), where Ne is the number of experts indicating an item as essential and N is the total number of experts. At the item level, we considered a CVR above 0.59 acceptable, as the panel comprised 11 experts [46]. For the Content Validity Index (CVI), we calculated both the item-level CVI (I-CVI) and the scale-level CVI (S-CVI). The I-CVI is computed as the number of experts rating an item as very relevant divided by the total number of experts [47]. The CVI was determined by classifying each item's score into one of three categories: 1) the item was very relevant for assessing the construct (scores of 7–10), 2) the item was relevant but dispensable (scores of 4–6), and 3) the item was deemed irrelevant (scores of 1–3) [46]. Values range from 0 to 1: an I-CVI > 0.79 indicates the item is relevant, a value between 0.70 and 0.79 suggests the item needs revision, and a value below 0.70 indicates the item should be eliminated. The S-CVI was calculated from the number of items rated as very relevant; specifically, we calculated the S-CVI Average (S-CVI/Ave) [47], obtained by dividing the sum of the I-CVIs by the total number of items. We considered an S-CVI/Ave above 0.80 acceptable [48]. Finally, to assess face validity, a pilot test was carried out with 16 students, divided into four subgroups of four students each. This pilot test was designed to allow the instructor to evaluate the readability and comprehensibility of the assessment instruments, which were used to assess students in simulated out-of-hospital polytrauma care scenarios.
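The content validity indices above lend themselves to a short computational sketch. The snippet below is written in Python (the study's own analyses were run in R and SPSS) and implements Lawshe's CVR, the I-CVI and the S-CVI/Ave as defined above; the expert ratings shown are hypothetical illustrations, not the study's data.

```python
def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: CVR = (Ne - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def i_cvi(n_relevant, n_experts):
    """Item-level CVI: proportion of experts rating the item as very relevant."""
    return n_relevant / n_experts

def s_cvi_ave(item_cvis):
    """Scale-level CVI, averaging method: mean of the item I-CVIs."""
    return sum(item_cvis) / len(item_cvis)

# Hypothetical 1-10 ratings from an 11-expert panel for two items
ratings = [
    [9, 8, 10, 7, 9, 8, 7, 10, 9, 8, 7],  # item 1
    [7, 5, 8, 6, 9, 7, 4, 8, 7, 6, 9],    # item 2
]
N = 11
item_cvrs, item_cvis = [], []
for item in ratings:
    ne = sum(1 for score in item if score >= 7)  # scores of 7-10 count as essential/very relevant
    item_cvrs.append(cvr(ne, N))
    item_cvis.append(i_cvi(ne, N))

print([round(v, 2) for v in item_cvrs])  # per-item CVR
print(round(s_cvi_ave(item_cvis), 2))    # S-CVI/Ave
```

With 11 panellists, an item rated essential by all experts reaches the maximum CVR of 1, while one endorsed by only 7 of 11 falls below the 0.59 threshold.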
Phase 3: Psychometric evaluation
We imputed missing values for the sociodemographic variables and then performed a descriptive analysis of the sociodemographic and academic variables. Percentages were calculated for categorical responses and the mean and standard deviation (x̄ ± SD) for continuous variables. The performance of the rubric items was then analysed: we calculated skewness, kurtosis, the item-total correlation coefficient, and floor and ceiling effects. Skewness and kurtosis values between −2 and 2 were considered indicative of item normality [49]. The item-total correlation was calculated using the point-biserial correlation coefficient; discrimination index and coefficient values above 0.29 were deemed adequate [50]. A floor or ceiling effect was confirmed if more than 15% of participants achieved the lowest or highest possible score, respectively [49]. To examine the structure of the rubrics, we performed an Exploratory Factor Analysis (EFA) using the Minimum Residual (MINRES) extraction method, which is recommended for categorical variables and small samples [51], with oblique Promax rotation [37]. The appropriate number of factors was determined using parallel analysis [45, 52], supported by theoretical interpretability criteria [37, 38]. The suitability of the EFA was tested using three indicators: Bartlett's sphericity test (with p < 0.05 considered adequate) [53], the determinant of the correlation matrix (values close to zero) [37, 38] and the Kaiser–Meyer–Olkin (KMO) measure (values above 0.70) [54]. Item selection was determined by a saturation level ≥ 0.30 [49, 55]. We also analysed internal consistency using the Omega coefficient for categorical variables, as recommended for ordinal data and one-dimensional models; an Omega value greater than or equal to 0.70 was considered acceptable [56,57,58].
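The item-performance statistics described above can be illustrated with a minimal sketch, again in Python rather than the R/SPSS used in the study. It shows a corrected point-biserial item-total correlation (the item's own score removed from the total) and the 15% floor/ceiling rule; all scores are hypothetical, and the function names are our own.

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation computed from population moments."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def corrected_item_total(item, totals):
    """Point-biserial item-total correlation, with the item's own
    score removed from the total to avoid inflating the estimate."""
    rest = [t - i for i, t in zip(item, totals)]
    return pearson(item, rest)

def floor_ceiling(scores, min_score, max_score):
    """Proportions of respondents at the floor and at the ceiling;
    more than 15% at either end flags a floor/ceiling effect."""
    n = len(scores)
    return (sum(s == min_score for s in scores) / n,
            sum(s == max_score for s in scores) / n)

# Hypothetical dichotomous item scores and rubric totals for 10 students
item_scores = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
totals = [5, 4, 2, 5, 1, 4, 5, 5, 2, 4]
print(round(corrected_item_total(item_scores, totals), 2))
print(floor_ceiling(totals, 0, 5))
```

In this toy example the item correlates well with the rest of the rubric, whereas a coefficient below 0.29 (as for item 2 of the cervical collar rubric) would mark the item for review.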
Finally, inter-rater agreement between two independent observers was assessed using Cohen's Kappa coefficient for each item [59], with a value greater than 0.21 considered fair [60]. The performance of the instrument by factor was calculated using descriptive statistics (mean and standard deviation) and percentile scores based on the categories proposed by the Australian Council for Educational Research [61]. The analyses were performed using R (version 3.4.0) and SPSS version 29.0.2.0 for Macintosh.
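Cohen's Kappa can likewise be sketched. The function below implements the unweighted form for two raters (the Results additionally report weighted Kappa values, which would require a weight matrix over score categories); the instructor/observer scores are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same
    performances on a categorical scale: (Po - Pe) / (1 - Pe)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail item scores from the instructor and the independent observer
instructor = [1, 1, 0, 1, 0, 1, 1, 0]
observer = [1, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(instructor, observer), 2))
```

Kappa corrects raw agreement for the agreement expected by chance, which is why two raters who agree on every case obtain 1.0 while chance-level agreement yields 0.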
Procedure
An educational intervention using high-fidelity simulation was delivered between October and December in both the 2022/23 and 2023/24 academic years. The purpose of the intervention was to train students, divided into 107 groups of three to four people, in two specific procedures: 1) cervical collar application and 2) spinal board application. The equipment used included a SimMan ALS simulator from Laerdal Medical, an X-cervical collar from IES Medical and a spinal rescue board from Steelpro Safety. The educational intervention (see Supplementary Material 2) consisted of a series of four sessions delivered over a two-month period. For the fourth session, the students were assembled in a familiar and well-equipped classroom within the Faculty. They were initially given 30 min during which they could practise the procedures freely. Each subgroup then received 15 min of training in a simulated scenario involving the comprehensive care of a polytrauma patient. Subgroups of three or four students were then required to respond to simulated clinical scenarios in which they had to apply the above procedures as part of the comprehensive care of the patient. As part of the evaluation process, each group was then assessed by the regular practical instructor and by an independent observer. Both the instructor and the independent observer were located in the control room, out of sight of the students. At the end of the practical session, collaborating researchers collected participants' sociodemographic data in an adjacent classroom.
Ethical considerations
This study adhered to the criteria of the Declaration of Helsinki and the European Union Standards of Good Clinical Practice, and was approved by the Research and Ethics Committee of the University of Alicante (reference number: UA-2022–07-15). All participants were informed of the voluntary nature of their participation and the way in which the data collected would be processed. The data collected were treated confidentially and used only for research purposes. Written informed consent was obtained from all participants. The informed consent was completed by the students during theoretical session 3 (see supplementary material 2).
Results
Phase 1: Item generation
This phase yielded two rubrics, one with six items relating to cervical collar placement and the other with nine items relating to spinal board placement (see Supplementary Material 3). The literature search across various electronic databases yielded no results that could directly inform or assist in the construction of items. Consequently, the research team developed the items based on technical specifications from diverse commercial devices, generating a preliminary pool of items for the different rubrics. Subsequently, two nurses made minor modifications to the items based on their practical experience and corroborated the relevance of the initial draft of items.
Phase 2: Content validity
During the initial content validity process, the changes suggested by the experts were incorporated into the original item pool for items 2, 5 and 6 of the cervical collar rubric. For the spinal board rubric, all items were modified except item 7, which remained unchanged. Based on the experts' recommendations, a number of grammatical changes were then made to item 5 of the cervical collar rubric and items 4, 6, 7, 8 and 9 of the spinal board rubric (see Supplementary Material 3). The experts deemed all the items to be essential (median and mode of all items ≥ 7). With the exception of items 2, 3 and 6 of the spinal board rubric, the CVR was greater than 0.59 and the I-CVI was greater than 0.79. As these items were considered theoretically relevant, the research team decided not to reject them. The S-CVI/Ave for each instrument was > 0.8 (see Table 1). Finally, in the pilot test conducted with 16 students, no changes to the items were necessary. The instructors took an average of two to three minutes to complete the instrument.
Phase 3: Psychometric evaluation
Sociodemographic characteristics of the sample
The mean age of the participants (n = 392 students) was 23.83 ± 5.27 years and the majority were female (80.1%, n = 314). A total of 88.6% had received previous training in clinical simulation, with 95.7% having received this training during their nursing studies. In this sample, 93.2% had no previous work experience in emergency medicine (see Table 2).
Performance of the items
Table 3 shows the performance of the items in the cervical collar placement and spinal board placement rubrics. No skewness or kurtosis was observed in the items that made up the instruments. The correlation coefficient was adequate for all items in the overall rubrics, except for item 2 in the cervical collar placement rubric, which fell below the critical value of 0.29 [27]. A ceiling effect was observed for all items in both rubrics.
Structural validity
For the cervical collar placement rubric, the EFA results with the six items generated yielded an initial single-factor structure with a KMO of 0.70, a determinant of the correlation matrix equal to 0.21 and a Bartlett’s sphericity test value of 163.22 (df = 15; p < 0.001). Horn’s parallel analysis initially identified two factors, but this gave rise to a less meaningful theoretical interpretation. A single-factor structure was therefore retained, which was further confirmed by other analyses including the scree plot and Minimum Average Partial Test (MAP) [62]. This approach was further supported by adherence to the principle of parsimony, which advocates for the simplest model that adequately explains the data without unnecessary complexity [37, 38]. All items had factor loadings greater than 0.34, with the exception of item 2 (λ = 0.24). A new exploratory factor analysis was therefore performed after eliminating item 2 (Table 4). This five-item structure yielded a KMO of 0.68, a determinant of the correlation matrix equal to 0.22 and a Bartlett’s sphericity test value of 163.22 (df = 15; p < 0.001). All items had factor loadings greater than 0.34, with an explained variance of 42%.
With regard to the spinal board placement rubric, the EFA results of the nine items generated (Table 4) yielded an initial single-factor structure with a KMO of 0.86, a determinant of the correlation matrix equal to 0.013 and a Bartlett’s sphericity test of 443.93 (df = 36; p < 0.001). As with the cervical collar placement rubric, parallel analysis supported the retention of a two-factor structure. However, in the interests of more meaningful interpretation, we decided to retain a one-dimensional structure, a decision that was supported by other analyses. All items had factor loadings greater than 0.56, with an explained variance of 45%.
Internal consistency and reliability
Overall internal consistency was 0.84 for the cervical collar placement rubric and 0.90 for the spinal board placement rubric, calculated using the Omega statistic. The Kappa coefficient used to estimate inter-rater agreement for each item ranged from acceptable (0.32) to substantial (0.79) (Table 5). For the cervical collar placement rubric, item 2 demonstrated poor performance, with a low Kappa coefficient (0.044); although it is presented in Table 5, this result, in conjunction with the low factor loading identified in the EFA, contributed to the decision to remove it from the rubric.
Descriptive statistics of the assessment rubrics
Descriptive statistics and percentiles of the item scores can be found in Supplementary Material 4. The mean and standard deviation for the cervical collar placement and spinal board placement rubrics were 6.63 (SD = 2.45) and 13.43 (SD = 4.57), respectively. The students’ average performance ranged from 5 to 9 points for cervical collar placement and from 11 to 17 points for spinal board placement.
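A percentile rank of the kind reported in Supplementary Material 4 is simply the percentage of scores in a reference distribution that fall at or below a given rubric score. The sketch below (Python, hypothetical cohort scores) illustrates the idea.

```python
def percentile_rank(scores, value):
    """Percentage of scores in the reference distribution that fall
    at or below the given rubric score."""
    return 100 * sum(s <= value for s in scores) / len(scores)

# Hypothetical cervical collar rubric totals for a small cohort
cohort = [3, 5, 5, 6, 6, 7, 7, 8, 9, 10]
print(percentile_rank(cohort, 7))
```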
Discussion
The main aim of this study was to develop and validate two rubrics for assessing competence in cervical collar and spinal board application in nursing students undergoing clinical simulation training in the initial care of polytrauma patients (see Supplementary Material 5). To the best of our knowledge, this is the first Spanish study to develop and validate specific instruments to assess these two procedures, both of which are recommended for rational, non-routine spinal immobilisation that remains necessary for some patients with severe injuries [33,34,35].
First of all, while researchers have expressed concern about the proper immobilisation of these patients, the focus of their work has tended to be on analysing the pros and cons of immobilisation systems, the effectiveness of different spinal immobilisation techniques [63, 64] or the actual movement of the head and neck during patient transfer [34, 65]. The development of tools to assess the correct use of these devices has not been addressed. However, the present study is consistent with another in which rubrics were validated to assess training in specific procedures, such as the application of tourniquets, emergency bandages and topical haemostatic agents, in both civilian and military settings [29]. Our study also complements other assessment tools used to evaluate the comprehensive simulated care of polytrauma patients that either lack detailed scoring criteria or have not been subjected to a rigorous psychometric validation process [28].
Second, in terms of content validity, the study findings show that although three items in the spinal board placement rubric had a CVR below the acceptable threshold [46], they were not eliminated because they were theoretically relevant. Furthermore, the S-CVI/Ave for both instruments was higher than 0.80, indicating good content validity [48].
Third, in terms of item performance, all items were adequately correlated with the overall rubric, with the exception of item 2 in the cervical collar placement rubric, which yielded a point-biserial item-total correlation coefficient of 0.22. This indicates a weak relationship between the item and the overall score, suggesting that the item performs poorly in relation to the construct being measured [50]. Moreover, ceiling effects were observed, which could make it difficult to differentiate between student groups with scores at the upper end of the instruments [26, 27]. However, we have provided percentiles to help interpret the scores for both rubrics. Percentiles help to understand the relative position of a score within a distribution, which is useful when dealing with scales with ceiling effects. In such cases, absolute scores may not accurately reflect differences between student groups, and it may be difficult to distinguish among them. Future studies should aim to provide further evidence of instrument validity in larger samples. This should lead to the development of normative values, enabling the comparison of scores obtained by particular student groups with normative values from a wider population with similar characteristics [66].
Fourth, in terms of structural validity, unlike other instrument development and validation studies in the pre-hospital setting, we followed the latest psychometric recommendations proposed by Ferrando et al. [37] and Lloret-Segura et al. [38], and conducted an EFA. As with the calculation of the point-biserial item-total correlation coefficient, item 2 of the cervical collar placement rubric also had a factor loading below the minimum threshold of 0.30 required for item selection [49, 55] and was therefore removed as per the previously established criteria. It is possible that item 2 is poorly or ambiguously worded, which could increase the percentage of variance explained by spurious or irrelevant factors, thereby reducing the evidence for test validity [44]. Future research may benefit from refining the wording of this item and then reanalysing the psychometric properties of the cervical collar placement rubric.
Fifth, the internal consistency of the instruments was assessed using the Omega coefficient, yielding adequate values in line with those described in the literature [58]. Furthermore, the inter-rater reliability of the instruments ranged from acceptable to substantial for all final items included [60].
Strengths and limitations
This study has both strengths and limitations. In terms of strengths, a standardised instrument validation process was followed, resulting in two assessment rubrics for the cervical and spinal immobilisation of trauma patients. This may help to understand training needs and address deficits in basic practical trauma management skills among students, medical [12] and nursing professionals [11]. In this context, training in the use of these procedures is important, as ignorance could lead to their non-use or inappropriate use in situations where they could be beneficial [29]. Current research on pre-hospital cervical spine immobilisation techniques, including both complete immobilisation and movement restriction, is characterised by limited evidence and methodological weaknesses. The existing data fail to demonstrate clear benefits of these practices, whilst suggesting possible risks and negative consequences [31, 33,34,35]. Therefore, further research should aim to determine the specific clinical scenarios and severity thresholds that warrant the application of these restrictive immobilisation techniques, with the objective of minimising potential harm or complications to patients.
In terms of study limitations, attention should be drawn to the small sample size in a single educational centre, which means that the results cannot be generalised to other contexts. Further evidence of instrument validity should be obtained in larger samples, including both health sciences students and multidisciplinary teams of healthcare providers. It should be noted that the X-collar used in this study is a specific type of cervical collar with unique features that differ from other commonly used rigid collars. This choice of collar may limit the generalisability of the rubric to services using different collar types. It should also be noted that there was a discrepancy between the acceptable CVR and I-CVI and the poor psychometric performance of item 2 in the cervical collar placement rubric. This highlights the need for future research to refine the content validity of this item and explore potential improvements to its formulation in new samples. Furthermore, although we have provided percentiles, future studies should perform sensitivity and specificity analyses using AUC-ROC curves [66].
Practical implications
This study provides two validated tools designed to objectively assess nursing students' skills in the application of pre-hospital procedures to patients with severe polytrauma when necessary. The validated instruments serve to standardise simulation-based training in the initial management of polytrauma patients and to assess student performance. This may help both students and professionals in the field to overcome deficiencies in these two practical skills. The improved skills in cervical collar and spinal board application, as assessed by these validated tools, may translate into better patient outcomes, potentially reducing complications related to improper immobilisation and contributing to long-term benefits for patients through enhanced pre-hospital care. These instruments may facilitate the transfer of knowledge and skills acquisition among both health sciences students and outpatient emergency care professionals.
Conclusions
This study developed and validated two rubrics to assess nursing students' competence in cervical collar and spinal board application in a simulated pre-hospital setting. Items were generated from the relevant literature, and their content validity was assessed by experts and in the target population. Psychometric analysis revealed a single-factor structure for both instruments. Item 2 of the cervical collar rubric showed low item-total correlations, a poor factor loading and low reliability, suggesting that it should be revised in future research. Both rubrics had adequate internal consistency, and inter-rater reliability ranged from acceptable to substantial. The rubrics are specific, validated instruments for assessing critical skills in polytrauma patient care, enhancing training and safety in pre-hospital emergencies.
Availability of data and materials
The data supporting the findings of this study are available on formal request via University of Alicante from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
References
Hanshaw SL, Dickerson SS. High fidelity simulation evaluation studies in nursing education: A review of the literature. Nurse Educ Pract. 2020;46:102818. https://doi.org/10.1016/j.nepr.2020.102818.
Alinier G, Platt A. International overview of high-level simulation education initiatives in relation to critical care. Nurs Crit Care. 2014;19(1):42–9. https://doi.org/10.1111/nicc.12030.
Berwin JT, Pearce O, Harries L, Kelly M. Managing polytrauma patients. Injury. 2020;51(10):2091–6. https://doi.org/10.1016/j.injury.2020.07.051.
Hardy BM, King KL, Enninghorst N, Balogh ZJ. Trends in polytrauma incidence among major trauma admissions. Eur J Trauma Emerg Surg. Published online December 19, 2022. https://doi.org/10.1007/s00068-022-02200-w.
Campos-Serra A, Pérez-Díaz L, Rey-Valcárcel C, Montmany-Vioque S, Artiles-Armas M, Aparicio-Sánchez D, et al. Resultados del Registro Nacional de Politraumatismos español ¿Dónde estamos y a dónde nos dirigimos? [Results of the Spanish National Polytrauma Registry: where are we and where are we going?]. Cir Esp. 2023;101(9):609–16. https://doi.org/10.1016/j.ciresp.2022.12.008.
World Health Organization. Injuries and violence. 2024. Available from: https://www.who.int/news-room/fact-sheets/detail/injuries-and-violence.
BOE-A-2014-749. Real Decreto 22/2014, de 17 de enero, por el que se modifica el Real Decreto 836/2012, de 25 de mayo, por el que se establecen las características técnicas, el equipamiento sanitario y la dotación de personal de los vehículos de transporte sanitario por carretera [Royal Decree 22/2014, of 17 January, amending Royal Decree 836/2012, of 25 May, laying down the technical characteristics, medical equipment and staffing of road ambulance vehicles]. 2014. Available from: https://www.boe.es/eli/es/rd/2014/01/17/22.
Jouda M, Finn Y. Training in polytrauma management in medical curricula: A scoping review. Med Teach. 2020;42(12):1385–93. https://doi.org/10.1080/0142159X.2020.1811845.
Morra C, Nguyen K, Sieracki R, Pavlic A, Barry C. Trauma-informed Care Training in Trauma and Emergency Medicine: A Review of the Existing Curricula. West J Emerg Med. 2024;25(3):423–30. https://doi.org/10.5811/westjem.18394.
Larraga-García B, Quintana-Díaz M, Gutiérrez Á. The Need for Trauma Management Training and Evaluation on a Prehospital Setting. Int J Environ Res Public Health. 2022;19(20):13188. https://doi.org/10.3390/ijerph192013188.
Mohamed YM, Khalifa AM, Eltaib FA. Impact of Nursing Intervention Protocol about Polytrauma Care during the Golden Hour on Nurses’ Performance. Egyptian J Health Care. 2020;11(3):292–309. https://doi.org/10.21608/ejhc.2020.119015.
Tsang B, McKee J, Engels PT, Paton-Gay D, Widder SL. Compliance to advanced trauma life support protocols in adult trauma patients in the acute setting. World J Emerg Surg. 2013;8(1):39. https://doi.org/10.1186/1749-7922-8-39.
Kreinest M, Goller S, Rauch G, et al. Application of Cervical Collars - An Analysis of Practical Skills of Professional Emergency Medical Care Providers. PLoS One. 2015;10(11):e0143409. https://doi.org/10.1371/journal.pone.0143409.
Popp D, Zimmermann M, Kerschbaum M, Matzke M, Judemann K, Alt V. Präklinische Polytraumaversorgung: Beständige Herausforderung im präklinischen Rettungswesen [Prehospital treatment of polytrauma: Ongoing challenge in prehospital emergency services]. Unfallchirurgie (Heidelb). 2023;126(12):975–84. https://doi.org/10.1007/s00113-023-01383-0.
Stirparo G, Gambolò L, Bottignole D, et al. Enhancing Physicians’ Autonomy through Practical Trainings. Ann Ig. 2024. https://doi.org/10.7416/ai.2024.2638.
PHTLS: Prehospital Trauma Life Support. 10th ed. Jones & Bartlett Learning; 2023.
Häske D, Gross Z, Atzbach U, et al. Comparison of manual statements from out-of-hospital trauma training programs and a national guideline on treatment of patients with severe and multiple injuries. Eur J Trauma Emerg Surg. 2022;48(3):2207–17. https://doi.org/10.1007/s00068-021-01768-z.
Feller R, Furin M, Alloush A, Reynolds C. EMS Immobilization Techniques. In: StatPearls. Treasure Island (FL): StatPearls Publishing; October 3, 2022.
Eisner ZJ, Diango K, Sun JH. Education and training of prehospital first responders in low- and middle-income countries. Surgery. 2024. https://doi.org/10.1016/j.surg.2024.03.009.
James D, Pennardt AM. Trauma Care Principles. In: StatPearls. Treasure Island (FL): StatPearls Publishing; May 31, 2023.
Henry SM, Westerband D, Adelstein M. Trauma training transformed: empowering nurse practitioners and physician assistants in advanced trauma life support teaching. Trauma Surg Acute Care Open. 2024;9(1):e001345. https://doi.org/10.1136/tsaco-2023-001345.
Abelsson A, Rystedt I, Suserud BO, Lindwall L. Learning by simulation in prehospital emergency care - an integrative literature review. Scand J Caring Sci. 2016;30(2):234–40. https://doi.org/10.1111/scs.12252.
Raurell-Torredà M, Llauradó-Serra M, Lamoglia-Puig M, et al. Standardized language systems for the design of high-fidelity simulation scenarios: A Delphi study. Nurse Educ Today. 2020;86:104319. https://doi.org/10.1016/j.nedt.2019.104319.
Molloy MA, Holt J, Charnetski M, Rossler K. Healthcare Simulation Standards of Best Practice™ simulation glossary. Clin Simul Nurs. 2021;58:57–65.
McLaughlin C, Barry W, Barin E, et al. Multidisciplinary Simulation-Based Team Training for Trauma Resuscitation: A Scoping Review. J Surg Educ. 2019;76(6):1669–80. https://doi.org/10.1016/j.jsurg.2019.05.002.
Prinsen CAC, Mokkink LB, Bouter LM, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res. 2018;27(5):1147–57. https://doi.org/10.1007/s11136-018-1798-3.
Terwee CB, Bot SD, de Boer MR, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42. https://doi.org/10.1016/j.jclinepi.2006.03.012.
Häske D, Beckers SK, Hofmann M, et al. Performance Assessment of Emergency Teams and Communication in Trauma Care (PERFECT checklist)-Explorative analysis, development and validation of the PERFECT checklist: Part of the prospective longitudinal mixed-methods EPPTC trial. PLoS ONE. 2018;13(8):e0202795. https://doi.org/10.1371/journal.pone.0202795.
Usero-Pérez MDC, Jiménez-Rodríguez ML, González-Aguña A, et al. Validation of an evaluation instrument for responders in tactical casualty care simulations. Rev Lat Am Enfermagem. 2020;28:e3251. https://doi.org/10.1590/1518-8345.3052.3251.
Pandor A, Essat M, Sutton A, et al. Cervical spine immobilisation following blunt trauma in pre-hospital and emergency care: A systematic review. PLoS ONE. 2024;19(4):e0302127. https://doi.org/10.1371/journal.pone.0302127.
Underbrink L, Dalton AT, Leonard J, et al. New Immobilization Guidelines Change EMS Critical Thinking in Older Adults With Spine Trauma. Prehosp Emerg Care. 2018;22(5):637–44. https://doi.org/10.1080/10903127.2017.1423138.
Kreinest M, Gliwitzky B, Schüler S, Grützner PA, Münzberg M. Development of a new Emergency Medicine Spinal Immobilization Protocol for trauma patients and a test of applicability by German emergency care providers. Scand J Trauma Resusc Emerg Med. 2016;24:71. https://doi.org/10.1186/s13049-016-0267-7.
Gräff P, Bolduan L, Macke C, Clausen JD, Sehmisch S, Winkelmann M. Where Do We Stand on Cervical Spine Immobilisation? A Questionnaire among Prehospital Staff. J Clin Med. 2024;13(8):2325. https://doi.org/10.3390/jcm13082325.
McDonald N, Kriellaars D, Weldon E, Pryce R. Head-Neck Motion in Prehospital Trauma Patients under Spinal Motion Restriction: A Pilot Study. Prehosp Emerg Care. 2021;25(1):117–24. https://doi.org/10.1080/10903127.2020.1727591.
Lee SJ, Jian L, Liu CY, Tzeng IS, Chien DS, Hou YT, et al. A Ten-Year Retrospective Cohort Study on Neck Collar Immobilization in Trauma Patients with Head and Neck Injuries. Medicina. 2023;59(11):1974. https://doi.org/10.3390/medicina59111974.
Slavec A, Drnovšek M. A perspective on scale development in entrepreneurship research. Econ Business Rev. 2012;14(1):39–62. https://doi.org/10.15458/2335-4216.1203.
Lloret-Segura S, Ferreres-Traver A, Hernández-Baeza A, Tomás-Marco I. El análisis factorial exploratorio de los ítems: una guía práctica, revisada y actualizada [Exploratory item factor analysis: a practical guide, revised and updated]. An Psicol. 2014;30(3):1151–69. Available from: https://revistas.um.es/analesps/article/view/analesps.30.3.199361.
Ferrando PJ, Lorenzo-Seva U, Hernández-Dorado A, Muñiz J. Decálogo para el análisis factorial de ítems de prueba [Decalogue for the factor analysis of test items]. Psicothema. 2022;34(1):7–17. https://doi.org/10.7334/psicothema2021.456.
McKim C. Using the literature to create a scale: An innovative qualitative methodological piece. Int J Soc Res Methodol. 2023;26(3):343–51. https://doi.org/10.1080/13645579.2022.2026138.
Morgado FFR, Meireles JFF, Neves CM, Amaral ACS, Ferreira MEC. Scale development: ten main limitations and recommendations to improve future research practices. Psicol Reflex Crit. 2017;30(1):3 [published correction appears in Psicol Reflex Crit. 2017;30(1):5]. https://doi.org/10.1186/s41155-016-0057-1.
Urrutia EM, Barrios AS, Gutiérrez NM, Mayorga CM. Método óptimo para la validez de contenido [Optimal method for content validity]. Revista Cubana de Educación Médica Superior. 2014;28(3):547–58. Available from: http://scielo.sld.cu/scielo.php?script=sci_arttext&pid=S086421412014000300014&lng=es&tlng=es.
Varela-Ruiz M, Díaz-Bravo L, García-Durán R. Descripción y usos del método Delphi en investigaciones del área de la salud [Description and uses of the Delphi method in health research]. Inv Ed Med. 2012;1(2):90–5. Available from: https://www.elsevier.es/es-revista-investigacion-educacion-medica-343-articulo-descripcion-usos-del-metodo-delphi-X2007505712427047.
Escobar-Pérez J, Cuervo-Martínez A. Validez de contenido y juicio de expertos: una aproximación a su utilización [Content validity and expert judgement: an approach to their use]. Avances en Medición. 2008;6:27–36. Available from: https://www.researchgate.net/publication/302438451_Validez_de_contenido_y_juicio_de_expertos_Una_aproximacion_a_su_utilizacion#fullTextFileContent.
Muñiz J, Fonseca-Pedrero E. Diez pasos para la construcción de un test [Ten steps for test development]. Psicothema. 2019;31(1):7–16. https://doi.org/10.7334/psicothema2018.291.
Pedrosa I, Suárez-Álvarez J, García-Cueto E. Evidencias sobre la validez de contenido: avances teóricos y métodos para su estimación [Content validity evidence: theoretical advances and methods for its estimation]. Acción Psicol. 2013;10(2):3–18. https://doi.org/10.5944/ap.10.2.11820.
Lawshe CH. A quantitative approach to content validity. Pers Psychol. 1975;28:563–75. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x.
Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi- H. Design and implementation content validity study: development of an instrument for measuring patient-centered communication. J Caring Sci. 2015;4(5):165–78. https://doi.org/10.15171/jcs.2015.017.
Polit DF, Beck CT. The content validity index: Are you sure you know what’s being reported? critique and recommendations. Res Nurs Health. 2006;29(5):489–97. https://doi.org/10.1002/nur.20147.
Bandalos DL, Finney SJ. Factor analysis: exploratory and confirmatory. In: Hancock GR, Mueller RO, editors. The reviewer's guide to quantitative methods in the social sciences. New York: Routledge; 2010.
Jordan P, Spiess M. Rethinking the interpretation of item discrimination and factor loadings. Educ Psychol Measur. 2019;79(6):1103–32. https://doi.org/10.1177/0013164419843164.
Flora DB, Labrish C, Chalmers RP. Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis. Front Psychol. 2012;3:55. https://doi.org/10.3389/fpsyg.2012.00055.
Horn JL. A rationale and test for the number of factors in a factor analysis. Psychometrika. 1965;30:179–85. https://doi.org/10.1007/BF02289447.
Bartlett MS. Tests of significance in factor analysis. British J Mathematical Statistical Psychol. 1950;3(2):77–85. https://doi.org/10.1111/j.2044-8317.1950.tb00285.x.
Kaiser HF. A second generation Little Jiffy. Psychometrika. 1970;35:401–15. https://doi.org/10.1007/BF02291817.
Tabachnick BG, Fidell LS. Using multivariate statistics. Boston: Allyn and Bacon; 2001.
Green SB, Yang Y. Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika. 2009;74(1):155–67. https://doi.org/10.1007/s11336-008-9099-3.
Yang Y, Green SB. Evaluation of structural equation modeling estimates of reliability for scales with ordered categorical items. Methodology. 2015;11(1):23–34. https://doi.org/10.1027/1614-2241/a000087.
Viladrich MC, Brunet AA, Doval E. A journey around alpha and omega to estimate internal consistency reliability. Anales de Psicología. 2017;33(3):755–82. https://doi.org/10.6018/analesps.33.3.268401.
Cohen J. A Coefficient of Agreement for Nominal Scales. Educ Psychol Measur. 1960;20(1):37–46. https://doi.org/10.1177/001316446002000104.
Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.
Australian Council for Educational Research. Middle years ability test: Teacher manual. Victoria, Australia: ACER Press; 2005.
Velicer WF. Determining the number of components from the matrix of partial correlations. Psychometrika. 1976;41:321–7. https://doi.org/10.1007/BF02293557.
Swartz EE, Tucker WS, Nowak M, et al. Prehospital Cervical Spine Motion: Immobilization Versus Spine Motion Restriction. Prehosp Emerg Care. 2018;22(5):630–6. https://doi.org/10.1080/10903127.2018.1431341.
Uzun DD, Jung MK, Weerts J, et al. Remaining Cervical Spine Movement Under Different Immobilization Techniques. Prehosp Disaster Med. 2020;35(4):382–7. https://doi.org/10.1017/S1049023X2000059X.
Martin C, Boissy P, Hamel M, Lebel K. Instrumented Pre-Hospital Care Simulation Mannequin for Use in Spinal Motion Restrictions Scenarios: Validation of Cervical and Lumbar Motion Assessment. Sensors (Basel). 2024;24(4):1055. https://doi.org/10.3390/s24041055.
Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. 5th ed. Oxford: Oxford University Press; 2015.
Acknowledgements
The authors would like to thank the fourth year students of the University of Alicante’s Bachelor’s Degree in Nursing, 2022-23 and 2023-24, for their generous participation in the study.
Funding
This work was supported by the University of Alicante, within the Research Programme in University Teaching of the Institute of Education Sciences (Research in University Teaching Programme) 2022 (approval number: 5766). The funder was not involved in the development of the work.
Author information
Authors and Affiliations
Contributions
J P-G, N M-P and S E made substantial contributions to the conception and design of the study, the acquisition of data, and the analysis and interpretation of data; J P-G, N M-P and S E were involved in drafting the original manuscript; J P-G, N M-P, AI G-G, L J-A, N G-A, R J-S and S E revised it critically for important intellectual content, gave final approval of the version to be published, and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
All participants were informed of the voluntary nature of their participation and the way in which the data collected would be processed. The data collected were treated confidentially and used only for research purposes. Written informed consent was obtained from all participants. This study adhered to the criteria of the Declaration of Helsinki and the European Union Standards of Good Clinical Practice and was approved by the Research and Ethics Committee of the University of Alicante (reference number: UA-2022-07-15).
Consent for publication
Not Applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Perpiñá-Galvañ, J., Montoro-Pérez, N., Gutiérrez-García, A.I. et al. Development and validation of assessment instruments for cervical collar and spinal board placement in simulated environments for nursing students in the care of polytrauma patients. BMC Med Educ 24, 1080 (2024). https://doi.org/10.1186/s12909-024-06061-2
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s12909-024-06061-2