
Constructing an evaluation index system for clinical nursing practice teaching quality using a Delphi method and analytic hierarchy process-based approach

Abstract

Background

The key step in evaluating the quality of clinical nursing practice education lies in establishing a scientific, objective, and feasible index system. Current assessments of clinical teaching typically measure hospital learning environments, classroom teaching, teaching competency, or the internship quality of nursing students. As a result, clinical evaluations are often insufficient to provide focused feedback, guide faculty development, or identify specific areas for clinical teachers to implement change and improvement. Therefore, the purpose of our study was to construct a scientific, systematic, and clinically applicable evaluation index system of clinical nursing practice teaching quality and to determine each indicator’s weight, providing references for the scientific and objective evaluation of clinical nursing practice teaching quality.

Methods

Based on the “Structure-Process-Outcome” theoretical model, a literature review and Delphi surveys were conducted to establish the evaluation index system of clinical nursing practice teaching quality. Analytic Hierarchy Process (AHP) was employed to determine the weight of each indicator.

Results

The effective response rate for the two rounds of expert surveys was 100%. The expert authority coefficients were 0.961 and 0.975, respectively. The coefficients of variation for the indicators at each level ranged from 0 to 0.25 and 0 to 0.21, and Kendall’s coefficients of concordance were 0.209 and 0.135, respectively, with statistically significant differences (P < 0.001). The final established index system included 3 first-level, 10 second-level, and 29 third-level indicators. According to the weights computed by the AHP, the first-level indicators were ranked as “Process quality” (39.81%), “Structure quality” (36.67%), and “Outcome quality” (23.52%). Among the second-level indicators, experts paid the most attention to “Teaching staff” (23.68%), “Implementation of teaching rules and regulations” (14.14%), and “Teaching plans” (13.20%). The top three third-level indicators were “Level of teaching staff” (12.62%), “Structure of teaching staff” (11.06%), and “Implementation of the management system for teaching objects” (7.54%).

Conclusion

The constructed evaluation index system of clinical nursing practice teaching quality is scientific and reliable, with a reasonable weight distribution. The managers’ focus has shifted from outcome-oriented to process-oriented approaches, and more attention to teaching team construction, implementation of teaching regulations, and teaching design is needed to improve clinical teaching quality.


Background

As an extension of school teaching, clinical nursing practice teaching is an essential constituent of nursing education: it is a vital link in cultivating students’ practical ability, and it plays a pivotal role in nursing students’ employment choices, career development, and professional quality [1]. Evaluation is a form of action research committed to creating changes in the evaluated process by offering applicable recommendations [2]. Based on evaluation results, the original teaching plans and activities can be adjusted in a timely manner. This ensures quality at every stage of the teaching process, guaranteeing overall teaching quality and continuous improvement.

The critical step in evaluating the quality of clinical nursing practice education lies in establishing a scientific, objective, and feasible index system based on the evaluation objectives. Evaluating teaching without effective indicators not only fails to improve instructional quality but can also cause it to decline [3]. Evaluations in clinical teaching are fraught with problems. Current assessments of clinical teaching typically measure hospital learning environments [4, 5], classroom teaching [6], or the internship quality of nursing students [7, 8]. As a result, clinical evaluations are often insufficient to provide focused feedback, guide faculty development, or identify specific areas for clinical teachers to implement change and improvement [9].

In recent years, major national reforms of postgraduate medical education have taken place in numerous countries, including reforms concerning teaching requirements and assessment strategies [10]. China is no exception. As an interdisciplinary subject, medical education is related to the implementation of the “Healthy China” strategy [11]. With the rapid development of the nursing profession, hospitals in China undertake teaching tasks that extend beyond internships, including instructing nursing interns, nurses attending advanced studies, and specialist nurse trainees, providing continuing education for nurses, and other training and assessment work. Therefore, the existing index systems cannot comprehensively evaluate the quality of clinical nursing practice teaching.

Therefore, the purpose of our study was to construct a scientific, systematic, and clinically applicable evaluation index system of clinical nursing practice teaching quality and determine the weight of each indicator based on the “Structure-Process-Outcome” model to provide references for the scientific and objective evaluation of clinical nursing practice teaching quality.

Methods

Construction of the evaluation index system of clinical nursing practice teaching quality

Establish a study group

The research group was composed of 7 nursing managers and nursing experts. Among these, one had a senior professional title (Deputy Director of the Nursing Department, in charge of clinical nursing practice education), one had an associate senior professional title, two had intermediate professional titles, and three were nursing undergraduate students.

The main tasks of the research group were to review the literature, produce a first draft of the key indicators, establish the expert inquiry form, select the consulting experts, collect the experts’ views and opinions on the indicator system, and perform the statistical analysis.

Conceptual model

To evaluate the quality of clinical nursing practice teaching, we used the “Structure-Process-Outcome” framework described by Donabedian [12]. His three-part approach makes quality assessment possible by assuming that structure (e.g., attributes of material or human resources and organizational structure) influences process (what is done in giving and receiving care), which in turn influences outcome (e.g., health status) [12]. We chose Donabedian’s model because it is widely used and allows both researchers and policymakers to conceptualize the underlying mechanisms that may contribute to poor quality of clinical nursing practice teaching.

Construct consultation questionnaire

A systematic literature search was performed in the PubMed, Medline, CNKI, VIP, and Wanfang databases from the inception of each database to December 2020. The following main search terms were used: “analytic hierarchy process (AHP)”, “Delphi method”, “clinical nursing”, “quality of teaching”, “clinical education”, “indicator”, and “indicator system”. Based on Donabedian’s model, the study group generated an original draft of the evaluation indicator system consisting of 3 first-level, 10 second-level, and 28 third-level indicators. The initial draft was verified for readability and feasibility by two education experts.

The Delphi expert consultation questionnaire included three parts: (1) an explanation of the questionnaire, covering the research background, purpose, and significance; (2) basic information about the experts, including age, educational background, position, professional title, years of experience in nursing teaching and management, degree of familiarity with the indicators (Cs), and basis for judgment (Ca); and (3) the main text of the questionnaire, containing the content of each evaluation indicator and the method for scoring the importance of each item. The experts were required to rate each item on a five-point Likert scale from 1 (unimportant) to 5 (very important), give comments, and suggest additional items.

Selection of the experts

The recommended number of consultation experts is usually between 15 and 50 [13]; in general, the more experts, the more reliable the results. In our study, purposeful sampling was used to select 18 experts from 4 tertiary hospitals and 2 nursing schools in Beijing as consulting experts, including 14 clinical nursing practice teaching experts (77.8%) and 4 academic nursing teaching management experts (22.2%). The inclusion criteria for the experts were: (a) bachelor’s degree or above and intermediate professional title or above; (b) engaged in clinical nursing practice teaching, clinical nursing teaching management in tertiary hospitals, or nursing education for at least ten years; and (c) gave informed consent, actively participated in this study, and was able to guarantee completion of both rounds of questionnaires.

The panel of experts was aged between 35 and 62 years (mean 44.44 ± 6.93) and had been engaged in clinical nursing practice teaching for 12–42 years (mean 22.56 ± 7.91) or in nursing teaching management for 9–36 years (mean 17.33 ± 7.84). There were 4 experts with senior professional titles (22.2%), 11 with associate senior professional titles (61.1%), and 3 with intermediate professional titles (16.7%). Among them, 3 held doctoral degrees (16.7%), 7 held master’s degrees (38.9%), and 8 held bachelor’s degrees (44.4%). All of them held leadership positions in nursing teaching in their departments.

Conduct expert consultation

In January 2021, the research group launched the first round of Delphi consultation with the selected experts. The consultation questionnaire was sent to the experts via WeChat or email. After the first round of questionnaires was collected, the index items were analyzed in light of the expert opinions, using the criteria that the coefficient of variation should be less than 0.25, the mean should be greater than 3.5, and the full score rate should be above 20% [8]. A large coefficient of variation indicates disagreement among experts on the item. Based on the first round of consultation, items were deleted, modified, and added to form the second-round questionnaire. In March 2021, the second round of consultation was carried out, and the experts were again given two weeks to fill in the questionnaires. In the second round of Delphi consultation, the experts reached a consensus, and all the index items met the selection criteria.
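
As a minimal illustration of the item-screening rule described above (not the authors’ code; the ratings are hypothetical), the three criteria can be checked per item as follows:

```python
import statistics

def retain_item(ratings, cv_max=0.25, mean_min=3.5, full_rate_min=0.20):
    """Screen one Delphi item from its 5-point importance ratings (1-5):
    keep it if CV < 0.25, mean > 3.5, and full-score rate > 20%."""
    mean = statistics.mean(ratings)
    cv = statistics.pstdev(ratings) / mean          # coefficient of variation
    full_rate = ratings.count(5) / len(ratings)     # proportion of experts scoring 5
    return cv < cv_max and mean > mean_min and full_rate > full_rate_min

# Hypothetical ratings from an 18-expert panel for a single indicator
print(retain_item([5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5, 4]))  # True: item retained
```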

Construct and conduct expert judgment matrix

To understand the relative importance of each indicator, we adopted the Analytic Hierarchy Process (AHP) developed by Saaty to construct an expert judgment matrix [14, 15]. According to Triantaphyllou and Mann, “the AHP is a decision support tool which uses a multi-level hierarchical structure of objectives, criteria, subcriteria, and alternatives. The pertinent data are derived by using a set of pairwise comparisons. These comparisons are used to obtain the weights of importance of the decision criteria, and the relative performance measures of the alternatives in terms of each individual decision criterion” [16].

The questionnaires presented the indicators in pairs. For each question, the experts first selected the more important indicator of the pair and then rated its relative importance on a 1–9 scale. In this scale, 1, 3, 5, 7, and 9 corresponded to “equally important,” “slightly more important,” “moderately more important,” “strongly more important,” and “absolutely more important,” respectively, while 2, 4, 6, and 8 represented intermediate levels. A sample question was: “What do you think about the relative importance of the two structure indicators (Conditions of the department and Teaching staff)? Please check the proper field and then determine the relative importance.”
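
The paper does not publish its judgment matrices, but the following sketch, using a hypothetical 3 × 3 matrix on Saaty’s 1–9 scale, shows how the AHP derives weights from one set of pairwise comparisons and checks their consistency, as applied later in the Data analysis section:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for the three first-level indicators
# (Structure, Process, Outcome); A[i, j] is the importance of indicator i over j
# on Saaty's 1-9 scale, with A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 1 / 2, 2.0],
    [2.0, 1.0, 2.0],
    [1 / 2, 1 / 2, 1.0],
])

# Weights = normalized principal eigenvector of the judgment matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1); CR = CI / RI.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
cr = ci / ri

print(np.round(weights, 3), round(cr, 3))       # CR < 0.1 -> acceptable consistency
```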

In May 2021, we invited the experts who participated in the second round to complete a comparison matrix. The experts were also given two weeks to fill in the questionnaires.

Data analysis

Data were analyzed using SPSS version 16.0, and the experts’ activity was expressed by the questionnaire response rate. Expert familiarity (Cs) was self-assessed for each entry using the following values: 0.9 (very familiar), 0.7 (familiar), 0.5 (generally familiar), 0.3 (less familiar), and 0.1 (very unfamiliar); the arithmetic mean was then calculated. For the judgment basis (Ca), each expert classified the influence of each factor on their judgment as large, medium, or small, with values assigned as follows: theoretical analysis (0.3, 0.2, 0.1), practical experience (0.5, 0.4, 0.3), understanding of domestic and foreign counterparts (0.1, 0.1, 0.1), and intuition (0.1, 0.1, 0.1); the arithmetic mean was then calculated. The authority coefficient of the experts (Cr) was determined from the judgment coefficient (Ca) and the familiarity coefficient (Cs) as Cr = (Ca + Cs)/2, and results are considered acceptable when Cr > 0.7 [17]. The degree of concentration of expert opinions was expressed by the full score rate, mean, and standard deviation of item importance. The coefficient of variation (CV) was used to assess the consistency of the experts’ opinions on the indicators. In addition, Kendall’s coefficient of concordance (Kendall’s W) was used to test the consistency of experts’ opinions.
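
For concreteness, the sketch below reproduces these computations (authority coefficient, coefficient of variation, and Kendall’s W) with hypothetical inputs; the tie correction for Kendall’s W is omitted for brevity and the data are invented, not the study’s:

```python
import numpy as np
from scipy import stats

# Expert authority coefficient for one expert (hypothetical self-ratings).
Cs = 0.9                               # familiarity: "very familiar"
Ca = 0.3 + 0.5 + 0.1 + 0.1             # judgment basis: large influence for all four sources
Cr = (Ca + Cs) / 2                     # acceptable when Cr > 0.7  -> 0.95

# Coefficient of variation of one item's importance ratings (hypothetical panel of 18).
ratings = np.array([5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5, 4])
cv = ratings.std() / ratings.mean()

# Kendall's W across the whole questionnaire: rows = experts, columns = items.
scores = np.random.default_rng(0).integers(3, 6, size=(18, 42))  # hypothetical ratings
ranks = np.apply_along_axis(stats.rankdata, 1, scores)           # each expert ranks the items
m, n_items = ranks.shape
col_sums = ranks.sum(axis=0)
s = ((col_sums - col_sums.mean()) ** 2).sum()
w = 12 * s / (m ** 2 * (n_items ** 3 - n_items))                 # tie correction omitted

print(round(Cr, 3), round(cv, 3), round(w, 3))
```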

The AHP was used to calculate the weight of each indicator, and a consistency ratio < 0.1 was considered to indicate a reasonable weight distribution [7].

Results

The enthusiasm of the experts

In both the first and second rounds of expert consultation, 18 questionnaires were sent out and 18 were effectively recovered, giving an effective recovery rate of 100.0%. In the first round, 12 experts (66.7%) put forward suggestions on the evaluation indexes of clinical nursing practice teaching; in the second round, 8 experts (44.4%) put forward relevant suggestions. An effective recovery rate above 70% is generally considered to indicate high enthusiasm, so the experts in this study showed high enthusiasm and great interest in this field [18].

Expert authority

The judgment coefficient (Ca) was 0.944, the familiarity coefficient (Cs) was 0.978, and the authority coefficient (Cr) was 0.961 in the first round. The judgment coefficient (Ca) was 0.950, the familiarity coefficient (Cs) was 1.000, and the authority coefficient (Cr) was 0.975 in the second round, indicating a high degree of authority.

Concentration degree of expert opinions

The concentration degree of expert opinions was mainly represented by the mean importance score (X), the standard deviation of the importance score (S), and the full-score ratio K (%). In the first round, X for the 41 indicators ranged from 4.22 to 5.00, S from 0.00 to 1.07, and K from 50.0% to 100.0%. In the second round, X for the 42 indicators ranged from 4.39 to 5.00, S from 0.00 to 0.92, and K from 66.1% to 100%. These data showed that expert opinions were highly concentrated.

Coordination degree of expert opinions

The coordination degree of expert opinions was measured using the coefficient of variation and Kendall’s coefficient of concordance (W). The coefficients of variation of the indicators were 0 to 0.25 in round 1 and 0 to 0.21 in round 2. Kendall’s coefficients of concordance for the two Delphi rounds were 0.209 and 0.135, respectively (P < 0.05), indicating good coordination among the experts (Table 1).

Table 1 Coordination degree of expert opinions

The result of expert consultation

In round 1, 12 experts suggested amendments to some indicators. After discussion, the study group amended the indicators as follows. The three first-level and ten second-level indicators remained unchanged in number, although the second-level indicator “regulations” was renamed “implementation of teaching rules and regulations”. The third-level indicators were revised as follows: (1) “teaching arrangement”, “specially-assigned person in charge of teaching”, and “the role of head nurses in teaching management” were deleted; (2) “targeted teaching plans”, “evaluation of teaching process”, “nursing competence of nursing interns”, and “achievements of teaching-related scientific research” were added; (3) the two indicators “implementation of continuing nursing education” and “teaching situation within the department” were merged into one indicator, “implementation of teaching plans within the department”.

In round 2, the consulting experts revised only some of the wording. The expert panel then reached consensus on the final indicator system, which consists of 3 first-level, 10 second-level, and 29 third-level indicators, as shown in Table 2. A flow diagram of the Delphi process is shown in Fig. 1.

Table 2 Evaluation index system of clinical nursing practice teaching quality
Fig. 1 Flow diagram of the Delphi process

Analysis of matrix and weight of indicators

Analysis of matrix and weight of the first-level indicators

The matrix and weight analysis of the first-level indicators (Structure quality, Process quality, and Outcome quality) is shown in Table 3. Among the three first-level indicators of the evaluation index system of clinical nursing practice teaching quality, “Process quality” (A2) was perceived as more important than “Structure quality” (A1) and “Outcome quality” (A3), with relative contributions of 39.81%, 36.67%, and 23.52%, respectively.

Table 3 Analysis of matrix and weight of the first-level indicators

Analysis of matrix and weight of second-level indicators

The relative weights of the second-level indicators are shown in Table 4. Within “Structure quality”, “Teaching staff” (B2) was perceived as more important than “Conditions of the department” (B1) (within-dimension weights: 64.57% and 35.43%, respectively). Among the sub-dimensions of “Process quality”, “Implementation of teaching rules and regulations” (B3) ranked highest (35.53%), followed by “Teaching plans” (B4) (33.15%) and “Implementation process of teaching plans” (B5) (31.32%). Among the sub-dimensions of “Outcome quality”, “Nursing competence of the teaching object” (B8) ranked highest (28.40%), followed by “Examination scores of teaching subjects” (B7) (21.19%), “Annual teaching workload” (B6) (20.05%), “Teaching evaluation” (B9) (16.17%), and “Teaching achievements” (B10) (14.19%).

Table 4 Analysis of matrix and weight of the second-level indicators
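
As a quick arithmetic check (standard AHP aggregation, not additional data), the global weight of each second-level indicator reported in the Abstract is the product of its parent first-level weight and its within-dimension weight:

```python
# Global weight = first-level weight x within-dimension (local) weight.
first_level = {"Structure quality": 0.3667, "Process quality": 0.3981, "Outcome quality": 0.2352}

local = {
    ("Structure quality", "Teaching staff"): 0.6457,
    ("Process quality", "Implementation of teaching rules and regulations"): 0.3553,
    ("Process quality", "Teaching plans"): 0.3315,
}

for (parent, name), w in local.items():
    print(f"{name}: {first_level[parent] * w:.4f}")
# Teaching staff: 0.2368
# Implementation of teaching rules and regulations: 0.1414
# Teaching plans: 0.1320
```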

Weight analysis of third-level indicators

The relative weights of the third-level indicators are shown in Table 5. The 10 third-level indicators with the highest ranks were: 1) level of teaching staff (C5); 2) structure of teaching staff (C4); 3) implementation of the management system for teaching objects (C7); 4) implementation of the management system for teachers (C6); 5) targeted teaching plans (C8); 6) teaching conditions of the department (C2); 7) teaching plans can meet the training requirements (C9); 8) teaching atmosphere of the department (C3); 9) basic conditions of the department (C1); and 10) teaching arrangements (C11). The indicators ranked 1, 2, 6, 8, and 9 fell into the “Structure quality” dimension, and those ranked 3–5, 7, and 10 into the “Process quality” dimension.

Table 5 Weight analysis of third-level indicators

Discussion

The evaluation index system of clinical nursing practice teaching quality constructed in this study is scientific. First, it is based on the Structure-Process-Outcome model as its theoretical framework, and practice has shown that this theory is well established for constructing evaluation indicators of clinical nursing teaching quality [7]. Second, the quality evaluation indicator system was built on an extensive literature review, and its content was finalized through two rounds of the Delphi method. Finally, the Analytic Hierarchy Process was used to determine the weight of each hierarchical indicator, and the consistency ratios at all levels were below 0.1, indicating a reasonable allocation of weights.

The reliability of the research results is closely related to the representativeness, enthusiasm, authority, and consistency of the consulted experts [19]. The representativeness of the selected experts determines the authority of the research results [17]. In this study, the 18 selected experts were experienced clinical nursing teaching management experts from hospitals or higher education institutions. In addition, these experts had been engaged in clinical nursing practice teaching or nursing teaching management for more than 15 years, and all held leadership positions in nursing teaching in their departments. Furthermore, 15 experts (83.3%) had associate senior or higher professional titles. Therefore, the expert panel was well structured and possessed rich theoretical knowledge and practical experience in the research field, making their opinions highly representative. The effective response rate for both rounds of consultation was 100%, and the percentages of experts providing modification suggestions were 66.7% and 44.4%, respectively, indicating a high level of enthusiasm. The expert authority coefficients for both rounds were greater than 0.9, and considering the experts’ professional titles and educational backgrounds, they demonstrated a high level of authority. Kendall’s coefficient of concordance for expert opinions was 0.209 and 0.135 for the two rounds, respectively (P < 0.001), indicating good coordination of the experts’ opinions.

The unique contribution of our research is that we have incorporated as many indicators as possible related to clinical nursing practice teaching quality and established a comprehensive evaluation system from the perspective of clinical nursing teaching managers. This is reflected in three aspects. First, we included not only indicators related to outcome quality but also indicators related to structure and process quality. Second, the index system includes both qualitative and quantitative indicators. Finally, the index system is not used to evaluate specific teaching objects but to evaluate all teaching objects comprehensively. This differs from previous studies, which focused more on competency assessment for clinical nursing teachers [20, 21] and the clinical skills of students [22, 23]. However, the competency of clinical nursing teachers and students cannot fully reflect the quality of clinical nursing practice teaching. As pointed out by Xu et al. [24], nursing education managers’ standardized and systematic supervision, management, and evaluation of clinical teaching quality are crucial to ensuring the quality of clinical nursing teaching. The evaluation index system in our study was established from the perspective of clinical nursing teaching managers and can meet the assessment requirements of rapidly developing teaching tasks.

Similar to other findings [1, 25, 26], “Process quality” (39.81%) had the highest weight among the first-level indicators. This reflects a shift in managers’ focus from outcome-oriented to process-oriented approaches. Among all the third-level indicators, “Implementation of the management system for teaching objects” and “Implementation of the management system for teachers” ranked 3rd and 4th and fell under the second-level indicator “Implementation of teaching rules and regulations”, indicating the importance of teaching rules and regulations. Teaching rules and regulations are the foundation for the orderly operation of clinical nursing teaching. In recent years, management rules and regulations for teachers have been gradually developed or improved, including selection systems, work systems, incentive systems, evaluation and assessment systems, training systems, and management systems for teaching objects [27,28,29,30]. The implementation of these regulations directly affects whether teachers’ qualifications meet the requirements, the standardization of and enthusiasm for teaching work, the quality of teaching objects, and the achievement of educational goals. Therefore, the weight values of these indicators are relatively high. We also found that “Targeted teaching plans” and “Teaching plans can meet the training requirements” ranked 5th and 7th, and “Teaching arrangements”, which ranked 10th, fell under the second-level indicator “Implementation process of teaching plans”; this reflects the importance that teaching managers attach to the teaching plan. It is suggested that teaching plans should be tailored to different teaching objects, objectives, and learning durations, and be appropriately arranged.

We found that “Level of teaching staff” and “Structure of teaching staff” ranked 1st and 2nd and fell under the second-level indicator “Teaching staff”. This indicates that the teaching faculty is crucial in the clinical nursing practice teaching quality evaluation index system. While care for patients must be the top priority for healthcare workers, universities must also ensure that teaching can be adequately delivered [30]. Therefore, faculty development is key to ensuring quality clinical teaching [31]. Clinical teachers face multiple challenges, as they are expected to produce high-impact research, contribute to medical education, and deliver high-quality patient care, virtually all simultaneously [32]. Currently, undergraduate and master’s nursing education in China is developing rapidly, and teaching requirements and standards are becoming increasingly high. Moreover, the variety and quantity of specialized nurse training are constantly increasing, which also places high demands on the teaching level of clinical nursing teachers [33]. As teaching managers, we should, on the one hand, select high-level teachers and, on the other hand, strengthen training to improve the nursing team’s overall quality and clinical nursing teaching outcomes.

Our findings demonstrated that “Conditions of the department” is a crucial indicator of clinical nursing practice teaching quality. As shown in Table 5, “Basic conditions of the department”, “Teaching conditions of the department”, and “Teaching atmosphere of the department” ranked high among all indicators, at 9th, 6th, and 8th, respectively. These three indicators belong to the “Conditions of the department” indicator. Although the importance of teaching conditions for teaching quality has been fully recognized [34, 35], poor teaching conditions appear to be an international problem identified quite some time ago [30, 35]. The basic conditions of the department, including the number of beds, the types of diseases and number of patients, the nurse-to-bed ratio, and the discipline status of the department, reflect the department’s disciplinary level and the intensity of its nursing work. A department with highly developed disciplines exposes teaching subjects to more advanced technologies. Clinical teachers in a busy department may not have enough time to engage in teaching activities, and lack of time is a significant barrier to planning and delivering good clinical teaching [30]. The teaching conditions of the department, including separate teaching places, teaching equipment, teaching aids (such as simulation equipment), and teaching materials, require financial support. Although multiple studies have demonstrated the effectiveness of simulation in teaching basic science and clinical knowledge, procedural skills, teamwork, and communication, as well as in assessment at the undergraduate and graduate medical education levels [36], advanced simulators have not been popularized in Chinese hospitals because of insufficient funding and technical support [37]. The pedagogical atmosphere in the ward is another factor influencing student nurses’ motivation to choose nursing as a career [36]. A positive learning atmosphere allows students to build more positive relationships with other team members, to feel genuinely involved in ward activities, and to be more motivated to explore new skills in clinical practice [37]. Therefore, we suggest increasing economic investment, fostering a teaching atmosphere, and improving departmental conditions to enhance the quality of teaching.

Interestingly, we found that among the first-level indicators, the weight of “Outcome quality” was the smallest, and the weights of its third-level indicators were all relatively small. Medicine is an applied discipline with a strong practical orientation, and nursing competence directly affects nursing quality and patient safety; only with good nursing competence can nurses serve patients well. As in previous studies, student learning outcomes were deemed an important indicator of high-quality teaching [30, 38]. Therefore, “Nursing competence of the teaching object” and “Examination scores of teaching subjects” ranked high among the second-level indicators under “Outcome quality”. However, we should also recognize that students’ performance is related not only to the teacher but also to the students themselves; perhaps for this reason, the weights of these two indicators are slightly lower among all the second-level indicators. The weight of “Annual teaching workload” ranked third, indicating that when evaluating clinical teaching quality, nursing managers fully recognized the impact of the department’s annual teaching workload, such as the number and duration of different teaching objects received by the department, the teaching workload, and the examination workload, as well as its labor value. In addition, attention should be paid to teaching evaluation results, such as teaching objects’ satisfaction with departments and teachers and teaching management personnel’s evaluation of teachers, to evaluate clinical teaching work comprehensively and improve teaching quality continuously.

Limitations

Our study has some limitations. First, because of funding and time constraints, we selected only 18 experts from four tertiary hospitals and two nursing colleges in Beijing, China, for the Delphi survey. Second, we used the Delphi method and the Analytic Hierarchy Process to construct the evaluation index system; these methods rely heavily on the subjective judgment of experts, which may lead to unstable or one-sided results, and they lack face-to-face communication, which may result in the loss of other perspectives. Therefore, the study’s results may be biased. Third, the indicator system established in this study has not yet been tested for reliability, validity, or empirical applicability.

Conclusions

This study established an evaluation index system of clinical nursing practice teaching quality comprising 3 first-level, 10 second-level, and 29 third-level indicators. The managers’ focus has shifted from outcome-oriented to process-oriented approaches. Among the second-level indicators, the experts regarded “Teaching staff”, “Implementation of teaching rules and regulations”, and “Teaching plans” as more important than the other indicators. Given their importance in teaching quality evaluation, more attention to teaching team construction, implementation of teaching regulations, and teaching design is needed to improve clinical teaching quality. In future studies, we will continue to obtain feedback from a broader sample of experts from different regions to improve the evaluation metrics established here. In addition, we will design a rating scale by converting the weight of each third-level indicator into a score on a 100-point scale to test the applicability and effectiveness of the evaluation index system in different contexts; the resulting scores could provide clues for guiding the management of clinical nursing practice teaching quality at different levels. We expect that the index system will contribute to comprehensively evaluating and improving the quality of clinical nursing practice teaching.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References

  1. Wei J, Fang X, Qiao J, Liu H, Cui H, Wei Y, Ji X, Xu B, Han Q, Jing X. Construction on teaching quality evaluation indicator system of multi-disciplinary team (MDT) clinical nursing practice in China: a Delphi study. Nurs Educ Pract. 2022. https://doi.org/10.1016/j.nepr.2022.103452.


  2. Neuman A, Shahor N. Co-evaluation or “two are better than one.” Int J Interdiscip Soc Sci: Annu Rev. 2007;2(3):377–86. https://doi.org/10.18848/1833-1882/CGP/v02i03/52338.


  3. Yarmohammadi E, Jazayeri M, Khamverdi Z, Kasraei S, Rezaei-Soufi L. Evaluation of the importance of effective teaching method indicators from dental students’ prospects. Avicenna J Dent Res. 2013;5(1). https://doi.org/10.17795/ajdr-20188.

  4. Saarikoski M, Isoaho H, Warne T, Leino-Kilpi H. The nurse teacher in clinical practice: developing the new sub-dimension to the Clinical Learning Environment and Supervision (CLES) Scale. Int J Nurs Stud. 2008;45(8):1233–7. https://doi.org/10.1016/j.ijnurstu.2007.07.009.


  5. Browning M, Pront L. Supporting nursing student supervision: an assessment of an innovative approach to supervisor support. Nurse Educ Today. 2015;35(6):740–5. https://doi.org/10.1016/j.nedt.2015.02.003.


  6. Shen L, Yang J, Jin X, Hou L, Shang S, Zhang Y. Based on Delphi method and analytic hierarchy process to construct the evaluation index system of nursing simulation teaching quality. Nurse Educ Today. 2019;79:67–73. https://doi.org/10.1016/j.nedt.2018.09.021.


  7. Huang M, Li Y, Zhang X, et al. Construction of evaluation index system for clinical teaching quality supervision of nursing undergraduates. Chinese J Pract Nurs. 2021;37(1):7–17. https://doi.org/10.3760/cma.j.cn211501-20200414-01861.


  8. Lei W, Han Y, Tan J, Yan R, Zhang Y, Xiang T, et al. Construction of an evaluation index system for clinical nursing teaching quality based on CIPP evaluation model. J Nurs Sci. 2023;38(8):84–6. https://doi.org/10.3870/j.issn.1001-4152.2023.08.084.


  9. Conigliaro RL, Stratton TD. Assessing the quality of clinical teaching: a preliminary study. Med Educ. 2010;44(4):379–86. https://doi.org/10.1111/j.1365-2923.2009.03612.x. PMID: 20444073.


  10. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Academic Med. 2018;93(8):1107–9. https://doi.org/10.1097/ACM.0000000000002044.


  11. Zhang Z, Wu Q, Zhang X, Xiong J, Zhang L, Le H. Barriers to obtaining reliable results from evaluations of teaching quality in undergraduate medical education. BMC Med Educ. 2020;20(1):333. https://doi.org/10.1186/s12909-020-02227-w.


  12. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743–8. https://doi.org/10.1001/jama.260.12.1743.


  13. McPherson S, Reese C, Wendler MC. Methodology Update: Delphi studies. Nurs Res. 2018;67(5):404–10. https://doi.org/10.1097/NNR.0000000000000297.


  14. Saaty TL. Decision making for leaders: the analytic hierarchy process for decisions in a complex world. Pittsburgh: RWS Publications; 2000.


  15. Saaty TL. Decision making with the analytic hierarchy process. Int J Serv Sci. 2008;1(1):83–98. https://doi.org/10.1504/IJSSCI.2008.017590.


  16. Triantaphyllou E, Mann SH. Using the analytic hierarchy process for decision making in engineering applications: some challenges. Int J Industr Eng Applic Pract. 1995;2(1):35–44.


  17. Sandford BA, Hsu CC. The delphi technique: making sense of consensus. Pract Assess Res Eval. 2007;26(10):289–304. https://doi.org/10.7275/PDZ9-TH90.


  18. Lei Y, Liu J, Tang Q, Qiong L, Xi Z, Li CX, et al. Constructing ecmo care quality evaluation index system based on “structure-process-outcome” three-dimensional theoretical model. Int J Food Eng Technol. 2019;2(2):27–35. https://doi.org/10.11648/J.IJFET.20180202.13.


  19. Liu W, Sun J, Huang Q, Baoxin Su, Ding S. The development of quality evaluation index system for clinical teaching of nurses by using 3D structure model. Chinese J Nurs Educ. 2017;14(5):351–5. https://doi.org/10.3761/j.issn.1672-9234.2017.05.007.


  20. He H, Zhou T, Zeng D, Ma Y. Development of the competency assessment scale for clinical nursing teachers: results of a Delphi study and validation. Nurse Educ Today. 2021;101:104876. https://doi.org/10.1016/j.nedt.2021.104876.


  21. Ye J, Tao W, Yang L, Xu Y, Zhou N, Wang J. Developing core competencies for clinical nurse educators: an e-Delphi-study. Nurse Educ Today. 2022;109:105217. https://doi.org/10.1016/j.nedt.2021.105217.


  22. Mayen S, Roman C, Cermolacce M, Colson S. Élaboration d’un outil d’évaluation des compétences en stage des étudiants infirmiers en pratique avancée, mention psychiatrie et santé mentale à partir d’une méthode Delphi [Using the Delphi method to develop a clinical skills assessment tool for advanced practice nurses in psychiatry and mental health]. Rech Soins Infirm. 2024;154(3):43–54. https://doi.org/10.3917/rsi.154.0043.


  23. Peng Q, Gao Y, Liu N, Gan X. Development of a tool for assessing the clinical competency of Chinese master’s nursing students based on the mini-CEX: a Delphi method study. BMJ Open. 2024;14(3):e078719. https://doi.org/10.1136/bmjopen-2023-078719.


  24. Xu Y, Ouyang X, Qiu YR, et al. Development of the quality index system for clinical nursing teachers. Chin J Nurs Educ. 2014;11(5):343–7.


  25. Aimei L, Hongmei L, Huanling G, Zhaoxia T. Preliminary construction of evaluation index system of nursing case teaching. Nurs Res. 2019;33(2):204–8. https://doi.org/10.12102/j.issn.1009-6493.2019.02.005.


  26. Liu T, Liu Q, Fu XL, et al. Research on the application of multidisciplinary. 2019.


  27. Yufeng Du. Selection and training of the clinical nursing teacher. J Nurs Adm. 2004;4(7):40–2. https://doi.org/10.3969/j.issn.1671-315X.2004.07.019.


  28. Yunfang He, Jing Y, Bo X, Miao Z, Ling R, Wei G, et al. Thoughts on the construction of the incentive mechanism for the teaching staff of standardizing residency training based on Maslows hierarchy of needs and teacher development. Chinese J Grad Med Educ. 2021;5(3):213–6. https://doi.org/10.3969/j.issn.2096-4293.2021.03.005.


  29. Zhiying F, Ren J, Yanfeng Z, Zhiyou P, Fangping B, Xianhui K. Evaluation of visiting scholars in pain department on teaching hospitals and on clinical instructors. Chinese J Med Educ. 2023;43(2):153–5. https://doi.org/10.3760/cma.j.cn115259-20220612-00769.


  30. Schiekirka-Schwake S, Anders S, von Steinbüchel N, Becker JC, Raupach T. Facilitators of high-quality teaching in medical school: findings from a nation-wide survey among clinical teachers. BMC Med Educ. 2017;17(1):178. https://doi.org/10.1186/s12909-017-1000-6.


  31. Gagnon N, Bernier C, Houde S. How best can faculty development support teachers in clinical settings? British J Hospital Med (London, England : 2005). 2022;83(5):1–8. https://doi.org/10.12968/hmed.2021.0671.


  32. Abrahamson S. Time to return medical schools to their primary purpose: education. Acad Med. 1996;71(4):343–7. https://doi.org/10.1097/00001888-199604000-00008.


  33. Wong FKY. Development of advanced nursing practice in China: act local and think global. Int J Nurs Sci. 2018;5(2):101–4. https://doi.org/10.1016/j.ijnss.2018.03.003.


  34. Xiao JF, Wu JG, Shi CH. Yi chuan = Hereditas. 2011;33(12):1409–1413. https://doi.org/10.3724/sp.j.1005.2011.01409.

  35. Saleh AM, Al-Tawil NG, Al-Hadithi TS. Teaching methods in hawler college of medicine in Iraq: a qualitative assessment from teachers’ perspectives. BMC Med Educ. 2012;12:59. https://doi.org/10.1186/1472-6920-12-59.


  36. Okuda Y, Bryson EO, DeMaria S Jr, Jacobson L, Quinones J, Shen B, Levine AI. The utility of simulation in medical education: what is the evidence? Mount Sinai J Med New York. 2009;76(4):330–43. https://doi.org/10.1002/msj.20127.


  37. Zhang YH, Godfrey N, Yu GL, Cheng H, Shi CQ, Shen Q, et al. Exploring the use of Simman 3G to simulate human situations in nursing practice teaching between China and the United States. Chinese J Pract Nurs. 2016;32(30):2387–9. https://doi.org/10.3760/cma.j.issn.1672-7088.2016.30.018.

  38. Westphale S, Backhaus J, Koenig S. Quantifying teaching quality in medical education: the impact of learning gain calculation. Med Educ. 2022;56(3):312–20. https://doi.org/10.1111/medu.14694.



Funding

None.

Author information


Contributions

The study design was conducted by Shengxiao Nie and Lei Wang. Data collection was performed by Shengxiao Nie and Lei Wang. Data analysis was carried out by Shengxiao Nie. The manuscript was written by Shengxiao Nie.

Corresponding author

Correspondence to Shengxiao NIE.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Ethics Committee of Beijing Hospital (protocol number: 2023BJYYEC-155–01). Each participant received a cover letter containing detailed information about the study’s purpose, methods, potential conflicts of interest, researcher affiliations, anticipated benefits, and possible risks. Participants were informed of their right to decline participation or withdraw consent at any time without consequences. We ensured individual anonymity and data confidentiality throughout the research by using anonymous identifiers. Informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

NIE, S., WANG, L. Constructing an evaluation index system for clinical nursing practice teaching quality using a Delphi method and analytic hierarchy process-based approach. BMC Med Educ 24, 772 (2024). https://doi.org/10.1186/s12909-024-05770-y

