
Effectiveness of technology-enhanced teaching and assessment methods of undergraduate preclinical dental skills: a systematic review of randomized controlled clinical trials

Abstract

Background

To investigate the effectiveness of technology-enhanced teaching and assessment methods for undergraduate preclinical dental skills in comparison to conventional methods.

Methods

A comprehensive search strategy was implemented using both manual and electronic search methods, including PubMed, Wiley, ScienceDirect, SCOPUS, and the Cochrane Central Register of Controlled Trials. The search and selection of articles that met the inclusion criteria were carried out in duplicate. A Cochrane data extraction form for RCTs was used to extract the relevant information from all included articles. The risk of bias of all included articles was assessed independently by two authors using the Cochrane risk of bias tool.

Results

A total of 19 randomized controlled clinical trials met the inclusion criteria and were included in this review. The majority of the included studies had a high risk of bias, mainly due to incomplete data, lack of blinding of the examiners, and other biases such as small sample sizes, failure to account for additional hours of training, and lack of calibration of the examiners grading the preparations. The included studies reported conflicting results as to whether the intervention and control groups differed in the quality of students’ performance. A meta-analysis could not be performed due to the heterogeneity among the included studies.

Conclusions

Technology-enhanced teaching and assessment tools used in the preclinical skills training of undergraduate dental students have the potential to improve students’ performance. However, due to the conflicting outcomes reported in the 19 included studies and their high risk of bias, better quality studies are required to provide a definitive answer to the research question of this systematic review.


Background

According to the American Dental Association (ADA), producing competent dental graduates is an aim that dental schools strive to achieve [1]. This can be accomplished through a good curriculum that helps graduates continue honing their skills and knowledge over their lifetime and serve their communities [2]. One method of testing the effectiveness of a curriculum is the assessment of students against the learning outcomes stated in the curriculum. There are two main forms of student assessment: formative and summative. Formative assessment evaluates the learning process of students at any point during the teaching program through methods such as self-reflection, in-course assignments, and course feedback. This improves the quality of learning by identifying individual strengths and weaknesses, and it prompts students to reflect on their strategies and improve their performance. Summative assessment, on the other hand, is mainly implemented at the end of, or at strategic stages of, a course or program, and may include a variety of assessment methods such as written exams, Objective Structured Clinical Examinations (OSCEs), and oral and clinical skills exams. While the latter form of assessment can promote motivation through recognition of the achievements a student has obtained, it does not allow students to reflect on areas that require improvement. It is very important to have a good balance of both forms of assessment, as focusing solely on the summative type may result in lower quality learning, while focusing solely on the formative type can lead to students not achieving the level of competency required in their course or program [3, 4].

Traditionally, preclinical training in dentistry has relied on practicing skills on plastic teeth under the supervision of dental experts [5]. These plastic teeth are typically placed in jaws within a dental simulator. All preparations made on the plastic teeth are then checked and graded by an experienced dental instructor. The traditional typodont (manikin head) has long been considered a valuable tool for simulating patient care procedures [6]. The advantages of these methods include low cost, effective development of hand-eye coordination and manual dexterity, and long-term credibility as the method of choice in preclinical dental training for decades. However, there are also major drawbacks, including the inability to calibrate the evaluation process, due to the general focus on task outcome and a heavy reliance on the instructor’s subjective evaluation. This results in a lack of consistency in students’ evaluations, even when the same work is evaluated on different occasions. To overcome these limitations, computerized dental teaching and assessment systems have been suggested as alternative feedback and assessment tools that may improve students’ learning and self-assessment experiences [7].

Recently, there has been an evolution in the development and implementation of computerized technologies such as virtual reality, augmented reality, and haptic technology with feedback in dental training. Virtual reality is a computer-simulated environment [8]. Augmented reality refers to a form of technology that integrates a real environment and a virtual environment to provide an immersive experience [9]. Haptic technology is a more recent form of technology that involves tactile sensation while interacting with computer-generated objects [10]. All of these technologies could potentially enhance the learning and teaching of manipulative skills, particularly during preclinical training [11,12,13,14]. However, technology-enhanced assessment systems also have many disadvantages. The systems are typically very expensive and require training for both staff and students, particularly if the assessment tool is complex. Because considerable funding and resources are needed to prepare staff and students for their use, technology-enhanced assessment tools should enhance the learning and teaching of practical skills beyond what the traditional method of assessment achieves in order to justify the cost and time. Therefore, the aim of this systematic review was to investigate the effectiveness of technology-enhanced teaching and assessment methods of undergraduate preclinical skills in comparison to conventional methods.

Methods

Protocol and registration

This systematic review was conducted according to PRISMA guidelines (Preferred Reporting Items for Systematic Review and Meta-Analysis) [15] and was registered at the Open Science Framework database (https://osf.io) under the registration code: osf.io/xvm7t.

Eligibility criteria

Studies that fulfilled the following criteria were selected:

  • Population: undergraduate dental students’ preclinical skills

  • Intervention: technology-enhanced teaching and assessment methods including but not limited to digital scanners, virtual reality, augmented reality, and haptic technology

  • Comparison: conventional teaching and assessment methods (using a manikin and manual assessment by a dental instructor using a periodontal probe, explorer, and/or mouth mirror)

  • Primary outcome measure: effectiveness of technology-enhanced assessment tools when compared to conventional assessment tools in terms of minimizing procedural errors

  • Secondary outcome measures: student satisfaction, time taken to complete the preparation

  • Study design: randomized controlled clinical trials

  • Publication dates: No limit on the date of publication was applied. The search was conducted until January 2020.

  • Exclusion criteria: studies that included non-dental students or postgraduate dental students, did not have full-text articles written in English, or were not randomized controlled trials were excluded.
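As a rough illustration of how the eligibility criteria above translate into screening decisions, the sketch below encodes them as a simple filter. The field names are hypothetical and chosen for illustration only; they are not taken from the review's actual data forms.

```python
# Hypothetical screening helper reflecting the eligibility criteria above.
# Field names ("population", "design", etc.) are illustrative assumptions.
def eligible(study):
    """Return True if a candidate study meets all inclusion criteria."""
    return (study["population"] == "undergraduate dental students"
            and study["design"] == "randomized controlled trial"
            and study["full_text_english"]
            and study["has_conventional_comparison"])

candidate = {"population": "undergraduate dental students",
             "design": "randomized controlled trial",
             "full_text_english": True,
             "has_conventional_comparison": True}
print(eligible(candidate))  # a study meeting all criteria is included
```

Any study failing a single criterion (e.g. a non-randomized design) would be excluded by this rule.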

Information sources

All studies were obtained through a comprehensive search strategy using electronic and manual search methods to locate both indexed and non-indexed articles. The electronic search was performed with the guidance of a formally qualified librarian. In addition, the reference lists of the included articles were hand-searched. The electronic search strategy included the following databases: PubMed, Wiley, ScienceDirect, SCOPUS, and the Cochrane Central Register of Controlled Trials.

Search strategy

Searches of the above databases and hand-searching were performed independently by two reviewers (SM and ME). Any disagreements between them were resolved by discussion and consensus; if a consensus could not be reached, a third reviewer was consulted (KK). A search strategy was developed using a combination of MeSH terms, non-medical terms, and keywords based on the above PICO domains. The following keyword combinations, adapted to each database, were used following advice from a formally qualified librarian:

  1. dent* AND (student OR assess* OR evaluation) AND preclinical AND (“technology-enhanced” OR virtual OR haptic);

  2. “dental student” AND (assess* OR evaluation) AND preclinical AND (“technology-enhanced” OR virtual OR haptic OR simulation);

  3. (dental OR dentist) AND (student OR undergraduate) AND preclinical AND (“technology enhanced” OR haptic) AND (assessment OR competency);

  4. dent* AND (student OR undergraduate) AND preclinical AND (“technology-enhanced” OR haptic OR simul* OR virtual) AND (assess* OR competency);

  5. dent* AND (student OR undergraduate) AND preclinical AND (“technology-enhanced” OR haptic OR digital OR 3D OR simul* OR virtual OR computer OR e-learning).
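Boolean queries of this shape can be assembled programmatically from PICO term groups, which helps keep them consistent when adapting the search to each database. The sketch below is a minimal illustration loosely mirroring query 5; the term lists and quoting convention are assumptions, not the review's exact search syntax.

```python
# Illustrative sketch: building a boolean search string from PICO term
# groups. Term lists below are assumptions loosely based on query 5.
population = ["student", "undergraduate"]
intervention = ["technology-enhanced", "haptic", "digital", "3D",
                "simul*", "virtual", "computer", "e-learning"]

def or_group(terms):
    """Join terms with OR, quoting multi-word or hyphenated phrases."""
    quoted = [f'"{t}"' if " " in t or "-" in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(["dent*", or_group(population), "preclinical",
                      or_group(intervention)])
print(query)
```

In practice the quoting and wildcard rules differ per database, so the generated string would still need adapting to each interface.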

The manual hand-search included the following four journals:

  1. Journal of Dental Education (2000–2019)

  2. European Journal of Dental Education (2000–2019)

  3. International Journal of Technology Assessment in Healthcare (2001–2019)

  4. Medical Teacher (2000–2019)

Data extraction

A Cochrane data extraction form for Randomized Controlled Trials (RCTs) was used in this systematic review. Data were extracted independently by two reviewers, who recorded the following items: author, year of publication, sample size, study setting, year of study, discipline being assessed, the technology-enhanced assessment intervention, the main findings, the grading assessment method, faculty calibration, and the grading rubric. A summary of this information can be seen in Tables 1 and 2. A meta-analysis was not performed because of the considerable methodological heterogeneity among the studies examined, mainly the different technology-enhanced assessment methods, disciplines, levels of students, and grading assessment methods. The majority of the included studies had a moderate or high risk of bias.

Table 1 Summary of the Data from the Studies Included in this Review
Table 2 Summary of the Assessment Criteria used in the Studies Included in this Review

Risk of bias assessment of individual studies

The quality of the included articles was assessed independently by two authors using the Cochrane risk of bias tool (RoB 2.0). The two assessors had to reach an agreement before a final decision was made regarding the overall risk of bias of any study. In cases of disagreement, a third assessor was consulted to reach the final decision.

The Cochrane risk of bias tool includes seven domains namely: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other types of bias [34]. The overall risk of bias was allocated as follows: an overall grade of low risk was given if all domains were graded as low risk of bias, a grade of unclear risk was given if one or more domains were graded as unclear risk of bias, and a grade of high risk was given if one or more domains were graded as high risk of bias.
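The overall allocation rule described above can be expressed as a short decision function. This sketch assumes, as the wording implies, that a high-risk grade in any domain takes precedence over an unclear grade elsewhere.

```python
# Sketch of the overall risk-of-bias rule described above: "high" if any
# domain is high risk, otherwise "unclear" if any domain is unclear,
# otherwise "low". Assumes high risk takes precedence over unclear.
DOMAINS = ["random sequence generation", "allocation concealment",
           "blinding of participants and personnel",
           "blinding of outcome assessment", "incomplete outcome data",
           "selective reporting", "other bias"]

def overall_risk(grades):
    """grades: dict mapping each domain to 'low', 'unclear', or 'high'."""
    values = [grades[d] for d in DOMAINS]
    if "high" in values:
        return "high"
    if "unclear" in values:
        return "unclear"
    return "low"

print(overall_risk({d: "low" for d in DOMAINS}))  # all domains low -> low
```

Under this rule, a single high-risk domain is enough to grade the whole study as high risk, which is why most of the included RCTs received that overall grade.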

Evaluation of quality of evidence

A quality grade related to the outcome measure was given to the included studies based on the Grading of Recommendations, Assessment, Development, and Evaluation approach (GRADEpro Guideline Development Tool, gradepro.org). This tool contains five domains for rating the quality of evidence as high, moderate, low, or very low [35].

Results

Study selection

The kappa statistic for inter-reviewer agreement on the search and selection of studies was 0.86. Following inspection of titles and abstracts, a total of 1257 articles were initially obtained for assessment: 1219 from the five electronic databases (PubMed, Wiley, Cochrane Central Register of Controlled Trials, ScienceDirect, and SCOPUS), 31 from the manual hand-search of the journals (Journal of Dental Education, European Journal of Dental Education, International Journal of Technology Assessment in Healthcare, and Medical Teacher), and 7 from reference lists. After removal of duplicates, 1107 articles remained; of those, a further 1013 were removed as not directly relevant to the research question of the current systematic review. This left 94 articles for potential inclusion. After reading the full texts of these 94 articles, 75 were excluded for various reasons: 28 were not randomized clinical trials, 19 were reviews, 15 did not include only dental students, 10 did not include a conventional comparison group, 2 were unavailable in English, and 1 was an incomplete registered clinical trial. Thus, a total of 19 studies were included in the final analysis of this review (Fig. 1).
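For readers unfamiliar with the kappa statistic, Cohen's kappa measures two raters' agreement beyond chance, (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is the agreement expected by chance. The sketch below computes it for two hypothetical reviewers' include/exclude decisions; the counts are invented for illustration and are not the review's actual screening data.

```python
# Illustrative Cohen's kappa for two reviewers' include/exclude decisions.
# The decision lists below are made-up data, not the review's own counts.
def cohens_kappa(a, b):
    """a, b: parallel lists of categorical decisions from two raters."""
    n = len(a)
    categories = set(a) | set(b)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_expected = sum((a.count(c) / n) * (b.count(c) / n)
                     for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

reviewer1 = ["include"] * 20 + ["exclude"] * 80
reviewer2 = ["include"] * 18 + ["exclude"] * 82
print(round(cohens_kappa(reviewer1, reviewer2), 2))
```

A value of 0.86, as reported here, is conventionally interpreted as almost perfect agreement.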

Fig. 1 Flow Diagram of Study Identification and Selection using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) format

Study characteristics

Nine studies (47.4%) took place in Europe [17, 18, 20,21,22, 28, 30,31,32], five (26.3%) in North America [7, 16, 19, 23, 26], three (15.8%) in Asia [25, 29, 33], one (5.3%) in Oceania [27], and one (5.3%) did not mention the country setting [24]. One study (5.3%) had a sample size of 9 [7], three (15.8%) had sample sizes between 11 and 30 [18, 19, 27], 11 (57.9%) between 31 and 50 [17, 20,21,22, 24, 25, 28, 30,31,32,33], two (10.5%) between 51 and 70 [16, 29], and two (10.5%) between 71 and 90 students [23, 26]. Seven studies (36.8%) assessed second-year undergraduate dental students [7, 16,17,18, 23, 26, 33], four (21.1%) assessed first-year [19,20,21,22], four (21.1%) assessed fourth-year [24, 27, 29, 30], two (10.5%) assessed third-year [28, 31], one (5.3%) assessed fifth-year [25], and one (5.3%) assessed a mixture of fourth- and fifth-year students [32]. Most of the studies (57.9%) assessed students’ ability in operative dentistry [16,17,18, 20,21,22,23, 28, 30, 31, 33]. Five (26.3%) were about prosthodontics [7, 25,26,27, 29], one (5.3%) covered a mixture of operative dentistry and prosthodontics [19], one (5.3%) endodontics [24], and one (5.3%) oral surgery [32]. In total, four main types of technology-enhanced assessment were used across the 19 studies. Ten studies (52.6%) used virtual reality [16,17,18,19,20,21,22,23, 25, 33], six (31.6%) used a digital scanner [7, 26, 27, 29,30,31], two (10.5%) used augmented reality [28, 32], and one (5.3%) used haptic technology [24].
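The percentage breakdowns above are simple proportions of the 19 included studies. As a quick arithmetic check, the tally for the technology types can be reproduced as follows (the category labels are taken from the text; the script itself is only illustrative).

```python
from collections import Counter

# Tally of the 19 included studies by technology type, as reported above.
tech = (["virtual reality"] * 10 + ["digital scanner"] * 6
        + ["augmented reality"] * 2 + ["haptic"] * 1)
for kind, count in Counter(tech).most_common():
    print(f"{kind}: {count}/19 ({100 * count / 19:.1f}%)")
```

Note that 1/19 rounds to 5.3% and 6/19 to 31.6% at one decimal place.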

Risk of bias within studies

One study had a low risk of bias [24], 5 had an unclear risk of bias [21, 22, 27, 29, 33], and 13 had a high risk of bias [7, 16,17,18,19,20, 23, 25, 26, 28, 30,31,32], (Fig. 2). A summary of the percentage of allocation of risk of bias grades in each domain can be seen in Fig. 3.

Fig. 2 Risk of Bias Assessment for each Included Study

Fig. 3 Summary of the Percentage Allocation of Risk of Bias Grades in each Domain Across all Included RCTs

Out of the 19 studies, 17 had allocation concealment bias [16,17,18,19,20,21,22,23, 25,26,27,28,29,30,31,32,33], and of these, 15 also had another form of selection bias, namely random sequence generation [16, 19,20,21,22,23, 25,26,27,28,29,30,31,32,33]. Six studies had performance bias in terms of blinding of either the personnel or the participants involved [7, 16, 19, 23, 26, 30]. Six studies had concerns regarding incomplete outcome data [7, 16, 19, 20, 23, 25]. Five studies had other forms of bias, such as lack of blinding and calibration of the examiners grading the preparations, small sample sizes, and a lack of monitoring of additional hours of practice by students outside of the allocated training time [7, 16, 23, 26, 32]. Two studies had a form of detection bias [23, 26], and two studies had reporting bias [17, 18]. Only one study had no form of bias [24].

Description of study findings

Five studies administered a questionnaire on students’ experience with the technology-enhanced assessment system to their intervention groups [17, 18, 28, 29, 32]. In two of these studies, most participants believed they could improve their self-learning, self-assessment, and/or assessment abilities using the technology-enhanced method over the conventional method [28, 29]. Two studies revealed that most students in the intervention group did not feel that the conventional method would be replaced [17, 18]. In one study, the intervention group reported greater confidence in administering an inferior alveolar nerve block than their peers in the control group [32].

Operative dentistry

Eleven studies compared technology-enhanced assessment to conventional assessment in operative dentistry. Two studies, by Nagy et al. [30] and Wolgin et al. [31], used a digital scanner to assess cavity preparations in comparison with a control group. Nagy et al. [30] reported that the intervention group had significantly smaller deviations in mean occlusal width, approximal depth, and shoulder width in their second preparations, whereas the control group showed no significant difference in mean measurements between the first and second preparations. Wolgin et al. [31] reported that using the digital scanner was just as effective as the conventional form of supervision, as there was no significant difference between the intervention and comparison groups with regard to cavity dimensions.

Five studies specifically used the DentSim virtual reality system [16, 20,21,22,23]. All five reported an overall increase in performance for the groups involved; however, the technical scores of the intervention and control groups varied across studies. Wierinck et al. [20] reported that the intervention group, which used DentSim without feedback, significantly outperformed the control group in performance scores on the retention tests, although no significant differences were found between the groups on the transfer tests. Another study, by Urbankova [23], found that the intervention group performed significantly better than the control group on the first two examinations, but not on the last examination. A third study, by LeBlanc et al. [16], reported no significant differences in overall performance scores between the groups, but found that the intervention group improved significantly more than the control group. The last two studies (Wierinck et al. [21]; Wierinck et al. [22]) reported that the intervention groups using DentSim performed significantly better than the control group on both the immediate and delayed retention tests, although the intervention groups took significantly longer preparation times. In the Wierinck et al. [22] study, only one intervention group differed significantly from the control group during the delayed retention and delayed transfer tests, whereas in Wierinck et al. [21], both intervention groups performed significantly better than the control group on these tests.

Quinn and co-workers used an unspecified virtual reality system (Quinn et al. [17] involved two intervention groups and one control group, while Quinn et al. [18] had one intervention and one control group). Both studies reported that, in general, there were no statistically significant differences between the intervention and control groups. In Quinn et al. [17], the intervention group with real-time and conventional feedback scored significantly higher than the control group on one criterion, the outline form; the remaining scores showed no statistically significant differences between groups. The second study, Quinn et al. [18], reported variation in the significant differences found between the intervention and conventional training groups: some criteria showed no significant differences between the two groups, while the remaining criteria showed significant differences, with the virtual reality group receiving worse qualitative scores.

The study by Llena et al. [28] used augmented reality software and a mobile application. It reported a significantly higher average score in the intervention group for class I cavity preparations, but no significant differences were observed between the two groups in the class II occlusal box cavity preparation exercise. In another study, Murbay et al. [33] used the Moog Simodont dental trainer and reported a significant improvement in students exposed to it.

Prosthodontics

In prosthodontics, two studies reported the use of the E4D Compare software [7, 26]. E4D Compare scanning software is used as a virtual assessment tool for matching and comparing a standard ideal tooth preparation with the operator’s dental work. Sadid-Zadeh et al. [7] reported that the intervention group that interacted only with the software consistently had a higher percentage of acceptable crown preparations than the control group and the faculty-assisted intervention group. All groups showed improvement over time, with undercuts being the most common error, along with unsupported enamel, finish of the preparation, finish line width, amount of occlusal reduction, and contour of the preparation. This study found that using the E4D Compare software was just as effective as conventional training. In the study by Gratton et al. [26], there was no statistically significant difference between the intervention and control groups with regard to technical scores and self-evaluation scores during fixed prosthodontics preparation. However, there was a significant difference between the two groups with regard to average faculty grading, as faculty consistently gave higher average scores than the average E4D Compare grade.

Two studies used another digital scanner as their method of intervention [27, 29]. Tiu et al. [27] reported that the conventional group with no tutor assistance had inconsistent results compared with the intervention group, which was able to achieve the acceptable range for preparation finish-line dimensions. By the fourth session, 70% of the intervention group achieved acceptable total occlusal convergence (TOC) angles and finish-line dimensions in their crown preparations, outperforming the other groups in overall acceptable preparations. Liu et al. [29] revealed a significant difference in practical scores between the intervention and control groups, with the intervention group scoring higher in the overall preparation score.

Kikuchi et al. [25] used the DentSim virtual reality system. This study reported that the intervention groups had significantly higher average scores than the control group. Total scores increased with experience in the intervention groups between experiments, but there was no significant difference in the control group’s total scores between experiments. Preparation time was significantly shorter in the control group than in the intervention groups. Scores for wall incline in the intervention groups were higher than in the control group in all experiments. Undercuts decreased with experience, but damage to adjacent teeth was not significantly different among the groups. Scores for margin location in the intervention groups were significantly higher than in the control group, but not scores for chamfer width, wall smoothness, finish line continuity, interproximal clearance, resistance, and retention.

The DentSim virtual reality system was also tested in a study by Jasinevicius et al. [19], which reported no significant difference in the number and quality of preparations between the intervention and control groups.

Endodontics

Only one study, by Suebnukarn et al. [24], was related to endodontics. A haptic virtual reality simulator was used to evaluate procedural errors and treatment time during access cavity preparation. Before training, there were no significant differences between the groups with regard to average error scores, tooth mass removal, or task completion time. After training, the reduction in error scores did not differ significantly between the virtual reality simulator and conventional training groups, but the intervention group had a significantly greater reduction in tooth mass removal than the control group. There was no significant difference in task completion time after training between the groups.

Oral surgery

Only one study, by Mladenovic et al. [32], was related to oral surgery. It reported that, after using the augmented reality device, the intervention group had a higher average score and a narrower range of responses on the questionnaire than the control group. The average time for performing anesthesia was significantly lower in the experimental group than in the control group. The intervention group had a higher success rate than the control group, but the difference was not statistically significant. Heart rate increased significantly in both groups when performing anesthesia, but there was no significant difference in heart rate between the two groups.

Assessment of the quality of evidence

The quality of evidence according to GRADE was rated overall as low. One RCT was rated as high because it had an adequate sample size, good control of confounding factors, and few limitations; the remaining RCTs were rated as low due to high risk of bias, small sample sizes, conflicting findings, and other confounding variables such as not accounting for additional hours of training and the lack of calibration of the examiners grading the preparations.

Discussion

This systematic review was designed to assess whether there is a difference between technology-enhanced and conventional teaching and assessment methods of preclinical undergraduate dental skills with regard to the quality of the preparation, the time taken to complete the preparation, and student satisfaction.

Of the 19 studies included in this review, seven reported no significant differences between the intervention and control groups. Of these seven, two [16, 23] reported a significant improvement rate in their intervention groups. A third study, by Quinn et al. [18], reported that most assessment criteria showed non-significant differences, but the criteria that did show significant differences favored the control group.

The remaining 12 studies reported significant differences between the intervention and control groups. Of these, five studies [20, 24, 25, 28, 32] reported significant differences in favor of the intervention groups only in some exercises or tasks not necessarily related to the quality of the preparations, for example, the time needed to complete the task [25, 32], or in only some of the assigned tasks or exercise criteria [20, 24, 28]. This variation in findings regarding the effectiveness of technology-enhanced teaching and assessment systems may be attributed to the methodology used to assess the students, the type of machine or system used, and the different preclinical courses being assessed.

Formative assessment is an important component of student assessment and is usually carried out in preclinical labs through self-assessment or faculty assessment during tooth preparation exercises [4]. However, filling out a self-assessment form does not necessarily improve student ability, and faculty assessment is not considered an objective method of assessment [36]. The use of digital scanning software, virtual reality, and augmented feedback to visualize students’ preparations in three-dimensional space may have allowed them to improve the positioning of the handpiece and of themselves as they completed their assigned exercises. These technologies also provide objective measurements of tooth preparations, allowing students to grasp technical skills such as crown reduction, cavity depth, and smoothness at a visual level. However, if feedback is provided frequently, it can create dependency, resulting in poor scores during retention practical exams once the feedback option is removed, as seen by Wierinck et al. [21]. It should be mentioned, though, that a similar study by the same authors (Wierinck et al. [22]) conflicts with this finding, as the group with frequently provided feedback performed better. It is not surprising that the effectiveness of student feedback in preclinical skills labs varies greatly between settings due to several factors. In the conventional method of training, feedback is mainly provided by experienced instructors, who are not always present to correct students’ posture and grip on the handpiece during their work. Thus, it is not uncommon for students to wait an extended period of time for faculty feedback, and a lower faculty-to-student ratio, which is fairly common in larger preclinical skills labs, contributes to longer waiting times.
This can be countered with technology-enhanced assessment systems, which offer instant feedback and allow students to work without long waits. The feedback received from the conventional method of assessment is also subjective, as students may be given different advice on the same preparation by different instructors. Technology-enhanced assessment systems can theoretically reduce the number of instructors needed in a preclinical skills lab and allow a lower faculty-to-student ratio without sacrificing student performance, as many systems provide a feedback option for the student to use. However, more hours may need to be assigned to staff and students to train them in the use of the system, and the cost of supplying an entire preclinical skills lab with technology-enhanced teaching and assessment methods may not be feasible for some dental schools.

Summative assessment is another important component of student assessment and is typically performed during final examinations [4]. With this in mind, students with significantly more hours of practice have a greater probability of outperforming peers with fewer hours of practice. Studies by LeBlanc et al. [16] and Urbankova [23] reported that the intervention group showed significant improvement compared with the control group despite a lack of significant difference between the two groups in the final score. This may indicate that these systems can promote faster learning in poorly performing students during preclinical lab training. With haptic feedback, students can use tactile sensation to help them differentiate between tooth layers as though they were practicing on a real tooth. Augmented feedback allowed students to view tooth mass loss and handpiece movements during endodontic access preparation, which helped them control the handpiece better for a more conservative approach.

Although more of the studies in this systematic review found significant differences between technology-enhanced and conventional teaching and assessment methods than not, these studies had several limitations and biases. Several studies either did not record or limit the hours that students practiced outside of laboratory working hours, or gave students additional practice hours with the technology-enhanced systems [16, 23, 32]. Blinding of the participants, and in some cases the personnel, was not possible; this lack of blinding may have encouraged some students in both groups to work out of hours to outperform, or keep up with, their peers in the opposing group with regard to the technical score.

There appears to be potential for technology-enhanced teaching and assessment systems in preclinical dental skills training to improve the technical and visual experience, particularly for poorly performing students. However, better quality studies with larger sample sizes are required to provide a definitive answer on the effectiveness of these systems in preclinical dental skills laboratories. Participants should be randomized into intervention and control groups using a proper randomization method and allocated the same number of hours of practice; if they are allowed to practice outside of laboratory hours, this should ideally be monitored and accounted for. Future studies should preferably provide both an objective method of assessment using these technology-enhanced systems and a subjective traditional method of assessment using calibrated faculty grading, in order to accurately compare the two methods. They should also focus on specific technology-enhanced teaching and assessment systems with the same inclusion criteria, measuring similar outcomes such as quality of the preparation/procedural errors, time taken to complete the preparation, and students’ satisfaction. This would allow the results to be combined in a meta-analysis, providing better evidence.

Limitations of this study

A meta-analysis could not be done for this study due to the significant heterogeneity among the included studies and the high risk of bias found in the majority of the studies.
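For readers unfamiliar with how heterogeneity is judged, reviewers typically compute Cochran's Q and the I² statistic across study effect sizes before deciding whether pooling is justified; by Cochrane Handbook convention, I² above roughly 75% represents considerable heterogeneity. The sketch below illustrates this calculation on invented effect sizes and variances; the numbers are hypothetical and are not data from the included trials.

```python
# Hypothetical illustration of the heterogeneity statistics (Cochran's Q and
# I-squared) used when deciding whether a meta-analytic pool is justified.
# The effect sizes and variances below are invented, not from the review.

def heterogeneity(effects, variances):
    """Return Cochran's Q and the I^2 statistic (%) for a fixed-effect pool."""
    weights = [1.0 / v for v in variances]                      # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0         # Higgins' I^2
    return q, i2

# Invented standardized mean differences from five hypothetical trials
effects = [0.90, -0.10, 0.55, 1.20, 0.05]
variances = [0.04, 0.05, 0.03, 0.06, 0.04]

q, i2 = heterogeneity(effects, variances)
print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")
```

With effect sizes this inconsistent, I² lands well above the 75% threshold, which is the kind of result that makes pooling the study estimates statistically indefensible.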

This systematic review included all forms of technology-enhanced teaching and assessment systems, such as virtual reality systems, augmented reality systems, and digital scanners. Even within the same assessment system, different machines do not necessarily work the same way, which makes it difficult to apply the findings from one system to another with any degree of accuracy. Furthermore, different undergraduate dental courses and disciplines were included; as a result, the studies did not necessarily measure the same outcomes in the preclinical courses, which made it difficult to accurately compare individual studies. For example, technology-enhanced teaching and assessment systems used in endodontic exercises would have different parameters from those used in operative dentistry.

The majority of the studies included in this review had a high risk of bias, mainly due to incomplete data, lack of blinding of the examiners, and other biases such as small sample sizes, not accounting for additional hours of training, and lack of calibration of the examiners grading the preparations. Many studies either reported that student training hours outside of scheduled training time were not monitored or did not specify that these hours were controlled [16, 23, 32]. This may have skewed the outcomes in several ways. Students who saw their peers using a new method of teaching and grading may have felt that they were behind in clinical skills acquisition and may therefore have chosen to stay after hours in the training laboratories to keep up with their peers. Conversely, students in the groups using technology-enhanced teaching and assessment systems may have felt overconfident in their practical skills and chosen not to attend extra training sessions. In either case, the difference between the two groups could be due to the number of hours practiced rather than the type of teaching and assessment method.

Conclusions

Technology-enhanced teaching and assessment tools have the potential to improve the learning and performance of undergraduate dental students during preclinical skills training. These tools can be used as an adjunct to compensate for drawbacks of current traditional teaching and assessment methods. However, further standardized and better designed studies are required to provide a definitive answer to the research question posed in this systematic review.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

Abbreviations

ADA:

American Dental Association

OSCE:

Objective Structured Clinical Examination

PRISMA:

Preferred Reporting Items for Systematic Review and Meta-Analysis

PICO:

Problem/Patient/Population, Intervention/Indicator, Comparison, and Outcomes

RCTs:

Randomized Controlled Trials

RoB:

Risk of Bias

GRADEpro:

Grading of Recommendations Assessment, Development, and Evaluation

References

  1. Howe J, Field M. The National Academies Collection: Reports funded by National Institutes of Health. In: Field MJ, editor. Dental Education at the Crossroads: Challenges and Change. Washington (DC): National Academies Press (US); 1995.


  2. Ivanoff CS, Ivanoff AE, Yaneva K, Hottel TL, Proctor HL. Student perceptions about the mission of dental schools to advance global dentistry and philanthropy. J Dent Educ. 2013;77(10):1258–69.


  3. Boud D. Assessment and the promotion of academic values. Stud High Educ. 1990;15(1):101–11.


  4. Maki PL. Developing an assessment plan to learn about student learning. J Acad Librariansh. 2002;28(1):8–13.


  5. Xia P, Lopes AM, Restivo MT. Virtual reality and haptics for dental surgery: a personal review. Vis Comput. 2013;29(5):433–47.


  6. Nunez DW, Taleghani M, Wathen WF, Abdellatif HMA. Typodont versus live patient: predicting dental students’ clinical performance. J Dent Educ. 2012;76(4):407–13.


  7. Sadid-Zadeh R, D'Angelo EH, Gambacorta J. Comparing feedback from faculty interactions and virtual assessment software in the development of psychomotor skills in preclinical fixed prosthodontics. Clin Exp Dent Res. 2018;4(5):189–95.


  8. Lee M, Lee SA, Jeong M, Oh H. Quality of virtual reality and its impacts on behavioral intention. Int J Hosp Manag. 2020;90:102595.


  9. Wu H-K, Lee SW-Y, Chang H-Y, Liang J-C. Current status, opportunities and challenges of augmented reality in education. Comput Educ. 2013;62:41–9.


  10. Sreelakshmi M, Subash TD. Haptic Technology: A comprehensive review on its applications and future prospects. Materials Today: Proc. 2017;4(2, Part B):4182–7.


  11. Buchanan JA. Use of simulation technology in dental education. J Dent Educ. 2001;65(11):1225–31.


  12. Huang T-K, Yang C-H, Hsieh Y-H, Wang J-C, Hung C-C. Augmented reality (AR) and virtual reality (VR) applied in dentistry. Kaohsiung J Med Sci. 2018;34(4):243–8.


  13. San Diego JP, Cox MJ, Quinn BFA, Newton JT, Banerjee A, Woolford M. Researching haptics in higher education: the complexity of developing haptics virtual learning systems and evaluating its impact on students’ learning. Comput Educ. 2012;59(1):156–66.


  14. Towers A, Field J, Stokes C, Maddock S, Martin N. A scoping review of the use and application of virtual reality in pre-clinical dental education. Br Dent J. 2019;226(5):358–66.


  15. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Br Med J. 2009;339:b2535.


  16. LeBlanc V, Urbankova A, Hadavi F, Lichtenthal R. A preliminary study in using virtual reality to train dental students. J Dent Educ. 2004;68(3):378–83.


  17. Quinn F, Keogh P, McDonald A, Hussey D. A pilot study comparing the effectiveness of conventional training and virtual reality simulation in the skills acquisition of junior dental students. Eur J Dent Educ. 2003;7(1):13–9.


  18. Quinn F, Keogh P, McDonald A, Hussey D. A study comparing the effectiveness of conventional training and virtual reality simulation in the skills acquisition of junior dental students. Eur J Dent Educ. 2003;7(4):164–9.


  19. Jasinevicius TR, Landers M, Nelson S, Urbankova A. An evaluation of two dental simulation systems: virtual reality versus contemporary non-computer-assisted. J Dent Educ. 2004;68(11):1151–62.


  20. Wierinck E, Puttemans V, Swinnen S, van Steenberghe D. Effect of augmented visual feedback from a virtual reality simulation system on manual dexterity training. Eur J Dent Educ. 2005;9(1):10–6.


  21. Wierinck E, Puttemans V, van Steenberghe D. Effect of reducing frequency of augmented feedback on manual dexterity training and its retention. J Dent. 2006;34(9):641–7.


  22. Wierinck E, Puttemans V, Van Steenberghe D. Effect of tutorial input in addition to augmented feedback on manual dexterity training and its retention. Eur J Dent Educ. 2006;10(1):24–31.


  23. Urbankova A. Impact of computerized dental simulation training on preclinical operative dentistry examination scores. J Dent Educ. 2010;74(4):402–9.


  24. Suebnukarn S, Hataidechadusadee R, Suwannasri N, Suprasert N, Rhienmora P, Haddawy P. Access cavity preparation training using haptic virtual reality and microcomputed tomography tooth models. Int Endod J. 2011;44(11):983–9.


  25. Kikuchi H, Ikeda M, Araki K. Evaluation of a virtual reality simulation system for porcelain fused to metal crown preparation at Tokyo medical and Dental University. J Dent Educ. 2013;77(6):782–92.


  26. Gratton DG, Kwon SR, Blanchette D, Aquilino SA. Impact of digital tooth preparation evaluation technology on preclinical dental Students’ technical and self-evaluation skills. J Dent Educ. 2016;80(1):91–9.


  27. Tiu J, Cheng E, Hung T-C, Yu C-C, Lin T, Schwass D, Al-Amleh B. Effectiveness of crown preparation assessment software as an educational tool in simulation clinic: a pilot study. J Dent Educ. 2016;80(8):1004–11.


  28. Llena C, Folguera S, Forner L, Rodríguez-Lozano FJ. Implementation of augmented reality in operative dentistry learning. Eur J Dent Educ. 2018;22(1):e122–30.


  29. Liu L, Li J, Yuan S, Wang T, Chu F, Lu X, Hu J, Wang C, Yan B, Wang L. Evaluating the effectiveness of a preclinical practice of tooth preparation using digital training system: a randomised controlled trial. Eur J Dent Educ. 2018;22(4):e679–86.


  30. Nagy ZA, Simon B, Tóth Z, Vág J. Evaluating the efficiency of the dental teacher system as a digital preclinical teaching tool. Eur J Dent Educ. 2018;22(3):e619–23.


  31. Wolgin M, Grabowski S, Elhadad S, Frank W, Kielbassa AM. Comparison of a prepCheck-supported self-assessment concept with conventional faculty supervision in a pre-clinical simulation environment. Eur J Dent Educ. 2018;22(3):e522–9.


  32. Mladenovic R, Pereira LAP, Mladenovic K, Videnovic N, Bukumiric Z, Mladenovic J. Effectiveness of augmented reality Mobile simulator in teaching local anesthesia of inferior alveolar nerve block. J Dent Educ. 2019;83(4):423–8.


  33. Murbay S, Chang JWW, Yeung S, Neelakantan P. Evaluation of the introduction of a dental virtual simulator on the performance of undergraduate dental students in the pre-clinical operative dentistry course. Eur J Dent Educ. 2020;24(1):5–16.


  34. Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savović J, Schulz KF, Weeks L, Sterne JAC. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. Br Med J. 2011;343:d5928.


  35. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, de Beer H, et al. GRADE guidelines: 1. Introduction—GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.


  36. Mays KA, Branch-Mays GL. A systematic review of the use of self-assessment in preclinical and clinical dental education. J Dental Educ. 2016;80(8):902–13.



Acknowledgements

The authors of this study would like to acknowledge Mrs. Nadia Masoud, Director of Libraries-University of Sharjah for her assistance in performing the electronic literature search.

Funding

There are no sources of funding, and there are no conflicts of interest.

Author information

Authors and Affiliations

Authors

Contributions

KK: literature review concept design, analysis and interpretation of data, drafting of manuscript. ME: analysis and interpretation of data, drafting of manuscript. SM: analysis and interpretation of data, drafting of manuscript. SK: analysis and interpretation of data, drafting of manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohamed El-Kishawi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Khalaf, K., El-Kishawi, M., Mustafa, S. et al. Effectiveness of technology-enhanced teaching and assessment methods of undergraduate preclinical dental skills: a systematic review of randomized controlled clinical trials. BMC Med Educ 20, 286 (2020). https://doi.org/10.1186/s12909-020-02211-4
