
Diagnostic branched tree as an assessment and feedback tool in undergraduate pharmacology education

Abstract

Background

Multiple-choice, true-false, completion, matching, and oral-presentation questions have been used as evaluation tools in medical education for many years. Although not as old as these question types, performance evaluations and portfolio-like assessments, which can be called alternative evaluation, have been in use for a considerable time. While summative assessment maintains its importance in medical education, the value of formative assessment is steadily increasing. In this research, the use of the Diagnostic Branched Tree (DBT), which serves both as a diagnostic and a feedback tool, in pharmacology education was examined.

Methods

The study was conducted with 165 students (112 DBT, 53 non-DBT) in the 3rd year of undergraduate medical education. Sixteen DBTs prepared by the researchers were used as the data collection tool. The first committee of Year 3 was selected for implementation, and the DBTs were prepared according to the pharmacology learning objectives within that committee. Descriptive statistics, correlation, and comparison analyses were used to analyze the data.

Results

The DBTs with the most wrong exits were those entitled phase studies, metabolism, types of antagonism, dose-response relationship, affinity and intrinsic activity, G-protein coupled receptors, receptor types, penicillins, and cephalosporins. When each question in the DBTs was examined separately, it was seen that most of the students could not correctly answer the questions on phase studies, drugs that cause cytochrome enzyme inhibition, elimination kinetics, the definition of chemical antagonism, graded and quantal dose-response curves, the definitions of intrinsic activity and inverse agonist, important characteristics of endogenous ligands, changes in the cell resulting from G-protein activation, examples of ionotropic receptors, the mechanism of action of beta-lactamase inhibitors, the excretion mechanism of penicillins, and the differences among cephalosporin generations. The correlation calculated between the DBT total score and the pharmacology total score in the committee exam was positive and significant. The comparisons showed that the average score on the pharmacology questions in the committee exam was higher for the students who participated in the DBT activity than for those who did not.

Conclusions

The study concluded that DBTs are a candidate for an effective diagnostic and feedback tool. Although this result is supported by research at other educational levels, it could not be corroborated in medical education owing to the lack of DBT research in that setting. Future research on DBTs in medical education may strengthen or refute our results. In our study, receiving feedback through DBT had a positive effect on success in pharmacology education.


Background

Multiple-choice questions, short-answer questions, and conventional and structured oral questions are used as traditional assessment and evaluation tools in undergraduate medical education [1]. Recently, alternative measurement and evaluation methods have begun to be used to measure students’ level of knowledge [2, 3]. Although the validity and reliability of traditional measurement and evaluation methods are high, they are occasionally insufficient for determining learning levels [4]. Therefore, it is important to reach a holistic result in measurement and evaluation by also using alternative methods in medical education.

The Diagnostic Branched Tree (DBT) method, one of the alternative measurement and evaluation methods, consists of consecutive true-false questions. It aims to detect knowledge patterns and misconceptions in students’ thinking structures [5]. The diagnostic tree was defined as a diagnostic alternative evaluation method in 1994 [6], and it was stated that this method would clearly reveal important assumptions regarding knowledge structures [7]. While DBT evaluates concept definitions and basic concepts in the initial steps, it detects misconceptions and their causes in the subsequent steps. DBT differs from classic true-false questions in that the questions are related to each other, and each decision made by the student affects the subsequent one [8]. Questioning an interconnected network of information is one of its most important differences from classical true-false tests [5, 7]. However, interconnected questions demand more effort and experience to prepare and are harder to structure. Consequently, DBT is not a widely used method: it is difficult for inexperienced instructors to prepare, and awareness of the method among educators is low.

Although there is no scientific study in which DBT has been used in medical school education (for two examples of the trees used in this study, see Appendix 1), studies have been conducted at the primary and secondary education levels. Even though positive data have been obtained about the DBT application in these studies [9, 10], the available data are quite limited.

In studies that included primary school teachers, it was stated that the DBT is among the least known alternative assessment methods and is therefore used less frequently than other alternative methods [11, 12]. In a study of sixth- and seventh-grade social studies teachers, the DBT was likewise shown to be among the least known methods [13]. In addition, science and technology teachers have been found to have knowledge deficiencies related to the DBT method [14]. The use of structured grids, DBTs, and prediction-observation-explanation tools, which are alternative measurement and evaluation tools, has been shown to make a significant difference in the achievement and attitudes of sixth-grade students [15]. In a study comparing the DBT method with other alternative methods in seventh-grade students, the lowest exam score was obtained with the DBT method, and the importance of providing information to increase the level of success was emphasized [4].

Measurement and evaluation in education serve many purposes, including establishing classroom equilibrium, planning and conducting instruction, placing students, providing feedback and incentives, diagnosing student problems and disabilities, and judging and grading academic learning and progress [16, 17]. These purposes can be summative and/or formative. Summative means that the assessment has been conducted for decision-making or certification purposes, such as deciding who is admitted, progresses, or qualifies. Formative relates to the feedback function of assessment or, more precisely, how the assessment informs students about their performance [18]. According to some authors in the literature, classroom evaluation serves preliminary, formative, summative, and diagnostic purposes [19]. The preliminary purpose is to recognize students’ personal characteristics such as skills, attitudes, and physical features. The formative purpose takes place within the learning-teaching process: it gives information on how far the objectives have been achieved and what kinds of adjustments should be made to the learning materials and environment. The summative purpose is used to confirm student achievement and summarize the outcome of the educational process; certification of training also falls within its scope. The diagnostic purpose is related to the skills and other characteristics that are prerequisites for the current teaching or that ensure the achievement of teaching goals; this assessment attempts to predict conditions that will negatively affect learning. The DBT method is one of the formative measurement and evaluation methods [20]. According to Miller, Linn and Gronlund [16], diagnostic trees are especially effective in determining what kinds of learning difficulties students may experience.

DBTs are used as a feedback tool within the scope of both diagnostic and formative assessment activities in measurement and evaluation. Although there are limited data from the pre-university education period, there is no study comparing DBT with traditional assessment and evaluation methods in medical education. The aim of this research is to use DBT as a diagnostic and feedback tool within the scope of a medical school pharmacology course and to examine its effectiveness.

Methods

The current study uses the DBT to determine which subjects were learned most incorrectly by the students during the teaching process; in this respect, it is descriptive research [21]. On the other hand, the DBTs were also used as a feedback tool within the scope of formative assessment. The effectiveness of the feedback was evaluated by examining the relationships between the pharmacology scores on the exam held at the end of the committee and the DBT results. Hence, this study is also correlational research [22].

Participants

The participants were the 3rd-year students studying at Çanakkale Onsekiz Mart University Faculty of Medicine in the 2022–2023 academic year; the total number of third-year students was 165. The research was intended to include all of these students, and a meeting was held at which the researchers informed them about the purpose of the research and how the activity would be implemented. It was also stated that participation was voluntary and that the results of the activity would have no effect on the students’ grades. The number of students who participated on a voluntary basis was 112; 53 students did not participate. The sample was therefore a convenience sample (also known as a haphazard or accidental sample). Researchers sometimes choose to sample individuals who are readily available; for example, many researchers collect data from university students because they are ready and willing to participate in research [23]. We would also like to point out that randomization could not be performed because participation was voluntary, and this is a limitation of our study in terms of participation and selection bias. The distribution of the participating students according to some of their variables is given in Table 1.

Table 1 Distribution of the students participating in the research according to some variables

Data Collection Tool

DBTs were used as the data collection tool in the research. DBTs suitable for the purpose of the research were not found in the literature, so diagnostic trees were developed by the researchers specifically for the pharmacology course, in accordance with the learning objectives of third-year Committee 1. The following steps were followed in developing the data collection tool:

  • The class and time in which DBTs can be used were selected.

The third year is the most intensive year of the pharmacology course at Çanakkale Onsekiz Mart University Faculty of Medicine, and Committee 1 was deemed suitable for implementation.

  • It was determined for which learning objectives the DBT would be prepared.

DBTs were prepared in accordance with the learning objectives of the pharmacology courses in the Year 3 Committee 1 curriculum.

  • DBTs were prepared for the learning objectives, considering the lecture formats.

It was deemed appropriate to prepare 16 DBTs after examining the learning objectives and the way they were addressed within the subjects. The prepared DBTs were submitted for expert opinion to 4 academicians who are experts in pharmacology and 1 academician who is an expert in medical education (especially in assessment). The experts rated each DBT as “appropriate”, “can be used after correction”, or “not suitable” (with recommendations, if any). To determine whether the expert opinions were consistent and reliable, the ratings of the 5 experts (4 in pharmacology, 1 in medical education) across the 3 categories were evaluated with Krippendorff’s alpha coefficient; inter-rater reliability can be examined with Krippendorff’s alpha when more than two raters assign codes to more than two categories [24]. The calculated value was 0.87, an indicator of high consistency. The necessary corrections were then made to the DBTs in line with the expert opinions. In addition, item-level content validity index (I-CVI) and scale-level content validity index (S-CVI) values were calculated from the expert opinions. The calculated I-CVI values were not below 0.80 for any item, and the S-CVI value for the overall measurement tool was 0.83. According to the literature, these values indicate high content validity [25, 26]. Finally, the DBTs were reviewed by a linguist for clarity, simplicity, and conformity with language rules. Final arrangements were made according to the linguist’s opinions, and the DBTs were finalized before the application (see Appendix 1).
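As a rough illustration of how these content-validity indices are computed, the sketch below applies the usual definitions (I-CVI as the proportion of experts judging an item acceptable; S-CVI/Ave as the mean of the I-CVIs) to hypothetical ratings. The ratings shown, and the choice to count only "appropriate" as a relevant rating, are assumptions for illustration; the study's actual ratings are not reproduced here, and Krippendorff's alpha would need a separate computation.

```python
# Hypothetical example: DBT id -> one rating per expert, where
# "A" = appropriate, "C" = can be used after correction, "N" = not suitable.
ratings = {
    "DBT1": ["A", "A", "A", "C", "A"],
    "DBT2": ["A", "A", "A", "A", "A"],
    "DBT3": ["A", "C", "A", "A", "A"],
}

def i_cvi(item_ratings, relevant=("A",)):
    """Item-level CVI: proportion of experts judging the item relevant."""
    return sum(r in relevant for r in item_ratings) / len(item_ratings)

i_cvis = {item: i_cvi(rs) for item, rs in ratings.items()}
# S-CVI/Ave: the mean of the item-level indices.
s_cvi_ave = sum(i_cvis.values()) / len(i_cvis)
```

With these toy ratings, DBT1 and DBT3 each get I-CVI = 0.8 and DBT2 gets 1.0, so S-CVI/Ave is about 0.87; the study reports all I-CVIs at or above 0.80 and an S-CVI of 0.83 for its real data.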

  • Pre-application was made to determine the clarity of DBTs before the actual application.

Before the actual application, a preliminary application was carried out with 5 students to determine the clarity of the DBTs. These were year 4 students who had taken the pharmacology course in the previous year. All 5 students found the DBTs sufficiently intelligible. As a result of all these processes, the researchers decided that the DBTs were ready for the actual application.

  • Performing the DBTs actual application.

After the pharmacology courses of Committee 1 were completed, the activity was conducted, one week before the committee exam, with the 112 students who volunteered to participate. At the end of the activity, the questions in the DBTs were discussed with the students and feedback was given. Based on the exits the students reached, their points of knowledge deficiency were conveyed to them.

  • After the application, item difficulty, item discrimination, and Kuder-Richardson (KR-20) reliability analyses were performed on the 16 DBTs and the 4 true-false questions in each.

After the application, item difficulty and item discrimination were evaluated for the 4 true-false questions in each of the 16 DBTs, 64 true-false questions in total (see Appendix 2). As the item difficulty value approaches 1, the item is interpreted as easy to answer; as it approaches 0, as difficult to answer [27]. As seen in Appendix 2, the items were generally easy. Compared to the other questions, DBT5_4, DBT8_1, DBT8_3, DBT9_2, and DBT15_4 were challenging for the participants. The average test difficulty of the DBT measurement tool was calculated as 0.77, so the tool can be interpreted as easy to answer. In the measurement and evaluation literature, unless a tool is developed for a special purpose that requires a particular level of ease or difficulty, its items are expected to be of medium difficulty (0.50). In this research, the purpose of using DBT was to perform a knowledge-based diagnosis and to give feedback to the students rather than to determine their learning levels; for this reason, the true-false questions in the DBTs were expected to be structured so that they could be answered easily. In addition, each true-false question offers a 50% chance of being answered correctly by guessing and is at a disadvantage in this respect. As the item discrimination value approaches 1, the item’s discrimination increases; as it approaches 0, it decreases [27]. If the item discrimination value is negative, the item works backwards: participants with generally low knowledge answer the question correctly while those with high knowledge answer it incorrectly, which is an undesirable situation.
As seen in Appendix 2, the items across the DBTs mostly had low discrimination values. This is correlated with the item difficulty, and such a result was not a surprise to the researchers. The KR-20 reliability level, calculated from the total scores of all 16 DBTs, was 0.77. According to Nunnally and Bernstein [28], sufficient reliability should be at least 0.70; the reliability obtained is therefore at an acceptable level.
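The item statistics above can be reproduced with a few lines of code. The sketch below computes item difficulty, an upper-lower-group discrimination index, and KR-20 on a hypothetical 0/1 response matrix (rows are students, columns are true-false items). The matrix, the top-versus-bottom-half split for discrimination, and the use of the population variance are illustrative assumptions, not the study's actual procedure.

```python
def item_difficulty(matrix):
    """Proportion of students answering each item correctly (1 = easy, 0 = hard)."""
    n, k = len(matrix), len(matrix[0])
    return [sum(row[j] for row in matrix) / n for j in range(k)]

def discrimination(matrix):
    """Upper-lower discrimination index: p(correct) in the top half minus the bottom half."""
    ranked = sorted(matrix, key=sum, reverse=True)
    h = len(ranked) // 2
    upper, lower = ranked[:h], ranked[-h:]
    k = len(matrix[0])
    return [sum(r[j] for r in upper) / h - sum(r[j] for r in lower) / h
            for j in range(k)]

def kr20(matrix):
    """Kuder-Richardson formula 20 for dichotomous (0/1) items."""
    n, k = len(matrix), len(matrix[0])
    p = item_difficulty(matrix)
    pq = sum(pi * (1 - pi) for pi in p)          # sum of item variances
    totals = [sum(row) for row in matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance of totals
    return (k / (k - 1)) * (1 - pq / var)
```

For example, for the matrix `[[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]` the difficulties are 0.75, 0.50, and 0.25 and KR-20 works out to 0.75, close to the 0.77 reliability reported for the real data.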

Data analysis

After the DBT activity, a discussion session was held with the students on the questions and feedback was given. The data obtained in this research and their analyses can be listed as follows:

  • Various sociodemographic data: these were obtained with a form given to the students just before the DBT activity. The form asked about “Gender”, “High school type”, “Reason for choosing medical school”, “Does he/she consider choosing pharmacology for postgraduate education?”, “Does he/she read pharmacology textbooks?”, “Is he/she able to read pharmacology textbooks in English?”, and “What is the importance of pharmacology in medical education?”. These data are summarized with descriptive statistics (frequencies and percentages).

  • DBT data: the results of the 16 DBTs were used in two ways. In each DBT, a student’s answer was scored “1” if the correct exit was reached and “0” for any other exit; each student’s correct answers were then summed to give a score out of 16 and entered into the statistical software. These data were used for the comparison and correlation analyses.

  • The scores of the students from pharmacology questions of the 2022–2023 third year first committee exam have been determined. The relationships of these scores with the DBT scores were examined. Moreover, committee exam pharmacology scores of the students who participated and did not participate in the DBT activity were compared.

It was determined that neither the DBT total score nor the committee exam pharmacology total score was normally distributed. For this reason, Spearman’s rank correlation coefficient was preferred for the relationship analysis and the Mann-Whitney U test for the comparison analysis.
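In practice these analyses are run in a statistics package (for example `scipy.stats.spearmanr` and `scipy.stats.mannwhitneyu` in Python). Purely to illustrate what Spearman's coefficient computes, here is a dependency-free sketch that ranks both variables (averaging tied ranks) and takes the Pearson correlation of the ranks; the score lists are hypothetical, not the study's data.

```python
def average_ranks(xs):
    """1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical DBT totals (out of 16) and committee exam pharmacology scores.
dbt = [10, 14, 9, 16, 12, 11]
exam = [55, 80, 50, 90, 70, 60]
rho = spearman_rho(dbt, exam)
```

Because this toy data is perfectly monotone, rho comes out as 1.0; the study's real value was 0.446, a moderate positive correlation.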

Results

DBT results as a diagnostic tool

One of the expectations of the DBT application is to identify the information that students most often misunderstand within the scope of the learning objectives. For this purpose, whether each student reached the correct exit in each DBT was examined. The DBTs with the most wrong exits and the common mistakes made by the students were examined in detail. The percentages of students reaching the correct exit for each DBT are given in Table 2.

Table 2 Percentage of students reaching correct and incorrect exits for each DBT

Among the DBTs, the correct exit was reached most often in those entitled sources and nomenclature of drugs, absorption, and penicillins-1. The DBTs in which roughly half of the students reached the correct exit and half the wrong exit were those entitled distribution, elimination, ligands and spare receptors, and penicillins-2. The DBTs with the most wrong exits were those entitled phase studies, metabolism, types of antagonism, dose-response relationship, affinity and intrinsic activity, G-protein coupled receptors, receptor types, penicillins-3, and cephalosporins. For the DBTs in which a large number of students reached the wrong exit, analyses were performed on the basis of the students’ wrong answers to each question. The results are given in Table 3.

Table 3 The most incorrectly answered questions in DBTs with the most mistakes done

When each question in the DBTs is examined separately, it is seen that most of the students could not correctly answer the questions on phase studies, drugs that cause cytochrome enzyme inhibition, elimination kinetics, the definition of chemical antagonism, graded and quantal dose-response curves, the definitions of intrinsic activity and inverse agonist, important characteristics of endogenous ligands, changes in the cell resulting from G-protein activation, examples of ionotropic receptors, the mechanism of action of beta-lactamase inhibitors, the excretion mechanism of penicillins, and the differences among cephalosporin generations. The questions cover basic pharmacology topics such as pharmacokinetics and pharmacodynamics, and antibiotics such as penicillins and cephalosporins.

Usefulness of DBT as a feedback tool

A total of 112 students participated in the DBT study. The relationship between the total scores obtained from the DBTs and the total scores they obtained from the pharmacology questions in the committee exam was examined. The results are given in Table 4.

Table 4 The relationship between DBT scores and committee exam pharmacology scores

As a result of the correlation analysis, the correlation between the DBT total score and the pharmacology total score in the committee exam was calculated as 0.446, which indicates a moderate correlation [29,30,31,32]. The calculated correlation is positive and significant. It can therefore be interpreted that as the DBT total score increases, the pharmacology total score in the committee exam also increases.

While 112 students participated in the DBT activity, 53 students did not volunteer to participate. The total scores on the pharmacology questions in the committee exam of the students who did and did not participate in the DBT activity were compared. The results are shown in Table 5.

Table 5 Comparison of committee exam pharmacology scores according to DBT activity participation

The comparisons showed that the average total score on the pharmacology questions in the committee exam was higher for the students who participated in the DBT activity than for those who did not. The significant difference corresponded to a medium effect size [33]. This may reflect the fact that the students who participated in the DBT activity are generally attentive to their lessons, but it can also be interpreted as a benefit of receiving feedback through the DBT activity.

Discussion

Formative assessment and evaluation methods, which keep the measurement and evaluation methods used in undergraduate medical education from being monotonous, provide feedback to students while their education continues and thus enable them to become aware of their deficiencies, are being added to medical school curricula day by day. Summative assessments are designed to ensure the self-regulation and accountability of universities, while formative assessments aim to guide the learning process by focusing on providing feedback to students. Formative assessments such as reflective portfolios, self- and peer assessment, the mini-clinical evaluation exercise, and objective structured clinical examination stations are used in medical education programs [34].

The DBT method, an alternative formative method, is an application consisting of consecutive true-false questions in which the student takes one of two paths according to the true or false choice at each question and can reach a total of 4, 8, 16, or 32 different exits depending on the number of tiers. The exit gate the student reaches after the last question shows which questions were answered incorrectly. The method is thus very practical for exposing a student’s knowledge gaps and allows individualized feedback to be given. In addition, when the data of all students are examined, it provides the instructor with data about the general knowledge deficiencies of the class by showing the questions answered incorrectly most frequently across the whole group. Its designation as diagnostic is based on the fact that the method quickly localizes errors along a broad, branched pathway. For this reason, the method not only measures and evaluates, but also identifies, on a student or class basis, the subjects that are difficult, not well understood, or unlearned. The data obtained will also allow the instructor to update the training materials used in the following years and make them more understandable.
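The mechanics described above can be modeled in a few lines. In the sketch below (hypothetical; the study's actual trees are in its Appendix 1), each tier is one true-false decision, so an n-tier DBT has 2**n exits. The exit reached uniquely identifies the student's answer pattern, comparing the answers with a key shows exactly which tiers were missed, and guessing every tier reaches the correct exit with probability 1/2**n.

```python
def dbt_exit(answers):
    """Map a sequence of True/False answers to an exit number (1..2**tiers)."""
    index = 0
    for a in answers:
        index = index * 2 + (1 if a else 0)  # each tier doubles the branches
    return index + 1

def missed_tiers(answers, key):
    """Tiers (1-based) where the student's answer differs from the key."""
    return [i + 1 for i, (a, k) in enumerate(zip(answers, key)) if a != k]

def guess_probability(tiers):
    """Chance of reaching the correct exit by guessing every tier."""
    return 0.5 ** tiers

# A 4-tier DBT, as used in this study, has 16 exits; hypothetical key and answers:
key = [True, False, True, True]
student = [True, True, True, True]
```

With this toy key, the student above lands on a wrong exit and `missed_tiers` pinpoints tier 2 as the knowledge gap, which is exactly the individualized feedback the method enables.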

Although studies comparing formative with summative assessment in medical school students show that formative applications improve the learning process [35,36,37], there are no data on the use of the DBT method in medical school education programs. In our study, students who participated in the DBT activity scored higher on the pharmacology questions of the committee exam than those who did not participate.

The available DBT data concern its use in the pre-university education period. A web-based application inspired by the diagnostic branched tree significantly increased the science and technology course scores of sixth-, seventh-, and eighth-grade students compared with traditional education [10]. In a study conducted with elementary school eighth-grade students, it was concluded that the DBT technique, used together with conceptual-change texts, is effective in correcting misconceptions [9]. A DBT developed for an eleventh-grade high school chemistry course was found to be reliable in terms of difficulty and discrimination [38].

The DBT method has some difficulties and disadvantages. It is believed that the reliability of scores obtained with this method is not high because of the large role of chance in answering correctly [39]: the chance of answering each true-false question in the flow is 50%. Although this is correct when evaluated step by step, if the tree is evaluated as a whole, that is, if only reaching the correct exit counts as success, the probability of a correct guess is lower than in other techniques [8]. For example, while this probability is 1/5 for a five-choice multiple-choice question, it is 1/8 for a DBT with 8 exits and 1/16 for a DBT with 16 exits. On the other hand, DBT may be inadequate for measuring higher-level learning skills because it consists of true-false questions [8]. In addition, it is thought to be more appropriate for small groups, since giving student-based feedback to a large group is more difficult. Moreover, it has been noted as a difficulty that students cannot see all of the questions in this method [5]. For example, DBT studies conducted among high school students [38, 40, 41] and education faculty university students [42] use different statements in each cell; when this type of DBT is used, it is not possible for every student to see all of the questions, since the path taken differs according to the answers given. Therefore, in our study we used the same questions in the same column so that each question could be seen by all students. In this way, we aimed to reduce deviations due to answering different questions, and our other purpose was to see where individual students and the class as a whole had difficulties and to give appropriate feedback. We first determined the main topics in accordance with our medical school curriculum, and then we began to prepare the DBT questions.
Since there is no example in the literature for medical school, we thought it more appropriate to make some changes to the format used mostly in the pre-university education period. For example, although 3-tier DBTs were used in studies with 10th-grade [40, 41] and 11th-grade high school students [38], we used 4-tier DBTs in our study. While a general statement is written in the 1st tier of the DBT, that is, the student’s general level of knowledge is questioned, more specific statements about the same topic and statements about subtopics are added in the subsequent tiers. Thus, the student’s knowledge of the subtopics of the relevant subject is also evaluated. In fact, the purpose of using a 4-tier rather than a 3-tier structure was to increase the number and variety of subtopics questioned.

DBT should be done in the classroom in order to detect students’ misconceptions and to show whether the relationships between concepts are established correctly [43]. We conducted our study evaluation face-to-face in a classroom setting.

It has been shown that the DBT method is not preferred by pre-university social science teachers; the stated reason is insufficient competence in the use of this method [20].

In most universities, pharmacology courses in undergraduate medical programs are held in the third year in the form of classical classroom education, and assessment and evaluation take the form of committee exams consisting of multiple-choice questions. Factors such as the quality and difficulty of the multiple-choice questions push students to think and work along this axis: not to learn the subjects and increase their knowledge, but to pass the class. For students, getting a passing grade becomes the main goal, and to achieve it they try to get by studying only from the lecture notes, instead of obtaining long-term gains by studying comprehensively from textbooks. Approximately two thirds (64.3%) of the students who participated in our study stated that they did not read pharmacology textbooks. If the assessment questions are not of high quality, traditional exams do not go beyond distinguishing those who memorize well from those who do not.

Due to the reasons mentioned, innovative practices aimed at improving and enriching the measurement and evaluation method in the education system will be valuable. Formative assessments in the form of mid-term exams while training continues are a good example of these innovative practices.

Since the questions prepared in our study were applied at the beginning of the academic year in accordance with the curriculum, the topics mainly cover pharmacokinetics and pharmacodynamics, the basic topics of pharmacology, in which concepts and definitions are denser. When we examined the questions answered incorrectly most frequently, we found that more mistakes were made on questions testing new concepts, definitions, and distinguishing points. Mistakes of this kind are expected in a course in which students have just started to learn the terminology, and students who do not repeat and reinforce their knowledge are thought to make more of them. Furthermore, the majority of the students study only from the instructor’s lecture notes and do not read textbooks or other course materials at all. Students had difficulty with questions that were not explicitly stated in the lecture notes but could be solved by combining information from different parts of the course (for example, the 3rd question in the 1st tree). Further studies are required for specific assessments of systems and diseases.

The DBT application does more than measure and evaluate: it also identifies, at the student or class level, which interrelated topics are difficult, poorly understood, or unlearned. Thanks to the feedback given after the activity, students can identify their shortcomings, recognize their current level of knowledge, and pinpoint the subtopics that need improvement. With this method, they can study in a more focused way and sit the main exam better prepared.
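The branching mechanism described above can be illustrated as a binary tree of true/false statements: each response routes the student to the next statement, and the path taken exposes exactly which items were missed. The following is a minimal sketch in Python; the `Node` structure, the `trace` helper, and the example statements are our own illustrative assumptions, not the instrument used in the study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    """One true/false statement in a diagnostic branched tree."""
    statement: str                      # statement shown to the student
    answer: bool                        # the correct response
    if_true: Optional["Node"] = None    # next node if the student answers "true"
    if_false: Optional["Node"] = None   # next node if the student answers "false"

def trace(root: Optional[Node], responses: List[bool]) -> List[str]:
    """Follow the student's responses down the tree and collect the
    statements answered incorrectly along the path actually taken."""
    wrong = []
    node = root
    for r in responses:
        if node is None:
            break
        if r != node.answer:
            wrong.append(node.statement)
        node = node.if_true if r else node.if_false
    return wrong

# A small two-level tree with illustrative pharmacology statements
# (hypothetical items, not taken from the study's DBTs).
root = Node(
    "Full agonists have high intrinsic activity", True,
    if_true=Node("Competitive antagonists have intrinsic activity", False),
    if_false=Node("Inverse agonists reduce constitutive receptor activity", True),
)
```

Here `trace(root, [True, False])` returns an empty list (a fully correct path), while `trace(root, [False, True])` reports the first statement as missed. Each exit of the tree thus corresponds to a unique pattern of errors, which is what makes the instrument diagnostic rather than merely summative.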

Using only traditional tests is insufficient for assessment and evaluation [5]. Testing different assessment methods in undergraduate pharmacology education, and adding them to the curriculum once positive results are observed, will help improve the quality of pharmacology education. In this context, new studies are needed in which different evaluation methods are tested against control groups. Clearly, raising the quality of the pharmacology course, which holds an important place in undergraduate education, in terms of both educational materials and assessment methods will help educate better physicians.

Data Availability

The datasets used and/or analyzed during the study are available from the corresponding author upon reasonable request.

References

  1. Holzinger A, Lettner S, Steiner-Hofbauer V, Capan Melser M. How to assess? Perceptions and preferences of undergraduate medical students concerning traditional assessment methods. BMC Med Educ. 2020 Sep;17(1):312.

  2. Zulkifli AF. Student-centered approach and alternative assessments to improve students’ learning domains during health education sessions. Biomed Hum Kinet 2019 Jan 1;11(1):80–6.

  3. Papapanou M, Routsi E, Tsamakis K, Fotis L, Marinos G, Lidoriki I et al. Medical education challenges and innovations during COVID-19 pandemic. Postgrad Med J 2022 May 1;98:321–7.

  4. Akkanat C, Karamustafaoglu S, Gökdere M. The comparison of 7th grade students’ scores achieved through different assessment tools in “The Granular Structure of Matter” unit. The International Journal of Educational Researchers. 2015;6(2):15–31. Available from: http://www.eab.org.tr; http://ijer.eab.org.tr.

  5. Kocaarslan M. Diagnostic branched tree technique and its use in the unit called change and diagnosis of matter in the program of science and technology at fifth grade. Mustafa Kemal University Journal of Social Sciences Institute. 2012;9:269–79.


  6. Nichols PD. A Framework for Developing Cognitively Diagnostic Assessments. Rev Educ Res [Internet]. 1994;64(4):575–603. Available from: http://rer.aera.net.

  7. Demir M. Alternative Assessment Methods in Primary Education: Review and Future Directions. In: Current Studies in Educational Disciplines [Internet]. 2021. p. 227–88. Available from: https://www.researchgate.net/publication/350633406.

  8. Bahar M, Nartgün Z, Durmuş S, Bıçak B. Geleneksel-Tamamlayıcı Ölçme ve Değerlendirme Teknikleri Öğretmen El Kitabı. 3.Baskı. Ankara: PagemA Yayıncılık.; 2009.


  9. Şahin Ç, Çepni S. Developing of the Concept Cartoon, Animation and Diagnostic Branched Tree Supported Conceptual Change Text: “Gas Pressure.” Eurasian J Phys Chem Educ [Internet]. 2011;25–33. Available from: http://www.eurasianjournals.com/index.php/ejpce.

  10. Taş E, Çetinkaya M, Karakaya Ç, Apaydin Z. An investigation on web designed Alternative Measurement and Assessment Approach. Educ Sci. 2013;38(167):196–210.


  11. Karamustafaoğlu S, Çağlak A, Meşeci B. Alternatif Ölçme Değerlendirme Araçlarına İlişkin Sınıf Öğretmenlerinin Öz Yeterlilikleri. Amasya Üniversitesi Eğitim Fakültesi Dergisi [Internet]. 2012;1(2):167–79. Available from: http://dergi.amasya.edu.tr.

  12. Özyurt M, Duran U. Investigation of Elementary School Teachers’ Self-Efficacy Perceptions and Frequencies of Usages of Alternative Assessment Methods. In: Multidisciplinary Academic Conference. 2017. p. 538–47.

  13. Yalçınkaya E. Sosyal Bilgiler Öğretmenlerinin Ölçme ve Değerlendirme Tekniklerini Kullanma Düzeyleri. Educ Sci (Basel). 2010;5(4):1558–71.


  14. Coruhlu TS, Nas SE, Cepni S. Problems facing science and technology teachers using alternative assesment technics: Trabzon Sample. Yuzuncu Yil Egitim Fakultesi Dergisi [Internet]. 2009;6(1):122–41. Available from: http://efdergi.yyu.edu.tr.

  15. Kırıkkaya Buluş E, Vurkaya G. Alternatif Değerlendirme Etkinliklerinin Fen ve Teknoloji Dersinde Kullanılmasının Öğrencilerin Akademik Başarıları ve Tutumlarına Etkisi. Kuram ve Uygulamada Eğitim Bilimleri. 2011;11(2):985–1004.


  16. Miller MD, Linn RL, Gronlund NE. Measurement and assessment in teaching. 10th ed. New Jersey: Pearson Education, Inc.; 2009.


  17. Russell MK, Airasian PW. Classroom Assessment. New York: McGraw-Hill Companies, Inc.; 2012.


  18. Schuwirth LW, van der Vleuten CP. How to design a useful test: the principles of assessment. In: Swanwick T, editor. Understanding medical education: evidence, theory and practice. New Jersey: John Wiley & Sons, Inc.; 2010. pp. 195–207.


  19. Oosterhof A. Developing and using Classroom assessments. Third. New Jersey: Pearson Education, Inc.; 2003.


  20. Caliskan H, Kasikci Y. The application of traditional and alternative assessment and evaluation tools by teachers in social studies. In: Procedia - Social and Behavioral Sciences. 2010. p. 4152–6.

  21. Porta M. A Dictionary of Epidemiology. Sixth. Oxford: Oxford University Press; 2014.


  22. Fraenkel JR, Wallen NE, Hyun HH. How to design and evaluate research in education. 8th ed. McGraw-Hill Companies Inc.; 2012.

  23. Huck SW, Beavers AS, Esquivel S. Sample. In: Frey BB, editor. The SAGE Encyclopedia of Research Design. 2nd ed. Thousand Oaks, California: SAGE Publications, Inc.; 2022. pp. 1448–52.


  24. Krippendorff K. Content analysis: an introduction to its methodology. 2nd ed. USA: SAGE Publications, Inc.; 2004.


  25. Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane’s framework. Med Educ. 2015;49(6):560–75.


  26. Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar AR. Design and implementation content validity study: development of an instrument for measuring patient-centered communication. J Caring Sci. 2015;4(2):165–78.


  27. Nitko AJ, Brookhart SM. Educational assessment of students. Sixth Edition. Harlow: Pearson Education Limited; 2014.


  28. Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. McGraw-Hill; 1994.

  29. Green SB, Salkind NJ. Using SPSS for Windows and Macintosh: analyzing and understanding the data. New Jersey: Pearson Education, Inc.; 2005.


  30. Pallant J. SPSS Survival Manual. A Step by Step Guide To Data Analysis using IBM SPSS. 6th ed. New York: McGraw-Hill Education; 2016.


  31. Rumsey DJ. Statistics all-in-one for dummies. New Jersey: John Wiley & Sons, Inc.; 2022.


  32. Schober P, Schwarte LA. Correlation coefficients: appropriate use and interpretation. Anesth Analg 2018 May 1;126(5):1763–8.

  33. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Lawrence Erlbaum Associates Publishers; 1988.

  34. Ferris H, O’Flynn D. Assessment in medical education: what are we trying to achieve? International Journal of Higher Education. 2015;4(2):139–44.

  35. Krasne S, Wimmers PF, Relan A, Drake TA. Differential effects of two types of formative assessment in predicting performance of first-year medical students. Adv Health Sci Educ. 2006 May;11(2):155–71.

  36. Kesavan KP, Palappallil DS. Effectiveness of formative assessment in motivating and improving the outcome of summative assessment in pharmacology for medical undergraduates. Journal of Clinical and Diagnostic Research. 2018 May 1;12(5):FC08-FC11.

  37. Arja SB, Acharya Y, Alezaireg S, Ilavarasan V, Ala S, Arja SB. Implementation of formative assessment and its effectiveness in undergraduate medical education: an experience at a Caribbean Medical School. MedEdPublish. 2018 Jun 13;7:131.

  38. Sekerci AR. Development of diagnostic branched tree test for high school chemistry concepts. Oxidation Communications [Internet]. 2015;38:1060–7. Available from: https://www.researchgate.net/publication/292391570.

  39. Celen U. Psychometric Properties of Diagnostic branched Tree. Egitim ve Bilim. 2014 Aug 6;39(174):201–13.

  40. Prodjosantoso A, Hertina AM, Irwanto I. The Misconception diagnosis on ionic and covalent bonds concepts with three Tier Diagnostic Test. Int J Instruction. 2019;12(1):1477–88.


  41. Kirbulut ZD, Geban O. Using Three-Tier Diagnostic Test to Assess Students’ Misconceptions of States of Matter. Eurasia J Math Sci Technol Educ. 2014;10(5):509–21.


  42. Geçgel G, Şekerci AR. Identifying alternative concepts in some Chemistry Topics using the diagnostic branched tree technique. Mersin Univ J Fac Educ. 2018;14(1):1–18.


  43. Şahin Ç, Kaya GA. Review of the research on alternative assessment evaluation: a content analysis. Nevşehir Hacı Bektaş Veli Üniversitesi SBE Dergisi. 2020;10(2):798–812.


Acknowledgements

Not Applicable.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

Study conception and design, analysis, and interpretation: ET, ÇT.

Corresponding author

Correspondence to Ender Tekeş.

Ethics declarations

Ethics approval and consent to participate

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was approved by Çanakkale Onsekiz Mart University Scientific Research Ethics Committee (No: 2022-YONP-0665 [date:22/09/2022, decision no:16/10]). Informed consent was obtained from all individual participants included in the study.

Consent for publication

Not Applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Appendix 1: Feedback. Appendix 2: Item Analysis.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Tekeş, E., Toraman, Ç. Diagnostic branched tree as an assessment and feedback tool in undergraduate pharmacology education. BMC Med Educ 23, 374 (2023). https://doi.org/10.1186/s12909-023-04342-w


Keywords