Differences in medical student performance on examinations: exploring score variance between Kolb's Learning Style Inventory classifications

Background: Kolb's Cycle of Learning Theory acts as a foundational framework for the evolution of knowledge gained by learners throughout their education. Through Kolb's cycle of experiential learning, one's preferred way of learning could impact academic achievement in the pre-clinical years of medical education.
Methods: The medical student classes of 2020 and 2021 at a public university in the southeastern U.S. were invited to complete Kolb's Learning Style Inventory (LSI). For participants completing the LSI, examination results for their pre-clinical blocks were obtained and matched to the LSI results. Examination scores (locally-developed examinations and customized National Board of Medical Examiners (NBME) final examinations) were compared by LSI classification for each examination using the Kruskal-Wallis test.
Results: Of 360 possible participants, 314 (87.2%) completed the Learning Style Inventory. Convergers and Assimilators made up 84.1% of the sample [Convergers (n = 177, 56.4%), Assimilators (n = 87, 27.7%)]; Accommodators (n = 25, 7.9%) and Divergers (n = 25, 7.9%) made up the remainder. Accommodators' scores were significantly lower on locally-developed examinations in Principles of Medicine, Hematology, and Gastrointestinal System. The only NBME examination that demonstrated a significant difference across learning styles was from the Cardiovascular block.
Conclusions: Our study indicated that the customized NBME examinations showed less variance in performance across learning styles than the locally-developed examinations. The lack of variance across learning styles for all but one NBME final examination suggests a more equitable assessment strategy.


Background
Medical schools are doing more to enhance the diversity of their classes [1]. As a result, matriculating students hail from a variety of backgrounds, differing in degree type, education level, age, race, and ethnicity. Matriculating classes are a kaleidoscope of learners who bring with them past experiences with various teaching modalities, interests, and positive or negative reinforcement throughout life. For these reasons, Kolb's learning styles [2] could be a beneficial way to characterize the diversity of matriculating students without examining demographic details (i.e., sex, socioeconomic status, race, age, etc.). Kolb postulated that knowledge is transformed through experience: learners first perceive material and then process it [2].
In the 1980s, so-called learning styles were emphasized as a means of matching students' preferred method of learning to specific teaching modalities [3]. Since then, there is evidence to discount this notion [4][5][6]. However, upon closer evaluation of Kolb's Experiential Learning Theory and Learning Style Inventory (LSI), the cycle of learning Kolb developed provides a framework for understanding student learning orientations [7]. This framework could be used to guide approaches to learning medical school content in a way that serves any type of learner.
In Kolb's model, learning orientations can be described as tensions between active experimentation (AE) vs. reflective observation (RO) and abstract conceptualization (AC) vs. concrete experience (CE). Active experimentation describes a preference for action, contrasting with reflective observation, which indicates a propensity to consider possibilities before committing to an action. A preference for abstract conceptualization favors developing theories and concepts to explain events, whereas concrete experience emphasizes learning directly from experience [8][9][10]. From these learning orientations, Kolb established four learning styles: Divergers (concrete experience with reflection), Assimilators (abstract conceptualization with reflection), Convergers (abstract conceptualization with experimentation), and Accommodators (concrete experience with experimentation) [6,11].
The LSI identifies where one enters the cycle of learning. The entry point, shaped by prior educational experiences and personal preferences, indicates whether a learner emphasizes action over reflection and abstract thinking over concrete experience. A learning situation may therefore complement a preferred learning style or present a challenge to learning [12].
Regardless of background experiences or learning orientations, medical students must pass multiple-choice examinations during the pre-clinical curriculum.
Although our medical school admits a diverse student body, admissions decisions are not based on learning styles. Given our students' diverse backgrounds and degrees, students with a particular learning style may face challenges adapting to learning medical school content. For this study, we investigated whether examination performance differed across students' learning styles. Previous studies have shown that learners who prefer abstract learning (Assimilators and Convergers) perform better on standardized examinations [6,9,13]. We hypothesized these learners would outperform their peers on examinations in our study as well.

Kolb's Learning Style Inventory
At a large public school of medicine in the southeastern U.S., all matriculating medical students in the classes of 2020 and 2021 were invited by email to complete the Kolb Learning Style Inventory over a 2-week period at the beginning of the school year. Completing the survey linked in the email constituted consent to participate in the study. The survey was completed prior to any examinations.
The newest iteration of the Kolb LSI, version 3.1, was used. It is a 12-item, forced-choice questionnaire [7]. For each item, participants ranked four statements characterizing their preferences for active experimentation versus reflective observation and concrete experience versus abstract conceptualization. After the forced-choice items are ranked, columns are summed to produce a score for each of concrete experience, reflective observation, abstract conceptualization, and active experimentation. The original model was built on the idea that learning preferences can be described using two planar dimensions: active experimentation (AE) vs. reflective observation (RO) on the x-axis, and abstract conceptualization (AC) vs. concrete experience (CE) on the y-axis. From these two dimensions, Kolb established four learning styles, or preferences. An algorithm plots each participant's results within the four learning preferences in an RO-AE (x-axis) vs. CE-AC (y-axis) format (Fig. 1). For example, a student's scores might map into the Converger quadrant.
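The quadrant mapping can be sketched in code. This is an illustrative simplification, not the published scoring key: the cutpoints between quadrants are set at zero here, whereas the LSI 3.1 derives its cutoffs from a norming sample.

```python
def classify_kolb(ce, ro, ac, ae):
    """Map the four LSI column sums onto Kolb's 2x2 grid.

    ce/ro/ac/ae are the summed scores for concrete experience,
    reflective observation, abstract conceptualization, and active
    experimentation. Zero cutpoints are an assumption for this sketch.
    """
    abstract = ac - ce  # perceiving axis: abstract vs. concrete
    active = ae - ro    # processing axis: action vs. reflection
    if abstract >= 0 and active >= 0:
        return "Converger"      # abstract conceptualization + experimentation
    if abstract >= 0:
        return "Assimilator"    # abstract conceptualization + reflection
    if active >= 0:
        return "Accommodator"   # concrete experience + experimentation
    return "Diverger"           # concrete experience + reflection
```

A student scoring higher on AC than CE and higher on AE than RO would fall in the Converger quadrant, matching the example above.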

Curriculum and assessments
Students undergo 4-8 week organ-system blocks, each containing 1-3 examinations. Although blocks differ in the number of other assignments, a significant portion of each student's grade is determined by these examinations, and an overall grade of 70% or higher is considered passing. Organ-system course blocks included Principles of Medicine, Immunology, Hematology, Cardiovascular System, Respiratory System, Urinary System, Gastrointestinal System, Endocrinology, Musculoskeletal System, and Reproductive Medicine. The final examination for most blocks is a customized National Board of Medical Examiners (NBME) examination, with the exception of the Principles of Medicine course. All other examinations administered to the medical students were locally-developed examinations. Examination results obtained for this study were all part of the medical students' normal course of study.
The locally-developed examinations were created by a cohort of faculty, including but not limited to course directors, guest lecturers, and directors for the pre-clinical phase of the curriculum. This cohort wrote their own multiple-choice questions and answers based on course learning objectives. The NBME examinations were assembled by a team of block directors as well as directors for the pre-clinical phase of the curriculum. Questions used for the final exam were not comprehensive but assessed new material presented after the most recent examination. These questions were not written by the cohort of faculty; instead, questions were selected from a larger NBME question bank to match the course learning objectives. It is important to note that there is no difference in score expectation among the examinations, so the passing score of 70% is consistent across all exams throughout the pre-clinical phase.
The Office of Medical Education matched examination percentages with LSI results. Identification numbers were replaced with random identifiers, and the list was then randomized for anonymity. Data were analyzed using IBM SPSS v. 25 (Chicago, IL). This study was reviewed by the University of North Carolina Institutional Review Board, which determined the study was exempt.
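The de-identification step could be sketched as follows. The record structure and the use of Python's `random` module are assumptions for illustration; the study's actual procedure was performed by the Office of Medical Education.

```python
import random

def anonymize(records, seed=None):
    """Replace each student ID with a random identifier, then shuffle rows.

    records: list of dicts, each containing a 'student_id' key.
    Returns a new list; the input records are left unmodified.
    """
    rng = random.Random(seed)
    out = []
    for rec in records:
        rec = dict(rec)  # copy so the original record is untouched
        rec["student_id"] = rng.getrandbits(64)  # random identifier
        out.append(rec)
    rng.shuffle(out)  # randomize row order for anonymity
    return out
```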

Statistical analysis
LSI categories were compared for each examination. The data violated assumptions for using analysis of variance. Therefore, we used related-samples Wilcoxon signed-rank tests and Friedman's two-way analysis of variance by ranks to compare locally-developed examinations. We also used Kruskal-Wallis tests to compare medical student LSI categories by examination. Post hoc pairwise comparisons were analyzed to determine specific differences between the learning styles for each examination.
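The omnibus and post hoc comparisons described above can be sketched as follows. The study used SPSS; this example uses SciPy instead, the score vectors are fabricated placeholders, and the Bonferroni correction stands in for whatever pairwise adjustment SPSS applied.

```python
from itertools import combinations
from scipy import stats

# Placeholder examination percentages for one examination, grouped by
# LSI category (fabricated for illustration only).
scores_by_style = {
    "Converger":    [88, 91, 84, 90, 86],
    "Assimilator":  [85, 89, 87, 83, 90],
    "Accommodator": [78, 74, 80, 76, 79],
    "Diverger":     [82, 85, 81, 84, 80],
}

# Omnibus Kruskal-Wallis test across the four LSI categories.
h_stat, p_value = stats.kruskal(*scores_by_style.values())
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons (Mann-Whitney U) with a Bonferroni
# correction over the six possible pairs of learning styles.
pairs = list(combinations(scores_by_style, 2))
for a, b in pairs:
    _, p = stats.mannwhitneyu(scores_by_style[a], scores_by_style[b])
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.4f}")
```

In practice this comparison would be repeated once per examination, with the LSI category acting as the grouping variable each time.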

Discussion
Based on our study findings, locally-developed examinations may be poor indicators of proficiency in any subject. This conclusion was drawn from the significant score differences in which Accommodators had worse outcomes while Convergers and Assimilators consistently scored higher than their peers. Most notably, when compared to the NBME examinations, this statistical significance and variance in scores were minimized across learning styles. The only NBME examination for which this was not the case was the Cardiovascular System block, and even there the pattern of score differences does not resemble the typical dynamic mentioned above and in previous research [6].
Variations in performance on locally-developed examinations may be explained in part by learning styles. According to Kolb, Accommodators have a preference for concrete examples [7]. As noted by An and Carr [14], needing concrete examples may be associated with novice learners, who most likely have not learned how to abstract general rules during learning. Consequently, test-expectancy effects may explain why Accommodators can perform better on NBME examinations: many resources are available that give them concrete experience with multiple-choice examinations of that nature [15]. They therefore have a better sense of what to expect on an NBME examination than on a locally-developed examination.
Additionally, our finding that most NBME examination scores were lower than locally-developed examination scores contradicts findings from other studies [9,16]. The lack of statistically significant pairwise comparisons implies that customized NBME examinations may better reflect student performance than locally-developed examinations. Given the diversity of students accepted to medical school, who may be at varying points in Kolb's cycle of learning, administering NBME examinations for all summative assessments, whether midterm or final, may be warranted for fairness.
We have hypothesized that the difference in the Cardiovascular System NBME examination may be the result of how that examination was customized. The faculty for this block evaluate the item analysis from the previous year's examination to remove items that were easy (i.e., with high percent-correct values) or flawed (i.e., with low discrimination indices). We theorize that this distinct type of customization, which was not used for any of the other NBME exams, may have resulted in the examination mirroring locally-developed examinations. As we have shown, the locally-developed examinations favor Convergers and Assimilators.
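That screening step can be sketched as a simple filter. The thresholds here (90% correct, 0.15 discrimination) are assumptions for the sketch; the actual cutoffs used by the block faculty are not reported.

```python
def filter_items(items, max_p=0.90, min_discrimination=0.15):
    """Keep items that are neither too easy nor poorly discriminating.

    Each item is a dict with:
      p_value        -- proportion of examinees answering correctly
      discrimination -- e.g., a point-biserial item-total correlation
    """
    return [
        item for item in items
        if item["p_value"] <= max_p
        and item["discrimination"] >= min_discrimination
    ]

bank = [
    {"id": 1, "p_value": 0.97, "discrimination": 0.30},  # too easy -> dropped
    {"id": 2, "p_value": 0.72, "discrimination": 0.05},  # flawed   -> dropped
    {"id": 3, "p_value": 0.65, "discrimination": 0.40},  # retained
]
kept = filter_items(bank)
```

Removing high percent-correct items in this way raises average difficulty, which is one mechanism by which the customized examination could come to resemble the locally-developed examinations.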
Locally-developed examinations are created by the block director(s) themselves, and research supports that the development of the curriculum for any given block can be skewed toward a specific preference or group of preferences [17]. Prior research indicated medical professionals are predominantly Convergers and Assimilators, which may influence how they construct examination items [17]. For concrete thinkers, variations in how individual faculty write examinations may pose challenges in developing encoding strategies to do well on these examinations [15]. Given sufficient training, locally-developed examinations may reflect the difficulty found in NBME-style questions [18,19]. Training faculty not only to write but also to peer-evaluate questions could make assessments more equitable [20].
It has also been shown that learning style classifications evolve over time [8]. In this study, we administered the LSI at only one point in time. Future studies could administer the LSI at multiple points to determine whether students shift toward being Convergers or Assimilators as a result of the medical school environment [8]. Such a study could then be continued into the clinical training environment, since learning in that setting differs greatly from traditional classroom learning. Whether the types of assessments used in clinical settings relate to LSI results has received little study to date.
Customized NBME examinations have been available for many years, but their time commitment and cost have deterred full adoption in the pre-clinical curriculum. Although we have opted to use them throughout our pre-clinical curriculum, not every medical school will have the resources to do so.
This study is limited in that it was conducted at a single institution, resulting in two LSI groups with no more than 25 students each.
However, our sample is comparable to other studies with similar breakdowns of Kolb's classifications among medical professionals [17]. Further study is warranted to determine whether the lower scores on the nationally standardized examinations were unique to these classes. Item analyses of the customized examinations should also be explored further to determine whether they reflect competence or merely discriminate among levels of performance.

Conclusions
By identifying the learning styles of matriculating medical students, our study indicates that use of a nationally-standardized examination may minimize variance in scores across learning styles. Notably, the variance across learning styles on the NBME examinations was not statistically significant for any examination except the Cardiovascular NBME. The remaining NBMEs therefore show no discrimination between learners who differ in how they perceive versus how they process material. One way to help concrete thinkers may be to provide practice questions from locally-developed examinations to help them better prepare.