Self and peer feedback engagement and receptivity among medical students with varied academic performance in the clinical skills laboratory
BMC Medical Education volume 24, Article number: 1065 (2024)
Abstract
Background
Medical students benefit from direct observation of clinical performance and timely feedback from multiple sources. While self and peer feedback have been the focus of numerous research studies, how they influence feedback engagement and receptivity in medical students of varying achievement levels in the clinical skills laboratory setting remains relatively unexplored.
Methods
We conducted an exploratory qualitative study to investigate students’ engagement with and receptivity to self and peer feedback across academic performance levels at a medical teaching institution. Data from five focus groups with third-year medical students (n = 25) were thematically analysed.
Results
The ways in which low- and high-performing students engaged with self-assessment and peer feedback were divided into three categories: affective (or interpersonal), orientational, and transformational. Lower achievers believed they lacked the necessary skills to effectively engage in self and peer feedback interventions, but they agreed with higher achievers who recognized that peer feedback combined with prior knowledge of learning objectives allowed them to take ownership of monitoring their own development over time. Learners' emotional maturity in response to feedback ratings and feedback from activities testing clinical cognition had an impact on self-regulated learning.
Conclusions
Creating a trusting environment is critical for improving the acceptability of peer feedback, and providing multiple opportunities for self-assessment helps learners refine their judgment. Giving learners the ability to actively seek and engage with feedback encourages participation in the feedback cycle, shifting the focus to self-regulation rather than reliance on feedback providers. Feedback and action plan development can be improved by teaching students how to understand criticism, manage emotions constructively, and practice developing evaluative judgment and self-regulation skills. Based on the study findings, an integrated three-stage training model is recommended for effective self- and peer feedback practice in undergraduate medical education.
Introduction
In medical education, fostering feedback engagement and receptivity is not just beneficial but imperative, as it directly impacts the quality of patient care and the overall effectiveness of future physicians. Such engagement is essential to the educational development of medical students, particularly in clinical skills laboratories where hands-on practice and performance assessment are critical. Feedback literacy involves effectively seeking, understanding, engaging with, and using feedback for improvement [1]. This skill is crucial in medical education, where continuous self-assessment and peer assessment are necessary for professional growth and competence. Thus, feedback is a cornerstone of medical students’ learning and development, providing critical information that helps them improve their clinical skills and ultimately become effective physicians [2].
Kluger and DeNisi’s Feedback Intervention Theory (FIT) describes feedback as information about the difference between observed and standard performance, directing learners’ attention to task-motivation and task-learning processes [3]. This often leads to increased self-effort, including self-assessment, self-reflection, and self-regulation, enhancing feedback engagement and performance. Self-regulated learning is a cyclical process where students plan for a task, set learning objectives, monitor their performance, and reflect on outcomes [4]. This continuous cycle is crucial for fostering improvement. However, a significant challenge in feedback design lies in understanding how learners interpret and utilize feedback. Recognizing learners’ agency through collaborative knowledge is essential for their ability to make sense of feedback, use it effectively for improvement, and develop feedback literacy [2].
Feedback literacy demands that learners actively engage in planning and using feedback to enhance their performance. It involves practical strategies to improve feedback quality and encompasses learner engagement and satisfaction [5]. Developing feedback literacy necessitates a shift from reliance on educators to active participation by learners in providing, understanding, and integrating feedback from themselves and their peers. This active involvement is not just beneficial but imperative for cultivating a deeper, more autonomous learning process that ultimately leads to better educational outcomes and professional readiness [6].
Peer feedback, especially in formative settings, helps students understand the feedback they provide and receive. In medical education, peer feedback entails offering constructive comments on observed competencies such as clinical skills, knowledge, and professionalism, aiming for mutual learning [7] and improvement. Engaging in self-monitoring, self-assessment, peer feedback, and self-regulation is critical for medical practitioners to identify their strengths, weaknesses, and learning needs accurately [8]. Analysing their own and others’ work sharpens students’ evaluation skills, deepens their understanding of performance criteria, and enhances feedback literacy. This process is inextricably linked to the reflective practice educational paradigm, which is vital for critical thinking, problem-solving, and self-regulated learning skills [9]. Hence, integrating peer feedback is not merely an educational tool but a fundamental component of producing competent, reflective, and autonomous medical professionals.
A systematic review has demonstrated that combining self-assessment with external feedback significantly optimizes feedback utilization and clinical performance [10]. Despite implementing strategies to engage students through reflection, seeking feedback, and maintaining attentiveness, engagement levels remain disappointingly low [11]. Moreover, self and peer assessment systems in medical education are underdeveloped, and students frequently show reluctance to participate in peer assessments [12]. Newly qualified physicians also report feeling unprepared to provide effective feedback due to the limited use of self- and peer feedback during their training [13].
The underrepresentation of medical students' experiences with self- and peer-feedback interventions across different academic levels creates a significant gap in understanding their perceptions and engagement with feedback. Recent studies have highlighted the necessity for additional qualitative research to examine how academic maturity influences feedback receptiveness and acceptance [14]. Therefore, exploring the factors influencing medical students’ perceptions and engagement with peer feedback is imperative. Addressing these issues is critical to developing more effective feedback systems that not only enhance learning outcomes but also better prepare future physicians for the practical realities of their profession.
This study aims to investigate the engagement and receptivity of self and peer feedback among medical students with diverse academic standings, exploring its impact on their self-evaluation and peer critiques. By examining these dynamics, the research seeks to develop a framework to enhance the effectiveness of peer feedback during the pre-clinical phase of undergraduate medical education. This framework will provide critical insights into optimizing feedback processes to support learning and foster professional development, ultimately better preparing medical students for their future roles as competent and reflective practitioners.
This research will be grounded in several established educational theories that underscore the importance of feedback in learning processes and the development of feedback engagement and receptivity among medical students. Kolb’s Experiential Learning Theory (ELT) suggests that students learn clinical skills effectively by practicing, analysing performance, understanding feedback, and applying it for improvement through a cycle of concrete experience, reflective observation, abstract conceptualization, and active experimentation [15]. Zimmerman’s Self-Regulated Learning (SRL) Theory describes a cycle of planning, monitoring, and evaluating one’s learning, with feedback playing a crucial role in helping students set goals, monitor progress, and reflect on outcomes to adjust strategies and improve performance [4]. Engaging in self-assessment and peer feedback helps medical students develop the ability to regulate their learning, adjust strategies, and enhance performance [4]. FIT states that feedback interventions compare observed performance with a standard, directing learners’ attention to task-motivation and task-learning processes, prompting self-reflection and self-regulation [3]. This theory offers insights into how feedback can motivate medical students and direct their learning efforts. Additionally, Sociocultural Theory emphasizes the role of social interaction and collaboration in learning [16]. Peer feedback promotes knowledge exchange and mutual learning, aligning with this theory and underscoring the significance of peer feedback and collaborative learning in developing professional competencies.
Hence, by drawing on these theoretical frameworks, we integrate them into the study's methodology and proposed framework, ensuring that our approach is both conceptually grounded and practically focused on enhancing feedback engagement and literacy among medical students.
Methods
Context and setting
The study was conducted at a large medical teaching institution whose clinical skills programme involves first- to third-year medical students. At the end of each six-week hybrid Problem-Based Learning (PBL) body system-based theme, students must demonstrate competence in clinical examination skills on standardized patients in the clinical skills laboratory.
Study design
The structure of this process is grounded in Kolb’s Experiential Learning Theory (ELT), which emphasizes a cyclic process of learning involving four stages: concrete experience, reflective observation, abstract conceptualization, and active experimentation [15]. Students are introduced to this process at the beginning of the academic year, during formal teaching sessions, where they are informed about their roles in both receiving and providing feedback during formative clinical skills assessments. This feedback is not merely evaluative but is integrated as a critical component of self-regulated learning, aligning with Zimmerman’s framework, which highlights planning, self-monitoring, and self-reflection [4].
Experiential learning process
Each learning cycle begins with students engaging in concrete experience by performing clinical examination skills. They then proceed to reflective observation, which involves structured feedback sessions where supervising tutors and peers observe the students’ performance and provide immediate verbal and written feedback, recorded in a logbook. This feedback process fosters self-reflection, allowing students to assess their performance and identify areas for improvement, consistent with Kolb’s model. The feedback received is also framed within Zimmerman’s SRL principles, emphasizing how students can use feedback to set future goals and refine their strategies for learning, particularly in preparation for the end-of-semester OSCE (Objective Structured Clinical Examination).
Students’ engagement with feedback continues through abstract conceptualization, where they connect their reflective insights with learning principles and knowledge application. The active experimentation stage is highlighted in subsequent performance cycles where students apply feedback (“feedforward”) to improve their skills. This structured cycle reinforces the growth of self-regulated learning habits, emphasizing goal setting, self-evaluation, and proactive adaptation.
Peer feedback and reflection
In the peer feedback component, students participate in a similar learning cycle. They are instructed on how to provide actionable feedback by modelling the approach used by tutors, focusing on three key questions: what was done well, what was not done well, and what can be improved. These peer sessions embody Zimmerman’s SRL cycle [4], where students must plan their feedback delivery, monitor their peers’ performance critically, and reflect on their own learning while providing and receiving feedback. During these informal logbook assessments, students work in pairs, taking turns to perform a skill while the other observes and provides immediate feedback, thus reinforcing learning through immediate application (active experimentation).
Global rating and debriefing
Students are informed that their performances are evaluated using global ratings rather than numerical marks, aiming to foster mastery orientation rather than performance orientation. Following each logbook assessment, tutors debrief students, guiding them through a structured reflection process. This debriefing aligns with Kolb’s reflective observation stage and helps students integrate theoretical concepts into practical performance, promoting a continuous cycle of improvement. Debriefing, defined as a “facilitated reflection in the cycle of experiential learning to help identify and close gaps in knowledge and skills,” is crucial for reinforcing this learning model [17].
While there are no progressive expectations for feedback provision from years 1 through 3, all students, regardless of their year, are expected to participate fully in this process. This consistency ensures the gradual development of feedback literacy and self-regulated learning across all levels of their medical education.
Sample
We employed a purposive sampling strategy to select participants from a pool of 239 third-year undergraduate medical students based on their academic performance and experience with formative logbook assessment feedback. The sample included ten high performers (> 70%), ten average performers (50–69%), and ten low performers (< 50%). Exclusion criteria encompassed students outside these performance categories and those lacking at least one year of experience with formative logbook assessment feedback. Prospective participants received an email invitation outlining the study’s purpose, emphasizing voluntary participation, and ensuring confidentiality. A total of twenty-five medical students (n = 25) agreed to participate, met the inclusion criteria, and provided written consent. Given the study’s focus on depth of understanding rather than generalizability, no specific sample size was predetermined. Five focus groups of five students each were conducted, based on consent and availability: two groups of high performers, two groups of average performers, and one group of low performers. Participants were assured of the option to withdraw from the study at any point.
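For readers who find it easier to follow the banding and screening logic in code, the sketch below expresses the performance bands and inclusion criterion described above. It is purely illustrative: the study's purposive sampling was carried out by the researchers, not by a script, and all field names (osce_score, years_of_logbook_feedback) are hypothetical.

```python
# Illustrative sketch only: the performance banding and inclusion screening
# described above, expressed in Python. Field names are hypothetical; the
# study's sampling was done manually by the researchers.

def performance_band(osce_score: float) -> str:
    """Map an end-of-year OSCE percentage to the study's performance bands."""
    if osce_score > 70:
        return "high"      # high performers (> 70%)
    if osce_score >= 50:
        return "average"   # average performers (50-69%)
    return "low"           # low performers (< 50%)


def shortlist_candidates(students, per_band=10):
    """Purposively shortlist up to `per_band` eligible students per band.

    Eligibility mirrors the inclusion criterion: at least one year of
    experience with formative logbook assessment feedback.
    """
    shortlist = {"high": [], "average": [], "low": []}
    for s in students:
        if s.get("years_of_logbook_feedback", 0) < 1:
            continue  # exclusion criterion
        band = performance_band(s["osce_score"])
        if len(shortlist[band]) < per_band:
            shortlist[band].append(s)
    return shortlist
```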
Data collection
Adopting an explorative qualitative methodology [18], we used a semi-structured interview schedule during the focus group discussions to explore the perceptions of medical students from different academic performance levels regarding the existing feedback culture and its influence on their ability to self-assess and provide peer feedback. The interview schedule, with open-ended questions, was based on the literature to ensure construct validity and was piloted with a few students to assess its efficacy. Table 1 illustrates the focus group trigger questions, detailing the thematic content and explaining the prompts or guiding questions used during the discussions, to provide context and insight into the research process. Focus group discussions were chosen over individual semi-structured interviews because they build on group dynamics to explore the issues freely in context, depth, and detail [18]. The authors facilitated the focus groups, which were monitored by a neutral moderator to ensure impartiality. Each of the five focus groups lasted 60 minutes and was audio recorded. During data collection and analysis, a constant comparative method was employed: new data were iteratively compared with previously collected data to identify recurring patterns, themes, or categories. As data collection progressed, the researchers continuously reviewed and analysed incoming data to determine whether additional interviews or data points were yielding novel insights or repeating information already captured. Clarification and additional responses were sought as needed, until data saturation was achieved in each of the five groups. The researchers kept all participants’ information confidential, and the findings are reported anonymously, with participant IDs replaced by codes.
Ethics approval and consent to participate
Ethical approval for this study was granted (HSS/2213/017D) by the University of KwaZulu-Natal’s Ethics Committee.
Data analysis
Interviews were recorded, anonymized, cleaned, and transcribed verbatim. The authors read all of the transcripts independently. The data were thematically analysed using continuous systematic text condensation, a content and thematic analysis method [18]. Inductive coding was used to identify patterns in the data, with a focus on participants’ overall attitudes toward eliciting, processing, and implementing self and peer feedback across academic performance groups. Using keywords and text chunks, several aspects of feedback processes related to learner behaviour towards feedback reception were identified and coded; the components were feedback awareness, utility, self-efficacy, and accountability. The content of each coded group was condensed and summarized. Three overarching themes and seven applicable sub-themes were identified by generalizing descriptions and concepts specific to receiving and implementing self- and peer feedback among students from the various academic performance levels, as well as perceptions of a feedback culture on their ability to self-assess and provide peer feedback. After several re-readings of the transcripts and discussions among the authors, consensus on the themes and sub-themes was reached.
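As a purely illustrative aid, the sketch below shows one way the coding hierarchy produced by this kind of analysis (coded excerpts grouped into sub-themes, which roll up into the three overarching domains reported in the Results) could be represented. The data structures and all labels other than the three domain names are hypothetical; the study's analysis was conducted manually by the authors, not computationally.

```python
# Illustrative sketch only: a simple representation of the coding hierarchy
# (coded excerpts -> sub-themes -> overarching themes). The three domain
# names come from the Results; all other labels and fields are hypothetical.

from dataclasses import dataclass, field


@dataclass
class CodedExcerpt:
    focus_group: str       # e.g. "F3"
    code: str              # e.g. "feedback self-efficacy"
    text: str              # condensed text chunk from the transcript


@dataclass
class SubTheme:
    label: str
    excerpts: list = field(default_factory=list)


@dataclass
class Theme:
    domain: str
    sub_themes: list = field(default_factory=list)


# The three overarching domains reported in the Results; sub-theme labels
# here are placeholders standing in for those listed in Table 2.
themes = [
    Theme("Affective or interpersonal", [SubTheme("placeholder sub-theme")]),
    Theme("Orientational", [SubTheme("placeholder sub-theme")]),
    Theme("Transformational", [SubTheme("placeholder sub-theme")]),
]
```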
We acknowledge and conscientiously address our unique positions and potential biases while conducting our thematic analysis. As a clinical skills lecturer, RA is directly involved in the teaching and assessing of clinical skills to undergraduate medical students. This first-hand experience provides useful insights, but it also introduces biases based on daily interactions and teaching beliefs. VSS, on the other hand, is a health professions education researcher who provides an outside perspective without being directly involved in the students’ training. This dual approach, which combines intimate understanding and external objectivity, aims to provide a balanced and comprehensive analysis. It is critical for readers to understand our respective roles and perspectives, as they will undoubtedly shape our interpretation of the data. This transparency enhances the credibility and rigor of our study, allowing for a nuanced exploration of the complexities within medical education.
The themes and sub-themes together with supporting quotations are described below.
Results
Fifteen females (60%) and ten males (40%) participated in the study. The students were grouped based on their end-of-year OSCE scores: two groups of five students each had scores of > 70% (F1 & F5), two groups of five students each had scores of 50–69% (F3 & F4), and one group of five students had scores of less than 50% (F2). For analysis purposes, the higher-performance category consisted of groups F1 and F5 combined, while the lower-performance category included groups F2, F3, and F4 combined.
The major themes identified from the data were categorized into three domains: affective (or interpersonal), orientational, and transformational. The affective (or interpersonal) domain captures how students perceive the purpose and value of feedback in relation to their self-assessment and peer interactions. The orientational domain explores how feedback shapes students’ academic expectations and positions them within the learning environment, contributing to a culture of feedforward. The transformational domain encompasses the developmental aspects of feedback, focusing on how it supports learning, behaviour change, and future performance improvement.
As illustrated in Table 2, two sub-themes were identified in the affective and orientational domains and three within the transformational domain.
The three themes and seven sub-themes identified relating to perceptions of engaging with self-assessment and peer feedback between the low and high performing students are illustrated below with representative quotes.
Affective or interpersonal component
Benefits and challenges of self-assessment
The higher-performing students found self-assessment beneficial to their skills development. They felt comfortable with tutors assessing their self-reflection on their performance before giving feedback:
… It is actually good to my confidence because she [tutor] will always ask you, what have you done right and what have you done wrong… [F5]
However, lower-performing students were not confident to self-assess their performance and noted their own self-preservation biases:
Some of us would not be comfortable, asking us to assess our performance before the tutor gives feedback, because you would be scared to say that it was competent. However, when the teacher gives you the exact feedback, you feel ok, but the truth is you do not know how you did it… [F3].
Considering the difficulty with self-assessment among lower performers, the higher performers suggested the use of a ‘rubric checklist.’ They believed this tool would help them ensure they had covered all aspects of a task and identify specific areas where they encountered problems:
I feel like if the examiner at the logbook assessment gave you the checklist afterwards, it would be a lot more specific; it will help you reflect, and you would know exactly where you fell short. [F1].
Benefits and challenges of peer assessment
The higher-performing students took peer assessment seriously as they appreciated the value of receiving feedback. Peer assessment motivated them to be better prepared for their assessments and the provision of feedback fostered high levels of responsibility in them:
When providing peer feedback, I pay more attention on how I am being assessed…when you are assessing someone, you are very responsible…because…you do not want to give the wrong information…if you are prepared for when you're assessed, then you should be equally as prepared to assess if you want to do well. [F1].
The lower-performing participants took peer assessment less seriously, did not prepare for the skills, and had difficulties giving feedback:
I am not prepared for it…we do take [peer]assessment lightly…we end up looking down at the protocol as if…how should they do it, and at the end we must give feedback and it is difficult. [F3].
Further on, there were real concerns regarding the honesty and accuracy of peer feedback as the lower-performing students thought peer assessment was not constructive, credible, or effective, as it is not done genuinely and conscientiously:
…it is not as helpful as the logbook sessions with tutors. With peer assessments you can just write the thing [feedback] at residence and come and submit it without doing the actual skill [F2].
One of the reasons the lower-performing students thought tutor feedback was more credible was because they felt that peers lacked the necessary expertise to provide useful feedback:
I prefer the feedback from the clinicians…they know what is expected of us…our peers—they only know so much and so [it] can be superficial. [F3]
Interestingly, however, recognizing that peer assessment was not always done sincerely prompted them to reflect on their professional development as doctors, and they consequently requested some form of monitoring:
…you are giving…Superior Performance, Competent Performance, even if they do not deserve them. At the end of the day, you question your ability to assess and provide feedback and what type of doctor you will become…we need some sort of controlling factors. [F2].
Interpersonal relationships and feedback engagement
Both categories of students perceived interpersonal relationships as integral to engaging with feedback and expressed the need to feel that their tutors and peers cared and were trying to help them. Responding to and engaging with feedback depends on good relationships that facilitate learning:
Tutors are nice and they care. I mean they watch you and after they ask you how you felt about your performance, they will show you where you went wrong and then you say okay this is why it needs to improve. [F1].
I mean our friends are nice, they actually care. I mean from what they teach us and what we take from the peer assessment sessions… you want to do better. [F3]
However, there was some social discomfort as they were concerned that friendship bias controlled how they rated their peers:
I have experienced this personally when you are giving feedback; no one likes a Weak Pass. I mean especially if you are assessing your friend, then they just tell you straight out, you are not giving me a Weak Pass [chuckles], you know. [F2].
This quote reveals that the term “Weak Pass” carries significant weight among students, often perceived negatively and leading to social and emotional discomfort. The reaction to a “Weak Pass” indicates that it is viewed not just as a grade, but as a judgment that can affect personal relationships and self-esteem. Students’ reluctance to assign or receive a “Weak Pass” suggests it is seen as a borderline or insufficient achievement, signalling inadequacy or failure to meet expected standards. This highlights the tension between maintaining peer relationships and providing honest, constructive feedback, thereby complicating the peer assessment process.
Alluding further to friendship bias, they mentioned that friends wanted to please rather than disappoint each other and therefore tended to give only positive feedback, which calls the credibility of that feedback into question:
In some ways, the peer feedback can depend on who we work with, I guess, like it can be a bit of nepotism because your friend will always give you 'Competent'. [F5]
Orientational component
Clarifies task requirements
The lower-performing students appreciated having prior knowledge of the task's learning objectives to appraise their work against standard grading criteria:
…going through the protocol then going to skills logbook assessment session helps to assess myself and to understand the feedback we get from our tutors. [F3]
Higher performers once again suggested that a rubric checklist would aid understanding of the assessment criteria and better facilitate self-assessment:
…there is a lot of grey area like what defines competence…maybe a checklist, what is competent, or what is superior performance; that may be a good indication of how students can assess themselves. [F5].
Feedback seeking behaviour
It was encouraging to note that both lower and higher-performing student participants sought peer feedback on their own. Prior knowledge of the task learning objectives through the clinical skills protocol and standard feedback questions in the logbook facilitated their seeking and giving feedback:
Last time I was preparing for OSCE, I was with my roommate, and he would have the skills protocol and…a stopwatch to time eight minutes for a long station. And he would tick the things in the protocol that I did and…the things that I did not do. So, in a way that is like encouraging specially to work as a team, and the feedback we get there is relatable—it helps you improve in your ways. [F3].
Higher-performing students saw the preparation required for peer assessment, and the feedback-seeking it entailed, as an intervention that improved their peer-assessment skills and the reflection needed to self-regulate their learning:
…you need to ensure that you are prepared…taking time out of your study time to make sure that you learn the techniques so that you are ready to be assessed by your peer. And ensure that you are mentally stable to listen to the feedback as well. [F1].
The lower-performing students, however, indicated a preference for seeking tutor feedback over peer feedback:
I think it is better to approach the tutor because sometimes our peers are not sure. [F3]
There were varied responses from the higher-performing students regarding their attitude towards self-assessing or seeking feedback. Some preferred to seek peer feedback or work on their own rather than approach a tutor:
… not from the tutors… I [prefer] to work on it by myself or I could also do it with my friend … [a tutor would be] "a last resort" for me. [F1]
The judgment they might receive from the tutor during the face-to-face encounter may explain their hesitancy in seeking tutor feedback:
…they are a bit too strict or judging you, that makes it harder to approach them. [F1]
A good relationship and rapport with the tutor and a favourable learning environment facilitates learning and feedback seeking:
… regarding trust and relationship…students…will go to that tutor they feel comfortable with. [F5]
Transformational component
Reflection and understanding
Both categories of students indicated that teachers adapting the assessment format from evaluating isolated skills to assessing integrated skills, along with dialogue and debriefing, was clinically relevant and served as a crucial reflective check on their development as medical learners. Early integration of clinical skills with basic sciences using context-based scenarios, together with early clinical exposure and teamwork, aided self-assessment of performance. They saw this as foundational to moving forward in their learning:
…integrated formative logbook within a team…made us reflect and understand, interact with our peers…when the doctor started asking questions to think about the skill and relate it to our findings and then give us feedback…we were able to reflect on our theory knowledge to assess and understand the reasoning behind our performance…That whole integrated skill with feedback opens our thinking… [F5].
Yes, those are really helpful. We did the general exam in one station and moving to the other station I had to do the cardiovascular exam. I didn't check the general signs of the patient ‘cos I thought it was already done in the general exam, so, I missed that whole section. Small things like that in the feedback on the checklist reminds you to self-assess. So, a rubric will be helpful… [F2].
Improves performance
Both groups of students felt that peer assessment motivated their learning and mutual professional development:
We also use the sessions as learning sessions; we all kind of have our mistakes and we learn from each other. [F1]
…feedback is better; I really do not practise skills alone…so when it comes to peer assessment, I get that learning experience for all my skills from my team members. [F3]
I can see improvement in my skills including my professional approach from 2nd year to now. Like when I get my [peer] feedback, I can see where I need to work harder. [F4]
Increases autonomy
For the lower performing participants, a 'weak pass' stimulated a combination of emotions over a short period of time that led to a positive response to their learning, triggering self-motivation to work harder and prevent low ratings in the future:
You become so excited if you get the feedback rating from your peer as 'superior performance' and then you want to get motivated. But then if the rating is…a 'weak pass' and they tell you, you have not prepared well, then you will be like angry, frustrated…demotivated…but after a while you get motivated to study more and will try to work harder, as you now know where to improve… [F4].
Discussion
This study aimed to assess how medical students across different academic achievement levels receive and interpret self- and peer feedback in the clinical skills setting. The findings indicate that while students generally perceive self- and peer feedback interventions as valuable tools for shaping behaviour and improving feedforward, engaging with these interventions is a complex process influenced by several affective, orientational, and transformational factors. In this discussion, we highlight how these factors interact to affect feedback engagement, considering students’ academic performance and the broader impact of feedback culture on self-assessment abilities and peer feedback provision.
Affective/interpersonal domain
Self-assessment differences
Assessment literacy forms the foundation for meaningful engagement in learning and feedback activities [19, 20]. Higher-performing participants recognized tutor requests to reflect on their self-assessment as key opportunities for critical self-evaluation. This reflection enhanced their understanding of performance expectations, empowering them to assess both their work and that of peers against standardized criteria. As supported by previous research, self-assessment strengthened their growth mindset [21, 22], enabling better SRL by closing knowledge gaps and enhancing future performance. Conversely, lower-performing students expressed apprehension about self-assessment due to perceived limitations in their assessment literacy and confidence. These students were less aware of how self-assessment could contribute to improved feedback engagement and performance, often confusing effort with excellence [22]. While some literature attributes these struggles to lower motivation and learning attitudes, it is important to acknowledge that personal and social factors, such as financial strain, interpersonal issues, and mental health, also significantly impact academic performance [23]. Developing robust self-assessment skills could foster greater interest in improving clinical competencies, leading to better academic and critical thinking outcomes.
Perceptions of peer assessment feedback
Lower-performing participants acknowledged the importance of peer feedback but struggled with fully committing to acting on it due to concerns over credibility and trustworthiness. They noted challenges in providing honest peer feedback, citing friendship bias and discomfort in delivering negative critiques as barriers. This concern mirrors findings from earlier studies, where students tend to overrate each other’s performance to avoid being overly critical, which undermines the authenticity of peer feedback [24,25,26,27,28]. Higher-performing students, however, valued peer feedback when thoughtfully and confidentially delivered, finding it insightful and beneficial for building confidence in feedback provision [29]. They actively engaged with peer feedback, recognizing its role in enhancing self-assessment skills [9]. This study underscores the need for training to develop trust and competence in peer feedback processes [25]. Regular, consistent peer assessments supported by tutor monitoring can enhance reflective practices and gradually improve the quality of student feedback.
Orientational domain
Both high- and low-achieving students emphasized the importance of clear learning objectives and assessment expectations outlined by teachers, appreciating how this “feed-up” guided self-reflection and task completion. When learners understand the purpose and fairness of assessment goals, they demonstrate greater commitment to learning [30]. Additionally, when teachers model evaluative thinking through descriptive feedback aligned with learning objectives, they promote students’ abilities to monitor their progress and set goals, fostering independence. Participants suggested that using rubric checklists to standardize feedback could improve the reliability of self-assessment and peer ratings. The study also indicated that structured feedback processes, such as standardized questions, offer more opportunities for refining feedback skills. However, achieving reliability in using rubrics for performance assessment remains a challenge [25], warranting further exploration.
Transformational domain
The study highlights the transformative potential of peer learning when combined with prior knowledge of learning objectives during the peer assessment process. Participants agreed that this approach fostered accountability and ownership of learning by offering actionable insights for future improvement (feedforward). Peer assessment in small groups was viewed as beneficial for enhancing professionalism, teamwork, and shifting the focus toward motivating self-assessment in the feedback recipient. This aligns with Vygotsky’s sociocultural theory, which emphasizes learning through social interaction [16]. The "zone of proximal development" suggests that while individuals can achieve certain levels of learning independently, their full potential is reached through guidance and peer collaboration [16].
Interestingly, lower-performing students demonstrated increased engagement and autonomy in response to peer feedback, often driven by a prevention-focused SRL approach aimed at avoiding failure. This observation supports the idea that feedback should emphasize self-regulation [7], a key skill for adapting to the evolving clinical environment.
Integration of ELT and SRL towards feedback literacy development
The study findings illustrate how the integration of ELT and SRL offered a balanced approach in which structured reflection (via ELT) and autonomous, goal-directed learning (via SRL) worked in tandem [4, 15]. ELT’s emphasis on reflective observation, facilitated by self, peer, and tutor feedback, and on abstract conceptualization, in which students used the insights gained from feedback to conceptualize learning principles, ensured that students engaged thoughtfully with feedback. SRL’s focus on planning and self-monitoring, in turn, promoted active and sustained engagement with feedback processes. This combination was particularly beneficial in addressing the diverse needs of students with varying academic achievements, as it provided both structure and flexibility in how feedback was processed and acted upon. By developing strong SRL skills, medical students can better integrate feedback into their clinical practice, fostering lifelong learning and critical thinking. This integration underscores the importance of self-assessment, reflective practice, and continuous improvement, which are key elements in both ELT and SRL.
While this study primarily focuses on feedback engagement, future research could delve deeper into the interplay between SRL and feedback literacy, exploring how these concepts can be synergized to further enhance medical education outcomes.
Feedback culture on the ability to self-assess and provide peer feedback
Study participants expressed varied perspectives on the feedback culture related to self-assessment and peer feedback. Across the board, participants recognized the value of restructuring formative assessments to include regular peer and teacher feedback. This approach was seen as enhancing engagement, dialogue, and evaluative judgment. Peer feedback was viewed as a beneficial way to receive additional insights and develop evaluative skills, helping students judge their own and others’ performance [2, 31]. According to Sadler’s research, peer assessment was the most natural way to develop the knowledge transfer skills required to “convert feedback statements into actions for improvement” [32]. Ericsson also stated that assigning students the same task to assess each other's skills is necessary for deliberate practice towards development of clinical competence [33]. Many participants agreed that peer assessment supports the development of clinical competence and prepares them for future professional performance.
However, there were differences in how students engaged with feedback based on their academic achievement levels. Lower-performing students appreciated one-on-one feedback from tutors, viewing it as a safe environment to discuss challenges [34]. They were less confident in peer feedback due to doubts about peer expertise and preferred tutor guidance. These students found that having a clear understanding of learning objectives and assessment criteria, along with rapport and trust with tutors and peers, facilitated their ability to seek feedback. Preparing well for clinical skills also motivated them to seek feedback as a way to reduce their vulnerability to criticism.
Higher-achieving students, in contrast, preferred to rely on self-assessment and peer feedback, seeking tutor input only as a final step for clarification. This approach highlights their self-regulatory learning style and greater autonomy [35]. Both groups recognized that effective feedback and reflection could transform their learning by addressing knowledge gaps and improving future performance. However, they noted that feedback must be relevant, contextualized, and aligned with clinical practice goals to have a meaningful impact [36,37,38].
As learners take more responsibility for monitoring and evaluating their own learning as a result of participation and collaboration in sociocultural activities [39], they develop into self-directed learners with greater control over assessment and feedback processes [40]. The findings suggest that fostering a positive learning environment and promoting communities of practice can significantly enhance the impact of feedback on student development. This includes determining the best time and location for feedback sessions, discussing learning objectives and action plans, and teaching students how to use and interpret feedback in order to put it into practice. Teachers who encourage guided reflection, mentoring, and support sessions such as self- and peer assessment help students develop the learning goal orientation required for acting on feedback while actively participating in the feedback process, resulting in higher academic achievement [27]. Overall, the study suggests that cultivating a feedback culture where both self and peer feedback are integrated can enhance learning outcomes by promoting self-regulation, critical thinking, and feedback literacy among medical students.
Training recommendations for effective self- and peer feedback practice
Based on the study’s findings, we have developed a set of training recommendations aimed at enhancing self- and peer feedback practices among undergraduate medical students. These recommendations are presented in Table 3. The derivation of these recommendations is grounded in the specific challenges and needs identified during our focus group discussions, as well as in the literature on effective feedback practices.
Our study revealed several key insights:
1. Importance of Feedback Literacy: Students recognized the value of integrating peer feedback into clinical skills education but felt inadequately prepared to engage in such practices effectively.
2. Emotional Impact of Feedback: Many students expressed feelings of vulnerability and anxiety when giving and receiving peer feedback, highlighting the need for emotional support and clear guidelines.
3. Need for Structured Feedback: Participants indicated that structured feedback forms and rubrics would help standardize evaluations and make feedback more actionable.
4. Diverse Perspectives: Students noted the benefit of receiving feedback from a variety of peers, which provides a broader range of perspectives and insights.
These findings informed the development of our training recommendations, which aim to address these specific areas.
By clearly linking the study’s findings with the recommendations, we ensure that the proposed strategies are directly relevant to the issues identified and are tailored to the specific needs of the students. This approach not only enhances the relevance of the recommendations but also underscores their practical applicability in improving feedback practices in medical education.
To support the effectiveness of the training process, faculty members should supervise peer feedback processes to ensure fairness and quality, and can also provide additional insight and guidance. To close the feedback loop, the quality and impact of peer feedback on student learning and skill development should be continuously assessed [25] and the resulting data used to improve the feedback process. Improving self- and peer feedback in undergraduate medical education, including feedback training, is critical for enhancing learning, clinical skills, and feedback literacy. These strategies can be tailored and expanded to meet the unique context and needs of any educational program.
Limitations
This study has several limitations that should be acknowledged. First, the study’s generalizability is constrained due to its focus on a single undergraduate program at a single institution and solely on the perspectives of medical students rather than tutors. The application of the study’s findings may require consideration of specific local contexts.
Additionally, the extent of each student’s prior experience with giving and receiving feedback may have been influenced by individual engagement levels, the frequency of participation in self-directed peer feedback sessions, and the varying feedback styles of different tutors. While the curriculum includes formal teaching sessions at the beginning of each academic year to introduce feedback principles and practices, there is no standardized progression in feedback expectations from years 1 through 3, which may impact the development of feedback literacy over time.
Expanding the participant pool beyond third-year students could have enriched the study. Given the emphasis on examination skills competence in the clinical skills laboratory for first- to third-year students, including a broader range of participants might have provided additional insights. While our focus group analysis provides rich qualitative insights, we acknowledge the limitation of not including detailed demographic data such as age range or generational differences among participants. Understanding the demographic context could offer valuable nuances in interpreting our findings. For instance, generational differences between millennial and Gen Z students might influence their feedback reception and engagement. Millennials, who typically value collaboration and feedback, might approach peer assessments differently than Gen Z students, who are often characterized by their preference for instant feedback and digital communication.
In future studies, incorporating demographic data could enhance the depth of understanding and provide a more comprehensive interpretation of how different factors influence feedback dynamics. Such data could help in tailoring feedback literacy interventions to better meet the diverse needs of medical students across various demographic groups.
Conclusion and recommendations
The study’s findings add to the evidence that learners value the contextualized nature of self and peer-to-peer feedback in the clinical skills laboratory and find it helpful in their learning. However, lower-performing students may have low feedback engagement, which could be due to a lack of motivation and assessment literacy. The findings highlight the importance of tailoring feedback to the needs of students. Learners identified several characteristics of a peer feedback process that could make giving and accepting feedback easier and more meaningful: clearly defined goals, standardization and structure, assessments conducted in a supportive environment, greater trust in peer competence, and more case-based encounters that help learners take responsibility for their academic growth.
In summary, this study suggests that feedback effectiveness and meaningfulness are enhanced when it incorporates dynamic elements and is collaboratively constructed from multiple sources. It confirms that social interaction among learners with different academic performance backgrounds can promote the development of agency, belonging, and competence, all of which are important for learner development and professionalism. As a result, the study emphasizes the importance of developing self and peer feedback interventions through training to improve students’ feedback literacy skills, as well as encouraging shared responsibility between clinical educators and learners in order to give learners more control over the assessment and feedback process. This will be a useful theoretical guide for developing and evaluating feedback interventions for educational purposes.
Future research should assess the efficacy of newer feedback initiatives that focus on a post-feedback action plan intervention, evaluating learners’ ability to scaffold feedback by reflecting on and developing self-generated performance improvement objectives. Such interventions could be used as a coaching tool to assist with feedback interpretation and feed forward.
Availability of data and materials
The datasets used and/or analysed are available from the corresponding author on reasonable request.
References
Carless D, Boud D. The development of student feedback literacy: enabling uptake of feedback. Assess Eval High Educ. 2018;43(8):1315–25. https://doi.org/10.1080/02602938.2018.1463354.
Boud D, Molloy E. Rethinking models of feedback for learning: the challenge of design. Assess Eval High Educ. 2013;38(6):698–712. https://doi.org/10.1080/02602938.2012.691462.
Kluger A, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119(2):254–84. https://doi.org/10.1037/0033-2909.119.2.254.
Zimmerman BJ. Becoming a self-regulated learner: an overview. Theory Pract. 2002;41(2):64–70. https://doi.org/10.1207/s15430421tip4102_2.
Winstone NE, Nash RA, Parker M, Rowntree J. Supporting learners’ agentic engagement with feedback: a systematic review and a taxonomy of recipience processes. Educ Psychol. 2017;52(1):17–37. https://doi.org/10.1080/00461520.2016.1207538.
Nicol D. From monologue to dialogue: improving written feedback processes in mass higher education. Assess Eval High Educ. 2010;35(5):501–17. https://doi.org/10.1080/02602931003786559.
Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112.
Yan Z, Carless D. Self-assessment is about more than self: the enabling role of feedback literacy. Assess Eval High Educ. 2021;47(7):1116–28. https://doi.org/10.1080/02602938.2021.2001431.
Mann K, Gordon J, Macleod A. Reflection and reflective practice in health professions education: a systematic review. Adv Health Sci Educ. 2009;14:595–621.
van de Ridder JM, McGaghie WC, Stokking KM, ten Cate OT. Variables that affect the process and outcome of feedback, relevant for medical training: a meta-review. Med Educ. 2015;49(7):658–73. https://doi.org/10.1111/medu.12752.
Ferguson J, Wakeling J, Bowie P. Factors influencing the effectiveness of multisource feedback in improving the professional practice of medical doctors: a systematic review. BMC Med Educ. 2014;14:76. https://doi.org/10.1186/1472-6920-14-76.
Curran VR, Fairbridge NA, Deacon D. Peer assessment of professionalism in undergraduate medical education. BMC Med Educ. 2020;20:504. https://doi.org/10.1186/s12909-020-02412-x.
Lerchenfeldt S, Taylor TAH. Best practices in peer assessment: training tomorrow’s physicians to obtain and provide quality feedback. Adv Med Educ Pract. 2020;11:571–8. https://doi.org/10.2147/AMEP.S250761. PMID: 32922116; PMCID: PMC7457869.
Cordovani L, Tran C, Wong A, et al. Undergraduate learners’ receptiveness to feedback in medical schools: a scoping review. Med Sci Educ. 2023;33(5):1253–69. https://doi.org/10.1007/s40670-023-01858-0.
Kolb DA. Experiential learning: experience as the source of learning and development. Englewood Cliffs: Prentice Hall; 1984. Retrieved from http://www.learningfromexperience.com/images/uploads/process-ofexperiential-learning.pdf. Accessed 31 May 2020.
Vygotsky LS. Mind in society: the development of higher psychological processes. Cambridge, MA: Harvard University Press; 1978.
Eppich W, Cheng A. Promoting excellence and reflective learning in simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc. 2015;10(2):106–15.
Patton MQ. Qualitative research and evaluation methods: integrating theory and practice. 4th ed. Thousand Oaks, CA: Sage; 2015.
Kozato A, Shikino K, Matsuyama Y, et al. A qualitative study examining the critical differences in the experience of and response to formative feedback by undergraduate medical students in Japan and the UK. BMC Med Educ. 2023;23:408. https://doi.org/10.1186/s12909-023-04257-6.
Price M, Rust C, O’Donovan B, Handley K, Bryant R. Assessment literacy: the foundation for improving student learning. Oxford: Oxford Centre for Staff and Learning Development; 2012.
Jonsson A. Facilitating productive use of feedback in higher education. Act Learn High Educ. 2013;14:63–76. https://doi.org/10.1177/1469787412467125.
Boud D, Lawson R, Thompson D. Does student engagement in self-assessment calibrate their judgement over time? Assess Eval High Educ. 2013;38(8):941–56.
Richardson M, Abraham C, Bond R. Psychological correlates of university students’ academic performance: a systematic review and meta-analysis. Psychol Bull. 2012;138(2):353–87. https://doi.org/10.1037/a0026838.
Ten Cate O, Regehr G. The power of subjectivity in the assessment of medical trainees. Acad Med. 2019;94(3):333–7. https://doi.org/10.1097/acm.0000000000002495.
Abraham RM, Singaram VS. Using deliberate practice framework to assess the quality of feedback in undergraduate clinical skills training. BMC Med Educ. 2019;19:105. https://doi.org/10.1186/s12909-019-1547-5.
Harrison CJ, Könings KD, Schuwirth L, Wass V, van der Vleuten C. Barriers to the uptake and use of feedback in the context of summative assessment. Adv Health Sci Educ. 2015;20:229–45. https://doi.org/10.1007/s10459-014-9524-6.
Ramani S, Konings KD, Mann KV, Pisarski EE, van der Vleuten CPM. About politeness, face, and feedback: exploring resident and faculty perceptions of how institutional feedback culture influences feedback practices. Acad Med. 2018;93(9):1348–58.
Kluger AN, van Dijk D. Feedback, the various tasks of the doctor, and the feedforward alternative. Med Educ. 2010;44:1166–74.
Bounds R, Bush C, Aghera A, Rodriguez N, Stansfield RB, Santeen SA. Emergency medicine residents’ self-assessments play a critical role when receiving feedback. Acad Emerg Med. 2013;20:1055–61. https://doi.org/10.1111/acem.12231.
O’Donovan B, Rust C, Price M. A scholarly approach to solving the feedback dilemma in practice. Assess Eval High Educ. 2016;41(6):938–49. https://doi.org/10.1080/02602938.2015.1052774.
Boud D, Lawson R, Thompson D. The calibration of student judgement through self-assessment: disruptive effects of assessment patterns. High Educ Res Dev. 2015;34(1):45–59.
Sadler DR. Beyond feedback: developing student capability in complex appraisal. Assess Eval High Educ. 2010;35:535–50. https://doi.org/10.1080/02602930903541015.
Ericsson KA. An expert-performance perspective of research on medical expertise: the study of clinical performance. Med Educ. 2007;41:1124–30.
Cramp A. Developing first-year engagement with written feedback. Act Learn High Educ. 2011;12:113–24. https://doi.org/10.1177/1469787411402484.
Abraham RM, Singaram VS. Barriers and promoters in receptivity and utilization of feedback in a pre-clinical simulation based clinical training setting. Bangladesh J Med Sci. 2021;20(3):594–607. https://doi.org/10.3329/bjms.v20i3.52802.
Burke D. Strategies for using feedback students bring to higher education. Assess Eval High Educ. 2009;34:41–50.
Burch V, Seggie J, Gary N. Formative assessment promotes learning in undergraduate clinical clerkships. S Afr Med J. 2006;96:430–3.
Boileau É, Talbot-Lemaire M, Belanger M, St-Onge C. Playing in the big leagues now: exploring feedback receptivity during the transition to residency. Health Prof Educ. 2018;5(4). https://doi.org/10.1016/j.hpe.2018.09.003.
Wertsch JV. Voices of the mind: a sociocultural approach to mediated action. Cambridge, MA: Harvard University Press; 1991.
Abraham RM. Balancing responsibility sharing in the simulated clinical skills setting: a strategy to remove barriers to feedback engagement as a new concept to promote a growth-enhancing feedback process. Afr J Health Prof Educ. 2023;15(1):2–6. https://doi.org/10.7196/AJHPE.2023.v15i1.1630.
Acknowledgements
The authors would like to thank students who participated in the study. This publication was made possible by the Department of Higher Education and Training (DHET) through the University Capacity Development Programme, University of KwaZulu-Natal, South Africa.
Funding
This research was funded by the University Capacity Development Programme. The funding body was not involved in the study design, data collection, analysis or interpretation. The funding body was not involved in the writing of this manuscript. The views expressed in this report are those of the authors and do not necessarily reflect those of the University Capacity Development Programme.
Author information
Contributions
RA contributed to the conception, design, data collection, analysis, interpretation of data and drafting the main manuscript text. VS contributed to the conception, design, data analysis and interpretation. Both authors critically reviewed the manuscript and approved it for publication.
Ethics declarations
Ethical approval and consent to participate
Ethical approval for this study was granted (HSS/2213/017D) by University of KwaZulu-Natal’s ethics committee. The participants in this study gave their written informed consent to take part in this study and for anonymised findings of this study to be published. Both the authors (RA and VS) have read, approved and given their consent for the manuscript to be published should it be accepted by the journal.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Abraham, R., Singaram, V.S. Self and peer feedback engagement and receptivity among medical students with varied academic performance in the clinical skills laboratory. BMC Med Educ 24, 1065 (2024). https://doi.org/10.1186/s12909-024-06084-9