The role of feedback in improving the effectiveness of workplace based assessments: a systematic review
© Saedon et al; licensee BioMed Central Ltd. 2012
Received: 20 November 2011
Accepted: 2 May 2012
Published: 2 May 2012
Although recent emphasis has been placed on workplace based assessment (WBA) as a method of formative performance assessment, there is limited evidence in the current literature regarding the role of feedback in improving the effectiveness of WBA. The aim of this systematic review was to elucidate the impact of feedback on the effectiveness of WBA in postgraduate medical training.
Searches were conducted using the following bibliographic databases to identify original published studies related to WBA and the role of feedback: Medline (1950-December 2010), Embase (1980-December 2010) and Journals@Ovid (English language only, 1996-December 2010). Studies which attempted to evaluate the role of feedback in WBA involving postgraduate doctors were included.
Fifteen identified studies met the inclusion criteria and minimum quality threshold. They were heterogeneous in methodological design. Seven studies focused on multi-source feedback, three were based on the mini-clinical evaluation exercise, two looked at procedure-based assessment, one looked at workplace based assessments in general and two looked at a combination of three to six workplace based assessments. Seven studies originated from the United Kingdom; the others were from Canada, the United States and New Zealand. Study populations were doctors in various grades of training from a wide range of specialties including general practice, general medicine, general surgery, dermatology, paediatrics and anaesthetics. All studies were prospective, non-comparative descriptive or observational studies using a variety of methods including questionnaires, one-to-one interviews and focus groups.
The evidence base contains few high quality conclusive studies and more studies are required to provide further evidence for the effect of feedback from workplace based assessment on subsequent performance. There is, however, good evidence that if well implemented, feedback from workplace based assessments, particularly multisource feedback, leads to a perceived positive effect on practice.
Feedback in clinical education has been defined as “specific information about the comparison between a trainee’s observed performance and a standard, given with the intent to improve the trainee’s performance” . It has been suggested that the provision of feedback from formative assessments leads to a positive impact on doctors’ learning and performance .
Recent reforms in postgraduate medical education have brought about a greater emphasis on competency based training, which focuses on outcomes rather than processes of learning. Workplace based assessment (WBA) is a system whereby doctors are assessed on clinical skills and other attributes in the context of their working environment. Various methods are used to provide this information, including the mini-clinical evaluation exercise (mini-CEX), case-based discussion (CBD), direct observation of procedural skills (DOPS), procedure-based assessment (PBA), objective structured assessment of technical skills (OSATS) and multi-source feedback (MSF). Feedback and scoring are given by the assessor, and this information is compiled and fed back to educational supervisors.
Although there is considerable emphasis placed on WBA as a method of formative performance assessment, there is limited evidence in the current literature regarding the effectiveness of WBA in changing the behaviour of doctors and improving their performance. A recent literature review set out to explore the impact of WBA on doctors' education and performance . The authors found that multisource feedback can lead to performance improvement although other factors have a major impact upon the response. There is a dearth of evidence about the outcome and use of feedback for continued learning and improvement. Anecdotally, trainees perceive feedback as the most useful aspect of WBA and believe that greater emphasis on the feedback component of WBA will improve its effectiveness as a formative assessment tool, hence improving trainees’ performance. The aim of this systematic review was to elucidate the impact of feedback on the effectiveness of WBAs in postgraduate medical training.
Searches were conducted using the following bibliographic databases to identify original published studies related to WBA and the role of feedback: Medline (1950-December 2010), Embase (1980-December 2010) and Journals@Ovid (English language only, 1996-December 2010). The search terms used were “feedback”, “workplace based assessment”, “direct observation of procedural skills”, “mini clinical evaluation exercise”, “case based discussion”, “multisource feedback”, “procedure-based assessment,” “objective structured assessment of technical skills”, “training” and “medical education”. In addition, hand searches using reference lists and bibliographies of included studies and review articles were performed.
Inclusion and exclusion criteria
Gradings of Strength of Findings of the Paper []
Grade 1: No clear conclusions can be drawn.
Grade 2: Results ambiguous, but there appears to be a trend.
Grade 3: Conclusions can probably be based on the results.
Grade 4: Results are clear and very likely to be true.
Grade 5: Results are unequivocal.
The Kirkpatrick (1967) model of education outcomes []
Level 2: Learning of skills and knowledge
Level 3: Changes in learner behaviour
Level 4: Wider changes in the delivery of care
A statistical synthesis of the evidence was not conducted because no randomised trials involving feedback in formative assessments were identified and the prospective and retrospective studies included a variety of methods of assessment.
Summary of studies included in the review. For each study, the following are reported: data collection methods; aim of study (implied/stated); type of WBA; grade of strength of findings and main findings.
Archer et al 
Analysis of MSF data
To report the evidence for and challenges to the validity of Sheffield Peer Review Assessment Tool (SPRAT) with paediatric specialist trainees across the UK as part of Royal College of Paediatrics and Child Health workplace based assessment programme.
Grade 3. Assessor seniority is important. Free-text boxes allow feedback for personal development.
Bullock et al 
Analysis of MSF data
To address differences in staff groups in their assessment of junior doctors’ professional attitudes and behaviour.
Grade 3. Peers and administrators were less likely to indicate concern compared to consultants and senior nurses.
Burford et al 
Junior doctors and trainers
To compare perceptions of two tools for giving MSF to UK junior doctors, based on usability, usefulness and validity.
Grade 3. Trainees were asked in detail whether they would change their behaviour. Attitudes towards MSF in principle were positive and the tools were felt to be usable. The text-oriented tool was rated more useful for giving feedback on communication and attitude.
Canavan et al 
Five medical and one surgical specialty
To assess qualitatively written comments on multisource assessments based on psychological feedback theory for professional development
Grade 3. Quality of written feedback varies; a substantial portion of comments were useless and at worst detrimental to progress
Violato et al 
Longitudinal comparative study
Forms analysed on two occasions, 5 years apart
To examine the validity and reliability of MSF for general practice and whether it led to a change in performance when participants were reassessed 5 years later.
Grade 4. There is evidence for the construct validity of the instruments and stability over time
Sargeant et al 
To increase understanding of the consequential validity of MSF by exploring how doctors used their feedback and the conditions influencing this use.
Grade 3. Feedback usefulness enhanced by increasing its specificity. Strong influence of direct patient feedback on doctors’ performance.
Sargeant et al 
Exploration of physicians’ reactions to MSF, perceptions influencing these and the acceptance and use of feedback
Grade 3. Physicians’ perceptions of the MSF process and feedback can influence how and if they use the feedback for practice improvement.
Weller et al 
Questionnaire based ratings and written answers
To evaluate mini-CEX for both summative and formative assessment for anaesthetics training
Grade 3. Factors that facilitated or hindered implementation or limited effective feedback were identified
Weller et al 
Analysis of mini-CEX forms
Psychometric characteristics, logistics of application, and impact on the quality of supervision of the mini-CEX
Grade 3. The positive effect of the mini-CEX on feedback, its relative feasibility, and its acceptance as a potential assessment tool were demonstrated.
Holmboe et al 
Videotaping of feedback sessions
Primary care and internal medicine
To examine how often faculty provided recommendations and used interactive techniques when providing feedback as part of a mini-CEX.
Programs should consider both specific training in feedback and changes to the mini-CEX form to facilitate interactive feedback.
James et al 
Times taken to complete the consenting and operative components of the forms were recorded.
Assessing the time required to complete PBA forms and ease of use in the surgical workplace.
Grade 3. PBAs are feasible in clinical practice and are valued by trainees as a means of enabling focused feedback and targeted training.
Marriott et al 
Prospective observational study
Direct observation using the PBA.
The aims were to evaluate the validity, reliability and acceptability of PBA.
Grade 3. PBA demonstrated good overall validity and acceptability, and exceptionally high reliability.
Murphy et al 
To investigate the reliability and feasibility of six potential workplace-based assessment methods
MSF, criterion audit, patient feedback, referral letters, significant event analysis, and video analysis of consultations.
Grade 3. Two WBA tools involving patient and colleague feedback have high reliability suitable for high stakes WBA in the general practice setting.
Cohen et al 
To collate the experience and views on three workplace assessments
DOPS, mini-CEX, MSF
Grade 3. Trainees appreciate the formative benefits which derive from the assessments, namely feedback and reassurance.
Johnson et al 
Questionnaires and focus groups
To gain feedback from trainees and supervisors on components of core medical training, including workplace-based assessments.
Grade 4. WBA assessments were well received as a means of evidencing achievement and for learning development. The majority of trainees felt that the feedback following WBAs in particular had been useful.
The 15 identified studies that met the inclusion criteria and minimum quality threshold were heterogeneous in their methodological design. A narrative overview is therefore provided rather than a meta-analysis. A wide range of WBAs were covered in the included studies. Seven studies focused on MSF, three were based on the mini-CEX, two looked at PBA, one looked at WBAs in general and two looked at a combination of three to six WBAs. Seven studies originated from the United Kingdom; the others were from Canada, the United States and New Zealand. Study populations were doctors in various grades of training from a wide range of specialties including general practice, general medicine, general surgery, dermatology, paediatrics and anaesthetics. All studies were prospective, non-comparative descriptive or observational studies using a variety of methods including questionnaires, one-to-one interviews and focus groups. They all showed a modification of attitudes or skills, or a change in behaviour or willingness of learners to apply new knowledge and skills (Kirkpatrick Levels 2 and 3) . None of the studies showed an improvement in learning and performance as a direct result of WBA (Kirkpatrick Level 4).
Multisource feedback (MSF)
MSF is believed to increase motivation among staff, translating into positive behaviour change, increased productivity and self-awareness, which are fundamental for the progress of any organisation . A non-comparative action-based study by Archer et al found that MSF in the form of the Sheffield Peer Review Assessment Tool (SPRAT) does not provide enough data on trainees about whom concerns are raised, and that more assessments are required for these trainees . They also felt that unregulated self-selection of assessors introduces leniency bias and should end. Although free-text boxes allowed comments for feedback, no clear evidence was presented to show a change in practice. In an analysis of MSF data, Bullock et al demonstrated a trend towards assessors becoming more critical of trainees as their seniority increases . Feedback was provided by a designated trainer after completed forms were returned unseen to a central point, and the authors stated that remedial action was undertaken as appropriate.
A postal questionnaire to trainees and trainers showed that the perceived effectiveness of multisource feedback was low . There were small but significant preferences for textual feedback: the team assessment of behaviour (TAB), which has large free-text boxes, was perceived as more useful than the mini-PAT, which has a numerical scale and only a small space for comments. The elements most likely to be changed as a result of feedback were medical knowledge and teaching and training skills; the aspect least likely to change was relationships with patients. TAB was felt to be more useful on items related to communication and professionalism. The expected influence of the feedback was low, with nearly a third of trainees not anticipating any change in response to feedback. The relationship between intention to change in any area and the perceived positivity or negativity of feedback was also extremely low. Assessors based their feedback on both direct and indirect observation, in conjunction with discussion with colleagues and comments from patients and other health care professionals.
Canavan et al analysed phrases in feedback comments written by observers who completed surveys to provide developmental feedback to residents and fellows . They looked at the valence of feedback (positive, negative, or neutral), its level of specificity, and whether it was behaviour-based or directed toward the learner’s “self”. 74.5% of surveys contained at least one global judgement. Behaviour-oriented phrases occurred less frequently, and general behaviours were mentioned more often than specific behaviours. Negative feedback phrases were found in 10.3% of surveys. As with the positive comments, many were self-oriented, which can lead to a decline in performance . The desirable characteristics of feedback were found to be specificity, behavioural focus, and sufficient clarity, making such feedback of great potential value to trainees.
A longitudinal study investigated changes in performance for 250 doctors who participated in MSF twice, 5 years apart . All the ratings increased between times 1 and 2, although the increase in patient ratings was not significant. The changes in ratings by co-workers and medical colleagues were in the small-to-moderate range. Possible reasons for the relatively small change between the two time-points include initially high scores or data that were not sufficiently compelling. Also, when respondents are advised to change only a few aspects of behaviour in a survey containing more than 100 items, the overall effect will not be great.
A qualitative study by Sargeant et al found that doctors did not make changes if feedback from MSF was positive, and only seven of the thirteen doctors who received negative feedback changed their behaviour . The feedback most consistently used was specific, received from patients, and addressed communication skills. The feedback least frequently used addressed clinical competence and came from medical colleagues. Another qualitative study by Sargeant et al using focus group interviews found that family physicians generally agreed with their patients’ feedback . However, responses to medical colleague and co-worker feedback ranged from positive to negative, and did not always result in a change in behaviour.
Mini-clinical evaluation exercise (Mini-CEX)
Studies on the mini-CEX in trainee anaesthetists in New Zealand showed a positive effect of feedback and a perceived very positive educational impact [14, 15]. In the written feedback fields of the Mini-CEX form, 95% of specialists wrote comments under ‘things that the trainee did well’, 70% recorded comments in ‘areas for improvement’, and 60% wrote down an ‘agreed action’ . Trainees felt there was not a strong culture of feedback, but that the mini-CEX facilitated feedback. Holmboe et al recorded feedback from mini-CEX sessions in a prospective observational cohort study and showed that mini-CEX frequently leads to a recommendation for improvement, with the majority of the recommendations focused on the clinical skills of medical interviewing, physical examination, and counselling .
Procedure based assessment (PBA)
James et al looked at the PBA tool in a non-comparative observational study and found that completion of the PBAs resulted in focused feedback to the trainees about their practice . The trainees in this study therefore valued this structured approach because it enabled subsequent training to be targeted appropriately. Marriott et al also studied PBA and showed that trainees rated the feedback provided by the clinical supervisor as moderately to very useful; clinical supervisors rated the feedback similarly .
Murphy et al investigated six different instruments (criterion audit, multisource feedback, patient satisfaction ratings, assessment of referral letters, significant event analysis, and analysis of videotaped patient interactions) in general practice registrars . They highlighted the important role of feedback from patients and colleagues. A questionnaire survey of dermatology trainees collated experience and views on MSF, DOPS and the mini-CEX . Trainees appreciated the formative aspects of the assessments, especially feedback, although not all trainees reported receiving useful feedback. Johnson et al’s questionnaire and focus group study of core medical trainees on their views of the curriculum and assessment found that the majority felt that the feedback component of WBA in particular had been useful .
This systematic review aimed to evaluate the effectiveness of feedback in WBAs. The studies were all observational and there were no randomised controlled trials. The majority of the studies sought perceptions and self-reported changes rather than measuring actual change in practice. This is because measuring changes in practice and attributing them to feedback from the WBA is extremely difficult, owing to confounding factors and problems with study design. Most of the evidence to support the use of feedback from WBAs comes from studies on MSF. This may be because, whereas other assessments may emphasise performing a procedure correctly or the management of a particular patient, MSF has the sole purpose of providing feedback on doctors’ practice and behaviours. This opportunity is often missed, as found in the study by Canavan et al, which analysed comments made on MSF forms . Many forms contained no comments at all and, of those that did, a significant proportion were found to lack actionable information, thus limiting their usefulness. Global judgements were more frequently used, and although these may build the confidence of the person being assessed, they give no indication of how to behave in order to improve practice and future actions. Most of the trainees in the study by Burford et al did not anticipate changing their behaviour as a result of feedback from the MSF tools used, but perceived usefulness was consistently higher with the TAB than with the mini-PAT . The greater space for free text in the former tool allows valuable information, rather than simply a numerical score, to be transmitted back to the trainee, which they can use to inform a change in practice.
MSF has the potential to be a useful tool, but the current evidence suggests that for this to occur, the way in which it is used must be improved. Comments should be provided, and these should be specific and action-based. Reasons why MSF is currently under-utilised include the time constraints of an already busy clinical workload, the perception of WBA as cumbersome, a lack of training on how to provide feedback and a lack of trust in the formative nature of the assessment, as learners may feel that the feedback could have a negative impact on their training .
Other WBA methods, such as the mini-CEX and DOPS, did not show any clear evidence of leading to a change in behaviour. One study strongly advocated the use of the mini-CEX to improve feedback, but pointed out that feedback is offered less frequently than is desirable . Cohen et al found that half of the dermatology trainees surveyed reported that learning points had been identified from the mini-CEX, and that feedback and learning were identified most frequently as positive aspects of the process . This implies that feedback is valued and that a change in behaviour may occur, but does not demonstrate it. A fifth of respondents on the mini-CEX expressed reservations about the quality of feedback; for DOPS, 14% reported that insufficient time was allowed for feedback and only 45% identified learning points arising from the process. No studies looked at case-based discussion, so the effect of this assessment on doctors’ performance cannot be determined. Further research in this area is therefore warranted.
The highest Kirkpatrick level reached by any of the studies was level 3 which indicates a change in behaviour and documents the transfer of learning to the workplace or willingness of learners to apply new knowledge and skills. Others were level 2, showing changes in the attitudes or perceptions among participant groups towards teaching and learning.
Feedback may not produce intended outcomes and may even have detrimental consequences, such as decreased motivation and reduced performance. In one study, feedback perceived as being strongly negative generally evoked emotional responses, including anger and discouragement . Trainers reportedly often avoid giving feedback in order to prevent offence or provoking defensiveness [24, 25]. Several studies suggested that maximising opportunities for training assessors in giving optimal feedback and administering assessments would improve the quality of feedback. If WBAs are simply used as a box-ticking exercise, without sufficient emphasis on feedback, then any gains will be limited .
This systematic review had some limitations. The studies were uncontrolled, limiting the strength of findings, although this may reflect the difficulty of assessing the effect of feedback on doctors’ future performance. Limitations in our methodology include not reviewing the grey literature and only including studies in the English language, which may have led to bias. Another limitation is the focus on feedback, which is only one potentially beneficial aspect of WBA; others include on-the-job training whilst being observed by a senior and documentation of competence in a particular area.
The relationship between feedback and outcome is not always straightforward, and feedback may not always achieve the desired results . Good feedback can lead to increased motivation and confidence in trainees. Negative feedback, on the other hand, is not intended to demotivate or demoralise a trainee, but should be taken as constructive criticism to help trainees improve. More studies are required to provide further evidence for the effect of feedback from WBAs on subsequent performance, as the evidence base contains few high quality conclusive studies. Although this is a difficult area to research, more randomised controlled studies of behaviour change following feedback from specific WBAs should be encouraged. There is, however, good evidence that if well implemented, feedback from WBAs, particularly MSF, leads to a perceived positive effect on practice.
- Van De Ridder JM, Stokking KM, Mcgaghie WC, Ten Cate OT: What is feedback in clinical education? Med Educ. 2008, 42: 189-197. 10.1111/j.1365-2923.2007.02973.x.
- Norcini J, Burch V: Workplace-based assessment as an educational tool: AMEE Guide No 31. Med Teach. 2007, 29: 855-871. 10.1080/01421590701775453.
- Miller A, Archer J: Impact of workplace based assessment on doctors' education and performance: a systematic review. BMJ. 2010, 341: c5064. 10.1136/bmj.c5064.
- Berlin JA, University of Pennsylvania Meta-analysis Blinding Study Group: Does blinding of readers affect the results of meta-analyses? Lancet. 1997, 350 (9072): 185-186.
- Colthart I, Bagnall G, Evans A, Allbutt H, Haig A, Illing J, McKinstry B: The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice. BEME Guide No 10. Med Teach. 2008, 30 (2): 124-145. 10.1080/01421590701881699.
- Kirkpatrick D: Evaluation of Training. Training and Development Handbook. Edited by: Craig R, Bittel L. 1967, McGraw-Hill, New York, 131-167.
- Archer J, McGraw M, Davies H: Republished paper: Assuring validity of multisource feedback in a national programme. Postgrad Med J. 2010, 86 (1019): 526-531. 10.1136/pgmj.2008.146209rep.
- Bullock AD, Hassell A, Markham WA, Wall DW, Whitehouse AB: How ratings vary by staff group in multi-source feedback assessment of junior doctors. Med Educ. 2009, 43 (6): 516-520. 10.1111/j.1365-2923.2009.03333.x.
- Burford B, Illing J, Kergon C, Morrow G, Livingston M: User perceptions of multi-source feedback tools for junior doctors. Med Educ. 2010, 44 (2): 165-176. 10.1111/j.1365-2923.2009.03565.x.
- Canavan C, Holtman MC, Richmond M, Katsufrakis PJ: The quality of written comments on professional behaviors in a developmental multisource feedback program. Acad Med. 2010, 85 (10 Suppl): S106-S109.
- Violato C, Lockyer JM, Fidler H: Changes in performance: a 5-year longitudinal study of participants in a multi-source feedback programme. Med Educ. 2008, 42 (10): 1007-1013. 10.1111/j.1365-2923.2008.03127.x.
- Sargeant J, Mann K, Sinclair D, van der Vleuten C, Metsemakers J: Challenges in multisource feedback: intended and unintended outcomes. Med Educ. 2007, 41: 583-591. 10.1111/j.1365-2923.2007.02769.x.
- Sargeant J, Mann K, Ferrier S: Exploring family physicians' reactions to multisource feedback: perceptions of credibility and usefulness. Med Educ. 2005, 39 (5): 497-504. 10.1111/j.1365-2929.2005.02124.x.
- Weller JM, Jones A, Merry AF, Jolly B, Saunders D: Investigation of trainee and specialist reactions to the mini-Clinical Evaluation Exercise in anaesthesia: implications for implementation. Br J Anaesth. 2009, 103 (4): 524-530. 10.1093/bja/aep211.
- Weller JM, Jolly B, Misur MP, Merry AF, Jones A, Crossley JG, Pedersen K, Smith K: Mini-clinical evaluation exercise in anaesthesia training. Br J Anaesth. 2009, 102 (5): 633-641. 10.1093/bja/aep055.
- Holmboe ES, Yepes M, Williams F, Huot SJ: Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004, 19 (5 Pt 2): 558-561.
- James K, Cross K, Lucarotti ME, Fowler AL, Cook TA: Undertaking procedure-based assessment is feasible in clinical practice. Ann R Coll Surg Engl. 2009, 91 (2): 110-112. 10.1308/003588409X359286.
- Marriott J, Purdie H, Crossley J, Beard JD: Evaluation of procedure-based assessment for assessing trainees' skills in the operating theatre. Br J Surg. 2010, 98 (3): 450-457.
- Murphy DJ, Bruce DA, Mercer SW, Eva KW: The reliability of workplace-based assessment in postgraduate medical education and training: a national evaluation in general practice in the United Kingdom. Adv Health Sci Educ Theory Pract. 2009, 14 (2): 219-232. 10.1007/s10459-008-9104-8.
- Cohen SN, Farrant PB, Taibjee SM: Assessing the assessments: U.K. dermatology trainees' views of the workplace assessment tools. Br J Dermatol. 2009, 16 (1): 34-39.
- Johnson G, Barrett J, Jones M, Parry D, Wade W: Feedback from educational supervisors and trainees on the implementation of curricula and the assessment system for core medical training. Clin Med. 2008, 8 (5): 484-489.
- Abdulla A: A critical analysis of mini peer assessment tool (mini-PAT). J R Soc Med. 2008, 101: 22-26. 10.1258/jrsm.2007.070077.
- Hattie J, Timperley H: The power of feedback. Rev Educ Res. 2007, 77: 81-112. 10.3102/003465430298487.
- Hewson MG, Little ML: Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998, 13: 111-116. 10.1046/j.1525-1497.1998.00027.x.
- Ende J: Feedback in clinical medical education. JAMA. 1983, 250: 777-781. 10.1001/jama.1983.03340060055026.
- Saedon H, Saedon MH, Aggarwal SP: Workplace-based assessment as an educational tool: Guide supplement 31.3--viewpoint. Med Teach. 2010, 32 (9): e369-e372. 10.3109/01421590903548547.
- Papettas T, Saedon H, Saedon M: Opportunities for learning in the surgical workplace and how they can be exploited: a practical guide. Br J Hosp Med (Lond). 2011, 72 (12): 707-710.
- McKinley RK, Williams V, Stephenson C: Improving the content of feedback. Clin Teach. 2010, 7: 161-166. 10.1111/j.1743-498X.2010.00380.x.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/12/25/prepub