  • Research article
  • Open access

The social validity of a national assessment centre for selection into general practice training

Abstract

Background

Internationally, recruiting the best candidates is central to the success of postgraduate training programs and the quality of the medical workforce. So far there has been little theoretically informed research considering selection systems from the perspective of the candidates. We explored candidates’ perceptions of the fairness of a National Assessment Centre (NAC) approach for selection into Australian general practice training, where candidates were assessed for suitability to undertake general practice (GP) training by a Multiple Mini Interview (MMI) and a written Situational Judgement Test (SJT).

Methods

In 2013, 1,930 medical practitioners who were eligible to work in Australia attended one of 14 NACs across 5 states and 2 territories. A survey, which included open-ended questions aimed at eliciting candidates’ perceived benefits and challenges of the selection process, was distributed to each candidate at the conclusion of their assessment. A framework analysis was informed by the theoretical lens of Social Validity Theory.

Results

Qualitative data was available from 46% (n = 886/1,930) of candidates, who found the NAC experience fair and informative for their training and career goals, but wanted to be provided with more information in preparation. Candidates valued being able to communicate their skills during the MMI, but found some difficulty in interpreting the questions. A significant minority had concerns that a lack of relevant GP experience may inhibit their performance. Candidates also expressed concerns about the time limits within the written paper, particularly if English was not their first language. They also expressed a desire for formative feedback during the interview process.

Conclusion

During any job selection process, not only is the organisation assessing the candidates, but the candidates are also assessing the organisation. A focus on the candidate experience throughout an organisation’s selection process may therefore provide benefits to both candidates and the organisation, regardless of whether candidates secure the job. Social Validity Theory is a useful addition to the methods for demonstrating the reasonableness of any selection system.


Background

Internationally, the process of recruiting and selecting the best candidates is central to the success of postgraduate training programs and the quality of the medical workforce. A key goal is to predict which candidates will go on to become able doctors, and to reject those who are likely to perform poorly in future practice due to issues of professional behaviour as well as lack of clinical knowledge and skills [1]. However, from a candidate’s perspective, selection systems are high stakes and have a major impact on their attitude to both the employer and the proposed place of work [2]. Research has tended to privilege the perspective of the employers. It is therefore critical for healthcare education and training organisations to consider selection systems from the perspective of both the employer and the job candidate.

A number of factors in the organisational literature evidence how the selection experience can influence a candidate’s perception of the organisation and the position they are applying for [2]. Changing and starting new jobs is ranked in the top 40 of life’s most stressful events [3]. It has been predicted that a candidate’s perception of the fairness of a selection system can influence their future attitudes, intentions, and behaviours in the workplace. Candidates who become disgruntled by the process may develop a negative opinion of the organisation and communicate this to other professionals, with direct implications for organisational reputation [4],[5]. Negative reactions to the application process may also influence the attitudes and performance of candidates within the selection process [6], with those who perceive it as problematic being reluctant to fully participate and engage [7]. When candidates are highly qualified, with various job choices, they are less likely to proceed with a poorly conceived selection process, and consequently, organisations may lose the most outstanding candidates. Inappropriate selection procedures may cause candidates to dispute selection outcomes [8], to the point of legal redress. It has also been suggested that negative experiences for candidates during the selection process can have a detrimental effect on the candidates’ wellbeing [9],[10]. It is therefore imperative to understand the mechanisms by which candidates cognitively assess selection measures, so that institutions are able to refine their processes to attract and retain the most qualified candidates.

Postgraduate healthcare selection procedures generally set a minimum standard of clinical competence and personal and professional values that are expected of entry-level professionals. These are required to be acceptable to a range of interested stakeholders, including universities, government, health service employers, the professional colleges, and the wider community. Typically, selection admission committees develop a ranking list of candidates, descending in merit order until allocable places are exhausted. The traditional system of ranking candidates consisted of a mixture of application forms, traditional panel interviews, personal statements, and references [11]. Internationally, there have been moves to develop selection procedures which are much more robust in terms of their underlying assessable constructs, psychometrics, fairness and defensibility. The term assessment centre [12] has been used to refer to a model where candidates are required to attend a venue to undertake more than one assessment for the purpose of selection into postgraduate training. Internationally, such assessment centres have used a range of formats, both written and observed, including situational judgement tests (SJT) [13], clinical problem solving tests (CPST) [12], low and high fidelity simulations [14], and the multiple mini interview (MMI) [15].

Previous research has demonstrated that candidates are not passive within the selection process, but actively seek opportunities to enhance their chances of presenting a favourable impression, even within short multiple interviews. This is consistent with the literature on the use of impression management. For example, in the high-stakes setting of selection into medical school, students actively tried to shape the impressions that their interviewers might form of them, unrelated to the particular set of questions being asked at that interview station [16].

Inevitably, the process of candidate selection involves a measure of differentiation between candidates in order to choose the best candidate for the job, leaving a number of candidates disappointed at the outcome. Any assessment procedure should be free of bias, so that candidates of equal ability are not discriminated against. However, there are widely acknowledged issues of potential bias in selection on the grounds of gender, age, culture and ethnicity. For example, in a study exploring the relative importance attached to various perceived aspects of fairness in personnel selection in a North American setting, ethnicity analyses indicated that different ethnic groups emphasised different characteristics in inferring fairness [17]. Potential biases in selection might also impact on the cultural diversity of the workforce. Concerns have been raised regarding entry into general practice in the UK, where black and minority ethnic doctors were more likely than white doctors to fail the barrier assessment of GP training [18]. Organisations are also encouraged to plan around the cultural appropriateness of selection procedures, as perceived unfairness is a major cause of costly litigation within selection [17].

A number of frameworks have been proposed to predict, understand and influence job candidates’ reactions to the selection system, and the extent to which that differentiation is based on capability and not on extraneous factors such as age, gender, and culture. In the field of organisational psychology, frameworks for considering candidate perception of selection processes have primarily revolved around organisational justice theory. There are broadly two differing sets of justice rules that apply in the context of selection. Procedural justice rules are based on the fairness of how decisions are made, while distributive justice rules centre around candidates’ perception of the fairness of outcomes [2]. Common to both rule sets of organisational justice is the requirement that a candidate should feel that they have gone through a fair recruitment and selection process.

Within postgraduate specialty training in the UK, Patterson et al. have used a model of organisational justice theory to evaluate candidate reactions to the selection processes used in that setting [19]. Candidates consistently viewed the high-fidelity selection methods as more job-related and fairer. The authors advised recruiters to systematically compare perceptions of the fairness and job relevance of the various selection methods they were using in their own setting. Within the broader selection literature, candidates’ preferences for selection techniques have been shown to be interviews, followed by work samples, resumes and tests [20],[21].

We have argued that properly conducted selection systems are in the best interest of both the candidate and the organisation [22]. However, so far there has been little theoretically informed research in postgraduate specialty training to determine to what extent candidate reactions are well aligned with a selection process whose purpose is to identify the best candidates for the positions available.

Research context

The details of our research context are published elsewhere [23],[24], and are summarised here. The Australian General Practice Training (AGPT) program introduced a National Assessment Centre (NAC) approach to General Practice (GP) training in 2011. In 2013, 17 regional training providers (RTPs), alongside General Practice Education and Training (GPET), ran 14 NACs across Australia. Two assessments were used in the NAC for candidates who were eligible to work in Australia: the observed multiple mini interview (MMI) and the written situational judgement test (SJT).

The MMI is an interview format that uses many short, independent assessments, each scored by a single trained interviewer. MMIs have been used to assess non-cognitive characteristics of postgraduate medical trainees in the United Kingdom (UK) [25], Canada [15] and Australia [23]. Early findings suggest that the MMI is a useful format for the selection of junior doctors into specialty training.

SJTs are a written assessment format also used to test non-cognitive characteristics. They involve authentic, hypothetical scenarios requiring the individual to identify the most appropriate response, or to rank the responses in the order they feel is most effective. Evidence supporting the validity and reliability of the SJT as a shortlisting tool in postgraduate selection has prompted its introduction into the selection process of several medical specialties within the UK. As relatively low-resource assessments, SJTs are claimed to be cost-efficient compared with resource-intensive assessments of non-cognitive attributes [13] like the MMI.

The NAC assessments had been blueprinted against the expected competencies of entry-level registrars in six domains of practice set out by the two professional colleges in Australia (the Royal Australian College of General Practitioners and the Australian College of Rural and Remote Medicine). These domains were: communication and interpersonal skills; clinical reasoning, analytical and problem-solving skills; organisational/management skills; sense of vocation/motivation; personal attributes (including the capacity for self-reflection and awareness of the impact of cultural issues on the delivery of primary health care); and professional/ethical attributes.

In 2013, candidates attended one of 14 NACs, where they sat a 100-minute, 50-question Situational Judgement Test (SJT) and a six-station Multiple Mini Interview (MMI). Candidates were assigned an AGPT ranking band score based on their MMI and SJT scores, weighted 50:50. Those candidates with a sufficient NAC total score (combined SJT and MMI scores) were passed to their preferred Regional Training Provider (RTP). Upon review of candidate scores, RTPs had the option of accepting candidates based on their scores alone, or of conducting an additional round of interviews and reviewing referee reports if desired. Data from the RTP-led process were not part of the NAC process, nor of this research.
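The equal weighting and ranking step described above can be sketched as follows. This is a minimal illustration only: the candidate identifiers, individual scores, and the normalise-then-average approach are hypothetical assumptions for the example, not the actual AGPT scoring algorithm.

```python
# Illustrative sketch (hypothetical data): combining SJT and MMI results
# into a single NAC total with a 50:50 weighting, then ranking candidates
# in descending merit order.

def nac_total(sjt_score, sjt_max, mmi_score, mmi_max):
    """Normalise each component to a percentage and weight them equally."""
    sjt_pct = 100.0 * sjt_score / sjt_max
    mmi_pct = 100.0 * mmi_score / mmi_max
    return 0.5 * sjt_pct + 0.5 * mmi_pct

# Hypothetical candidates: (id, SJT score out of 50 questions,
# MMI score out of 42, i.e. six stations x seven-point scale).
candidates = [
    ("A", 38, 30),
    ("B", 42, 25),
    ("C", 35, 36),
]

ranked = sorted(
    candidates,
    key=lambda c: nac_total(c[1], 50, c[2], 42),
    reverse=True,
)
for cand_id, sjt, mmi in ranked:
    print(cand_id, round(nac_total(sjt, 50, mmi, 42), 1))
```

Normalising each component before averaging ensures the 50:50 split holds even though the two assessments have different maximum scores.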

MMI marking criteria for each non-cognitive domain (for example, vocation/motivation) included the scope of the desired behaviours (for example, enthusiasm for a career in general practice or dealing with an angry patient), which were to be marked using a seven-point rating scale. This ranged from 1 (unsuitable/does not meet criterion), through 4 (meets criterion), to 7 (meets criterion to a superior degree). For each anchor, descriptors were provided to indicate examples of the ways in which candidates might meet the criteria in the interview. Interviewers were encouraged to write sufficient notes on the marking sheet to justify their decision, particularly if the candidate did not meet the criteria.

Theoretical research framework

Typically, a major concern of any assessment system is its validity, and there are many models widely used in medical education, for example van der Vleuten’s utility index, which explores reliability, validity, educational impact, acceptability, feasibility and cost [26]. A recognised limitation of this model is that test takers are merely asked to what extent they find a given assessment acceptable. Accordingly, there has been interest in evidencing additional forms of validity to demonstrate the utility of a selection system. Social validity was originally described by Wolf (1978) to inform the development of better systems and measures to determine whether society was accomplishing the objectives of any particular social intervention [27]: in particular, the social significance of the goals, the social appropriateness of the procedures, and the social importance of the effects of the intervention. Social validity in the context of selection was further developed by Schuler (1993) to consider the social impact of selection technology on participants in the selection process [28]. Schuler’s theory focused on the extent to which candidates developed both positive and negative perceptions of fairness in the way they experienced the selection system. A social validity framework, according to Schuler (1993), invites the individual candidate to reflect on how they are personally impacted during the selection process, and has four key features that represent a fair and acceptable process to candidates [28]:

  1. Provision of relevant information regarding the job and the organisation.

  2. Opportunity to practice and display relevant knowledge and skills.

  3. Transparency of the selection process and selection tools.

  4. Provision of feedback regarding results.

For our study we posed the research question: what are the underlying factors which influence candidates’ perceptions of the fairness of a national assessment centre approach for selection into general practice training, viewed through the theoretical perspective of Schuler’s (1993) Social Validity Theory [28]?

Methods

In 2013, 1,930 medical practitioners who were eligible to work in Australia applied to AGPT. Of these candidates, 1,093 (56.6%) were born outside of Australia, and 606 (31.4%) obtained their primary medical qualification outside of Australia. Candidates attended one of 14 NACs across 5 states and 2 territories nationwide. As part of a systematic evaluation of the process, an anonymous candidate questionnaire was distributed to each candidate immediately following completion of the SJT and MMI at the NAC. The questionnaire included open-ended questions aimed at eliciting candidates’ perceived benefits and the most challenging aspects of being assessed by way of an NAC.

A thematic analysis of the qualitative data was conducted using Framework Analysis [29]. Coding focused on the socio-cultural influences of the experiences, interactions, and beliefs that influenced the candidates. Whilst this was initially done inductively by all four authors, in subsequent analysis of the data from the perspective of candidate fairness we noted that the emergent themes resonated with key constructs within Social Validity Theory [28]. At this point the authors discussed the value of using the theory as the conceptual framework for this paper, and subsequently developed a thematic framework, which was applied to a portion of the dataset by three authors to establish its trustworthiness and to check for new and emerging issues of importance that would extend the analysis. Subsequently, the first author coded all of the data in order to identify recurrent themes and subthemes [30]. Once data had been coded and categorised deductively into themes, the data within each theme were quantified in order to measure thematic prevalence [29].
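The final quantification step can be sketched as follows. This is an illustrative example only: the theme labels and coded responses are hypothetical and do not reproduce the study’s actual coding frame, but the counting logic mirrors how the n/886 (%) prevalence figures in the Results were derived.

```python
# Illustrative sketch (hypothetical coded data): once free-text responses
# have been deductively coded into themes, thematic prevalence is the
# share of respondents whose comments mention each theme.
from collections import Counter

# Each respondent is represented by the set of themes coded in their comments.
coded_responses = [
    {"information", "transparency"},
    {"participation"},
    {"information"},
    {"feedback", "participation"},
]

counts = Counter(theme for resp in coded_responses for theme in resp)
n = len(coded_responses)
for theme, k in counts.most_common():
    print(f"{theme}: {k}/{n} ({100 * k / n:.0f}%)")
```

Using sets per respondent means a theme is counted once per candidate, however many times it recurs within one candidate’s comments.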

Ethical considerations

The University of Sydney Human Research Ethics Committee approved the research. All candidates were reassured that data was strictly de-identified to protect participant privacy.

Results

Data were available from 886 of the 1,930 candidates, meaning that nearly half (46%) of all candidates provided constructive comments about the NAC. Candidates’ responses to open-ended questions regarding the most beneficial and most challenging aspects of being assessed via the NAC selection process are summarised in Table 1, and were mapped to the conceptual framework of Social Validity Theory [28].

Table 1 Candidate responses to open ended questions regarding their perceptions of the selection process (n = 886)

Provision of relevant information regarding the job and the organisation

A detailed handbook explaining the selection and training process was provided by the AGPT and was downloadable from their website. However, our data suggest that candidates wanted more specific types of information in preparation for attendance at the NAC. For example, although candidates thought the SJT and MMI sample questions provided by GPET prior to these assessments were relevant, they would have liked access to a preparation guide and additional example questions, with 58/886 (7%) of survey respondents echoing the sentiment, “You didn’t have an idea of what to expect until you got here”.

Interestingly, 71/886 (8%) of respondents found that their experience at the NAC provided them with insight into and understanding of what is required to work in general practice, and that the experience of the selection process itself would help them to apply again in the future if they were unsuccessful. This supports the idea that more detailed information regarding the requirements of a general practice registrar position, giving a realistic preview of the job, should be provided to candidates; realistic previews are thought to be related to higher performance and lower attrition rates [31]. However, it also suggests that simply providing passive information is insufficient. Rather, candidates are learning about general practice through the selection process as a form of situated learning [32]. Thought needs to be given to the educational design of candidate orientation to the selection process; information regarding a career in general practice; the materials which support understanding of the candidate pathway through the selection process; and supporting materials and guidance on the types of feedback available, both during the process and on notification of the outcome of the application.

Opportunity to participate and display relevant knowledge and skills

Candidate perceptions of biases within interview questions were apparent, depending upon candidates’ experience, culture, or whether English was their first language. Some candidates found it difficult to respond to the MMI questions due to their lack of relevant experience in an Australian general practice setting, with 83/886 (9%) noting difficulty in answering some questions because of this. It is possible that less experienced candidates, and also international medical graduates, some of whom were trained as specialists in their country of origin, may perceive that there are biases within the questions. Local candidates may be better equipped to answer the MMI questions because of their relevant cultural experience. For example, an international medical graduate may lack the experience of local candidates that would provide insight into and understanding of the practices of local GPs. Some found it difficult to respond to the MMI when they had not had experience practising as a GP: “Very GP focused in some ways. Not all of us have had GP rotations yet”. Interns, for example, are only hospital focused, not hospital and community focused. It follows that candidates may have underlying concerns that they are being scored by interviewers for their readiness to practise, that is, their ability to commence work, rather than, as was intended by the selection process, for their trainability.

Although most candidates did not comment on the SJT, a significant minority of candidates mentioned that the SJT questions were more difficult to respond to than the MMI questions, with 156/886 (18%) commenting that they found the SJT questions to be “vague” and “confusing”, suggesting that SJT questions were not representative of the candidates’ broader experiences or their level of experience.

Candidates [380/886 (43%) of survey respondents] also commented on their difficulty in responding to the SJT questions because of their ambiguous nature, and in completing the SJT within the given timeframe, particularly if English was not their first language. English-language difficulties have previously been reported with regard to entrance exam outcomes for international medical graduates (IMGs) [33].

It appeared that candidates placed greater value on the MMI questions and the face to face opportunity for interactions with interviewers, with 384/886 (43%) of survey respondents commenting that the interviews allowed them to “explain and discuss answers”, and “highlight past skills and experience”. Candidates felt that they were able to display relevant attributes other than knowledge and clinical skills, with 62/886 (7%) of survey respondents commenting that they were assessed “on different skills both personal and professional”.

Transparency of the selection process and the selection tools

Candidates generally found the national assessment process to be fair, with 186/886 (21%) of survey respondents conveying that they considered it to be “fair across all criteria and all states” and “standardised”. International medical graduates mostly perceived the process as fair. Their concerns were largely about their own experience and, where relevant, their command of the English language.

Candidates [189/886 (21%) of survey respondents] found the MMI questions quite broad, and commented on their difficulty in interpreting them and developing appropriate responses. Candidates would have liked more guidance and prompting from the interviewers in order to help them stay on track. This resonates with findings elsewhere advocating standardised, structured prompting within interviews, and the need for adequate interviewer training [34].

Provision of feedback regarding results

At the time of survey distribution and completion, candidates appeared more concerned with immediate formative feedback than with summative feedback. Many candidates expressed a desire for formative feedback, with 52/886 (6%) commenting, “The MMI stations were vague with no immediate feedback”. Although performance feedback provided to RTPs was necessary to inform ranking, candidates did not appear to have considered whether summative feedback could be used to inform their future training and practice if successful, or to improve future NAC performance if unsuccessful. Provision of feedback can offer a valuable method to enrich candidates’ performance and learning experience. By providing candidates with feedback, it is possible that the gap between actual and desired performance in the future can be narrowed [35]. However, the perception of the quality of feedback is important in eliciting a positive attitude towards change [36]. It is therefore important that, if feedback is to be given, consideration is given to the quality and value of both the formative and summative feedback that could potentially be provided to candidates.

Discussion

Our data provide evidence that a national selection centre for selecting doctors to enter general practice has a modest degree of social validity [28]. In general, the NAC process was perceived by candidates to be fair and transparent. Our data show that candidates found the NAC experience itself informative about a job as a GP registrar and about a career in general practice, but wanted to be provided with more information in preparation for their attendance at the NAC. Candidates valued being able to convey a range of skills and experience during the MMIs, but found some difficulty in interpreting the questions, and were concerned that a lack of relevant experience in general practice might have inhibited their performance. In particular, candidates would like to be provided with more information about the MMI prior to the assessment, more guidance and prompts to answer the questions, and more opportunity to demonstrate their experiences. Candidates also expressed concerns about the time limits within the SJT, particularly if English was not their first language. Candidates appeared to have little expectation regarding feedback about the decision to progress, or not, to the next round at the RTPs. However, candidates did express concerns about their ability to gauge their own performance during the MMI interview process, and would appreciate more immediate feedback.

Social Validity Theory [28] offers a useful framework to assess candidates’ perceptions of their experience during the selection process conducted by AGPT and the RTPs. Our data accord with Schuler’s (1993) contention that the first three elements (information, participation and transparency) are the most highly regarded by candidates [28]. That candidates appeared to be less concerned with the element of feedback resonates with the findings of others [28]. We now consider each element in more detail.

Provision of relevant job related information

Provision of information refers to the degree to which the candidate perceives the information provided by the organisation as useful [28]. Maintaining clear and open communication is important in ensuring that candidates feel they have been treated fairly and humanely [6],[2],[37]. Support to candidates can be provided throughout the recruitment process by ensuring candidates are able to ask questions and obtain information as required [6],[37]. Results from our study suggest that the need for provision of information to candidates prior to the assessment activities should not be underestimated. Furthermore, our results validate the need for a selection method that is closely aligned with the expected competencies of the job for which it is selecting.

Opportunity to participate

Opportunity to participate refers to the extent to which the candidates feel they have the opportunity to display their knowledge and skills to their potential employer [6],[37]. If candidates feel that they have been assessed on criteria that are irrelevant to the job, they feel they have forgone an opportunity to do this [6]. Our results suggest that, generally, candidates appreciated the inclusion of MMIs in the assessment process, as it allowed them to portray their personal and professional attributes. However, candidates with fewer opportunities to gain Australian-specific general practice experience, such as interns, medical practitioners without general practice training, and international medical graduates, may perceive some of the experience-based questions posed within both the SJT and the MMI as unfair. This is important given international concerns that international medical graduates and locally based ethnic minorities may be discriminated against in assessments for the right to practise [18]. It is likely that selection organisations will need to scrutinise their data using a range of methods to assure that processes are culturally appropriate and focus on candidates’ ability to do the complete job [17].

Transparency

Transparency refers to the extent to which the candidates feel the selection method is clear and standardised for all participants. Openness in terms of selection process and procedures can help to increase the candidates’ perception of fairness as well as organisational attractiveness [37]. Candidates want a level playing field to ensure an opportunity to be assessed fairly [10],[38]. The organisation and implementation of MMIs and SJTs at the NACs was standardised across all sites, and candidates generally found the application process to be fair and equitable. AGPT makes the point that the purpose of the NAC is to provide a ranked list of competent candidates for local RTPs, who then make the placement with the local GP supervisors in a locally determined way. However, it is noteworthy that beyond the NAC process, procedures at RTPs are less standardised, which caused concerns for candidates. Although beyond the responsibilities of the NAC, there is great variation in what RTPs are doing regarding selection following receipt of candidates’ rankings. That each RTP conducts secondary processes, including re-interviewing, in various ways and bases decisions on a variety of contributing factors might detract from the NAC’s social validity, if the candidates’ perception of fairness decreases as the selection process progresses. Individual RTPs are encouraged to embrace the notion of social validity [25] in their individual procedures.

Feedback

Feedback has been defined as “specific information about the comparison between a trainee’s observed performance and a standard, given with the intent to improve the trainee’s performance” [39]. Candidates reported that, although they would have liked to, they did not receive any formative feedback on their performance. They also gave no indication of expecting to receive summative feedback, even if they were successful. There was a considerable wealth of material available from the interview process itself, including marks and in-depth qualitative comments on interview performance. However, one consideration is that providing summative feedback may provide an unfair advantage to subsequent applications by the same candidate. Perceptions of fairness have been linked to litigation [17],[40], and the question needs to be asked whether providing feedback would affect litigation, and what type of feedback should be given, if any.

Regarding formative feedback, Anderson (2011) recommends that candidates be provided with regular opportunities for verbal feedback during testing. Regarding summative feedback, Anderson (2011) suggests that allowing candidates to review their test scores, and the introduction of a standardised appeal process within the selection system, may reduce the possibility of litigation by disgruntled candidates [8]. It should be noted that candidates do have the opportunity to ask for feedback on their performance, though they cannot review their assessment papers. In the Parliament of the Commonwealth of Australia’s 2012 report on the inquiry into registration processes and support for International Medical Graduates, it was recommended that the Australian Medical Council provide a “detailed level of constructive written feedback for candidates who have undertaken the Australian Medical Council’s Structured Clinical Examination” [41]. It may therefore be timely for all relevant stakeholders to give careful consideration to the issue of feedback in relation to selection into specialty training.

Implications

Social validity theory offers a useful theoretical framework for exploring candidates’ perceptions of procedural fairness [28]. The key elements of this theory relate to organisational attractiveness, intentions to recommend the selection process to others, and job acceptance [6],[20],[42]. However, within the healthcare professional literature there are existing typologies of assessment validity [43], in addition to Van der Vleuten’s (1996) notion of utility [26]. Typically these describe content, construct, and criterion validities, with criterion validity split into concurrent and predictive depending on the timing of the studies. Validity can also be described as a unitary concept, which captures the reasonableness of any assessment strategy and how this is demonstrated [43]. Accordingly, medical educators would need to determine the reasonableness of a selection strategy that has located selection within a gatekeeping role in preparing a workforce to provide safe, just and effective health care. In developing applications of organisational justice within selection into postgraduate training, Patterson et al. [44] have introduced the notion of political validity. This describes the extent to which varied groups of interested stakeholders feel that any selection system meets their basic requirements and will yield valid results before the assessment is even administered, often without ever seeing the assessment itself or any evidence of its psychometric soundness [45]. The question arises as to what extent notions of social validity [25], with their focus on the candidates’ perspective, fit within existing frameworks of validity, such as Downing’s [43], as they relate to selection.

In our study, the least met aspect of social validity of the NAC approach was candidates’ minimal expectation of receiving feedback. From an institutional perspective, the issue of whether to give feedback within selection processes for a limited number of places is problematic [46]. Complaints are more likely where unsuccessful candidates are handled poorly; however, institutions should have little to fear if they have met the criteria for social validity in their recruitment process. There is also the issue of whether assessors are competent to give good written narrative feedback after a complex performance assessment such as the MMI [47]. Selection committees could learn much from work done in the work-based assessment sphere [48], where there has been a call to improve training in the giving of feedback that is specific to the assessee’s professional development. However, the purpose of feedback in assessment centres has to be clearly defined: for successful candidates, feedback would be helpful for training, whereas for unsuccessful candidates it may be helpful for re-applying or considering other career choices.

Limitations of the study

We believe this is the first study of its kind to explore the experiences of candidates of a National Assessment Centre approach in which the non-cognitive tests of an MMI and an SJT have been used to determine suitability for specialist training. Inevitably in qualitative studies, the context of the field of study may not be generalisable to other settings. The immediacy of data collection to the NAC process added to the authenticity of the data collected, but limited what and how much data it was possible to collect from candidates without adding a burden to what was already a stressful, high-stakes selection process. We accept that in quantifying thematic prevalence we drew on fewer than 50% of candidates, but consider this sufficiently representative of the whole cohort.

Due to the questionnaire being anonymous, we were unable to link feedback of the process with candidate performance.

Conclusion

During any job selection process, not only is the organisation assessing the candidates, but the candidates are also assessing the organisation. We used Social Validity Theory as a framework with which to interpret candidates’ perceptions of the fairness of the selection process. In the context of a national selection process, candidates generally felt treated with dignity and respect, felt involved and able to participate, and found the selection processes fairly transparent and unambiguous, but were equivocal about the likelihood of receiving feedback on their performance. A focus on the candidate experience throughout an organisation’s selection process may provide benefits to both candidates and the organisation, regardless of whether or not candidates secure the job. Social Validity Theory is a useful addition to the methods for demonstrating the reasonableness of any selection system.

References

  1. Roberts C, Alwan IA, Prideaux D, Tekian A: Developing the science of selection into the healthcare professions and speciality training within Saudi Arabia and the Gulf region. J Health Special. 2013, 1: 2. 10.4103/1658-600X.114684.

  2. Bauer TN, Truxillo DM, Sanchez R, Craig J, Ferrara P, Campion MA: Development of the Selection Procedural Justice Scale (SPJS). Pers Psychol. 2001, 54: 378-419.

  3. Spurgeon A, Jackson CA, Beach JR: The Life Events Inventory: re-scaling based on an occupational sample. Occup Med. 2001, 51 (4): 287-293. 10.1093/occmed/51.4.287.

  4. Smither JW, Reilly RR, Millsap RE, Pearlman K, Stoffey RW: Applicant reactions to selection procedures. Pers Psychol. 1993, 46: 49-76. 10.1111/j.1744-6570.1993.tb00867.x.

  5. Hülsheger UR, Anderson N: Applicant perspectives in selection: going beyond preference reactions. Int J Sel Assess. 2009, 17: 335-345. 10.1111/j.1468-2389.2009.00477.x.

  6. Gilliland SW: The perceived fairness of selection systems: an organizational justice perspective. Acad Manag Rev. 1993, 18: 694-734.

  7. Macan TH, Avedon MJ, Paese M, Smith DE: The effects of applicants’ reactions to cognitive ability tests and an assessment center. Pers Psychol. 1994, 47: 715-738. 10.1111/j.1744-6570.1994.tb01573.x.

  8. Anderson N: Toward a model of applicant propensity to case initiation in selection (Invited Distinguished Scholar Series keynote paper). Int J Sel Assess. 2011, 19: 229-244. 10.1111/j.1468-2389.2011.00551.x.

  9. Ford D, Truxillo DM, Bauer TN: Rejected but still there: shifting the focus to the promotional context. Int J Sel Assess. 2009, 17: 402-416. 10.1111/j.1468-2389.2009.00482.x.

  10. Truxillo DT, Fraccaroli F: A person-centred work psychology: changing paradigms by broadening horizons. Indust Organ Psychol. 2011, 4: 102-104. 10.1111/j.1754-9434.2010.01304.x.

  11. Provan JL, Cuttress L: Preferences of program directors for evaluation of candidates for postgraduate training. CMAJ. 1995, 153 (7): 919-923.

  12. Ahmed H, Rhydderch M, Matthews P: Can knowledge tests and situational judgment tests predict selection centre performance?. Med Educ. 2012, 46 (8): 777-784. 10.1111/j.1365-2923.2012.04303.x.

  13. Patterson F, Ashworth V, Zibarras L, Coan P, Kerrin M, O’Neill P: Evaluations of situational judgement tests to assess non-academic attributes in selection. Med Educ. 2012, 46 (9): 850-868. 10.1111/j.1365-2923.2012.04336.x.

  14. Lievens F, Patterson F: The validity and incremental validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations for predicting job performance in advanced-level high stakes selection. J Appl Psychol. 2011, 96 (5): 927-940. 10.1037/a0023496.

  15. Dore KL, Kreuger S, Ladhani M, Rofson D, Kurtz D, Kulasegaram K, Cullimore AJ, Norman GR, Eva KW, Bates S: The reliability and acceptability of the Multiple Mini-Interview as a selection instrument for postgraduate admissions. Acad Med. 2010, 85 (10): S60-S63. 10.1097/ACM.0b013e3181ed442b.

  16. Kumar K, Roberts C, Rothnie I, du Fresne C, Walton M: Experiences of the multiple mini-interview: a qualitative analysis. Med Educ. 2009, 43 (4): 360-367. 10.1111/j.1365-2923.2009.03291.x.

  17. Viswesvaran C, Ones DS: Importance of perceived personnel selection system fairness determinants: relations with demographic, personality, and job characteristics. Int J Sel Assess. 2004, 12 (1/2): 172-186. 10.1111/j.0965-075X.2004.00272.x.

  18. Esmail A, Roberts C: Academic performance of ethnic minority candidates and discrimination in the MRCGP examinations between 2010 and 2012: analysis of data. BMJ. 2013, 347: 1-10. 10.1136/bmj.f5662.

  19. Patterson F, Zibarras L, Carr V, Irish B, Gregory S: Evaluating candidate reactions to selection practices using organisational justice theory. Med Educ. 2011, 45: 289-297. 10.1111/j.1365-2923.2010.03808.x.

  20. Hausknecht JP, Day DV, Thomas SC: Candidate reactions to selection procedures: an updated model and meta-analysis. Pers Psychol. 2004, 57: 639-683. 10.1111/j.1744-6570.2004.00003.x.

  21. Anderson N, Salgado JF, Hülsheger UR: Applicant reactions in selection: comprehensive meta-analysis into reaction generalization versus situational specificity. Int J Sel Assess. 2010, 18: 291-304. 10.1111/j.1468-2389.2010.00512.x.

  22. Ryan AM, Huth M: Not much more than platitudes? A critical look at the utility of candidate reactions research. Hum Resour Manag Rev. 2008, 18: 119-132. 10.1016/j.hrmr.2008.07.004.

  23. Roberts C, Tongo JM: Selection into specialist training programs: an approach from general practice. Med J Aust. 2011, 194 (2): 93.

  24. Roberts C, Clark T, Burgess A, Frommer M, Grant M, Mossman K: The utility of the Multiple-Mini-Interview within a National Assessment Centre for specialty training. BMC Med Educ. 2014, 14: 169. 10.1186/1472-6920-14-169.

  25. Humphrey S, Dowson S, Wall D, Diwakar V, Goodyear HM: Multiple mini-interviews: opinions of candidates and interviewers. Med Educ. 2008, 42 (2): 207-213. 10.1111/j.1365-2923.2007.02972.x.

  26. Van der Vleuten CPM: The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ. 1996, 1: 41-67. 10.1007/BF00596229.

  27. Wolf MM: Social validity: the case for subjective measurement or how applied behavior analysis is finding its heart. J Appl Behav Anal. 1978, 11: 203-214. 10.1901/jaba.1978.11-203.

  28. Schuler H: Social validity of selection situations: a concept and some empirical results. In Personnel Selection and Assessment: Individual and Organizational Perspectives. Edited by: Schuler H, Farr JL, Smith M. 1993, Lawrence Erlbaum Associates, Hillsdale, NJ.

  29. Braun V, Clarke V: Using thematic analysis in psychology. Qual Res Psychol. 2006, 3 (2): 77-101. 10.1191/1478088706qp063oa.

  30. Ritchie J, Spencer L: Qualitative data analysis for applied policy research. In Analyzing Qualitative Data. Edited by: Bryman A, Burgess R. 1994, Routledge, London, 172-194.

  31. Phillips JM: Effects of realistic job preview on multiple organizational outcomes: a meta-analysis. Acad Manag J. 1998, 41 (6): 673-690. 10.2307/256964.

  32. Lave J, Wenger E: Situated Learning: Legitimate Peripheral Participation. 1991, Cambridge University Press, Cambridge, UK.

  33. Hawthorne L: International medical migration: what is the future for Australia?. MJA Open. 2012, 1 (Suppl 3): 18-21.

  34. Van der Zee KI, Bakker AB: Why are structured interviews so rarely used in personnel selection?. J Appl Psychol. 2002, 87 (1): 176-184. 10.1037/0021-9010.87.1.176.

  35. Taras M: Summative and formative assessment – some theoretical reflections. Br J Educ Stud. 2005, 53: 466-478. 10.1111/j.1467-8527.2005.00307.x.

  36. Cantillon P, Sargeant J: Giving feedback in clinical settings. BMJ. 2008, 337: a1961. 10.1136/bmj.a1961.

  37. Truxillo DM, Bodner T, Bertolino M, Bauer TN, Yonce C: Effects of explanations on applicant reactions: a meta-analytic review. Int J Sel Assess. 2009, 17: 346-361. 10.1111/j.1468-2389.2009.00478.x.

  38. Wiesenfeld BM, Swann WB, Brockner J, Bartel CA: Is more fairness always preferred? Self-esteem moderates reactions to procedural justice. Acad Manag J. 2007, 50 (5): 1235. 10.2307/20159922.

  39. Van den Berg I, Admiraal W, Pilot A: Peer assessment in university teaching: evaluating seven course designs. Assess Eval High Educ. 2006, 31 (1): 19-36. 10.1080/02602930500262346.

  40. Goldman BM: Toward an understanding of employment discrimination claiming: an integration of organizational justice and social information processing. Pers Psychol. 2001, 54: 361-387. 10.1111/j.1744-6570.2001.tb00096.x.

  41. The Parliament of the Commonwealth of Australia: Lost in the Labyrinth: Report on the Inquiry into Registration Processes and Support for Overseas Trained Doctors. House of Representatives Standing Committee on Health and Ageing. 2012, Canberra, Australia.

  42. Chapman DS, Uggerslev KL, Carroll SA, Piasentin KA, Gilliland SW: Effects of procedural and distributive justice on reactions to a selection system. J Appl Psychol. 1994, 79: 691-701. 10.1037/0021-9010.79.5.691.

  43. Downing SM: Validity: on meaningful interpretation of assessment data. Med Educ. 2003, 37 (9): 830-837. 10.1046/j.1365-2923.2003.01594.x.

  44. Patterson F, Lievens F, Kerrin M, Zibarras L, Carette B: Designing selection systems for medicine: the importance of balancing predictive and political validity in high stakes selection contexts. Int J Sel Assess. 2012, 20: 486-489. 10.1111/ijsa.12011.

  45. Miles CA, Lee C: In search of soundness in teacher testing: beyond political validity. 2002, Paper presented at the American Educational Research Association, New Orleans, LA.

  46. Dale M: A Manager’s Guide to Recruitment and Selection (MBA Masterclass Series). 2003.

  47. Govaerts MJB, Van de Wiel MWJ, Van der Vleuten CPM: Quality of feedback following performance assessments: does assessor expertise matter?. Eur J Train Dev. 2013, 37 (1): 105-125. 10.1108/03090591311293310.

  48. Vivekananda-Schmidt P, Marshall M, Stark P, McKendree J, Sandars J, Smithson S: Lessons from medical students’ perceptions of learning reflective skills: a multi-institutional study. Med Teach. 2011, 33 (10): 846-850. 10.3109/0142159X.2011.577120.

Acknowledgments

We wish to thank Robert Hale and Marcia Grant of GPET, and acknowledge all the staff of the RTPs who worked so hard to ensure that the data collection was efficient. Finally we wish to acknowledge the contribution of the external consultants Dr Julie West, and Professor Fiona Patterson who oversaw the development of the MMI and SJT respectively.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Annette Burgess.

Additional information

Competing interests

Payment was received from GPET by Sydney Medical School for evaluation and reporting of the 2013 national selection process.

Authors’ contributions

CR conceived of the research question and research methods. AB conducted the literature review, took a principal lead in the data analysis, development of the theoretical framework and interpretation, and wrote the first draft. TC managed the data collection, supported by KM. All authors were involved in data interpretation, critical review of the manuscript, and approving the final version.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Burgess, A., Roberts, C., Clark, T. et al. The social validity of a national assessment centre for selection into general practice training. BMC Med Educ 14, 261 (2014). https://doi.org/10.1186/s12909-014-0261-6
