Assessing fitness-to-practice of overseas-trained health practitioners by Australian registration & accreditation bodies
- Brett Vaughan,
- Vivienne Sullivan,
- Cameron Gosling,
- Patrick McLaughlin,
- Gary Fryer,
- Margaret Wolff and
- Roger Gabb
© Vaughan et al.; licensee BioMed Central Ltd. 2012
Received: 19 September 2012
Accepted: 27 September 2012
Published: 29 September 2012
Assessment of the fitness-to-practice of health professionals who trained overseas and wish to practice in Australia is undertaken by a range of organisations, using a range of methods. However, very little has been published about how these organisations conduct their assessments. The purpose of the current paper is to investigate the methods of assessment used by these organisations and the issues associated with conducting these assessments.
A series of semi-structured interviews was undertaken with a variety of organisations who undertake assessments of overseas-trained health professionals who wish to practice in Australia. Content analysis of the interviews was used to identify themes and patterns.
Four themes were generated from the content analysis of the interviews: (1) assessing; (2) processing; (3) examining; and (4) cost-efficiency. The themes were interconnected and each also had a number of sub-themes.
The organisations who participated in the present study used a range of methods to assess overseas-trained health professionals. They also highlighted a number of issues, particularly relating to examiners and to process, both pre- and post-assessment. Organisations demonstrated an appreciation for ongoing review of their assessment processes and for incorporating evidence from the literature to inform their processes and assessment development.
Assessment of fitness-to-practice in a jurisdiction is commonplace where a person trained in one country wishes to practice in another. These assessments take many forms and are designed to assess the competency or the capability of the practitioner. The overarching role of the assessment is to protect the public from practitioners who are not competent [1–3].
In Australia, the assessment of overseas-trained health professionals who wish to practice in Australia (and in some cases New Zealand) rests with the health professional accreditation bodies or professional associations. In the case of the accreditation bodies, this role is assigned by regulators under the Health Practitioner Regulation National Law Act (2009). Each accreditation body is often also charged with assessing the suitability of pre-registration university programs for its profession.
The assessments undertaken by these bodies vary depending on the competencies and capabilities set out for each profession, with the methods of assessment chosen to ensure that a range of these competencies and capabilities is assessed [3, 5], often in multiple ways. It may be, however, that the assessments measure those competencies or capabilities that are easily assessed, and omit those that are not. The purpose of these assessments is to protect patients. Therefore, the methods chosen to assess candidates should be reliable, valid, feasible and acceptable [7, 8]. In this way patients are exposed only to practitioners deemed competent to practice in that profession. It is also important that assessment methods are continuously reviewed as part of quality assurance processes.
Whilst very little has been published on the methods of assessment and the issues surrounding the assessment of overseas-trained health practitioners in Australia, there has been some discussion of the political and workforce issues (e.g., complex procedures, direct and indirect discrimination, poor provision of information) surrounding international medical graduates wishing to practice in Australia [9–13]. Examples of assessment methods used in licensing examinations can, however, be found throughout the literature.
Pharmacists seeking to practice in Ontario, Canada undertake a Prior Learning Assessment (PLA) that assesses the candidate’s learning through both formal and informal education. The PLA used in this context comprises both an assessment of documentation (transcripts, portfolios, etc.) and performance in an Objective Structured Clinical Examination (OSCE).
The OSCE format is used widely in fitness-to-practice assessments. Austin et al. have described the development of an OSCE for pharmacy graduates, with Munoz et al. presenting data on the reliability, validity and generalisability of the examination. Austin et al. suggest that standardised English-language proficiency tests (e.g. IELTS) may not adequately capture the cultural competency and communication skills required for pharmacy practice, even though these are particularly important criteria where there is diversity in the candidates’ English language proficiency.
This suggests that assessment of English language proficiency should form part of the assessment process, and that organisations should not rely solely on standardised English-language tests. In addition to English-language assessment, Archer suggests that assessment of psychosocial skills should form part of a licensing examination. These communication and psychosocial skills are quite relevant, as Tamblyn et al. have demonstrated that patient-practitioner communication and clinical decision making during fitness-to-practice assessments correlate with complaints to professional regulation bodies.
The aim of the current paper is to identify the methods of assessment used by those bodies that undertake the assessment of overseas trained health professionals in Australia, and to identify the issues surrounding these assessments and how these issues are managed.
1. How do you determine the initial eligibility of an overseas trained practitioner to undertake further assessment? Why?
2. How does your organisation assess:
a. Basic sciences?
b. Taking a clinical history?
c. Doing a clinical assessment?
d. Diagnosis & clinical reasoning?
f. Aftercare and follow-up?
g. Practitioner-client (osteopath-patient) communication?
h. Commitment to continuous improvement and professional development?
i. Knowledge of the Australian health system?
j. Collaboration with other health professionals?
3. How does your organisation set the standard (pass mark) for each assessment task?
4. How does your organisation select assessors/examiners? Do you require assessors/examiners to undergo training?
5. How does your organisation review the performance of your assessment processes for overseas trained practitioners?
6. What do you believe are the strengths of your organisation’s assessment processes for overseas trained practitioners?
7. What do you believe are the weaknesses of your organisation’s assessment processes for overseas trained practitioners?
8. How do you manage risks to the public and to the profession in the assessment of overseas trained practitioners?
9. Does your organisation use supervised practice as part of the assessment of overseas trained practitioners?
10. How does your organisation deal with candidates who fail part of the assessment process?
11. How cost-efficient are your organisation’s processes for assessing overseas trained practitioners? How significant are the expenditure and the income associated with the assessment of overseas trained practitioners in your organisation’s annual budget?
12. Does your organisation automatically recognise graduates from any overseas training programs?
The study was approved by the Victoria University Faculty of Health, Engineering and Science Human Research Ethics Committee.
Thirteen (N=13) professional bodies who assess the fitness-to-practice of overseas-trained health professionals wishing to practice in Australia were approached to participate in interviews exploring their current assessment practices.
The interviews were conducted by a researcher (VS) experienced in conducting interviews for research.
All interviews were audio-recorded and transcribed verbatim. Notes were taken during interviews to include any relevant non-verbal cues and to assist with data transcription (e.g., when the quality of the recording was compromised by background noise). Participants were sent a copy of their transcribed interviews and were asked to make any necessary changes (e.g., if the researcher had misheard a statement) and/or add any additional comments.
A primarily directed approach to content analysis, conducted in NVivo (QSR International, Victoria, Australia), was used to select and focus data from the transcriptions and notes [20, 21]. Based on previous research [20, 22], some themes were set prior to conducting the interviews, allowing semi-structured interview guidelines to be developed. Nonetheless, as the research was investigating a relatively sparse area, a conventional (inductive) content analysis approach was also used to identify any additional themes and categories that emerged [22, 23].
1. Australian Dental Council
2. Australian Institute of Radiography
3. Australian Medical Council
4. Australian Nursing and Midwifery Council
5. Australian Pharmacy Council
6. Australian Physiotherapy Council
7. Australian Podiatry Council
8. Australian Psychological Society
9. Council on Chiropractic Education Australasia
10. Optometry Council of Australia and New Zealand
11. Speech Pathology Australia
By paying particular attention to patterns, regularities, irregularities and propositions within the data [21, 24, 25], four interconnected themes were generated from the interview data, with each theme also generating first- and second-order sub-themes.
Assessing (Theme 1)
" …because can’t assess everything that person needs, should be knowing as a practitioner, you have to rely on their training to have given them some of it. I just don’t think that competence-only assessment is either affordable or realistic "
Although a traditional unstructured viva voce (Theme 1.1.1) or oral examination was not common practice, it was not unusual for clinical examinations to include some form of verbal questioning to assess candidates’ clinical reasoning or to assess performance criteria that may not have been covered elsewhere in the clinical examination.
" When it was factual recall they were scoring around 40% plus correct, soon as you put in a vignette they dropped below 25%. "
Clinical capability (Theme 1.1.2) was assessed using different methods. The OSCE (Theme 1.1.2.1) was often used to assess clinical skills (e.g. history taking, aftercare) and clinical reasoning. The stations did not necessarily use real or standardised patients. The long case (Theme 1.1.2.2) was commonly used to assess the performance of clinical skills. Patient selection strategies ranged from purposeful selection of patients based on age and/or medical condition to accepting walk-in patients.
" … however it is done at the end of the day if you are putting your stamp on them you have to know that they can deliver, and do no harm. "
During assessment (Theme 1.2.2), the main risk-management processes identified were: stringent confirmation of documentary evidence; vigilance when assessing areas where harm can be caused; requiring a demonstration of clinical skills and competency; training assessors in the policy and process to follow if a model or patient is at risk; transparency with candidates about safety performance indicators, policies and procedures; and running examinations well with good staff.
" … the cultural competence has become more of an issue … beyond communication you have to have an understanding of the culture of the individual you are dealing with and I don’t think we are at that stage yet … "
The main focus identified in the area of cultural competency was the ability of candidates to treat patients from culturally and linguistically diverse backgrounds, including indigenous Australians and/or patients of different ages. Cultural competency was sometimes assessed in written and clinical examinations by presenting scenarios that included cultural aspects. In examinations where real patients were included, most organisations screened the patients and did not include those with language difficulties that required an interpreter.
Another issue raised in terms of cultural competency was the possibility of culturally-influenced responses. Caution was advised when developing assessments so that candidates from different cultures are not mis-cued. Concern was also expressed about the inability of candidates to gain employment without an understanding of the Australian workplace culture (e.g., allied health support staff), particularly if competing against Australian candidates for positions.
Assessing a candidate’s communication skills (Theme 1.3.2) could include their interaction with patients, other professionals, patients’ families and particular professional environments. Some respondents noted that communication was assessed throughout the examination process, as without adequate communication skills the candidate could not complete the required tasks. Others specifically assessed communication skills, including building rapport, listening skills, and sensitivity to the client and the information gathered.
For some professional bodies communication was seen as ‘…stuff that is not easy for us to assess’. One concern was that written assessments (e.g., portfolios, MCQs) or simulation-style assessments did not allow observation of candidates’ communication skills in areas such as relationship-building with the client. One strategy to overcome this in portfolio assessment was to ask the candidates to submit a video of a treatment session or consultation. Another solution identified was a change to workplace assessment.
" … you are constructing those questions you then felt the problems between the states, and those differences you might think you have got a terrific question and then someone from SA [South Australia] will say no it’s totally different in NSW [New South Wales]. That is where you end up with a big problem. "
One solution suggested was a consensus between the states or a national standard. Although some organisations did not directly assess knowledge of the Australian health system they provided information on the system to candidates through publications or guest speakers. Another recommendation for candidates was to spend some time observing in an Australian clinical setting. This was seen as a very effective learning experience for candidates who were struggling in the area.
Processing (Theme 2)
Decisions about a candidate’s eligibility to participate in the assessment process were based on a number of criteria. Satisfactory completion of Courses and qualifications (Theme 2.1.1) was a common criterion. Prior to entry to any examination, most organisations required candidates to demonstrate that they had successfully completed an approved course deemed equivalent to studies in Australia. In some cases courses were required to be approved by specific councils or organisations in the country of training, and in other cases courses from particular countries were deemed acceptable.
Another strategy used was to recognise equivalent Examination or accreditation systems (Theme 2.1.2), coupled with at least 12 months of clinical practice in a particular environment. Such systems were recognised on the basis of documented quality and similarity in structure and standard to Australian systems.
Professional bodies required candidates to complete English language qualifications (Theme 2.1.3), either the Occupational English Test (OET) or the International English Language Testing System (IELTS). Candidates were required (or would soon be required) to gain at least a B in the OET, or a 7 in all sections of the IELTS in one sitting.
" So the offshore is conducted through a clearing house … they do all the arrangements with the off-shore venues for about 6 professions, because you know we might only have one [candidate] in Tehran but there might be two [omitted – other allied health professionals] and three [omitted – other allied health professionals] so then we’re not all paying for invigilators and things like that … I guess there are venues in most places around the world … "
" The advice from DEEWR [Department of Education, Employment & Workplace Relations] is no. Very simple and clear. On appeal we usually do, then explain a bit more clearly. We will say to them you need to do the following … "
" … [examiner] gives recommendations to the candidate and then we pass those on … has no problem with the candidates ringing and talking to them about the assessment and asking advice and things like that … "
" You know in Australia we tend to sugar-coat bad news … we’ve done away with it, successful or not successful, or you know, suitable or not suitable, because even though it is not an examination process as such, it’s an assessment. Culturally, people want to hear did I pass or fail? "
" We thought that paying them [final year students] and telling them how important it was would be enough for them to take it seriously but the exam was 3 hour duration and we made them stay for 1½ hours but we could tell that some of them didn’t - it didn’t really work - didn’t give it a really good go. "
Even so, using graduating students was seen as desirable, as the passing score for the examination was often based on the graduating Australian student level.
The basis for appeals (Theme 2.2.2) appeared to be either procedural or related to candidate impairment (e.g., feeling unwell). Some organisations only considered procedural appeals, that is, appeals asserting that the assessment process had been defective. Professional bodies worked hard to minimise the frequency of appeals by creating a comprehensive assessment blueprint linked to professional standards; having transparent processes; following guidelines in areas such as patient selection; and monitoring candidate performance by methods such as videotaping clinical assessments or recording keystrokes in computerised examinations.
Review of assessment processes (Theme 2.3) tended to be ongoing with assessors encouraged to provide feedback and assessment data analysed after each examination. Some organisations were starting to include an analysis of examiner performance in their review. Another, less common, internal review strategy was to survey candidates on their assessment experience. In most instances reviews were conducted by a committee.
Strengths of the assessment processes
· Multiple forms of assessment – fairer for candidates in terms of examination format preferences and assesses candidates on different areas of competency
· Practical examination – seeing the candidate put theory into practice
· Examination difficulty – stringent test of eligibility for registration to practice in Australia
· Multiple examiners – improves fairness of decisions
· Independent examiners – improves fairness of decisions
· Consistency across Australia and New Zealand – minimises the possibility of candidates ‘shopping’ for easier assessment
· Consistency across candidates
· Consistency across all registration applicants – as all applicants including Australian university graduates, Australian-trained professionals re-entering the profession and overseas-trained professionals have the same basis for assessment there is ‘absolute equity’ and no perception that assessment may be more difficult for overseas applicants
· Transparent – providing candidates with comprehensive information on the assessment process was seen as beneficial to reduce candidate stress and decrease the chance of candidate complaints
· Conducted over time – minimises the possible bias from a candidate who has a ‘bad day’
· Case-by-case assessment – each course and qualification considered can be examined on its merits rather than being rejected because it is not on a pre-approved list
· Rigorous process – assessment guidelines allow for cross-checking and panel referral for borderline applicants; stringent documentation checking including references; and strict examination monitoring
· No need to come to Australia – reduces the cost for candidates
· Onus on candidate to provide evidence – ensures the candidate has had to consider and reflect on the Australian standards to fulfil the assessment criteria
· Rigorous assessment development – reduces future problems by putting in high levels of effort and resources from the beginning
· Good and varied staff – enhancing examination development by engaging professionals from a variety of backgrounds who understand the assessment process and do not alienate other examiners
Weaknesses of the assessment processes
· Consistency across assessment sites – candidates share information on perceived easier assessment and this results in application drift
· Limited sampling – through snapshot assessments ‘you can only sample a certain amount in what we are looking at whether it’s the skills, domains, the presenting clinical conditions, whatever’
· Availability of venues – limited availability of high demand venues affects the efficient running of the examination
· Recruitment of real patients – although patients are recruited in advance not all present for the examination
· Lengthy process – the current process was lengthy for applicants
· Inadequate communication to candidates – need to clarify requirements and expectations to candidates
· Resource intensive – an individualised process necessarily involves a high level of work
· Limited assessment of clinical skills - no evidence required of the candidate interacting with clients
· Lack of a bridging program – no specific programs to assist candidates develop their expertise
· Limited examination preparation – candidates have no opportunity to prepare for the written examination and this especially disadvantages experienced practitioners who have been out of the education environment for some time. Candidates who are in Australia waiting for the examination have no opportunity to practise their skills
· Lack of information on performance of overseas-trained practitioners – no feedback on critical areas that overseas-trained professionals struggle with in practice to inform examination content
· No professional body membership required – inability to monitor professionals or those who were assessed in the past
· Relevance of accredited courses – courses need to be regularly reviewed, especially with professions in constant change
Possible changes to the assessment processes
· Offering restricted registration – rather than rejecting all applicants who do not meet the full criteria, a partial or limited membership (such as an academic membership) might be offered
· Adding a formal examination – relying solely on a portfolio or desktop-style assessment is ‘not objective enough in the assessment’
· Offering off-shore/internet-based assessment – to address the issue of candidates needing to come to Australia, the possibility of assessing candidates’ clinical skills overseas was raised. Options suggested included Australian assessors going overseas (dependent on candidate numbers) to work with overseas assessors, and/or webcam-based assessment
· Dealing with borderline fail – concern was voiced over borderline fail candidates and it was suggested that there needed to be an option for borderline candidates, as identified by the examiners, to be able to ‘have some provision for perhaps, just make up the work’
· Training for candidates – the lack of candidate training and/or courses was a common issue raised by professional bodies. Suggested content for such courses included cultural competency, communication skills, knowledge of the Australian health system and or upgrading skills. Issues with training candidates included the financial cost, low candidate numbers, candidates being overseas, Australian professional development courses only being open to Australian registered practitioners and professional bodies not necessarily being educational bodies so courses would need to be outsourced
· Assessment types – changing or modifying the types of assessment utilised
· New – for some professional bodies who relied solely on desktop/portfolio assessment there was a feeling that a skills-based assessment was also needed, perhaps ‘a mixture of both’ [written and clinical]
· More – a professional body who conducted both written and clinical examinations was considering further assessment, as they felt there was insufficient time to assess all they wished to
· Technology – consideration was being given to including technology-based clinical assessment (e.g., models, computer imaging) to address issues around using real patients in examinations. One body was experimenting with computerised adaptive testing in which the number and type of items presented was determined by candidate performance
· Efficiency – consideration was being given to changing the type of clinical examination from a standardised patient model to a clinical supervision model due to resource efficiency issues. Another option was to outsource the examination to educational institutions, although one professional body who did this was considering taking control back
· Assessors – some professional bodies were making changes to improve the training of their assessors including the development and use of assessor handbooks
Examining (Theme 3)
" So the competency based occupational standards were developed … all universities are accredited against those standards and so all people coming in from overseas against those standards. "
In specific assessment systems, standards were generally set according to professional standards. One strategy for setting criteria in clinical assessment was to use a panel of experts to make decisions. Several professional bodies required candidates to gain at least 75% in their clinical examination, with some parts of the assessment being hurdle requirements.
Strategies (Theme 3.1.2) used to mark candidates in clinical examinations included rubrics with checklists and/or rating scales. Checklists were used by assessors to note whether or not participants had performed certain components or achieved defined performance indicators. Where checklists were used, assessors were allowed a degree of discretion in marking areas that were not covered due to the circumstances of the examination.
Assessor judgement was also used when rating. Candidates were marked according to the standard of their performance but an overall pass or fail decision was made at the completion of the clinical assessment. It was noted that this strategy was not based on any statistical grounds and may be reviewed.
Borderline fail candidates (Theme 3.1.3) were identified as an issue in assessment by several professional bodies. One strategy used to address borderline candidates was to offer them a supplementary examination, which might mean further testing in a specific area or re-sitting the whole clinical examination. Borderline portfolio or desktop assessments were referred to a senior staff member and the application discussed. If a decision could not be reached, it was referred to the assessment committee or another higher-level staff member. On most occasions, if insufficient evidence was found, candidates had the option to provide further evidence (e.g., complete a specific course) within an extended period rather than re-applying in full.
Selection of assessors (Theme 3.2.1) was based on their experience not just in the profession but also in educational assessment:
" … again as many people did we are relying heavily on the fact that these people are “trained” when they come to us, trained by the institutions that use them. We then just have to reorient them to the nature of the assessment … "
Assessors from an education background were seen as advantageous, not just because of their skills but also because they have experience with entry-level practitioners.
Training of assessors (Theme 3.2.2) was noted as an important issue in assessment. Methods of training included formal government-based sessions in recognising fraudulent documents, ‘calibration sessions’ with other assessors to share ideas, working with experienced assessors, handbooks and pre-session briefings. One professional body indicated that reviewing specific assessment cases with assessors had led to more open communication when assessors were unsure of decisions and that this had been highly beneficial. The benefit of a workshop-style approach was highlighted by another organisation because of the opportunity for ideas to be shared. They noted that while rigorous training sessions had worked well for new examiners, established examiners did not find them helpful. They had moved to having new assessors observe the examination, be monitored during their first assessments and then be subject to the same continual auditing as all assessors were in this organisation.
Cost-efficiency (Theme 4)
The cost of assessing overseas-trained practitioners was a significant part of the annual budget for most professional bodies. Although candidate fees contributed to the costs, in most organisations the profession subsidised the assessment: the institution running the examinations was ‘extremely kind to us and don’t actually charge us, you know, the full amount’ and/or many professionals gave their time voluntarily. Fees were set at a level that made the examination feasible but not too expensive for candidates.
It was also important for examinations to be efficient in their use of other resources. For large candidate numbers, written examinations using MCQs met this criterion. For most professional bodies, however, clinical assessment was resource intensive. A major human-resource consideration was the administrative staff who organised examinations and disseminated information to candidates. Organisations considering the introduction of examinations were concerned about the resources required to do so.
The purpose of the present study was to investigate the methods used to assess overseas-trained health professionals who wish to practice in Australia. The main themes identified in the analysis of the interview responses from Australian assessment bodies were Assessing, Processing, Examining and Cost-efficiency. Within each theme, multiple levels of sub-themes were also identified.
Under the theme Assessing, the main sub-themes related to types of assessment and risk management. Assessment bodies use a range of assessment tools when assessing fitness-to-practice, which, in the Australian medical profession, is consistent with practice in other countries. In the initial stages, these bodies used a desktop assessment process either to screen candidates (ensuring they meet the standard to sit the examination) or as the sole assessment process. Multiple assessment methods were used to ensure content validity and to ensure that candidates were assessed on all competencies and capabilities deemed important and relevant to the profession. It may also be that, as Finucane et al. suggest, no single assessment method is suitable for assessing fitness-to-practice. Given that it is a high-stakes assessment, decisions about fitness-to-practice should be based on multiple sources of information.
The assessment methods employed by these organisations ranged from short-answer questions and MCQs (testing basic science and theoretical knowledge) [27, 28] to the OSCE [15, 29] and long case assessment [27, 30] (for assessment of clinical capability). In addition, portfolio assessments were increasingly used to assess candidates, and a number of organisations indicated that they may adopt this method in the future.
Not surprisingly, risk management was a primary concern for most organisations, and different types of risk and associated risk mitigation strategies were discussed. Risks were identified in a number of areas, including minimisation of harm to the community and decreasing the risk of harm to patients (or standardised patients) during the examination process. Ensuring that candidates were equipped to cope with workplace-based assessments, and the environment in which these assessments are conducted, was a further concern.
When designing assessments, the bodies identified a number of areas that presented challenges. These included the assessment of cultural competence, communication, knowledge of the Australian health system and after-care/follow-up of the patient. Both cultural competence [32, 33] and communication [34, 35] have previously been identified within the medical and health education literature as areas that are difficult to assess in a reliable and valid way. In relation to communication, Tamblyn et al. suggest that a cut-off or minimum score be set for communication components of the assessment process in an effort to reduce the number of complaints to professional regulatory bodies. In addition, whilst the organisations interviewed did recognise the importance of cultural competency assessment, particularly in relation to indigenous Australians, many were only just incorporating, or anticipating incorporating, this area into their assessment processes.
Under Processes, the main sub-themes were procedures for conducting assessment, appeal processes, review processes and reflection on the strengths and weaknesses of their systems. Importantly, most organisations reported that their assessment processes and examiners were continually reviewed. Information about the assessment process, including sample questions and marking criteria, was provided to candidates prior to the assessment. Written assessments were largely undertaken off-shore, that is, outside Australia. Caution was advised when running examinations off-shore, as it is a complex task to have the correct candidate at the correct site sitting the correct examination and receiving the correct results; this organisational difficulty increased with larger numbers of candidates.
Post-assessment issues, including candidate feedback and appeals, were also canvassed. Most organisations provided feedback to candidates who failed one or more elements of the assessment process; this assisted in clarifying areas in which the candidate needed to improve and also minimised appeals, ensuring the process was fair and transparent. Organisations were understandably keen to minimise appeals, and in most cases an appeal was available to the candidate only if an examination process issue was identified. These steps to minimise appeals would also help make the assessment defensible from a legal standpoint. Although this was not articulated by participants in the current study, previous research has indicated that legal defensibility is a concern for such organisations, and one that is minimised by the use of valid and reliable assessment strategies. When asked to reflect on their processes, organisations identified numerous strengths and weaknesses, and also presented planned or potential changes to their assessment processes.
Within the theme Examining, there were two sub-themes related to marking (including processes for candidates who fail) and assessors (selection and training). Marking was undertaken using checklists, ensuring that candidates performed required elements; however, there was little discussion of the use of global or holistic assessments. Holistic assessment is increasingly reported in the literature as a valid and reliable assessment approach [37–40], although it appears yet to make its way into the assessments undertaken by these organisations. The use of global assessments has been demonstrated to improve the reliability and validity of an assessment, particularly where communication skills are assessed [41, 42].
Most organisations spent considerable time and money on their examiners, in terms of recruitment, training and payment for assessing. Examiners were typically selected based on clinical experience as well as experience in education [43–45], ensuring a peer assessment process. Formal training sessions were often undertaken, and new assessors were paired with more experienced assessors to aid their development. Examiner training is widely accepted to improve the reliability of an assessment as well as examiner confidence in the assessment process [43, 46–49]. All examiners were subject to ongoing auditing and assessment, and therefore “…remain competent in what they do”. Where organisations did not have formal examiner training processes in place, they anticipated implementing them in the near future.
The range of practices reported under the theme Cost-efficiency was relatively limited. The size of the organisation had an impact on the financial elements of the assessment process: large organisations were able to make money on their examinations and use it to further develop their processes, whereas smaller organisations often charged candidates only the ‘cost’ of conducting the assessment, leaving them with few financial resources to develop their assessment processes. Whilst clinical assessments were labour and cost intensive, particularly in relation to the administration of the assessment, organisations did not perceive this to be a major issue.
Most of the organisations who participated in the current study have invested large amounts of time and resources, both financial and administrative, in the development and ongoing review of their assessment processes. Whilst many organisations are utilising assessment methods they have employed for a number of years, there was recognition that ‘newer’ assessment types such as the portfolio may be useful in the overall assessment process. The assessment methods were often chosen based on the resources available to that organisation (e.g. MCQs for medicine). Most processes included multiple assessment methods, blueprinted to assess a range of competencies and capabilities for that profession. Overall, the organisations interviewed gave the impression that they use the literature to inform their assessment processes and that they employ robust, defensible assessments of overseas-trained health professionals who wish to practice in Australia.
The study was undertaken as part of a project to develop an assessment process for overseas-trained osteopaths to practice in Australia, through a grant provided by the Osteopaths Registration Board of Victoria (superseded by the Osteopathy Board of Australia as of July 1, 2010). The authors would like to thank the organisations and the people who generously gave their time to be interviewed as part of the study.
- Epstein RM: Assessment in medical education. N Engl J Med. 2007, 356: 387-396. 10.1056/NEJMra054784.
- Govaerts MJB, van der Vleuten CPM, Schuwirth LWT: Optimising the reproducibility of a performance-based assessment test in midwifery education. Adv Health Sci Educ. 2002, 7: 133-145. 10.1023/A:1015720302925.
- Brailovsky CA, Grand’Maison P: Using evidence to improve evaluation: A comprehensive psychometric assessment of a SP-based OSCE licensing examination. Adv Health Sci Educ. 2000, 5: 207-219. 10.1023/A:1009869328173.
- Health Practitioner Regulation National Law Act 2009: http://www.ahpra.gov.au/Legislation-and-Publications/Legislation.aspx.
- Holmboe ES, Hawkins RE: Methods for evaluating the clinical competence of residents in internal medicine: A review. Ann Intern Med. 1998, 129: 42-48.
- Van der Vleuten C: National, European licensing examinations or none at all?. Med Teach. 2009, 31: 189-191. 10.1080/01421590902741171.
- Hays RB, Davies HA, Beard JD, Caldon LJM, Farmer EA, Finucane PM, McCrorie P, Newble DI, Schuwirth LWT, Sibbald GR: Selecting performance assessment methods for experienced physicians. Med Educ. 2002, 36: 910-917. 10.1046/j.1365-2923.2002.01307.x.
- Finucane P, Bourgeois-Law G, Ineson S, Kaigas T: A comparison of performance assessment programs for medical practitioners in Canada, Australia, New Zealand, and the United Kingdom. Acad Med. 2003, 78: 837-843. 10.1097/00001888-200308000-00020.
- McLean R, Bennett J: Nationally consistent assessment of international medical graduates. Med J Aust. 2008, 188: 464-468.
- McGrath BP: Integration of overseas-trained doctors into the Australian medical workforce. Med J Aust. 2004, 181: 640-642.
- McGrath P, Wong A, Holewa H: Canadian and Australian licensing policies for international medical graduates: a web based comparison. Educ Health. 2011, 24: 1-13.
- Douglas S: The registration and accreditation of international medical graduates in Australia - A broken system or a work in progress?. People Place. 2008, 16: 28-40.
- Groutsis D: Geography and credentialism: the assessment and accreditation of overseas-trained doctors. Health Sociol Rev. 2006, 15: 59-70. 10.5172/hesr.2006.15.1.59.
- Austin Z, Galli M, Diamantouros A: Development of a prior learning assessment for pharmacists seeking licensure in Canada. Pharm Educ. 2003, 3: 87-96. 10.1080/1560221031000151633.
- Austin Z, O’Byrne C, Pugsley J, Quero Munoz L: Development and validation processes for an Objective Structured Clinical Examination (OSCE) for entry-to-practice certification in pharmacy: The Canadian experience. Am J Pharm Educ. 2003, 67: 1-8.
- Quero Munoz L, O’Byrne C, Pugsley J, Austin Z: Reliability, validity, and generalizability of an objective structured clinical examination (OSCE) for assessment of entry-to-practice in pharmacy. Pharm Educ. 2005, 5: 33-43. 10.1080/15602210400025347.
- Rothman A, Cusimano M: Assessment of English language proficiency in international medical graduates by physician examiners and standardised patients. Med Educ. 2001, 35: 762-766. 10.1046/j.1365-2923.2001.00964.x.
- Archer J: European licensing examinations - The only way forward. Med Teach. 2009, 31: 215-216. 10.1080/01421590902741148.
- Tamblyn R, Abrahamowicz M, Dauphinee D, Wenghofer E, Jacques A, Klass D, Smee S, Blackmore D, Winslade N, Girard N, et al: Physician scores on national clinical skills examinations as predictors of complaints to medical regulatory authorities. J Am Med Assoc. 2007, 298: 993-1001. 10.1001/jama.298.9.993.
- Biddle SJH, Markland D, Gilbourne D, Chatzisarantis NLD, Sparkes AC: Research methods in sport and exercise psychology: Quantitative and qualitative issues. J Sports Sci. 2001, 19: 777-809. 10.1080/026404101317015438.
- Wolcott HF: Writing up qualitative research. 2009, USA: Sage Publications, 3rd edn.
- Hsieh H-F, Shannon SE: Three approaches to qualitative content analysis. Qual Health Res. 2005, 15: 1277-1288. 10.1177/1049732305276687.
- Scanlan T, Ravizza K, Stein G: An in-depth study of former elite figure skaters: I. Introduction to the project. J Sport Ex Psych. 1989, 11: 54-64.
- Harrell MC, Bradley MA: Data collection methods: Semi-structured interviews and focus groups. 2009, Santa Monica: RAND Corporation.
- Miles MB, Huberman AM: Qualitative data analysis: An expanded sourcebook. 1994, California, USA: Sage Publications, 2nd edn.
- Schuwirth L: The need for national licensing examinations. Med Educ. 2007, 41: 1022-1023. 10.1111/j.1365-2923.2007.02856.x.
- Wass V, van der Vleuten CPM, Shatzer J, Jones R: Assessment of clinical competence. Lancet. 2001, 357: 945-949. 10.1016/S0140-6736(00)04221-5.
- Elstein AS: Beyond multiple-choice questions and essays: The need for a new way to assess clinical competence. J Med Educ. 1993, 68: 244-249.
- Grand’Maison P, Lescop J, Rainsberry P, Brailovsky CA: Large-scale use of an objective, structured clinical examination for licensing family physicians. Can Med Assoc J. 1992, 146: 1735-1740.
- Hatcher S, Handrinos D, Jenkins K: How and why the long case should be kept: A view from the antipodes. Psych Bull. 2008, 32: 151-152.
- Wilkinson TJ, Challis M, Hobma SO, Newble DI, Parboosingh JT, Sibbald RG, Wakeford R: The use of portfolios for assessment of the competence and performance of doctors in practice. Med Educ. 2002, 36: 918-924. 10.1046/j.1365-2923.2002.01312.x.
- Gregorczyk SM, Bailit HL: Assessing the cultural competency of dental students and residents. J Dent Educ. 2008, 72: 1122-1127.
- Miller E, Green AR: Student reflections on learning cross-cultural skills through a ‘cultural competence’ OSCE. Med Teach. 2007, 29: 76-84. 10.1080/01421590701266701.
- Beaulieu MD, Rivard M, Hudon E, Saucier D, Remondin M, Favreau R: Using standardized patients to measure professional performance of physicians. Int J Qual Health Care. 2003, 15: 251-259. 10.1093/intqhc/mzg037.
- Tompkins M, Paquetter-Frenette D: Learning portfolio models in health regulatory colleges of Ontario, Canada. J Contin Educ Health Prof. 2010, 30: 57-64. 10.1002/chp.20057.
- Sadler DR: Indeterminacy in the use of preset criteria for assessment and grading. Assess Eval Higher Educ. 2009, 34: 159-179. 10.1080/02602930801956059.
- Cunnington J, Neville A, Norman G: The risks of thoroughness: reliability and validity of global ratings and checklists in an OSCE. Adv Health Sci Educ. 1997, 1: 227-233.
- Cohen R, Rothman A, Poldre P, Ross J: Validity and generalizability of global ratings in an Objective Structured Clinical Examination. Acad Med. 1991, 66: 545-548. 10.1097/00001888-199109000-00012.
- McKinley R, Strand J, Ward L, Gray T, Alun-Jones T, Miller H: Checklists for assessment and certification of clinical procedural skills omit essential competencies: a systematic review. Med Educ. 2008, 42: 338-349. 10.1111/j.1365-2923.2007.02970.x.
- McKinley RK, Strand J, Gray T, Schuwirth L, Alun-Jones T, Miller H: Development of a tool to support holistic generic assessment of clinical procedure skills. Med Educ. 2008, 42: 619-627. 10.1111/j.1365-2923.2008.03023.x.
- Newble DI: Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ. 2004, 38: 199-203. 10.1111/j.1365-2923.2004.01755.x.
- Driessen E: Portfolio critics: Do they have a point?. Med Teach. 2009, 31: 279-281. 10.1080/01421590902803104.
- Boulet JR: Summative assessment in medicine: The promise of simulation for high-stakes evaluation. Acad Emerg Med. 2008, 15: 1017-1024. 10.1111/j.1553-2712.2008.00228.x.
- Hays RB: Assessment of general practice consultations: Content validity of a rating scale. Med Educ. 1990, 24: 110-116. 10.1111/j.1365-2923.1990.tb02508.x.
- Hays RB, Jones BF, Adkins PB, McKain PJ: Analysis of videotaped consultations to certify competence. Med J Aust. 1990, 152: 609-611.
- Awaisu A, Mohamed MHN, Al-Efan QAM: Perception of pharmacy students in Malaysia on the use of objective structured clinical examinations to evaluate competence. Am J Pharm Educ. 2007, 71: 118. 10.5688/aj7106118.
- van Zanten M, Boulet JR, McKinley D: Using standardized patients to assess the interpersonal skills of physicians: Six years’ experience with a high-stakes certification examination. Health Communic. 2007, 22: 195-205. 10.1080/10410230701626562.
- Van Nuland M, Van Den Noortgate W, Degryse J, Goedhuys J: Comparison of two instruments for assessing communication skills in a general practice objective structured clinical examination. Med Educ. 2007, 41: 676-683. 10.1111/j.1365-2923.2007.02788.x.
- Cooper C, Mira M: Who should assess medical students’ communication skills: their academic teachers or their patients?. Med Educ. 1998, 32: 419-421. 10.1046/j.1365-2923.1998.00223.x.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/12/91/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.