
Table 1 Analysis outcomes, typical reasons for those outcomes and subsequent potential actions to refine the provisional instrument, arising from usability analysis, exploratory factor analysis and multifaceted Rasch model analysis

From: Development of the Feedback Quality Instrument: a guide for health professional educators in fostering learner-centred discussions

 

| Analysis outcomes | Typical reasons | Potential actions |
| --- | --- | --- |
| **Usability analysis** |  |  |
| Identify instrument problems | Not easy to use, e.g. too many individual items or insufficient/complex instructions | Find a way to simplify instrument administration, e.g. group related items; offer clear, useful and succinct instructions |
|  | Item gap | Create new items to address gap |
| Identify item problems | Items overlap | Merge items |
|  | Item not generally applicable during a feedback interaction | Remove or rephrase so item is generally applicable |
|  | Item phrasing: description of educator behaviour vague or not observable | Remove or rephrase so item clearly and simply describes pertinent observable behaviours |
| Identify rating category problems | Too many rating categories, so hard to differentiate between them | Reduce the number of rating categories |
|  | Rating category phrasing vague or not consistent across categories | Rephrase rating category description so it is consistent, clear and simple |
|  | Middle rating category not applicable in some items | Rephrase item so all rating categories are applicable |
| **Exploratory factor analysis** |  |  |
| Identify factors (core concepts) underlying quality feedback, represented by item clusters | Items in clusters are closely aligned, i.e. all attributes of one concept | Group items into instrument domains, and name accordingly |
| Determine if each factor is adequately characterised, with sufficient items strongly aligned with it (3 items minimum, typically) | Insufficient items (e.g. only 2 items that strongly align) | Create new items to describe observable behaviours that reflect that concept |
| Identify items that do not align strongly with a single cluster | Item alignment split between 2 clusters (e.g. due to item phrasing or context) | Remove or revise item, to better align with one cluster |
|  | Item does not strongly align with any cluster (e.g. due to item phrasing problems, item behaviour not sufficiently influential in the factor, or insufficient data) | Remove or revise item, to align with one cluster |
| **Multifaceted Rasch model analysis** |  |  |
| Identify misfit shown by items, raters or rating categories, which may distort the measurement system | Lack of consistent interpretation of items and application of rating categories, due to item phrasing problems (so interpretation is variable) or rating category problems (so application is variable); insufficient data (if a behaviour or rating category is rarely employed) | Enhance consistency by removing or revising items and rating categories according to desirable criteria, and by using the instrument manual and rater training |
| Determine spread of items across the range of ‘feedback proficiency’ (illustrated on the variable map) | Span with no items (gap) | Create new items to address gap |
|  | Span with too many items (redundant items) | Remove items to reduce redundancy |
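
The exploratory factor analysis decisions in the table (strong alignment with one cluster, split loadings, and the minimum of three items per factor) can be screened programmatically. Below is a minimal sketch, assuming item-level ratings sit in a pandas DataFrame called `ratings` (one column per item) and using the third-party `factor_analyzer` package; the 0.40 loading cut-off and the 0.20 cross-loading margin are illustrative assumptions, not thresholds specified by the authors.

```python
# Sketch: screen EFA results for the item problems listed in Table 1
# (weak alignment, split loadings, under-characterised factors).
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

LOADING_CUTOFF = 0.40        # "strongly aligned" threshold (assumed)
CROSS_LOADING_MARGIN = 0.20  # required gap between top two loadings (assumed)
MIN_ITEMS_PER_FACTOR = 3     # "3 items minimum, typically" (from the table)

def review_efa(ratings: pd.DataFrame, n_factors: int) -> pd.DataFrame:
    """Fit an EFA and report each item's dominant factor plus any flags."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(ratings)
    loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                            columns=[f"F{i + 1}" for i in range(n_factors)])

    rows = []
    for item, item_loadings in loadings.abs().iterrows():
        top, second = item_loadings.nlargest(2)
        if top < LOADING_CUTOFF:
            flag = "no strong alignment: remove or revise item"
        elif top - second < CROSS_LOADING_MARGIN:
            flag = "split between 2 clusters: remove or revise item"
        else:
            flag = ""
        rows.append({"item": item, "factor": item_loadings.idxmax(),
                     "loading": round(top, 2), "flag": flag})
    report = pd.DataFrame(rows)

    # Factors characterised by fewer than the minimum number of strong items
    strong_counts = report[report.flag == ""].groupby("factor").size()
    for factor in loadings.columns:
        n_strong = int(strong_counts.get(factor, 0))
        if n_strong < MIN_ITEMS_PER_FACTOR:
            print(f"{factor}: only {n_strong} strongly aligned item(s); "
                  "consider creating new items for this concept")
    return report
```

Calling `review_efa(ratings, n_factors=4)` returns one row per item; flagged items correspond to the "remove or revise" actions in the table, and the printed warnings correspond to the "create new items" action.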
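
The misfit and variable-map outcomes in the multifaceted Rasch section rest on a many-facet Rasch model of the kind described by Linacre. A common three-facet, rating-scale formulation is sketched below; the choice of facets (educator, item, rater) mirrors the instrument's setting, and the symbols are named here for illustration rather than taken from the paper.

```latex
% Many-facet Rasch model (rating-scale form); symbols chosen for illustration:
%   B_n = feedback proficiency of educator n    D_i = difficulty of item i
%   C_j = severity of rater j                   F_k = threshold of rating category k
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

The left-hand side is the log-odds that rater j awards rating category k rather than k−1 to educator n on item i. Fit statistics compare observed ratings with the ratings this model expects, which is how misfitting items, raters or rating categories are identified; the variable map places the educator, item and rater measures on the same logit scale, which is what makes item gaps and redundancies visible.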