- Title
Determining influence, interaction and causality of contrast and sequence effects in objective structured clinical exams.
- Authors
Yeates, Peter; Moult, Alice; Cope, Natalie; McCray, Gareth; Fuller, Richard; McKinley, Robert
- Abstract
Introduction: Differential rater function over time (DRIFT) and contrast effects (examiners' scores biased away from the standard of preceding performances) both challenge the fairness of scoring in objective structured clinical exams (OSCEs). This is important as, under some circumstances, these effects could alter whether some candidates pass or fail assessments. Benefitting from experimental control, this study investigated the causality, operation and interaction of both effects simultaneously for the first time in an OSCE setting. Methods: We used secondary analysis of data from an OSCE in which examiners scored embedded videos of student performances interspersed between live students. Embedded video position varied between examiners (early vs. late) whilst the standard of preceding performances naturally varied (previous high or low). We examined linear relationships suggestive of DRIFT and contrast effects in all within‐OSCE data before comparing the influence and interaction of 'early' versus 'late' and 'previous high' versus 'previous low' conditions on embedded video scores. Results: Linear relationships data did not support the presence of DRIFT or contrast effects. Embedded videos were scored higher early (19.9 [19.4–20.5]) versus late (18.6 [18.1–19.1], p < 0.001), but scores did not differ between previous high and previous low conditions. The interaction term was non‐significant. Conclusions: In this instance, the small DRIFT effect we observed on embedded videos can be causally attributed to examiner behaviour. Contrast effects appear less ubiquitous than some prior research suggests. Possible mediators of these findings include the following: OSCE context, detail of task specification, examiners' cognitive load and the distribution of learners' ability.
As the operation of these effects appears to vary across contexts, further research is needed to determine the prevalence and mechanisms of contrast and DRIFT effects, so that assessments may be designed in ways that are likely to avoid their occurrence. Quality assurance should monitor for these contextually variable effects in order to ensure OSCE equivalence.
- Subjects
RATING of students; COLLEGE teacher attitudes; COGNITION; LEARNING strategies; CLINICAL competence; SECONDARY analysis; VIDEO recording; EVALUATION
- Publication
Medical Education, 2022, Vol 56, Issue 3, p292
- ISSN
0308-0110
- Publication type
Article
- DOI
10.1111/medu.14713