
The Pediatric Examination Assessment Rubric (PEAR) toolkit: reliability study
Background: The Pediatric Examination Assessment Rubric (PEAR) toolkit consists of an examination sheet and rubric designed to assess ophthalmology residents’ performance on the pediatric eye examination. The purpose of this study was to evaluate the reliability of the PEAR toolkit.
Methods: Six ophthalmology residents (2 PGY-2, 4 PGY-3) at a single ACGME-accredited US program participated in 11 video-recorded pediatric ophthalmology patient encounters. Two pediatric ophthalmologists reviewed the videos and the residents’ examination sheets to complete a PEAR evaluation. The inter-rater reliability of the rating for each of the 12 examination skills evaluated using PEAR was determined using kappa statistics, and reliability strength was categorized based on published guidelines (≤0, poor; 0.01-0.20, slight; 0.21-0.40, fair; 0.41-0.60, moderate; 0.61-0.80, substantial; 0.81-1.00, almost perfect).
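For readers less familiar with the statistic, the agreement analysis described above can be sketched in code. The following is an illustrative example only, using hypothetical ratings rather than the study's data: it computes unweighted Cohen's kappa for two raters and maps the result onto the strength categories listed in the Methods, returning no kappa when ratings show no variability (as occurred for stereoacuity).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of encounters rated identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in categories) / (n * n)
    if expected == 1.0:
        return None  # No variability in ratings: kappa is undefined.
    return (observed - expected) / (1 - expected)

def strength(kappa):
    """Categorize kappa per the guidelines cited in the Methods."""
    if kappa is None:
        return "undefined (no variability)"
    if kappa <= 0:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial"), (1.00, "almost perfect")]:
        if kappa <= upper:
            return label
    return "almost perfect"

# Hypothetical scores for one skill across 11 encounters (not study data):
a = [3, 2, 3, 1, 2, 3, 3, 2, 1, 3, 2]
b = [3, 2, 3, 1, 3, 3, 3, 2, 1, 3, 2]
k = cohens_kappa(a, b)
print(round(k, 2), strength(k))
```

In practice a statistics package would be used rather than hand-rolled code, but the sketch shows why a kappa score cannot be calculated when both raters assign the same score to every encounter: the chance-expected agreement equals 1 and the denominator vanishes.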
Results: Eleven video encounters were completed. Of the 12 examination skills evaluated using PEAR, 9 had kappa scores with strengths of moderate to almost perfect reliability. Two examination skills, Worth 4-Dot and alignment, showed fair reliability. A kappa score could not be calculated for stereoacuity because of the lack of variability among the evaluators’ raw scores.
Conclusions: In our small sample of residents from a single institution, the PEAR toolkit showed moderate to almost perfect inter-rater reliability for most of the examination skills evaluated.
Metadata
Work Title | The Pediatric Examination Assessment Rubric (PEAR) toolkit: reliability study
---|---
License | CC BY-NC-ND 4.0 (Attribution-NonCommercial-NoDerivatives)
Work Type | Article
Publication Date | December 1, 2020
Deposited | August 02, 2021