Evaluation of Interrater Reliability for Coding of Types of Gazes in Nurse-Patient Dyads



File: Alexandria_Martz_Thesis.pdf (203.0 KB, PDF)

Title: Evaluation of Interrater Reliability for Coding of Types of Gazes in Nurse-Patient Dyads
Creators: Martz, Alexandria
Advisor: Wills, Celia
Issue Date: 2017-05
Abstract: The purpose of this secondary analysis project was to describe and evaluate processes of interrater reliability assessment based on 13 videotapes of ICU nurse-patient dyads collected as part of a prior study (Happ et al., 2004). The videos were coded for four types of gazes (relating, assessing, technical doing, listening) during nursing care in the Medical (MICU) or Cardiothoracic Intensive Care Unit (CTICU). Interrater reliability is important to establish in research data collection, codebook development, and standardized nursing assessments in clinical practice. Steps described by Lombard et al. (2010) provided the overall framework for assessing interrater reliability in the coding of gazes for the videotaped nurse-patient dyads. Raw percentage agreement for the four types of gazes was calculated by dividing the number of times the data collectors agreed by the total number of gazes, both within and across the videotapes. Overall, the coders agreed in the coding of the four visual gazes across the 13 videotapes 70% of the time, but with a substantial range of agreement, from 58% to 90%, for individual videotapes. The overall percentage fell short of the target of at least 75% agreement per videotape; only one gaze, “relating,” achieved a percentage agreement (90%) exceeding 75%. Early disagreements in coding stemmed from ambiguities in the codebook definitions; as the definitions were clarified and the codebook refined over the course of coding, disagreements across the 13 videotapes decreased. Raw percentage agreement could be further improved by rating more training videos to refine the codebook before official coding begins. Further research could use the kappa coefficient to establish interrater reliability, as it adjusts for agreement that occurs by chance.
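The two reliability measures named in the abstract can be sketched in a few lines of Python. The gaze codes below are hypothetical illustrations, not data from the study; the function names are ours, not the authors'.

```python
# Sketch of the two reliability measures discussed in the abstract:
# raw percentage agreement, and Cohen's kappa (chance-adjusted agreement).
# All codes here are invented for illustration only.

from collections import Counter

GAZES = ["relating", "assessing", "technical doing", "listening"]

def raw_percent_agreement(coder_a, coder_b):
    """Number of matching codes divided by total codes, as a percentage."""
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * agreements / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement adjusted for chance: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is the agreement
    expected by chance from each coder's marginal frequencies."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[g] / n) * (freq_b[g] / n) for g in GAZES)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two coders for ten observed gazes in one videotape
a = ["relating", "assessing", "listening", "relating", "technical doing",
     "assessing", "relating", "listening", "assessing", "relating"]
b = ["relating", "assessing", "listening", "assessing", "technical doing",
     "assessing", "relating", "relating", "assessing", "relating"]

print(raw_percent_agreement(a, b))        # 80.0
print(round(cohens_kappa(a, b), 3))       # 0.71
```

Note how kappa is lower than raw agreement: some of the 80% agreement would be expected by chance given each coder's category frequencies, which is exactly the adjustment the abstract suggests for further research.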
Embargo: No embargo
Series/Report no.: The Ohio State University. College of Nursing Honors Theses; 2017
Academic Major: Nursing
Keywords: Interrater Reliability
Raw Percentage Agreement
Visual Gazes
Nurse-Patient Dyads
Non-Verbal Communication
Description: 1st Place in Emerging Issues in Healthcare Policy, Administration, and Workforce Category at The OSU Denman Undergraduate Research Forum
URI: http://hdl.handle.net/1811/80553