Ask: Research and Methods. Volume 32, Issue 1 (2023)
Recent Submissions
Item: The Many Faces of Cognitive Labs in Educational Measurement (The Ohio State University Libraries in partnership with the Institute of Philosophy and Sociology, Polish Academy of Sciences, 2023) Arieli-Attali, Meirav; Katz, Irvin R.; Cayton-Hodges, Gabrielle

Cognitive labs have become increasingly popular over the past decades as methods for gathering detailed data on the processes by which test-takers understand and solve assessment items and tasks. Yet there are still misunderstandings and misconceptions about this method, some skepticism about its benefits, and a lack of best practices for using it. The purpose of this study was to clear up some of the misconceptions about cognitive labs and, specifically, to show through theory and examples of use the concrete benefits and best practices of cognitive labs at different stages of assessment development, ranging from the early stages of conceptualizing and designing a task or item to the later stages of gathering validity evidence for it. A previous literature review on the topic revealed that even the term “cognitive labs” describes different techniques, originating in three different fields of study (Arieli-Attali, King, & Zaromb, 2011): 1) cognitive psychology and artificial intelligence research (“think aloud” studies, e.g., Ericsson & Simon, 1993); 2) survey development studies (“cognitive interviews”, e.g., Willis, 2005); and 3) software development studies (“usability tests”, e.g., Nielsen & Mack, 1994). While the latter two fields draw on the first, original method, the differing terminology and practices may have been the cause of skepticism about, and avoidance of, the method in educational measurement. This study maps the various ways of applying the method, shedding light on which variation can be used in which context of assessment development, in order to answer the research questions.
We conclude that while it is evident that uninterrupted think aloud is needed for collecting response-process validity evidence, more flexible techniques may be used in usability contexts or for assessment fairness or accessibility purposes.

Item: Attention checks and how to use them: Review and practical recommendations (The Ohio State University Libraries in partnership with the Institute of Philosophy and Sociology, Polish Academy of Sciences, 2023) Muszyński, Marek

Web surveys dominate contemporary data collection in numerous disciplines within the broadly understood social sciences. However, this mode of data collection comes with additional challenges, particularly careless or insufficient-effort responding (C/IER), which can distort study results and poses a direct threat to validity. One recommended approach to addressing this problem is attention checks: additional tasks or items with objective answers that indicate attentive responding. Despite their potential benefits, recent evidence suggests that attention checks are still not sufficiently researched to justify their uncritical use in screening out inattentive participants. This article provides an abridged review of the attention-check literature, offers evidence-based practical recommendations, and highlights crucial gaps in research on attention checks. Evidence-based recommendations concerning the type, number, and placement of attention checks in a survey are presented. Generally, including more than one attention check is advisable, especially in longer surveys. Long instructed manipulation checks should be avoided; instead, covert attention checks, which are difficult for participants to identify, are recommended to reduce negative side effects such as noncompliance. In addition to attention checks, other criteria, such as item-level response-time analysis, should be used in combination to identify inattentive participants.
It is crucial to carefully analyse all data before making decisions about eliminating participants. Ethical considerations related to the use of attention checks are also discussed, recognizing the importance of maintaining participant trust and understanding the potential impact on survey completion rates and data quality. Overall, attention checks hold promise as a tool for enhancing data quality, but further research and thoughtful implementation are necessary to maximise their effectiveness.
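The recommendation above — combining attention-check results with item-level response-time analysis rather than relying on a single criterion — could be sketched roughly as follows. This is a minimal illustration only, not the article's procedure: the data layout, field names (`checks`, `check_keys`, `item_times`), and the thresholds are hypothetical and would need to be calibrated for a real survey.

```python
def flag_inattentive(responses, min_item_seconds=1.0, max_failed_checks=1):
    """Flag respondents who both fail attention checks and answer
    implausibly fast on most items (converging evidence, not a
    single screening criterion). All thresholds are illustrative."""
    flagged = []
    for rid, record in responses.items():
        # Count attention-check items answered incorrectly
        failed_checks = sum(
            1 for item, answer in record["checks"].items()
            if answer != record["check_keys"][item]
        )
        # Share of items answered faster than a plausible reading time
        fast_items = sum(
            1 for t in record["item_times"] if t < min_item_seconds
        )
        fast_share = fast_items / len(record["item_times"])
        # Require BOTH signals before flagging, per the "use criteria
        # in combination" recommendation
        if failed_checks > max_failed_checks and fast_share > 0.5:
            flagged.append(rid)
    return flagged
```

A respondent who merely misses one check, or who is fast but passes all checks, would not be flagged here; the point is that elimination decisions rest on converging evidence, and flagged cases would still be reviewed before removal.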