Uncovering an Inclusion Gap in the Design of Digital Assessments for Middle School-Aged Deaf and Hard of Hearing Students in the United States

Open Access
Article
Conference Proceedings
Authors: Alexis Polanco Jr., Tsailu Liu

Abstract: What does a score on a digital assessment mean? At its core, a score is a measurement of how a student matches up to a predefined construct. For example, a reading assessment may measure the construct of a student's reading fluency, comprehension, or both. This research challenges the legitimacy of digital assessment through the lenses of accessibility, user experience (UX), inclusive design, and marginalized populations by focusing on the needs of deaf and hard of hearing (DHH) middle school-aged students in the United States.

DHH learners are among the least understood groups. Neither the US Census nor public schools recognize American Sign Language (ASL) as a non-English language used at home. For the sake of discussion, this research references a 2016 study by Goman estimating that 14.3% of all Americans aged 12 and older have some form of hearing loss, and a study from the U.S. National Center for Education Statistics estimating that students aged 3-21 with a hearing impairment make up 1% of all students. These statistics are especially concerning when juxtaposed with how assessments are created. Two of the top educational companies in the U.S. use a process called "pretesting" to determine the statistical relevance of the questions used in their assessments. This process involves trialing assessment items with a sample group similar to the population to be assessed. As assessments are increasingly delivered digitally, they overlap with other disciplines such as UX design. In UX, it is well documented that testing with five people finds most problems. If pretesting uses a similar sample size, it is reasonable to assume that many items would never be trialed with DHH students; that is, this marginalized group is not populous enough to be accounted for in a statistically relevant pretesting sample.

To substantiate this claim, this research used structured interviews with subject-matter experts (SMEs) in usability, accessibility, child-computer interaction, and DHH education. The SMEs' responses lent credence to the idea that DHH learners are often excluded from digital assessment design, whether because they are sampled out, because of a lack of accessibility awareness, or because of the absence of inclusive design guidelines for DHH students. For example, one interviewed director at a prominent deaf institution said, "In terms of my field, there isn't some tangible set of design principles that apply in [my] specific area. These things are developing as we go."

This exclusion is especially concerning because scores for deaf learners carry wide implications: public funding for school districts at the macro level, and self-worth at the individual level, especially when it is oft-cited that, on average, 80% of 14-year-old DHH students read below a fourth-grade level. For these reasons, the goal of this research is to empower designers, developers, managers, and researchers with a repeatable framework for inspiring cross-disciplinary collaboration to create fair and equitable digital assessment designs. It is about meeting the full spectrum of need for every individual student, starting with the needs of the DHH student.
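The abstract's sampling argument can be made concrete with a back-of-envelope calculation. The sketch below is illustrative and not from the paper itself: it assumes a simple independence (binomial) model, the cited ~1% NCES prevalence figure, and a five-participant panel borrowed from the UX "five users" heuristic.

```python
# Illustrative model (an assumption, not the paper's method): if DHH students
# make up ~1% of the student population (the NCES figure cited above) and a
# pretest panel mirrors UX's common five-participant sample, the chance that
# the panel contains even one DHH student is small.
dhh_prevalence = 0.01  # cited NCES estimate: ~1% of students aged 3-21
sample_size = 5        # assumed UX-style five-participant pretest panel

# P(at least one DHH student) = 1 - P(no DHH students in the sample)
p_at_least_one = 1 - (1 - dhh_prevalence) ** sample_size
print(f"P(sample of {sample_size} includes a DHH student) = {p_at_least_one:.1%}")
# About 95% of such samples would include no DHH students at all.
```

Under these assumptions only about 5% of five-person pretest panels would include a DHH student, which is consistent with the abstract's claim that the group is effectively "sampled out."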

Keywords: Deaf, hard of hearing, design, accessibility, inclusion, education, assessment

DOI: 10.54941/ahfe1003329
