From User Perceptions to Technical Improvement: Enabling People Who Stutter to Better Use Speech Recognition
Authors: Colin Lea, Zifang Huang, Jaya Narain, Lauren Tooley, Dianna Yee, Tien Dung Tran, Panayiotis Georgiou, Jeffrey Bigham, Leah Findlater
Consumer speech recognition systems do not work as well for many people with speech differences, such as stuttering, as they do for the general population. However, it is unclear how poorly these systems perform, how they can be improved, or how much people want to use them. In this paper, we first address these questions with a 61-person survey of people who stutter and find that participants want to use speech recognition but are frequently cut off, misunderstood, or presented with transcriptions that do not represent their intent. In a second study, in which 91 people who stutter recorded voice assistant commands and dictation, we quantify how dysfluencies impede performance in a consumer-grade speech recognition system. Through three technical investigations, we demonstrate how many common errors can be prevented, resulting in a system that cuts utterances off 79.1% less often and reduces the word error rate from 25.4% to 9.9%.
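For readers unfamiliar with the metric, the word error rate figures above follow the standard definition: word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. The sketch below is illustrative only and is not code from this paper; the function name and the example utterance are hypothetical, chosen to show how repeated syllables can inflate WER when they remain in the transcript.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# A hypothetical dictation example: two part-word repetitions left in the
# transcript count as insertions, yielding WER = 2/6 ~= 0.33.
print(word_error_rate("set a timer for five minutes",
                      "set a t- t- timer for five minutes"))
```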
Speech recognition systems have improved substantially in recent years, leading to widespread adoption across computing platforms. Two common forms of speech interaction are voice assistants (VAs), which listen for spoken commands and respond accordingly, and dictation systems, which act as an alternative to a keyboard by converting the user's open-ended speech to written text for messages, emails, and so on. Speech interaction is especially important for devices with small or no screens, such as smart speakers and smart headphones. Yet speech presents barriers for many people with communication disabilities such as stuttering, dysarthria, or aphasia.