The tech industry is banking on voice assistants like Siri, Alexa and Google Assistant becoming ubiquitous. These assistants are notorious for misinterpreting regional accents, but many overlook that the problem extends to people with disabilities.
Voice recognition algorithms are built from libraries of standard pronunciations and speech patterns, so people who have difficulties with speech or enunciation also have trouble accessing these technologies. And because they may have physical disabilities as well, these are often the very people voice assistants could help the most.
Because of their unique speech patterns, people with Down syndrome are often not understood by voice technology. Out of the box, Google's voice assistant misunderstands roughly every third word from an average speaker with Down syndrome, largely because of a lack of training data.
Project Understood aims to improve Google's algorithms by building out its database of voices. The Canadian Down Syndrome Society is working with Google to collect voice samples from adults with Down syndrome, creating a database that can help train Google's technology to better understand them. The more voice samples collected, the more likely Google will eventually be able to improve speech recognition for everyone.
Spots from FCB Canada follow Matt MacNeil, a Canadian with Down syndrome who works with CDSS, as he travels to Google headquarters in Mountain View, California, to work with Google engineers and product managers to refine the voice recognition tools.
Why it’s hot: We’ve seen the repercussions of a lack of diversity in advertising and tech, from alienating workplaces to tone-deaf creative. But there remains much to explore and address. As artificial intelligence becomes more ubiquitous, the design of relationships between humans and machines carries exciting opportunities to help people in meaningful ways, and serious consequences for getting it wrong. Overlooking people with disabilities is a glaring misstep that points to a larger problem: we can’t design inclusive experiences from a single perspective. We need to develop new design frameworks, blended skillsets, diversity of thought and ethical systems of governance for building empathy into technology.