The 2016 State of the Speech Technology Industry: Assistive Technology
In a world where speech, hearing, and sight are taken as givens, people who are speech impaired, hard of hearing, or partially or totally blind face everyday challenges, and speech technology is helping to solve many of them.
Demand for technologies that help people with speech, hearing, and visual impairments is expected to increase as the population ages, since these impairments become more common with age.
More specifically, there are roughly 750,000 strokes annually, according to the Centers for Disease Control and Prevention (CDC), and strokes kill about 129,000 people every year, according to the American Heart Association (AHA). Some sources place the annual stroke count as high as 1 million. Stroke is the No. 5 cause of death, according to the AHA. Between 1 million and 2 million Americans currently have aphasia, according to the National Aphasia Association and the Adler Aphasia Center, respectively, and more than 80,000 Americans are diagnosed with aphasia every year, according to the National Aphasia Association. The CDC also reports that more than 10 million Americans are living with cognitive impairment. Meanwhile, healthcare spending is increasing in the United States and across the globe, so more people than ever before have access to speech, hearing, and sight assistance devices.
One of the oldest and most publicized speech technologies is the speech synthesis system that world-renowned physicist and author Stephen Hawking uses to communicate. Hawking uses the same basic system developed 30 years ago for people who have lost their voice and people who never had a voice, says Alan Black, Carnegie Mellon professor of computer science.
Using a small sensor activated by a muscle in his cheek, Hawking "types" characters and numbers on his keyboard to produce synthesized speech via a speech generation device (SGD). Hawking's device was developed by the now-defunct Speech Plus. The largest current U.S. manufacturer of these devices is Pittsburgh, Pa.–based DynaVox Systems, LLC, a Tobii Technology Company.
While Hawking's device uses a single robotic-sounding, American-accented "voice" (Hawking is British), systems from DynaVox and other companies offer a variety of voices so that the synthesized speech can be somewhat individualized.
Lending Your Own Voice
Going several steps further are assisted speech systems that use recordings of one's own voice so that the reproduced speech sounds like the individual using the device.
One of the most famous examples of such a device was the system that film critic Roger Ebert employed after first using a speech synthesizer that gave the native Illinoisan a British accent. Ebert had lost his ability to speak after cancer led to the removal of his jaw.
Ebert eventually gravitated to CereProc's CereVoiceMe, a voice cloning service. He didn't like it at first because he didn't think the voice sounded like him, even though it was built from 30 years of recordings of his appearances on television and radio programs, according to Black. The reason is that one's recorded voice sounds different from the voice one hears while speaking, which includes vibrations conducted through the skull as well as the eardrums; a recording is heard through the eardrums alone.
Others who know they are losing their speech often record several hours of their voice to take advantage of the voice-cloning technologies offered by various companies today. Typically, the more hours one records, the more natural the reproduced voice will sound, because more inflections and other nuances of speech can be captured. However, the more robust solutions also cost more.