Untapped Potential: Speech Technologies and the Home Health Care Market
Although call centers and other telecommunications-based applications hold the largest share of the market for speech technologies, other areas have a real need for speech-enabled applications. I'd like to focus on a market segment with significant potential - health care - and specifically, the home health care portion of that market.
Speech technologies have been slow to find their way into medicine and health care, partly because of the criticality of the tasks involved and the high levels of accuracy required for many medical applications, but also because medicine and health care, traditionally, have not been early adopters of new technologies. Even medical transcription, one of the first and most obvious speech applications in medicine, was only slowly embraced. Nevertheless, as the technology improved, the number of successful applications increased. A smattering of other types of medical applications exists, mostly speech command-and-control of medical devices and tools in surgical environments, but there is potential for many more.
Increasingly, U.S. health care is moving away from the hospital and clinical environment to the family home. As our population ages and people live longer, the number of people coping with chronic conditions (e.g., congestive heart failure, chronic obstructive pulmonary disease, diabetes) has increased. A major goal of the current health care system is to help patients learn to self-manage these chronic diseases at home, so they can avoid costly emergency room visits and re-hospitalizations. To accomplish this, home health care technology includes devices that monitor patient status remotely, as well as computer software that helps to educate patients and encourage them to develop good self-management strategies and habits.
Not surprisingly, most of these devices for the home are designed to have the simplest of user interfaces, so that they can be operated by patients and lay caregivers. The interfaces are deceptively simple: they generally consist of a small piece of hardware, usually no larger than a desktop office phone, with a screen that displays questions to the patient and a few buttons the patient presses in response. Peripheral medical devices (e.g., pulse oximeter, heart rate monitor) are easily attached to communication ports on the back of the device, allowing vital signs to be sent electronically to a remote location for monitoring by nurses and other health care providers. Any device can be hooked up to any port. Sounds simple, and it is for many patients, but not all.
Patients using these devices usually have just been released from the hospital. They are very sick and have limited strength and endurance. In addition, they tend to be older, and some of them have limited education and low literacy. Some have limited experience with computer systems, and some have additional health problems that may or may not be related to their chronic disease. For example, a sizable proportion of diabetics have vision problems.
In a clinical trial of one of these devices, in which I am currently involved, the research team found that most patients learned to use the device easily, a tribute to the care that went into the design of the user interface. However, almost from the start, we found that for some patients this interface, simple though it was, was problematic for the reasons stated above. Speech-enabling the interface - having the questions presented to patients via synthetic speech and, perhaps, even enabling patient responses via speech recognition - would have helped us compensate for low literacy, limited stamina, and low vision.
I would encourage application developers to take a closer look at the home health care market and the potential application of speech technologies there, especially in systems designed for telehealth/telecare. Given the diversity of capabilities and limitations we see among patients, it makes sense to design the user interfaces for these devices to be as flexible as possible, or at least to provide options that extend beyond the visual input/physical response user interface. Telemonitoring and remote patient education devices exist for many conditions and, increasingly, telehealth and telemedicine services are becoming reimbursable under Medicare. We have the opportunity to make a real difference in people's lives by applying speech technologies in this arena, and I hope developers will embrace the challenge.