A New Approach to Conversational AI Uses Neuro-Symbolic Networks
A new approach to artificial intelligence could provide customer experience and digital transformation specialists with more accurate intent recognition, according to a new report from Opus Research.
In the report, “Neuro-Symbolic Artificial Intelligence and Potential Impact on Conversational Commerce,” Opus founder and lead analyst Dan Miller details a joint venture between IBM and MIT that combines machine vision with a broad AI that can multitask across multiple domains and read data from a variety of sources (text, video, audio, etc.), whether that data is structured or unstructured.
This approach could enable users to “do more with less” and provide for greater transparency and privacy, according to Miller.
Employing the neuro-symbolic approach to conversational AI could enable companies to “add common sense” to their chatbots, intelligent virtual agents, interactive voice response systems, and the prompts provided to live agents.
“AI has been subjected to boom-then-bust cyclicality since the 1970s, but this time feels very different,” Miller says. “The MIT-IBM Watson AI Lab, the joint research effort forged in 2017, has put significant resources into a new approach that combines the probabilistic pattern recognition capabilities of today’s deep neural networks and deep understanding with the once-prevalent ‘symbolic’ approach to AI that is based on representations of problems, logic, and search that are considered more human-readable. Their approach offers the possibility of levels of accuracy that approach 100 percent for functions like image recognition or natural language understanding.”
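The hybrid idea Miller describes—statistical pattern recognition filtered through human-readable logic—can be illustrated with a toy sketch. This is not the MIT-IBM Watson AI Lab's actual system; the intents, rules, and functions below are hypothetical, chosen only to show how a symbolic rule layer can veto or correct a probabilistic model's best guess.

```python
# Toy illustration of a neuro-symbolic pipeline (hypothetical, not the
# MIT-IBM system): a "neural" scorer produces probabilistic intent
# estimates, and explicit, human-readable symbolic rules apply
# common-sense constraints on top of them.

def neural_intent_scores(utterance: str) -> dict[str, float]:
    # Stand-in for a trained network's softmax output; here we fake
    # the scores with simple keyword matching.
    text = utterance.lower()
    scores = {"check_balance": 0.1, "transfer_funds": 0.1, "talk_to_agent": 0.1}
    if "balance" in text:
        scores["check_balance"] = 0.8
    if "transfer" in text or "send" in text:
        scores["transfer_funds"] = 0.7
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

# Symbolic layer: each rule is a readable (condition, vetoed intent,
# fallback intent) triple, the kind of explicit logic the statistical
# model alone cannot guarantee.
RULES = [
    # Never act on a funds transfer for an unauthenticated caller.
    (lambda ctx: not ctx.get("authenticated", False),
     "transfer_funds", "talk_to_agent"),
]

def resolve_intent(utterance: str, context: dict) -> str:
    scores = neural_intent_scores(utterance)
    best = max(scores, key=scores.get)
    for condition, vetoed, fallback in RULES:
        if best == vetoed and condition(context):
            return fallback  # symbolic rule overrides the statistical guess
    return best
```

Because the rules are explicit rather than learned, a reviewer can read and audit them directly—the "human-readable" property the symbolic approach is valued for.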
With the new approach, companies will be able to improve their automated systems’ ability to understand the actions and utterances of those with whom they are trying to communicate, according to Miller. “The technology can enable companies to step up their level of customer care. It offers improvement over being tossed around between live agents and chatbots.”
Another advantage of incorporating neuro-symbolic AI into customer conversations is architectural. Resources can be deployed in multiple clouds, as well as on processors embedded in smart endpoints, including smartphones, smart speakers, and automotive information/entertainment consoles. This simplifies support for ubiquitous, omnichannel, device-agnostic self-service strategies, including the contact center.
The technology requires far fewer resources than the systems used today to understand customer actions, Miller adds. In fact, neuro-symbolic systems could achieve better results in natural language processing and other tasks with as little as 1 percent of the data required by current alternatives, according to David Cox, IBM Research’s director of the MIT-IBM Watson AI Lab.
In 2018, the lab published research on the Neuro-Symbolic Concept Learner, a resource that eases machine training. “As conversations become the prevailing engagement model between brands and their customers, it will become increasingly apparent that successful task completion ultimately relies on solving a succession of non-related puzzles,” Miller explains.
Still a Little Futuristic
While neuro-symbolic AI is still in its embryonic stages, the pace of technology advancement today means it could enable very practical conversational commerce within a few years.
Less than a decade ago, machine vision was relatively new—and extremely expensive—in the robotics realm. But now several companies offer the technology with continuing refinements for better color definition, shape recognition, etc., enabling vision-capable robots to perform tasks that previously could only be done by humans.
Miller doesn’t think it will take that long for the IBM/MIT technology to become commercially viable in conversational AI applications.
Concerns over consumer privacy could provide a catalyst to advance the adoption of the technology, according to Miller. “The neuro-symbolic AI requires a small fraction of the data that [deep neural network]-based approaches have relied on. This is vitally important because of the heightened concern surrounding privacy and protection of personal information. Enterprises had grown comfortable collecting or purchasing voluminous amounts of personal data about their best customers in order to provide highly personalized service.”
Miller adds: “Ideally, companies need to work exclusively with first-person data, meaning the information that an individual provides directly and voluntarily in the course of spoken or text conversations. This has no clear connection with the use of symbolic processing, per se, but in an era of heightened sensitivity to personal privacy, any approach that minimizes reliance on third-party data on individuals is a positive.
“In the years ahead, as it is productized and commercialized, we expect the technology to be a source of measurable improvement in customer experience and agent productivity.”