Saving Costs Vs. Saving the Customer
You wouldn’t throw thousands of dollars into a stock purchase without first doing a little homework. Before writing a check to your broker, you would want to establish a clear-cut goal, which in this case would most likely involve at least turning a small profit. Naturally, you would want to research the company’s past financial performance, projected revenue per share, earnings forecasts, and overall direction to determine its ability to get you to that goal.
A similar process needs to be established when investing in a speech system or making a change to an existing system. Companies need to first establish and define their goals in employing such systems, and then identify the key performance indicators (KPIs) to evaluate their progress. The right KPIs further ensure that an organization has a clear objective and that the project is aligned with that objective.
Determining which KPIs are important in the contact center will vary based on what the company is trying to achieve. Organizations can easily put together a list of 20 or 30 KPIs to track within the contact center, but experts widely agree that just a handful of them will have the biggest impact.
One of the first, and often considered the most important, is cost per contact, defined as total contact center costs (including employee salaries and benefits, facilities costs, telecom fees, training costs, and hardware and software costs) divided by the total contact volume from all the customer channels (including phone, email, Web, fax, and SMS). Cost per contact is driven by several other KPIs, like calls per hour, first-call resolution (also referred to as “one-and-done” calls), accuracy, agent utilization, call handling time, average speed of answer, abandonment and containment rates, and task completion rates.
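The arithmetic behind cost per contact is simple division, but keeping the cost categories and channel volumes explicit makes the KPI auditable. A minimal Python sketch, using entirely hypothetical monthly figures (none drawn from the article):

```python
# Cost per contact = total contact center costs / total contact volume.
# All figures below are hypothetical examples, not real benchmarks.

def cost_per_contact(costs_by_category, volume_by_channel):
    """Total contact center costs divided by total contact volume."""
    total_cost = sum(costs_by_category.values())
    total_volume = sum(volume_by_channel.values())
    if total_volume == 0:
        raise ValueError("contact volume must be positive")
    return total_cost / total_volume

monthly_costs = {
    "salaries_benefits": 250_000,
    "facilities": 40_000,
    "telecom": 15_000,
    "training": 10_000,
    "hardware_software": 25_000,
}
monthly_volume = {"phone": 52_000, "email": 9_000, "web": 6_000,
                  "fax": 500, "sms": 2_500}

print(round(cost_per_contact(monthly_costs, monthly_volume), 2))
```

Breaking the inputs out this way also makes it easy to see which cost category or channel is driving a change in the KPI from one quarter to the next.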
The crucial question that needs to be considered is whether it pays to automate, says Joseph Maxwell, chief operating officer of Parlance, a provider of speech-enabled communications solutions. “There is a question of cost: What are the call volumes? Where will the savings come from? Where will it help revenue?”
The key point to be considered is whether enough calls can be offloaded to the speech system to create value for the company. Not all calls can or need to be automated. “You have to look at the requests and then determine if they can be better served with automation,” Maxwell says. “Sometimes automation is not a good fit because the agents are [handling the requests] well enough.”
But while costs are indeed important, focusing too much on them can be dangerous, leaving many other factors to chance. “A lot of the decisions are based on costing models, where the company thinks it can save [money] by going from [touch-tone] or an agent to speech,” says Peter Leppik, founder, president, and CEO of Vocal Laboratories. Costs should never be the only factor considered, he and others assert.
“As a contact center manager, it puts you into the ‘I’m a cost center’ bucket rather than something to be viewed as a valuable strategic asset,” says Grant Shirk, director of industry solutions at Microsoft/Tellme.
Furthermore, basing the automation question solely on cost per contact limits a company to determining its success based on saving money, “and you don’t build a business based on saving money,” Shirk adds.
For that reason, the other foundational KPI should always be customer satisfaction, according to experts. “There needs to be a customer service element, too,” Leppik says. “The obvious impact is that deploying a system does not necessarily save money, and it can have a negative impact on customer satisfaction, especially if they’re hanging up or zeroing out to an agent.”
According to Leppik and others, customer sentiment is often left out of the decision-making process, or not weighed heavily enough, because companies just assume customers will be better served with automation. That is not necessarily the case, however, and some tasks are better left to agents.
Tied directly into the concept of customer satisfaction is task completion rate, which Shirk considers to be the most important KPI for any customer contact center. “By looking at the task completion rate, you can look at what customers are trying to do, where they are going, and where they are having trouble, and you can then go in and make corrections to the system to make their lives better,” he says.
And unlike automation rates, average call handling time, or speed of answer rates—which Shirk considers lower-level metrics—task completion rates allow contact center managers to focus on building more efficient ways for customers to do things.
Automation, speed of answer, and call handling times are important, especially when most contact center costs are priced on a per-minute basis, according to Shirk, but they relate more to cost than to customer satisfaction. “We all know that waiting in a queue can have a detrimental effect on the customer interaction, but they’re more a matter of agent staffing, scheduling, and utilization,” he says. “They’re more about time management rather than dealing with the customer’s problems.”
“You have to do field work and find out where the [customer] pain points are,” Leppik adds. And that requires asking the right questions.
Out in the Field
Those questions should fall into two categories: objective and subjective, or, as independent speech consultant Jim Larson calls them, performance and preference.
Objective or performance data is easier to come by. It includes computer logs of data gathered during the call, such as speech recognition accuracy rates, task completion time, and how long the system took to respond to a question or route the caller to the appropriate location. Many vendors make solutions that can apply business intelligence, call logging and recording, speech analytics, and other technologies to gather this information. Call monitoring also can go a long way in helping companies collect this information.
Subjective or preference questions, on the other hand, seek user reactions to the system. They typically come in survey form and ask callers to rate aspects of the system on a scale from 1 to 10. Survey questions can include the system’s usefulness, whether the caller would use it again and/or recommend it to a friend, whether the voice was pleasant, and whether the dialogue moved the interaction forward. “The goal for [answers to] these questions should be 9 or higher,” Larson says.
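Scoring such a survey against Larson's 9-or-higher target is straightforward once the responses are tallied per question. A sketch with hypothetical question names and ratings:

```python
# Hypothetical survey responses on a 1-10 scale; question names are
# illustrative, not an actual survey instrument from the article.
surveys = {
    "usefulness":        [9, 10, 8, 9],
    "would_use_again":   [10, 9, 9, 10],
    "voice_pleasant":    [7, 8, 6, 7],
    "dialogue_moved_on": [9, 9, 10, 8],
}

GOAL = 9.0  # Larson's target: answers should average 9 or higher

for question, scores in surveys.items():
    avg = sum(scores) / len(scores)
    flag = "OK" if avg >= GOAL else "below goal"
    print(f"{question}: {avg:.2f} ({flag})")
```

A report like this flags at a glance which aspect of the system ("voice_pleasant" in this invented data) is dragging down caller perception.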
In gathering customer satisfaction data, Leppik says companies should also pay close attention to the call flow. “Were there repetitive steps within the call?” he asks. “Customers get frustrated if they have to give their account information three or four times.”
The IVR, he adds, can have the biggest impact on the customer’s perception of the company and the overall success of a call. “It’s a key driver for customer satisfaction and loyalty and whether he would recommend and promote the company to others,” he states.
In collecting IVR data, VocaLabs “ties into the IVR and gets live streams of information about the customer: who called, what path [in the IVR] they went through, and whether they went to an agent,” Leppik explains.
VUI Jail
Another KPI with perhaps the farthest-reaching effects in the call center is first-call resolution. By tracking and eliminating the reasons for repeat calls, a contact center can improve customer service, but it can also lower costs by reducing call volumes, and possibly even the number of agents needed to field calls in the first place.
Donna Fluss, founder and president of DMG Consulting, also suggests that companies include call containment rates in the mix. Call containment, defined as the number of callers who complete their tasks entirely within the automated system without zeroing out to an agent, has to be scored appropriately to be of consequence, though. All too often, companies that look at containment numbers fail to consider that customers could be hanging up before they complete their tasks, Fluss says.
“If they’re hanging up in three to six seconds, they were in the wrong spot and did not want to end up in the IVR,” she explains. “Now some companies consider that a contained call, while other organizations do not count that because the person clearly was not satisfied.”
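Fluss's caveat amounts to filtering very short calls out of the denominator before computing containment. A minimal sketch, assuming per-call duration logs and a six-second cutoff (both illustrative choices, based on the 3-to-6-second range she cites):

```python
# Hypothetical call records: (duration_seconds, reached_agent).
# Calls that end within the first few seconds are treated as misdials,
# not as successful self-service, before containment is computed.

QUICK_HANGUP_SECONDS = 6  # assumed threshold

def containment_rate(calls):
    """Share of meaningful calls completed without reaching an agent."""
    meaningful = [c for c in calls if c[0] > QUICK_HANGUP_SECONDS]
    if not meaningful:
        return 0.0
    contained = [c for c in meaningful if not c[1]]
    return len(contained) / len(meaningful)

calls = [(4, False),    # quick hang-up: excluded from the calculation
         (180, False),  # self-served in the IVR: contained
         (240, True),   # transferred to an agent: not contained
         (95, False)]   # self-served: contained
print(containment_rate(calls))  # 2 of 3 meaningful calls
```

Whether quick hang-ups are excluded or counted as contained can swing the reported rate substantially, which is exactly the inconsistency between organizations that Fluss describes.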
Microsoft/Tellme’s Shirk takes issue with using call containment/abandonment rates as metrics because they do not consider that customers interact with companies through more channels than just the telephone. “Today people more frequently are turning to other channels to engage companies. They do parts of their tasks across multiple channels at different times,” he says.
Containment and abandonment also fail to consider whether the customer was able to accomplish his goal. “Task completion is more relevant because it allows the customer to say, ‘I had a specific task in mind, and I was successful at it,’” Shirk adds. And that, more than anything else, weighs heavily when determining whether a customer was satisfied with the interaction.
“Always, the enterprise should pay attention to whether customers are satisfied—are they pleased with the content and outcome of the IVR?” Fluss argues. And that includes not just the self-service IVR, but also the agent contact and the company’s fulfillment, which looks at whether the company followed up and did what it said it would to make a bad situation right.
Then, Fluss adds, “it’s not enough to determine if [the customer] was satisfied or not. You need to find out why he was not satisfied.”
Speech analytics can go a long way in that regard, says Fluss, who advocates for the technology as “something that should be standard with any IVR you’re putting in.”
Agents can provide a lot of assistance there as well. “Callers often express their displeasure to agents,” Fluss explains. “So you really should listen to those calls to determine where the customer is not satisfied.”
Determining where the customer lost interest often requires two additional questions, Maxwell says: One, is the interaction fast? Two, is it easy?
To determine how easy a system is, “we have something called a ‘two-try rate’—if they can get what they want in two tries or less,” he says. “To get the user to adopt the application, it needs at least an 85 percent two-try rate.”
On the issue of speed, he says, “look at the median call duration—the total length of the interaction between the caller and the system, and not just the dialogue, but also the call transfer, etc.”
Next, Maxwell suggests looking at the operator assistance rate—how many people are opting out to an agent. “If there’s a high operator assistance rate, the application is not delivering ROI from automation,” he says, noting that such a metric can also uncover additional areas that could be automated.
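All three of Maxwell's measures can be derived from per-call logs. A hedged sketch with invented field names and data, not Parlance's actual metrics pipeline:

```python
import statistics

# Hypothetical per-call log records; field names are assumptions.
calls = [
    {"tries": 1, "duration_s": 45,  "opted_to_operator": False},
    {"tries": 2, "duration_s": 70,  "opted_to_operator": False},
    {"tries": 4, "duration_s": 210, "opted_to_operator": True},
    {"tries": 1, "duration_s": 38,  "opted_to_operator": False},
    {"tries": 2, "duration_s": 95,  "opted_to_operator": False},
]

# "Two-try rate": callers who got what they wanted in two tries or fewer.
two_try_rate = sum(c["tries"] <= 2 for c in calls) / len(calls)

# Median call duration, covering the whole interaction, transfers included.
median_duration = statistics.median(c["duration_s"] for c in calls)

# Operator assistance rate: callers opting out to a live agent.
operator_rate = sum(c["opted_to_operator"] for c in calls) / len(calls)

print(two_try_rate, median_duration, operator_rate)
# Maxwell's adoption benchmark above: two_try_rate should reach 0.85
```

In this invented sample the two-try rate is 0.8, just under the 85 percent bar Maxwell sets for user adoption.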
Then one of the last metrics he suggests is call volume trending data, which is subject to change as the company experiences layoffs, growth, or a drop-off in business. “Each can be an opportunity to help customers better,” he says.
Another crucial KPI to consider is agent satisfaction, since satisfied agents can be the company’s best advocates. It goes without saying that the contact center would not be able to operate without agents, and in an increasingly competitive job market, ensuring staff turnover is low is one of the most important things a contact center manager can do to improve overall performance. Another school of thought says that if the contact center management takes care of its agents, the agents will take better care of the customers.
When to Ask
Along with knowing which questions to ask, companies often struggle with knowing when and how often to ask the questions. Larson and Leppik both suggest collecting data at three stages of deployment. Stage 1 is when the company is determining which technologies to deploy. Stage 2 is the dialogue development stage, when scripts and grammars are written, voice talent is selected, and prototypes are chosen. Stage 3 is when the application goes live.
“You want to go out and sample live customers to see if the application accomplishes its goals and where it can be improved,” Leppik says. “You should do a test right away to get solid data,” he continues. “Then afterward, do constant, low-level testing. You can do a modest number of customer surveys every week or every few weeks and roll it up every few months to get higher-level statistics.”
As a rule of thumb, according to experts, KPIs should be collected at least every three months. “Minimally, you want to do it on a quarterly basis, but ideally it should be done on a monthly basis,” Fluss states. “Companies should always be listening to the voice of the customer and surveying their customers.”
Unfortunately, many companies do not follow up as often as they should, Fluss says, adding that companies should expect to change goals, KPIs, metrics, and data sources as the business and technologies evolve.
Leppik agrees. “Speech applications do not exist in a vacuum,” he says, noting that changes in customer expectations, the competitive landscape, company products and promotions, and more can affect the customers’ experiences and why they call into an IVR in the first place.
Who Should Ask
Measuring the success of an application is often an area left to quality control experts working in conjunction with a VUI designer. “I do not think the application implementers should do it,” Larson says. “It’s not their area of expertise.”
The initial VUI designer might also not be the right person to collect KPI data, Leppik suggests. “When we’re talking about customer feedback, people can become defensive, especially when we’re talking about negative sentiments,” he says.
Surveys, the experts agree, should be conducted by third-party firms that specialize in collecting such data, and should ideally take place no later than 24 hours after the initial customer contact.
Leppik says his company “calls the customers back a few minutes after they hang up, and because we do [surveys] right away, we get a good response rate.”
As a final caveat, Fluss warns against using metrics and KPIs to benchmark against the competition. “It doesn’t matter what the other organizations are doing,” she says. “You need to continue to optimize your own operations first.”
KPI Duty
To maintain a consistent level of service quality, call center management should rely on several key performance indicators (KPIs) as gauges. KPIs for contact center operations include the following:
- task completion rates;
- cost per contact;
- customer satisfaction;
- calls per hour;
- agent utilization;
- first-call resolution;
- average call handling times;
- average caller wait times;
- routing accuracy;
- abandonment rates; and
- containment rates.
The Metric System
To determine customer satisfaction, companies should be collecting information using both subjective and objective metrics.
Subjective metrics should include the following:
- ease of use, which measures how callers perceive the application with regard to navigation, error recovery, and providing help when needed;
- quality of the speech output, which measures the intelligibility of the system dialogue; and
- perceived first-call resolution, which measures whether callers complete their tasks on the first try.
Objective metrics, which do not involve judgments from callers, but rather data collected by logging and recording actual calls, include the following:
- time to task, which measures the time it takes for the caller to start the task about which he called, and looks at lengthy instructions, references to a Web site or other self-help channel, marketing messages, or other content that plays at the start of the call;
- task completion rate, which measures how often a caller is able to accomplish her goal;
- task completion time, which measures how long it takes for the caller to complete his task, including transfers and other steps in the process;
- correct transfer rate, which measures how often an application routes the caller to the right place to complete her desired task, rather than misdirecting the call to an inappropriate menu or agent;
- abandonment rate, which measures how many callers hang up while waiting in the queue or while navigating through the IVR; and
- containment rate, which measures how many calls are handled entirely by automation without transferring to an agent.
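Given per-call log records, the objective metrics above reduce to simple ratios over the call population. A sketch under assumed field names:

```python
# Hypothetical call log records; field names are illustrative assumptions.
records = [
    {"completed": True,  "abandoned": False, "reached_agent": False},
    {"completed": False, "abandoned": True,  "reached_agent": False},
    {"completed": True,  "abandoned": False, "reached_agent": True},
    {"completed": False, "abandoned": False, "reached_agent": True},
]

def rate(records, key):
    """Fraction of calls for which the given flag is set."""
    return sum(r[key] for r in records) / len(records)

task_completion_rate = rate(records, "completed")      # caller reached her goal
abandonment_rate = rate(records, "abandoned")          # hung up mid-call
containment_rate = 1 - rate(records, "reached_agent")  # never hit an agent

print(task_completion_rate, abandonment_rate, containment_rate)
```

Note that this naive containment figure would still need the quick-hang-up adjustment discussed earlier before it could be reported meaningfully.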