How are We Really Doing on Customer Satisfaction?

Ric Kosiba, Chief Data Scientist, Sharpen Technologies

WFM Focuses on Efficiency, and We’ve Done a Great Job

When contact centers became a thing, their primary purpose was to save companies money. Businesses knew they could run one big call center more efficiently than many small, disparate call handling groups — call centers brought significant economies of scale. It is the reason workforce management became such an important part of the contact center operation — not only did we get efficiencies from standard automatic call distributors, but by scheduling agents better and by managing adherence and schedule exceptions, we could save our companies even more money.

We have done such a good job that we don't really talk about this much anymore; the efficiencies have been realized and are simply assumed, so long as we do our jobs.

Instead, we have been discussing, in many different flavors, paths to greater customer satisfaction. We discuss measuring customer satisfaction through net promoter scores (I'm not a fan) and customer satisfaction scores. We discuss coaching and feedback, reducing agent attrition (losing experienced agents is both a cost and a quality issue), and tools to answer our customers' questions better and faster. We've shifted, for quite a while, toward improving our customers' experience. Which is good!

We now search for better metrics to define and measure interaction quality. We survey our customers, and we search to find AI technologies that automatically improve the quality of the interaction.  Our vendors come up with new products that promise better interactions. And so, contact center vendors dream about selling call center technologies to marketing departments (instead of operations), pitching the ability to use the call center to drive customer loyalty and sales.  

We have been trying to make quality the next rationale for contact center organizations and contact center technologies.

But have we made any headway?  The data is iffy at best.

The Graph

For most organizations, contact center agents are at the tip of the customer service spear.  Contact center departments are proud to own the lion’s share of our company’s customer satisfaction.  CSAT is on us.

When we look at customer satisfaction surveys over the last twenty years, a period in which we have invested so much in our contact center operations, the results are tepid. ACSI, the customer satisfaction survey and analytics company behind the American Customer Satisfaction Index, conducts an annual poll measuring customer satisfaction across U.S. companies. This year they produced the data below, which we graphed:

CSAT chart 1

For a better description, see the American Customer Satisfaction Index.

What is most surprising about this graph is its flatness. Its variance is +/- 3%, in an industry that has spent billions on customer satisfaction. Shockingly, the customer service index starts in 1994 at 74.8% positive CSAT and, after 27 years of measurement and amazing investments in process improvements, AI, quality assurance technologies, natural language processing and speech analytics, coaching, and other expensive tools, it ends at 73.7% positive CSAT.

The CSAT improvement: Nada. It is certainly disheartening.

From ACSI: “The major reason for the decade-long stagnation in customer satisfaction among major companies and the subsequent sharp decline in customer satisfaction is not a result of lack of attention by businesses. Companies devote more effort than ever before to enhancing the shopping, purchasing, and consumption experience. It is not for lack of data either. Companies assemble, organize, combine, transmit, store, and display more customer data now than they have in the past. There is no evidence that consumer expectations have risen either.”


While your mileage may vary, many of our industry's investments have failed. They haven't moved the CSAT needle at all. This should concern us all.

Why is this the case?  

According to the ACSI, and I think they are onto something, lack of customer data isn’t the issue, but rather, lack of customer information is: “While companies today have more data about their customers, the analytics employed to turn data into information are for the most part not good enough. Customer satisfaction data have certain characteristics that make it difficult to obtain accurate estimates, to pinpoint what aspects of the customer experience need attention, and to gauge the financial impact of actions contemplated.”

It's not a lack of data; it is a lack of insight from the data. Never has an industry had as much data as ours, and the question is whether we can make sense of it all and then act upon it.

If ACSI is right, are we measuring the right things? Certainly, on the workforce management side of the house, we’ve chosen the right metrics — we’ve seen those efficiency gains. But on the quality side, are we measuring and focusing our business toward improving the correct measures and processes that result in happier customers? The ACSI data would suggest we are not. We are measuring the wrong metrics associated with CSAT — because those measures do not inform us how to improve the interactions with our customers.

In this column in the last two issues of On Target, we discussed some of our quality measures and their shortcomings. We questioned NPS and other surveyed measures.  The ACSI data would also suggest many of the technologies being touted, like predictive routing or AI, are not having the positive customer satisfaction outcomes expected.

What are the components of satisfaction? Proponents of AI-based predictive routing would suggest there is some unknown chemistry between agents and customers that improves satisfaction. But is there? Lately, I have been on the phone with a number of companies, not because I was itching to speak to a phone agent, but because I knew my request was too complex for any alternative (oh, do I hate talking to any machine, BTW). What do I care about?

  1. Getting my question answered.
  2. A quick interaction and a sunny agent are a plus.

That’s it.  That’s the key to customer satisfaction.  But don’t take my word for it, here’s some data.  

In the first scatter plot, built from data at one of our customers, I am plotting Customer Satisfaction Score (surveyed and measured out of a top score of 5) on the Y-axis against handle time on the X-axis. Each dot represents an agent's average performance over the course of a month. If there were a relationship between these measures, that is, if long handle times influenced CSAT negatively, we would see a downward trend. But we don't; instead, we see a data blob with no real pattern at all. This means there is little relation between agents who handle calls slowly (long AHT) and their CSAT performance.

Average Handle Time Versus CSAT

The next chart plots customer satisfaction versus hold time for each agent over the course of a month. Clearly, customers don't like being put on hold (I don't!). But again, we are left with another data blob with no significant trend. Agents who put their customers on hold more often do not see any real change in CSAT. The two measures look unrelated. Being put on hold does not seem to drive dissatisfaction.

Hold Time Versus CSAT

Let's look at one more, this time plotting CSAT against Active Contact Resolution (ACR-1 Day). If you remember from the last couple of issues of On Target, ACR is a cool new metric that measures whether the customer's question was answered, defined as the customer not needing to contact us again within 24 hours. Completely intuitive.

We no longer have a blob. Instead, we see a strong relationship: agents who better answer customers' questions earn higher customer satisfaction. Bingo.

A few things to note. First, ACR can be tallied for every contact. Second, our experience shows that agents can be motivated to significantly improve their personal ACR performance, and hence, CSAT. Third, by improving ACR, not only do you move the needle on CSAT, but you also improve center efficiency, as fewer customers need to call back. Super cool.

ACR1 Versus CSAT
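Because ACR-1 Day is defined directly from contact records (no repeat contact from the same customer within 24 hours), it can be tallied from an ordinary contact log. This is a minimal sketch with made-up data and hypothetical field names, not any particular vendor's implementation:

```python
# Sketch of tallying ACR-1 Day from a contact log, per the definition
# above: a contact counts as "resolved" if the same customer does not
# reach out again within 24 hours. Log format and names are hypothetical.
from datetime import datetime, timedelta

# (customer_id, contact timestamp): a toy log, sorted by time
log = [
    ("cust_a", datetime(2024, 3, 1, 9, 0)),
    ("cust_b", datetime(2024, 3, 1, 10, 30)),
    ("cust_a", datetime(2024, 3, 1, 14, 0)),  # repeat within 24h -> first cust_a contact unresolved
    ("cust_c", datetime(2024, 3, 2, 11, 0)),
    ("cust_b", datetime(2024, 3, 4, 16, 0)),  # repeat after 24h -> earlier cust_b contact resolved
]

def acr_1_day(contacts):
    """Fraction of contacts with no follow-up from the same customer within 24h."""
    window = timedelta(hours=24)
    resolved = 0
    for i, (cust, ts) in enumerate(contacts):
        followup = any(c == cust and ts < t2 <= ts + window
                       for c, t2 in contacts[i + 1:])
        if not followup:
            resolved += 1
    return resolved / len(contacts)

print(f"ACR-1 Day: {acr_1_day(log):.0%}")  # 4 of 5 contacts resolved -> 80%
```

In practice you would also exclude contacts from the last 24 hours of the log (their follow-up window has not yet closed), and compute the rate per agent to get the X-axis of the chart above.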

I don't have the answers, but I suspect that part of our industry's lack of progress is that we have been measuring the wrong things: we measure metrics that do not correlate with happy customers. And we try to fix those unrelated measures, which does not bring us better real results. ACR measures the right thing, and improving it will improve customer satisfaction. What seems to move the needle? Not technology necessarily, but a focus on the right, simple, and easy-to-convey metrics.

I truly think we are onto something cool.

Ric Kosiba is a charter member of SWPP and is the Chief Data Scientist at Sharpen Technologies. He can be reached at (410) 562-1217.

Sharpen Technologies builds the Agent First contact center platform, designed from the ground up with the agent’s experience in mind. A happy agent makes happy customers!