Metrics, Measures, and Agent Performance

By Ric Kosiba, Chief Data Scientist, Sharpen Technologies

They say the first step to improving performance is to start measuring it. The contact center is so lush with data that whole industries have grown up around winnowing that data into meaningful chunks and creating flashy dashboards. There are companies that specialize in reporting systems, others that have built agent performance games using this data, and consultants that have developed standard (but custom!) management analyses using our ACD and other contact center data. We are data rich.

I’ve had the luxury over the last few weeks to spend time thinking about contact center data and its application for managing agent performance and, more importantly, agent experience. I’ve been lucky enough to speak to several smart contact center experts and to hear their stories on performance hits and misses. I’d like to take you down the bumpy road of our current thinking.

Why Do We Measure?

This question sounds a bit basic. But when I’ve reviewed the management reports of several companies, my question “why do you have this report?” was sometimes met with the answer “I don’t know” or “I guess we really don’t need this one.” Reports have a life all their own, and many reports or dashboards live well beyond the work-life of their creators and sponsors.

What’s our purpose for measuring agent and contact center performance? Primarily, it’s to improve performance and to help manage it; some also use the reports as an early warning of operational hiccups. Let’s delve into the performance management aspects of measuring.

A GPS, Not a Compass

I had the pleasure of chatting with contact center guru and deep thinker (and my friend) Tricia Payne from American Express, and she gave me this awesome insight: “Measurement should be a GPS and not a compass.” No matter what we measure, its purpose should be to guide management, floor leaders, and agents toward a desired improvement. If a metric doesn’t do this, or worse yet, if it causes detrimental behaviors, then the act of measuring is itself counterproductive. Measurement for measurement’s sake is a waste.

What are the attributes of a good performance metric? Let’s list some:

  1. First, it needs to be measurable, which sounds like a given. But many measures are simply not possible, either because of the nature of the metric or the limitations of our ACDs and reporting systems.
  2. Your performance metrics need to be actionable or corrective, meaning an agent needs to be able to improve the measure through their own actions. For example, service level is not individually actionable by an agent (or even a supervisor). While an agent or supervisor can slightly affect service level, service level as a performance measure really reflects the effectiveness of the workforce management team. On the other hand, schedule adherence is completely under the control of the agent and has the added benefit of marginally improving the collective service level.
  3. Metrics need to be hard to manipulate and not corrupting. If we measure handle time, agents can manipulate their performance by simply hanging up early on the customer. We do not want to foster this sort of behavior!
  4. Competing metrics should be simultaneously reported. To counter manipulation of performance measures, one good practice is to report on opposing metrics. In our AHT example, if we also measured quality metrics, like first call resolution (if we have an ACD that supports this) or customer satisfaction, it’d be harder to manipulate handle times because the two measures work in tension with one another (to a point).
  5. We should measure data that is plentiful. Sporadic measures, like quality control scores, are great for coaching, but not great as a statistical measure. If I only get a few data points each week, outliers have too much of an impact on the performance tally.
  6. A metric should measure reliable data. I’ve heard many ACDs have a hard time measuring first call resolution reliably. If a reported value is sometimes right, it means that it is also sometimes wrong. If a metric is sketchy, we don’t want to add importance to it by putting it on a report. It would be detrimental to report FCR unless the measure were locked down and consistent.
  7. Measures and agent goals need to be fair. There is nothing more demotivating than setting goals way too far out of reach or measuring performance that an agent is not able to affect. For example, it’s important that new hires are treated differently than seasoned reps.
  8. Rollups and allocations need to be accurate. Performance metrics aren’t just for agents, but also for team leaders and managers. Rollups need to be accurate and fair, and they can be complex. For example, some teams may include different skills and channels than others. The rollup of performance metrics must account for these differences. Similarly, allocating idle time, say, down to an agent who handles multiple types of contacts must be well thought through, or the performance metrics will be biased (see the sketch just after this list).
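
To make that rollup pitfall concrete, here is a minimal sketch in Python. The agent names and numbers are invented for illustration; the point is simply that an unweighted average of agent AHTs misstates a team’s true handle time whenever call volumes differ.

    # A sketch of the rollup pitfall: a simple average of agent AHTs
    # misweights agents who handle very different call volumes.
    team = [
        # (agent, calls handled, average handle time in seconds)
        ("Ana",  400, 280),
        ("Ben",   50, 600),   # few calls, long AHT
        ("Cara", 350, 310),
    ]

    # Naive rollup: unweighted mean of agent AHTs (overstates Ben's impact)
    naive = sum(aht for _, _, aht in team) / len(team)

    # Fair rollup: weight each agent's AHT by the calls they actually handled
    total_calls = sum(calls for _, calls, _ in team)
    weighted = sum(calls * aht for _, calls, aht in team) / total_calls

    print(round(naive))     # ~397 seconds
    print(round(weighted))  # ~313 seconds, the team's actual average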

Seeing Performance

It’s not enough that your teams are measured (although measurement alone certainly moves the needle). Reps need to know they are being measured and, better yet, to see the results of those measures. Knowing your performance relative to your peers is the “gold standard” in performance management.

We were having dinner with a very smart customer of ours a few weeks ago, and he told us a story about one of his underperforming agents. He had brought the agent into the office to put him on a performance plan. The agent was painfully slow and unproductive, and was being weeded out. But when he was shown his handle times and the company’s expectations, the agent was surprised to find he was so much slower than his peers. He went back to his desk, picked up his pace, and was well within standard from that moment on, all while maintaining his quality scores. He just didn’t know what was expected of him.

Our customer started the habit of posting each agent’s performance relative to their peers and saw an uptick in overall quality and efficiency. But this is intuitive, right? If I knew I would be measured by my company on how often I called a customer, I’d do a heck of a lot more calling. Expectations are funny things.

Of course, there are some important social considerations.

Do you want all agents to know the pecking order of performance? Maybe, maybe not. Showing performance can be a de-motivator as well, especially if it is public. Use judgment.

Scorecards

A common practice, which I have mixed emotions about, is the combining of disparate performance metrics into a single scorecard. I understand the impulse – it would be super nice to have an easy, single number to judge performance by. But a poorly designed scorecard can do a real disservice.

For scorecards, we still need to pass the metric attribute tests. What do you do with the scorecard? Is it corrective and actionable? Is it fair? Does it make math-sense? Is the weighting of the different metrics arbitrary? Does it measure what we want it to? Scorecards also introduce a complexity that people don’t usually think about: how to assemble the disparate measures.

A depressingly long time ago, I took a job – for only a week, it turned out – as an agent in a highly unethical sales center. This center made calls into businesses and sold them office supplies. We had a very rigorous script, which included promises of freebies like a Las Vegas vacation or other premium prizes. We were measured by how well we adhered to the script, our handle times, and the number of pens sold. There was a formula for determining whether you were doing your job well. I was productive and efficient, but a terrible salesperson.

I did a side-by-side with the number one agent, who had terrible handle time scores and productivity, because he would research the next “mark” and learn about the neighborhood before he ever made a call. He knew the name of the restaurants, the local hardware store, and the mayor. He’d call the small business, pretend to be from the small town, and have a long chat with people about how much he missed his “hometown.” He’d put on an accent. And then he’d sell them a lot of pens.

The point of this is that, while I performed well on the contact center scorecard, it really wasn’t what management was interested in at all. They measured handle times and the number of calls made, but they really only cared about sales. I made 80 calls a day; the super sales guy made three. Your scorecard and measures should be a statement about what you think is important. As with our customer’s agent, who improved on the verge of being fired, knowing what is expected of you is half the battle.

Another problem I have with scorecards is that their math doesn’t always add up. To make scorecards work, we often have to convert the different metrics into a ranking or a percentage-of-goal measure, and that conversion can be problematic when the measures are merged. If a metric with a wide distribution of performance is combined, through a percentage of goal, say, with a metric with a narrow distribution of performance, you get a weird skew in the scorecard: the wide-distribution metric will swamp the narrow distribution’s meaning. If using scorecards, look at the distribution of performance in the separate measures. Something to watch out for: if small changes in performance yield big swings in agent ranking, it means the distribution of performance is too narrow and you should use a different measure.
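
Here is a minimal sketch of that skew in Python, with invented agents and goals. Blending via percentage of goal lets the wide-spread metric (AHT) dominate the composite; standardizing each metric first (z-scores) is one common way to put them on equal footing.

    # A sketch of the distribution skew: blending a wide-spread metric
    # (AHT) with a narrow-spread one (adherence) via percentage of goal.
    # All names and numbers are invented for illustration.
    from statistics import mean, stdev

    agents = {
        # agent: (average handle time in seconds, schedule adherence)
        "Ana":  (280, 0.95),
        "Ben":  (520, 0.96),
        "Cara": (310, 0.89),
        "Dev":  (450, 0.93),
        "Elle": (260, 0.91),
    }
    AHT_GOAL, ADH_GOAL = 300, 0.90

    def pct_of_goal(aht, adh):
        # 50/50 blend; AHT uses goal/actual because lower is better
        return 0.5 * (AHT_GOAL / aht) + 0.5 * (adh / ADH_GOAL)

    # The AHT components span roughly 0.58-1.15 while the adherence
    # components sit between 0.99-1.07, so the blend (and any ranking
    # built on it) is driven almost entirely by AHT.
    for name, (aht, adh) in agents.items():
        print(name, round(pct_of_goal(aht, adh), 3))

    # One fix: standardize each metric (z-scores) before weighting, so a
    # one-standard-deviation gain counts the same in either measure.
    ahts = [a for a, _ in agents.values()]
    adhs = [d for _, d in agents.values()]

    def standardized(aht, adh):
        z_aht = -(aht - mean(ahts)) / stdev(ahts)  # negated: lower is better
        z_adh = (adh - mean(adhs)) / stdev(adhs)
        return 0.5 * z_aht + 0.5 * z_adh

    for name, (aht, adh) in agents.items():
        print(name, round(standardized(aht, adh), 3))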

Similarly, weighting is a judgment call, and that weighting must be translatable into a course of action by the agent. The agent needs to be able to intuitively know what to do with the scorecard. Often, this isn’t natural for the agent.

One of my smart friends at work, a former teacher, recommended a teaching trick for measuring agents in place of the scorecard: the use of the rubric. A rubric, in schools, is a breakdown of what is expected on any assignment: 20% of your score is for having a summary paragraph, 10% for having clear goals, and so on. For contact centers it could be more complex. For example, you get 30% if you meet your AHT standard, 30% if the customer does not call back again, 40% if you adhere to your schedule, or 100% if you make the sale. Instead of rolling up numbers that aren’t really combinable, you have a rubric on every single call to know whether you were successful or not. That can serve as the basis for an overarching performance goal.
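
Here is a minimal sketch of that rubric as per-call scoring logic in Python, using the illustrative weights from the example above; the function name and the AHT threshold are my own assumptions, not a prescribed implementation.

    # A sketch of per-call rubric scoring using the example weights above.
    def score_call(aht_seconds, customer_called_back, adhered_to_schedule,
                   made_sale, aht_standard=300):
        """Score one call against the rubric; a sale alone earns 100%."""
        if made_sale:
            return 1.0
        score = 0.0
        if aht_seconds <= aht_standard:
            score += 0.30   # met the AHT standard
        if not customer_called_back:
            score += 0.30   # the customer did not call back again
        if adhered_to_schedule:
            score += 0.40   # adhered to schedule
        return score

    # A call under standard AHT with no callback, but off schedule:
    print(score_call(250, customer_called_back=False,
                     adhered_to_schedule=False, made_sale=False))  # 0.6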

A happy agent is one who does their job well, and knows their supervisors (and peers) know it too. Agents should see their performance, compared to their peers, and compared to the best. We need to put some real thought into how we measure them.

Ric Kosiba, Ph.D. is a charter member of SWPP and is the Chief Data Scientist at Sharpen Technologies. He can be reached at rkosiba@sharpencx.com or (410) 562-1217.