Outliers and Improving Performance
Ric Kosiba, Chief Data Scientist at Sharpen Technologies
The Improvement Cycle
We’ve all seen some version of the simplistic, continuous improvement cycle chart that I cobbled together below. The idea that has driven many a consultant’s paycheck is to measure, investigate, fix, improve, and then measure again, starting the process over. The hope is that by focusing your management time and attention on finding and fixing issues in your operation, your organization will get better and faster. As simplistic as it is, it really works.
Figure 1: An Improvement Cycle
The issue is that we often do not have the time or the ability to search for things to fix.
I had a wonderful professor, Tom Sparrow, who told me that one of the best things you can do when starting a new job is, on your first day of work, walk around the operation and note all the things that feel silly. Those things on your list will make up your project work for the next five years. I’ve found this advice to be extremely valuable, and even though it is not our first week on the job, we can do the same thing by taking a fresh look at our data and jotting down all the things in our numbers that feel “silly.” Let me show you how.
QA and Coverage
The main issue with Quality Assurance is resources; there just aren’t enough to go around. There has been talk of using AI to do QA, but so far, real attempts to substitute computers for QA professionals have been unproductive. Either the AI results have been too far off, or the system and process have been too expensive and time consuming.
What about speech analytics platforms, with keyword and phrase searches to target problems? Or sentiment analyses to find irate customers or frustrated agents? These work well for finding particular keywords or emotional interactions, and they help us figure out whether customers are calling us for different reasons. But they do not spot behavioral issues that management has not already defined and understood.
So we are back to having too few QA resources. In my discussions with call center managers, a typical agent will have one or so QA-scored interactions a week, randomly chosen. However, to enter our cycle of improvement, we need to be able to spot the interactions that need improving and the agents with bad habits, and then diagnose and fix those issues. One randomly chosen listen per week will not suffice.
Outlier Analysis
One easy method for finding behaviors in agents or customers is basic outlier analysis. The idea is to look at your data for behaviors that just aren’t normal. In Figure 2 below, all we are doing is plotting, for each agent, the number of contacts that are considered short (in this case, we chose 20 seconds or less as the threshold) compared to the total contacts taken in a given month. We also highlight the number of calls that are considered long (the threshold we used is greater than 20 minutes). It is super simple and an easy query to make against your contact handling database.
Figure 2: Short and Long Call Outliers
To make the data pop, we change the color of the bar to red if the percentage of abnormal calls is at least twice the mean (although you could format colors and define your outliers however you like). This makes for a compelling graph: anybody can glance at it and know immediately where there are anomalies in performance.
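If you want to build this report yourself, here is a minimal sketch in Python with pandas, assuming an extract from your contact handling database with one row per call and hypothetical columns named agent and handle_seconds; the thresholds and the twice-the-mean flag simply mirror the rules described above and should be tuned to your operation.

```python
import pandas as pd

def call_length_outliers(calls: pd.DataFrame) -> pd.DataFrame:
    """Per-agent share of short and long calls, flagged against the center mean."""
    calls = calls.copy()
    calls["is_short"] = calls["handle_seconds"] <= 20        # 20 seconds or less
    calls["is_long"] = calls["handle_seconds"] > 20 * 60     # longer than 20 minutes

    report = calls.groupby("agent").agg(
        total_calls=("handle_seconds", "size"),
        pct_short=("is_short", "mean"),
        pct_long=("is_long", "mean"),
    )

    # Flag anyone whose share of abnormal calls is at least twice the center mean.
    for col in ("pct_short", "pct_long"):
        report[f"{col}_outlier"] = report[col] >= 2 * report[col].mean()
    return report.sort_values("pct_short", ascending=False)
```

From there, the pct_short and pct_long columns chart directly into the bar graph, with the outlier flags driving the red bars.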
In this example, Judy, Angela, and Samuel stand out as agents with a significantly higher share of calls shorter than 20 seconds. Gloria and Sherylanne are two agents with more long calls than their peers.
The important question is, how do we use this information? The most straightforward thing we could do is to simply set up time with each agent and ask them what’s up. But I would counsel that we do something a tad different: listen to the short and long calls and learn. Does the agent have internet connection problems? Do the hours they work mean that they get more wrong numbers? Are they new and slower than other agents? Do they get more complex calls than the typical agent? Do we have a behavioral problem with the agent?
By learning, we get moving on the improvement cycle, which will help us get better as an operation. When we showed this customer their data, they had an interesting take. They knew that Gloria and Sherylanne were new to the contact center, so a higher proportion of long calls was expected. But Samuel having 10% of his calls come in under 20 seconds? That was very unexpected and deserved investigation. Let’s look at another report.
Figure 3 below is another outlier report, but it looks at different behaviors: hold and wrap times. For most companies, hold time and wrap time are a normal part of the agent experience, but they can be abused. Once again, we are looking for behaviors that are out of the ordinary. In this case, we simply toggle to red those agents who have, on average, noticeably higher wrap or hold times than their peers. Interestingly, Samuel is an outlier again: his wrap time (the time from when the call ends to when the agent is available again) stands out. This seems to be a strange combination of behaviors: many more short calls and significantly more after-call “work.”
Figure 3: Hold and Wrap Time Outliers
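The hold and wrap report follows the same pattern. Below is a sketch under the same assumptions, this time with hypothetical hold_seconds and wrap_seconds columns, again flagging agents whose averages run at least twice the center mean (use whatever rule fits your operation).

```python
import pandas as pd

def hold_wrap_outliers(calls: pd.DataFrame) -> pd.DataFrame:
    """Average hold and wrap time per agent, flagged against the center mean."""
    report = calls.groupby("agent")[["hold_seconds", "wrap_seconds"]].mean()

    # Flag agents whose average hold or wrap time is at least twice the center mean.
    for col in ("hold_seconds", "wrap_seconds"):
        report[f"{col}_outlier"] = report[col] >= 2 * report[col].mean()
    return report.sort_values("wrap_seconds", ascending=False)
```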
Occupancy Versus Busyness
We were looking for a way to determine, using numbers, how busy an individual agent is. In our current work environment, where agents are working from home, these reports have to substitute for our eyes. In the past, we could look out across the center and see our agents and what they were doing. Today, we have to resort to dashboards and reports.
Occupancy is a great measure for us WFM types to understand how well we are doing in allocating resources, and it tells us how busy all of our agents are, on average. But our outlier report hints at something different: some agents may be gaming our metrics, while other agents are working “all out.” We need to be able to flag both scenarios.
How do we know if an individual is overly busy or is manufacturing idle time? Occupancy doesn’t help here, so we looked for a “personal occupancy” measure that makes sense in its place. We settled on what we call “Busyness,” where we measure the time from when an agent hangs up to when the agent picks up the next call. Included in Busyness is wrap time (a metric the agent controls to a large extent) as well as normal idle time.
Figure 4 below shows the Busyness of our center by agent, by hour. In this table, we are looking at specific days and specific hours, and color coding the Busyness of each agent over time. Of particular interest are two agents, Paola and Samuel, who work the same queues at the same times. Note that Paola is extremely busy, with little time between calls, whereas Samuel is extremely un-busy, with long gaps between calls.
Figure 4: Busyness by Hour and by Agent
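As a rough sketch of how such a table can be built, assuming the same kind of per-call extract with hypothetical agent, call_start, and call_end timestamp columns: Busyness here is the gap from the end of one call to the start of that agent’s next call (so it folds in wrap time plus idle time), averaged by agent and hour.

```python
import pandas as pd

def busyness_by_hour(calls: pd.DataFrame) -> pd.DataFrame:
    """Average seconds between hang-up and the next pick-up, by agent and hour."""
    calls = calls.sort_values(["agent", "call_start"]).copy()

    # For each agent, line up the start of their next call against the end of this one.
    next_start = calls.groupby("agent")["call_start"].shift(-1)
    calls["gap_seconds"] = (next_start - calls["call_end"]).dt.total_seconds()

    # Bucket by the hour the call ended; an agent's last call of the period has no
    # next call, so its gap is empty and drops out of the average.
    calls["hour"] = calls["call_end"].dt.floor("h")
    return calls.pivot_table(index="agent", columns="hour",
                             values="gap_seconds", aggfunc="mean")
```

The resulting table can then be color coded (conditional formatting in a spreadsheet, or a pandas Styler, does the trick) to produce something like Figure 4.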
Given the effort that workforce management teams put forth to ensure that overall occupancy is within acceptable rates, Samuel’s Busyness shows real potential to improve overall efficiency. Fixing those sorts of behaviors makes the overall center operate a tad better, plus addressing Samuel’s extra idle time is just plain fair to the other agents.
Similarly, Paola is someone to keep an eye on. While she is certainly productive, management needs to be concerned about her burning out. Simple communication and praise (or an extra unscheduled break every so often) may go a long way toward ensuring she does not leave.
Outlier analysis is very easy to do, and the results are fun to dig into. It serves as a great tool in our quest to manage our workforce from our home offices, pointing out issues that might otherwise slip past us when we only look at averages of performance metrics. By addressing our outlier issues, we continue along our contact center improvement cycle.
Ric Kosiba is a charter member of SWPP and is the Chief Data Scientist at Sharpen Technologies. Sharpen builds the Contact Center platform designed for agent performance. He can be reached at rkosiba@sharpencx.com or (410) 562-1217.