Thoughts on Call Center Model Accuracy, AI Forecasters, and COVID

By Ric Kosiba, Chief Data Scientist at Sharpen Technologies

Staffing Models

In the last issue of SWPP’s On Target, we discussed how to make a more accurate call center staffing model using weighted averages of disparate models. That is, we can take several Erlang A models, an Erlang C model, maybe some simulations or regression models, and by weighting the results of each and combining them into a master model, we can develop a much better model – an ensemble model. If we want to go crazy, we can build a weekend ensemble model, a holiday ensemble model, a Tuesday ensemble model, or even a Friday at 10AM ensemble model, and each narrower model will squeeze out a little more accuracy.
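To make the weighting concrete, here is a minimal sketch (in Python, with made-up model names and numbers) of one simple way to combine component estimates: trust each model in proportion to how well it has tracked actual requirements in the past. Fancier approaches fit the weights with a regression against history; nothing below is a prescription.

```python
# A minimal ensemble sketch: weight each component staffing model by its
# historical accuracy (inverse squared error), then blend the estimates.
# Model names and error figures are made up for illustration.

def ensemble_requirement(estimates, historical_mae):
    """Blend per-model requirement estimates, trusting the more accurate models more."""
    weights = {m: 1.0 / (historical_mae[m] ** 2) for m in estimates}
    total = sum(weights.values())
    return sum(estimates[m] * weights[m] / total for m in estimates)

# Example: three models disagree on the agents needed for one interval.
estimates = {"erlang_a": 52, "erlang_c": 56, "simulation": 50}
historical_mae = {"erlang_a": 2.0, "erlang_c": 4.0, "simulation": 1.5}
print(round(ensemble_requirement(estimates, historical_mae), 1))
```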

The point of these models is to estimate, accurately, how many people are required to hit some service level, given your handle times and your contact volumes.
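For readers who want to see the basic per-interval calculation behind “how many people to hit some service level,” here is a small Erlang C sketch in Python. It is a simplified illustration, not anyone’s production model: it assumes hourly intervals, no abandonment, and an 80/20-style goal, and the function names are mine.

```python
import math

def erlang_c_wait_probability(agents, offered_load):
    """Probability a contact has to wait (Erlang C); offered_load is in Erlangs."""
    if agents <= offered_load:
        return 1.0
    # Erlang B via the standard stable recurrence, then convert to Erlang C.
    b = 1.0
    for n in range(1, agents + 1):
        b = offered_load * b / (n + offered_load * b)
    rho = offered_load / agents
    return b / (1.0 - rho + rho * b)

def service_level(agents, volume, aht_sec, answer_target_sec):
    """Fraction of contacts answered within the target, for one hourly interval."""
    load = volume * aht_sec / 3600.0
    if agents <= load:
        return 0.0
    pw = erlang_c_wait_probability(agents, load)
    return 1.0 - pw * math.exp(-(agents - load) * answer_target_sec / aht_sec)

def agents_required(volume, aht_sec, answer_target_sec, goal=0.80):
    """Smallest agent count that meets the service level goal."""
    n = max(1, math.ceil(volume * aht_sec / 3600.0))
    while service_level(n, volume, aht_sec, answer_target_sec) < goal:
        n += 1
    return n

# Example: 500 contacts in the hour, 300-second AHT, 80% answered in 20 seconds.
print(agents_required(500, 300, 20))
```

The point is only that the per-interval math is simple enough to check by hand; an ensemble like the one above is about correcting where this idealized math misses reality.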

What Does Accuracy Buy You?

Sometimes, the thought of building a custom model at all seems like too much work, let alone building a Friday at 10AM ensemble model (and one for every other hour of the week). There are obvious limits to the time and effort we are willing to spend building ad hoc models. Is accuracy that important? I’m not being glib here – accuracy is important. But chasing accuracy has a cost, and to be honest, that super-accurate model likely won’t hold up in the real world anyway.

The graph below is a completely made-up chart with no real numbers behind it at all (but we’ve all seen something like it). I think it tracks with our experience of how much improvement each additional unit of effort buys.

Figure 1. Effort Required to Achieve A Specific Level of Accuracy

The marginal value in terms of improved accuracy may not be worth the huge effort required to build the improved models. But maybe it is? We can certainly calculate the cost to our operation of being inaccurate – if I understaff or overstaff by some amount, there is a cost in service or in staff, right? How far down the curve in Figure 1 do I have to go to achieve an acceptable level of accuracy?
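As a rough illustration of that cost calculation, here is a back-of-the-envelope sketch. The dollar figures are placeholders, not benchmarks; plug in your own loaded cost per agent hour and your own estimate of what a missed service interval costs you.

```python
# A rough cost-of-inaccuracy sketch for a single interval. All dollar values
# are placeholders; the cost of a service miss in particular is yours to estimate.

def cost_of_staffing_miss(required, staffed,
                          cost_per_agent_hour=30.0,
                          cost_per_missed_interval=500.0):
    if staffed >= required:
        # Overstaffed: you pay for capacity you did not need.
        return (staffed - required) * cost_per_agent_hour
    # Understaffed: crude proxy that scales the miss cost by how short you were.
    shortfall = (required - staffed) / required
    return shortfall * cost_per_missed_interval

print(cost_of_staffing_miss(required=50, staffed=54))  # overstaffed by 4 agents
print(cost_of_staffing_miss(required=50, staffed=46))  # understaffed by 4 agents
```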

We know that accuracy in staffing calculations has real benefit. We can use accurate models to solve very tactical problems, like how many agents do I need to ask for overtime this afternoon? The better our models are, the more precise we can be in those requests, cutting down on unnecessary overtime expense. A more accurate model saves us more money.
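Using the agents_required() sketch from earlier, that tactical overtime question becomes a two-line calculation (the afternoon numbers here are invented):

```python
# How many agents do I need to ask for overtime this afternoon?
needed = agents_required(volume=620, aht_sec=300, answer_target_sec=20, goal=0.80)
scheduled = 48
overtime_agents = max(0, needed - scheduled)
print(f"Ask for {overtime_agents} agents of overtime this afternoon.")
```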

How about using an accurate model to solve strategic problems? Again, as we put together staff plans, a more accurate model helps ensure we don’t overstaff by overhiring. A miss in the other direction leads to underhiring and, hence, service level failures. Strategic inaccuracy is usually more costly than tactical error.

My friend, former SWPP Board member, and contact center savant Duke Witte had a simple test he called “club-length.” You want your models and your staff plans to be accurate enough that your operation has the flexibility to recover if you miss. He reckoned he could ask for, and get, maybe 4% overtime or 6% undertime from his contact center, so his capacity plan had to be accurate to within -4% to +6%. It is a simple and effective idea.
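A “club-length” check is easy to automate. The sketch below uses Duke’s -4%/+6% band; the thresholds should of course be whatever flexibility your own operation can realistically get.

```python
# A tiny "club-length" test: is the plan close enough that overtime or
# undertime can absorb the miss? Thresholds are from Duke's example.

def within_club_length(planned_fte, required_fte,
                       max_overtime=0.04, max_undertime=0.06):
    error = (planned_fte - required_fte) / required_fte
    # Short by up to 4%? Overtime covers it. Over by up to 6%? Undertime covers it.
    return -max_overtime <= error <= max_undertime

print(within_club_length(planned_fte=480, required_fte=500))  # 4% short -> True
print(within_club_length(planned_fte=540, required_fte=500))  # 8% over  -> False
```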

My favorite use of an accurate contact center model is, of course, in building what-if analyses.  If your staffing model is accurate, it can help you develop what-ifs around the inaccuracy of your forecasts.  What happens if our forecasts are off? We can use our staffing model to predict the service and cost repercussions of forecasts that are too high or too low.
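Here is what such a what-if sweep can look like, reusing the Erlang C sketch from earlier: staff to the base forecast, then ask what service looks like if the forecast comes in high or low. The volumes and targets are invented.

```python
# A minimal what-if: how does service degrade (or staff sit idle) when the
# forecast misses by various amounts? Reuses the sketch functions above.
base_volume, aht_sec, target_sec = 500, 300, 20
staffed = agents_required(base_volume, aht_sec, target_sec, goal=0.80)

for miss in (-0.15, -0.10, -0.05, 0.00, 0.05, 0.10, 0.15):
    actual_volume = base_volume * (1 + miss)
    sl = service_level(staffed, actual_volume, aht_sec, target_sec)
    print(f"forecast miss {miss:+.0%}: service level {sl:.0%} with {staffed} agents")
```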

Which brings us to another modeling musing.

Forecast Models and COVID

Those of us who have lived through forecasting after the 2008 recession, or 9/11, or the bursting of the internet bubble know that we are in for an interesting next couple of years. I was recently talking to a good friend who has built an impressive AI-based forecasting system, and I asked him how his system will do given the business disruption of 2020-2021. His answer? “Not well at all.” Think of it – the best AI forecasting system, designed to produce volume forecasts specifically for call centers, is a bust for at least a year.

But we all knew that, right? If the patterns of 2020 are not representative of the future, then we’ll have to forecast knowing that 2020 data doesn’t help. If our businesses have changed semi-permanently, as many probably have, then even the data from before 2020 is out, too. And if we aren’t sure the pre-COVID days reflect our new future, that earlier data is at least suspect until we understand the post-COVID world.

That makes it hard to forecast, right?

But let me tell you what it really hurts: sophisticated forecasting systems. Systems designed to consume a ton of contact center history, to automatically account for holidays, and to lean heavily on seasonal and year-over-year growth patterns in their algorithms are now very iffy. The underpinnings of those systems – patterns in the data – are broken.

Maybe both you and my mom have read some of my forecasting articles from On Target in past years and know that I am bullish on AI forecasting algorithms. But here is a truth: these systems will likely not work well for quite a while. All of the AI in the world won’t work as well as your personal expertise for the next six months or so.

Our old method of forecasting, a combination of human expertise and short-term models, maybe using recent average volume distributions, may be the right way to go for a while. Humans are better at discerning weird patterns at a glance; we can look at a forecast and know whether it is off or makes sense. The AI systems will catch up, but it will take a ton of new data for them to do so. I have faith that SWPP forecasters will use their intuition to build better forecasts than the automated tools will in the mid-term. It may be time to dust off your old spreadsheet models and set the AI aside. If you’ve got a sophisticated forecasting tool, be skeptical.
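For what it’s worth, that “old method” can be as simple as the sketch below: average the last few same-weekday totals, spread them across a recent intraday distribution, and leave an explicit knob for the planner’s judgment. The function and inputs are illustrative, not a prescription.

```python
# A minimal "short-term model plus human judgment" forecast sketch.

def short_term_forecast(recent_same_weekday_totals, intraday_shares,
                        planner_adjustment=1.0):
    """recent_same_weekday_totals: e.g., the last four Tuesdays' total contacts.
    intraday_shares: fraction of the day's volume per interval (sums to 1).
    planner_adjustment: the human-expertise knob (1.05 = 'I think it runs 5% hot')."""
    baseline = sum(recent_same_weekday_totals) / len(recent_same_weekday_totals)
    adjusted = baseline * planner_adjustment
    return [adjusted * share for share in intraday_shares]

# Example: four recent Tuesdays, a crude three-interval day, and a 5% bump.
print(short_term_forecast([4100, 3950, 4200, 4050], [0.30, 0.45, 0.25], 1.05))
```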

Forecasting and Staff Planning Models

I just realized, while re-reading this discussion, that something seems inconsistent in my treatment of accuracy across the two modeling exercises. Both forecasting models and staffing models use history to predict the future, yet I claim that staffing models can get more accurate with just more effort while forecasters are in trouble. I don’t think I’m wrong, and the reason may be the amount of history each modeling approach relies upon.

Sophisticated forecasting systems rely on history for almost everything. They try to find patterns in that history: intraweek patterns, inter-week patterns, seasonal patterns, intraday patterns, vacation-day patterns, you name it. If those patterns shift dramatically for specific business verticals, it will be hard for those systems to make sense of the new ones.

Staffing systems are less dependent on that sort of history. The basic building block of an Erlang model is an interval – usually an hour or half-hour. I expect that COVID has not changed interval-level patterns.

Intraweek patterns may have changed, which means staffing models that use those patterns to extrapolate from interval models should be examined. Has our intraweek pattern changed? If it has, it is easy to recalibrate to the new pattern.
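Recalibrating really is simple. A sketch: recompute the day-of-week shares from only the most recent weeks, so the extrapolation uses post-change data. The week count and data layout below are assumptions.

```python
# Recompute intraweek (day-of-week) shares from recent weeks only.

def intraweek_shares(recent_weeks):
    """recent_weeks: a list of [Mon..Sun] volume lists from post-change weeks."""
    day_totals = [sum(week[d] for week in recent_weeks) for d in range(7)]
    grand_total = sum(day_totals)
    return [t / grand_total for t in day_totals]

# Example: two recent weeks of made-up daily volumes.
weeks = [[5200, 4800, 4700, 4600, 4400, 2100, 1500],
         [5100, 4900, 4800, 4500, 4300, 2200, 1400]]
print([round(s, 3) for s in intraweek_shares(weeks)])
```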

Our Job

Where the rubber meets the road is simply whether our contact centers run smoothly. If history is a guide, I fully expect our senior management to assume that we will continue to hit our service levels, and to do so at a reasonable cost. They won’t really care about, or understand, the details of our challenges. Workforce managers are called upon – it’s just assumed – to make sense of the world when it all shifts. Understanding that, when the world changes, our assumptions require questioning is something we do as a matter of course.

Ric Kosiba is a charter member of SWPP and is the Chief Data Scientist at Sharpen Technologies. Sharpen builds the Contact Center platform designed for agent performance.  He can be reached at rkosiba@sharpencx.com  or (410) 562-1217.