What Business Problem Are We Solving?

By Ric Kosiba, Ph.D., Vice President, Genesys

Congratulations on a Great Conference

I just wanted to send a quick “congrats” to Vicki and the SWPP Board for another fantastic conference! We are so proud to be a part of this great organization.  It was so cool to chat with so many of you!

Solving the Real Problem

Last newsletter, when we discussed forecasts and error, we touched on a topic that is near and dear to me, but one that most of us tend to forget about: "What business problem are we really trying to solve?" It sounds like a silly question, but I have seen the results of forgetting this question pop up at many companies, over and over, throughout my career.

As we discussed in the last newsletter, how we judge our forecasts, and by extension which methods we use to forecast, should be informed by the very specific business problem at hand.

Here is a fun story. I had a cool job years ago managing a team of forecasters and statisticians for a credit card company. One of our jobs was to determine the expected impact on profitability of any program or offering that we were considering. For example, if we wanted to try to raise revenues, we might offer customers more credit coming into the Christmas season. My team would build test programs and analyze the portfolio (the list of customers) to determine who should be offered the credit, and how much.

As you might guess, there was a fair amount of math and statistical modeling (not unlike our WFM stuff, by the way). My team would take results from previous credit card programs we'd run, we would develop small test programs to see how customers would behave, and we would build newer and (hopefully) better risk models to determine who might be a poor choice to offer more credit (because they wouldn't pay us back). We even bought a pricey model or two from one of the credit bureaus.

These models were exercises in finding customers who used the credit programs and were profitable, and trying to determine, probabilistically, who else would behave a lot like these "good" customers. We would use credit data, purchase history, demographic data, and any other data we could find to help us predict those "will-be-good" customers.

An important story point: we would use data from every one of our customers when building models, hoping to find those pockets of customers who would use the additional credit and then pay us back.

We developed lots of models, and burned countless hours whenever our company wanted to run a marketing program or a risk action.

When I first started at this job, I was surprised at how small the benefit was from these sorts of models (my job was to improve them). We could find better customers, but our models were no better than the expensive models we would buy from the credit bureau. But we kept chugging along.

Until one day, we had an epiphany. That epiphany was this: what business problem are we trying to solve? In our case, we were looking to use models to find out who was likely to use a credit line increase for their Christmas shopping. So, should we build our models on our entire customer base? Isn't that too much data? Even though we had been doing this for years, the question was suddenly relevant. Would customers who were not currently using our credit card be likely to use a credit line increase? Probably not, so why include them in the data set? Would we send a line increase to those customers who had already stopped paying us, or any of their other bills? Nope, doesn't seem very smart. How about those customers who were trying to pay off our card? We could see that behavior in our data, and it didn't make sense to offer them more credit. How about customers who paid us well but were near their credit limit? Yup, sounds like they are a match.

When building our statistical models, it made a lot of mathematical sense to use as much data as possible, to discern something hidden in the data. But it didn't make sense for our specific problem.

We narrowed our modeling population to only those customers we suspected would be open to using more credit and built specific models from this sub-population, and sure enough, our predictive models popped. We could find those pockets of folks who were good customers and to whom we could risk extending additional credit. Our models were fantastic, better than any we could purchase, or any we had built before. The line increase program became a great profit-maker. Our customers loved it.
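For readers who like to see the mechanics, here is a minimal Python sketch of the idea: narrow the population first, then model. The column names and toy numbers are hypothetical stand-ins for the kinds of credit and behavior data described above, not our actual models.

```python
# A minimal sketch of "narrow the population first, then model."
# The column names and toy numbers are hypothetical stand-ins for
# the kinds of credit and behavior data described above.
import pandas as pd
from sklearn.linear_model import LogisticRegression

portfolio = pd.DataFrame({
    "utilization":        [0.95, 0.10, 0.88, 0.02, 0.91, 0.80, 0.05, 0.97],
    "months_delinquent":  [0,    0,    0,    4,    0,    0,    0,    0],
    "paying_down_card":   [0,    0,    0,    0,    1,    0,    0,    0],
    "used_line_increase": [1,    0,    1,    0,    0,    0,    0,    1],
})

# The epiphany: keep only customers the offer could plausibly apply to,
# active users near their limit who are neither delinquent nor paying off.
eligible = portfolio[
    (portfolio["utilization"] > 0.5)
    & (portfolio["months_delinquent"] == 0)
    & (portfolio["paying_down_card"] == 0)
]

# Fit the response model on the narrowed sub-population only.
model = LogisticRegression()
model.fit(eligible[["utilization"]], eligible["used_line_increase"])
```

The model itself is ordinary; the improvement comes entirely from the filter, which restates the business problem before any math happens.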

What’s the Link to WFM?

I was at the SWPP conference, maybe ten years ago, and a very knowledgeable college professor was speaking about types of forecasts and why forecast error is important. He went through all the standard forecast error metrics, and when each of them would be better, and under what conditions. But he never asked the question, "What business problem are we solving?" He assumed that our only goal was to develop accurate forecasts, and he used academic measurements to determine which one was best.

I remember having the same sort of epiphany during this session as I had at the credit card company. What if it is only important to hit our forecast over the next six weeks? What error metric would we use? What if it is very important that we never understaff? Would that change our error metric and, hence, our forecasting model? What if we were putting together a budget and needed to get our average staffing levels right? It doesn't matter so much if we miss a few peaks or valleys, but it does matter that we have little over- or under-forecasting bias when determining staffing levels. What error metric would we use then?
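To make that concrete, here is a small Python sketch showing how each of those three business problems implies a different error calculation. The forecast and actual numbers are made up, and the 10x understaffing weight is an arbitrary choice for illustration:

```python
# A sketch of how the business problem changes the error metric.
# Forecast and actual volumes here are made up for illustration.
import numpy as np

actual   = np.array([100, 120,  90, 110, 130, 105])
forecast = np.array([ 95, 125,  92, 100, 128, 110])
error = forecast - actual  # negative means we under-forecast

# Pure accuracy: MAPE, the usual "how close were we?" measure.
mape = np.mean(np.abs(error) / actual)

# "Never understaff": penalize under-forecasts much more heavily
# than over-forecasts (the 10x weight is an arbitrary choice).
asymmetric_cost = np.where(error < 0, 10 * np.abs(error), error).mean()

# Budgeting: bias is what matters, since over- and under-forecasts
# wash out when we only need average staffing levels to be right.
bias = error.mean()

print(f"MAPE {mape:.1%}, asymmetric cost {asymmetric_cost:.1f}, bias {bias:+.1f}")
```

The same forecast can look great on one of these measures and terrible on another, which is exactly why the metric has to follow the business problem.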

In workforce management we have many business problems to solve (often simultaneously). But our primary problem is not minimizing forecast error. It is good to have accurate forecasts, and forecasting is an intermediate step in our overall process, but it isn't the whole shebang. Our goals are (I suspect):

  1. Schedule as few people as possible, while maintaining service goals during every time period, and while developing schedules that agents want to work.
  2. Develop hiring and controllable shrinkage plans that are efficient, that meet our financial goals, and that ensure consistent service delivery and maximum sales.

On a similar note, what is the business purpose of developing staff requirements? Again, staffing requirements are a useful step in figuring out employee schedules and hiring and overtime plans. And it is useful for real-time management of our resources to know the number of agents required to maintain service standards. Accurate requirements are important, but I would argue that the next steps, schedules and hiring plans, are where the rubber meets the road.
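As an illustration of that intermediate step, here is a minimal Erlang C sketch in Python that turns a volume forecast and a service goal into a required agent count. The inputs are illustrative only, and real WFM tools layer shrinkage, occupancy caps, and skills on top of this:

```python
# A minimal Erlang C sketch for the "requirements" step: the smallest
# agent count that hits a service-level goal. Inputs are illustrative.
import math

def required_agents(calls_per_hour, aht_seconds, sl_target=0.80, sl_seconds=20):
    """Smallest agent count meeting, e.g., 80% answered in 20 seconds."""
    load = calls_per_hour * aht_seconds / 3600.0  # offered load in Erlangs
    n = max(1, math.ceil(load))
    while True:
        # Erlang B via the standard recursion, then convert to Erlang C.
        b = 1.0
        for k in range(1, n + 1):
            b = load * b / (k + load * b)
        rho = load / n
        if rho < 1.0:
            c = b / (1.0 - rho + rho * b)  # probability a call waits
            sl = 1.0 - c * math.exp(-(n - load) * sl_seconds / aht_seconds)
            if sl >= sl_target:
                return n
        n += 1

print(required_agents(calls_per_hour=300, aht_seconds=240))
```

Useful, but notice that the output is just an input to the decisions that actually cost or save money: the schedules and the hiring plan.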

When it comes to developing schedules, optimizers have long been used to automate the scheduling process. This is good!

When developing capacity plans and budgets, we often put science into developing forecasts and requirements, but every capacity planning spreadsheet I've ever seen requires a "guess and test" hiring and controllable shrinkage process. Meaning, an analyst will determine the hiring plan by eyeballing a spreadsheet. There is no science or real math in the most important step.
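It does not have to be that way. Here is a toy Python sketch of putting a little math on the hiring step: a greedy pass that hires just enough each month, given attrition and a one-month training lag. The requirements and rates are made up, and a real planner would optimize or simulate over the whole plan, but it shows the step can be automated:

```python
# A toy sketch of automating the hiring step instead of eyeballing it:
# a greedy forward pass that hires just enough to cover each month's
# requirement, given attrition and a one-month training lag.
# All numbers are made up for illustration.

requirements = [100, 105, 115, 120, 118, 125]  # agents needed per month
attrition = 0.04            # fraction of production staff lost each month
training_lag = 1            # months before a new hire takes calls
staff = 100.0               # production staff on hand entering month 0
pipeline = [0.0] * training_lag  # hires still in training
plan = []

for month in range(len(requirements)):
    # Attrition takes its toll, and this month's graduating class arrives.
    staff = staff * (1 - attrition) + pipeline.pop(0)
    # Hire enough now to cover the requirement when training ends
    # (exact for a one-month lag; longer lags would also need to
    # count hires already in the pipeline).
    target_month = min(month + training_lag, len(requirements) - 1)
    projected = staff * (1 - attrition) ** training_lag
    hires = max(0.0, requirements[target_month] - projected)
    pipeline.append(hires)
    plan.append(round(hires, 1))

print("hiring plan by month:", plan)
```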

We apply rigorous math to building forecasts and developing requirements; meanwhile, we use an analyst's best guess to develop hiring plans, controllable shrinkage plans, and budgets. What problem are we trying to solve?

Ric Kosiba, Ph.D. is a charter member of SWPP and Vice President of Genesys’ Workforce Management Team. He can be reached at Ric.Kosiba@Genesys.com or (410) 224-9883.