Measuring Speed of Answer

By Maggie Klenke

For most contact centers, speed of answer is one of the most important statistics measuring overall performance.  Some centers use service level (SL) while others use average speed of answer (ASA), and some use both.  It is important to understand what these metrics measure, how they can be effective tools in the contact center, and also how they can be misleading.  Let’s start with the calculations that make up these two metrics.

Service Level Calculation

Service level is used by more centers than ASA according to recent SWPP survey data (see the Summer 2021 survey report in this newsletter), so we will start with it.  There are two components in the calculation.  The first is the percentage of calls that are answered within a set amount of wait time.  Generally, the percentage seen in many centers is somewhere between 70 and 90 percent.  However, it is not unheard of to see a goal below 70% or above 90%.

The second component of service level is the target wait time, typically expressed in seconds or minutes.  The idea is that the center chooses a timeframe that will minimize abandoned calls and customer dissatisfaction.  When SL is measured, another important consideration is when the clock starts.  It could begin at different points:

  • When the caller first hears the announcement that they have entered the queue.  When the announcement is forced (for legal purposes), it may be unfair to hold the time spent listening to the announcement against the center’s performance.
  • When the caller has heard the complete announcement and is now waiting in the queue.
  • When an arbitrarily set number of seconds has expired in the queue.  This might be used when the center managers do not want to count callers who hang up in just a few seconds as failures.

Let’s use an example of a center that has a goal of 85% of calls answered in 30 seconds.  In a half-hour, the calculation looks at all of the contacts handled within that period to see how long each waited for the agent to answer.  If there were 100 calls that entered the queue and 85 of them were answered before the 30 seconds expired, the goal has been met.  However, what about the other 15 callers – what was their experience?  It is hard to tell, since waiting 31 seconds looks the same to the calculation as waiting 3 minutes, and no data is typically provided on these calls.
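As a rough illustration of the math, here is a minimal Python sketch of that calculation; the wait times, threshold, and goal are made-up values, not data from any real center.

```python
# Minimal sketch of the service level calculation: percentage of calls
# answered within the threshold.  Wait times are illustrative only.
wait_seconds = [0, 5, 12, 28, 31, 45, 180, 22, 8, 0]

threshold = 30      # "answered in 30 seconds" part of the goal
goal_pct = 85       # "85% of calls" part of the goal

answered_in_time = sum(1 for w in wait_seconds if w <= threshold)
service_level = 100 * answered_in_time / len(wait_seconds)

print(f"Service level: {service_level:.0f}% (goal {goal_pct}%)")
```

Notice that the 45-second wait and the 180-second wait count the same way in this math: both are simply misses.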

ASA Calculation

Average speed of answer is just that – an average across a period of time.  The goal is expressed as an acceptable wait time in seconds or minutes.  The calculation adds up the wait time for every call during the period and divides the total by the number of calls handled in the period.  Under normal circumstances, some calls will be answered immediately with no wait at all, while others might wait much longer than the goal.  The average of all of them together is the result of the calculation.  For example, in a half-hour with 100 calls and an ASA goal of 35 seconds, each caller’s wait is added to the total and the total is divided by the 100 calls.  Even with some callers waiting 2 to 3 minutes, if there are enough zero-wait calls, the average will meet the goal.  Therefore, the challenge with this metric is that significant fluctuations are easily concealed in the average.
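Again as a sketch only, the same kind of calculation for ASA might look like the following in Python; the wait times and the 35-second goal are illustrative.

```python
from statistics import mean

# Minimal sketch of the ASA calculation: total wait divided by calls handled.
# Several zero-wait calls offset two long waits in this made-up half-hour.
wait_seconds = [0, 0, 0, 0, 0, 0, 10, 20, 150, 160]

asa = mean(wait_seconds)    # same as sum(wait_seconds) / len(wait_seconds)
print(f"ASA: {asa:.0f} seconds (goal 35 seconds)")
```

Here the average comes out at 34 seconds and meets the goal, even though two callers waited roughly two and a half minutes.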

Measurement Period

The longer the period over which the metric is calculated, the more likely the result will conceal significant fluctuations in individual customer experiences.  The recent SWPP survey suggested that few centers concentrate on their achievement by half-hour; a significant percentage look at a whole day, week, or even month of data to make the calculation.  There are likely to be periods that are consistently below goal and others above it.  Therefore, it is important to look at the results on a half-hourly basis to see these situations more clearly.  If Tuesdays from 10-11 AM are consistently below goal, that will become obvious, and further analysis can determine what is going on at that time and what can be adjusted to achieve a more consistent result for customers.  (Excel Pivot Tables are useful for this kind of analysis.)
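For centers that prefer to script this rather than use a spreadsheet, grouping per-call records by half-hour achieves the same breakdown.  The sketch below assumes each record carries a half-hour label and a wait in seconds; the records and the 30-second threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-call records: (half-hour label, wait in seconds).
calls = [("Tue 10:00", 45), ("Tue 10:00", 62), ("Tue 10:00", 15),
         ("Tue 10:30", 5),  ("Tue 10:30", 12), ("Tue 10:30", 0)]

by_interval = defaultdict(list)
for interval, wait in calls:
    by_interval[interval].append(wait)

# Service level per half-hour makes below-goal periods visible instead of
# letting them be averaged away in a daily or weekly figure.
for interval, waits in sorted(by_interval.items()):
    sl = 100 * sum(1 for w in waits if w <= 30) / len(waits)
    print(f"{interval}: SL {sl:.0f}%")
```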

It is not uncommon to find centers that have periods that are well below the goal, but they purposely overstaff during other periods just so the goal for the day or week can be achieved.  This is not a good practice even though meeting the goal may be what the center thinks management wants to see.  Paying extra (and even overtime) to “make up” for times of poor service does not change the impact the long waits had on the customers.  This is why it is becoming more common for centers to change their approach to measuring speed of answer to focus on consistency across all timeframes.  A new goal might be 85% of the calls answered in 35 seconds in 80% of the half-hours, or a standard deviation of no more than 20 seconds in the half-hourly ASA over a day or week.
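Both of these consistency-oriented goals are easy to compute once half-hourly results are available.  The sketch below assumes the per-interval service level and ASA figures have already been calculated; all the numbers are illustrative.

```python
from statistics import pstdev

# Illustrative half-hourly results for one day: (service level %, ASA seconds).
half_hours = [(90, 20), (88, 25), (70, 60), (92, 18), (86, 30), (60, 75)]

# Goal 1: the 85% service level achieved in at least 80% of the half-hours.
sl_goal, interval_goal_pct = 85, 80
pct_meeting = 100 * sum(1 for sl, _ in half_hours if sl >= sl_goal) / len(half_hours)
print(f"{pct_meeting:.0f}% of half-hours met the {sl_goal}% goal "
      f"(target {interval_goal_pct}%)")

# Goal 2: standard deviation of the half-hourly ASA no more than 20 seconds.
asa_spread = pstdev([asa for _, asa in half_hours])
print(f"ASA standard deviation across half-hours: {asa_spread:.0f} seconds "
      f"(target 20 or less)")
```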

Abandon Rate

Another thing to consider is how abandoned calls are treated in the calculation.  There are several options:

  • All calls are included in the calculation regardless of whether they abandoned or not.
  • Only calls that waited at least X number of seconds are included in the calculation.  This would exclude short calls where the caller never gave the center a real chance to answer the call.
  • Only calls that waited at least the length of the goal seconds/minutes are used in the calculation.
  • All abandoned calls are excluded from the calculation regardless of how long they were in the queue.

It is easy to see that these options have a significant impact on the resulting metric.  The first makes the goal the toughest to meet, with each subsequent option easing the result.  Knowing how the systems are set and how abandonments are treated is important, since different settings in the phone system and WFM system could result in significant mismatches.  Forecasts may be built to one calculation while actual performance in the ACD reports is measured differently.  It is also key when comparing centers in multiple sites or outsourcer environments to ensure all are being held to the same standard.  These calculations can be modified in most systems today so that all can be the same.
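To make the effect concrete, the sketch below applies one reading of the four options to the same made-up half-hour; the 30-second threshold and the 10-second cutoff for short abandons are assumed values.

```python
# Each call is (wait in seconds, abandoned?).  Values are illustrative.
calls = [(5, False), (10, False), (25, False), (40, False),
         (8, True), (20, True), (45, True), (90, True)]

THRESHOLD = 30        # goal: answered within 30 seconds
SHORT_ABANDON = 10    # "X seconds" cutoff for short abandons (assumed)

def service_level(included):
    answered_in_time = sum(1 for wait, abandoned in included
                           if not abandoned and wait <= THRESHOLD)
    return 100 * answered_in_time / len(included)

options = {
    "include all abandons":              calls,
    "exclude abandons under 10 seconds": [c for c in calls
                                          if not c[1] or c[0] >= SHORT_ABANDON],
    "exclude abandons under the goal":   [c for c in calls
                                          if not c[1] or c[0] >= THRESHOLD],
    "exclude all abandons":              [c for c in calls if not c[1]],
}
for name, included in options.items():
    print(f"{name}: SL {service_level(included):.0f}%")
```

With these numbers, the same half-hour produces four different service levels, rising as more abandons are excluded.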

Using Both Calculations

There are any number of reasons to select service level or ASA for a given type of work.  It may be simply the preference of the contact center management, or it might be dictated by the client when contacts are handled for others.  Either metric can work effectively.  However, using both metrics to measure a single workload is problematic.  The math works differently, as you can see from the calculations, and the results can diverge as the workload changes.  Even if the two goals produce a matching staffing requirement at a specific volume of work, during low-volume periods it is likely that one goal will be met and the other missed, and during high-volume periods the results reverse.  Using one metric for each type of work is best practice.
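A small worked example shows how the two metrics can diverge on the same workload; the goals (85% in 30 seconds and a 35-second ASA) and the wait-time profiles below are purely illustrative.

```python
from statistics import mean

def report(label, waits):
    # Service level at a 30-second threshold alongside the ASA for the same calls.
    sl = 100 * sum(1 for w in waits if w <= 30) / len(waits)
    print(f"{label}: SL {sl:.0f}% | ASA {mean(waits):.0f}s")

# A single very long wait sinks the ASA even though 90% still answer quickly.
report("SL met, ASA missed", [0, 0, 0, 5, 10, 15, 20, 25, 30, 350])

# Uniform moderate waits keep the ASA low while half the calls exceed 30 seconds.
report("ASA met, SL missed", [31, 32, 33, 34, 35, 20, 22, 25, 28, 30])
```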

Summary

Speed of answer is an important metric for most centers.  While it may not be as important to callers as first call resolution or friendly service, it is easy to obtain from the systems and gives an overall picture of the efficiency of the center regarding call handling.  However, it is a metric that can be calculated in many different ways, and choosing the combination of components that gives the center the information needed to focus on continuous improvement and reward excellence is more difficult than it might first appear.

Maggie Klenke has written numerous books and articles related to call centers and WFM. A semi-retired industry consultant, Maggie serves as an Educational Advisor for SWPP.  She may be reached at Maggie.klenke@mindspring.com.