A Look Behind the Curtain: Artificial Intelligence

When I started work at Sharpen, our CEO told me I could pick my own title. Several candidates made the final round: “Chief Math Officer” was a fun favorite, but a bit flippant; “Chief Decisions Engineer” was in keeping with my education, but strange; and the ubiquitous “Chief Data Scientist” rounded out the final three. We went with the last one because it was something people understood. But because “data scientist” is so tied to artificial intelligence (AI), the title gives me the heebie-jeebies. Here’s why.

I’m old enough to remember the fashion for AI in the marketplace back in the ’80s and ’90s. AI was all the rage then, just as it is today. There was a lot of hype, a lot of promise, a ton of investment, and, consequently, a lot of failure. See, all the hype led to ridiculous expectations, and many companies invested heavily in big, promising projects. But big projects are difficult, long, and expensive. Results often didn’t live up to the excitement, and many projects failed.

AI funding was cut, research withered, and AI went into hiding. To mathematicians and engineers of a certain age, this era is sometimes referred to as the “AI Winter.”

After a 20-year hibernation, and with the advent of better data access and the common availability of cloud computing, we are now in an “AI Spring.”  There have been many successes that we see all around us, from digital assistants and natural language processing to self-driving cars. As these successes get attention, more companies will invest in AI and we’ll get more successes.  But I am expecting some big failures, too.  And some of these will be spectacular.

Spring leads to Summer, and then things turn cold again. My prediction: like the term “Big Data,” we will hear about AI less and less in the coming years, and I expect that “Data Science” as a title will become less popular along with it.

An AI Example

I’m super bummed our SWPP conference is remote this year. I learn so much at our mixers in Nashville, talking with all of you!  Here’s one example.

Last year I met a workforce manager whose company had been piloting a major piece of AI in its contact center infrastructure, one of the new AI contact center technologies we’ve all heard about. It was a long project, over a year in the making, with huge data requirements, big project teams, exorbitant costs, and even larger expectations. The manager was truly hopeful that the system would deliver great value, but he didn’t think the data was bearing that out, and he was nervous about saying anything about its underperformance.

The politics of the pilot were intense. Everyone, from the vendor to the project sponsors to the team members, was desperate to show success. But the ROI was not obvious, and the team had to torture the data to find any benefit at all. He was worried about his own involvement in the project.

If this project plays out the way many AI projects did in the ’90s, its failure will follow a script. The pilot will be declared victorious by extrapolating “model lift” and ROI from a small improvement, in a single metric, over a shockingly short period of time, in a limited part of the operation. The project team will disband after a press release telling the world about the project’s success, the executive sponsor will offer limited references for the vendor, and in about a year the product will start to wither at the client company. A few years out, it will be quietly unplugged. The vendor will sell several more of them for a while, because it will believe that the real problem was that the customer was not a good fit for its advanced ideas, and because its development investment must be recouped. The product will die when too few are sold over the next few years, as references become scarce.

But that’s only one possible future.

Ensuring AI Success

How do you ensure that your AI project has the highest probability of success? As a customer, it is really important to challenge the vendor, the premise, and their understanding of your business problem. When investing both your money and your time, it is crucial that you be critical of the vendor (as well as be a good team player). A good vendor will welcome some skepticism.

There are a few things I’ve learned that can help.

  • Is there a problem? Follow your gut. Is the basic premise being sold really true? That is, are they solving a problem you know you have, or are they selling you a solution to a problem that you’re not so sure you have?
  • When discussing a solution, does it make sense? If you don’t understand some technobabble from the vendor, have it explained until you get it.  You’re not uninformed. It’s entirely possible your vendor doesn’t understand their babble either.
  • How is the AI part going? In an AI exercise, if the vendor keeps asking for more and more data, that is a red flag: it usually means that the lab version of their model, fed your data, is not producing results. Start asking for model validation graphs on contact center performance metrics. Be wary of statistics-speak; lift should be obvious to your eyes (see the sketch after this list). And don’t take their word for it.
  • Keep perspective. AI is simply another math technique, sometimes better and sometimes worse than the alternatives (note that some of the new automated forecasters use old-timey, non-AI math). You should not assume there is any value in something just because we vendors call it AI.
  • Teach your data scientists. A shocking thing to discover is that most of the folks doing the hard work, the behind-the-scenes model building, probably know little about your business. Big companies tend to use product managers to understand the business while the decision scientists do the work, and this is a failure point in a big project. I once spoke to a data science group whose members had never been in a contact center. They didn’t even know the basics. One said to me, incredulously: “Did you know that agents quit a lot, that attrition is high? We have to rewrite all of our models every time one quits!” Give them a tour of your contact center and make sure they stay visible throughout the project. A product-manager wall between the engineers and your business is a very weak link.
  • Finally, listen for excuses. A sure sign of imminent failure is when the vendor starts questioning your operation, your data, or your management. When you hear that your “data is not clean,” blame is starting to be shifted to you.
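
About that lift check: here is a minimal sketch, in plain Python, of the kind of validation you can run yourself. The only assumption is that you can export two columns from the pilot: the model’s predicted score for each record and the actual outcome (say, 1 if a contact resolved on first touch, 0 if not). The function and the toy numbers below are mine, not any vendor’s; adapt them to whatever your project actually produces.

    # A minimal, hypothetical lift check in plain Python (standard library only).
    # scores:   the model's predicted score for each record (higher = more confident)
    # outcomes: the actual result for each record (1 = success, 0 = failure)
    def decile_lift(scores, outcomes, buckets=10):
        """Lift of each score bucket relative to the overall base rate."""
        # Rank records from the model's most confident to least confident.
        pairs = sorted(zip(scores, outcomes), key=lambda p: p[0], reverse=True)
        base_rate = sum(outcomes) / len(outcomes)  # success rate with no model at all
        size = len(pairs) // buckets
        lifts = []
        for b in range(buckets):
            chunk = pairs[b * size:(b + 1) * size]
            hit_rate = sum(outcome for _, outcome in chunk) / len(chunk)
            lifts.append(hit_rate / base_rate)
        return lifts

    # Toy data: a model with real lift concentrates successes in the top buckets.
    scores   = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
    outcomes = [1,   1,   1,   0,   1,   0,   0,   0,   0,   0]
    print(decile_lift(scores, outcomes, buckets=5))  # [2.0, 1.0, 1.0, 0.0, 0.0]

If the model is doing anything, the top buckets should sit well above 1.0 and fall off smoothly; a curve that hugs 1.0 means the model is no better than guessing. If the vendor’s validation graphs can’t beat a check this simple, that tells you something.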

Vendor Responsibilities

I’ve been lucky enough to have worked for some great companies, where the guiding principle in decision-making is to “do what is right.” But complex projects are hard on vendors, too, and there is a temptation for them to minimize their own risk when proving out new technologies.

  • First, pilots should be free. When rolling out a new and unproven product, the last thing we should do, as your partner, is charge for our services. Our first pilot customers are doing us a huge favor, and we need to honor that. We shouldn’t experiment on a helpful customer’s operation on their dime. We should charge only once we’ve proven value.
  • Know the business. It is in the data science DNA to solve problems that we conjure up ourselves. In the best case, we solve a real problem; in the worst case, the problem is somewhat imaginary. It’s our responsibility to solve real problems for our customers. If we have a cool new idea, we had better be able to back our innovation up with real, provable, and obvious ROI in your real operation.
  • Keep it simple. AI and math modeling are complex. But if our models are not easily understood, that’s actually bad (many AI models can’t be explained, and somehow AI people think this is a positive). If a model or a complex system produces counterintuitive results, watch out. It is our responsibility to explain, in simple English, how our models work and what they find. No blinding prospects with science.
  • Pull the plug yourself. I’m lucky to work for a very ethical CEO. He’s let me know that we’d return a customer’s money, all of it, if we couldn’t find them ROI. We shouldn’t waste our customers’ money if a project isn’t working. That should be the industry gold standard.

Conclusion

Even though this article seems very cynical to me on my 20th read, I am really quite bullish on AI and math modeling in general. Our world is changing because of them. I’ve seen some incredibly cool contact center AI ideas, and I have seen some fantastic results. But I’ve also heard of some poor projects, and I understand many of the pitfalls.

AI systems are like any other math technique: some of them are good, and some of them are not. Even though AI is at the height of fashion, it, too, will go out of fashion (Big Data or Six Sigma, anyone?). And while algorithms are increasingly being called AI (whether they are or not), we should be able to call BS on AI, just as we would on any other technology.

Ric Kosiba, Ph.D. is a charter member of SWPP and is the Chief Data Scientist at Sharpen Technologies. He would love suggestions on a better title and can be reached at rkosiba@sharpencx.com  or (410) 562-1217.
