
By Justin Tenuto, February 26, 2016

Customer service ticket tagging is a big expense. Here’s why that’s changing.

If you’re doing business online, chances are you’re getting customer service tickets. It’s the nature of things, really. And the bigger you are, and the more moving parts your organization has, the more tickets you’re going to get. Increased feedback, be it complaints, praise, or simple questions, is a natural side effect of continued success, but sooner or later it can become unmanageable, not to mention really expensive. Generally, a company deals with this influx of user mail by hiring additional head count, not only to answer each ticket but to tag important tickets so the customer success team can triage the most crucial issues and deal with those first. But that’s not the best way. As we discovered with just a hundred dollars and a simple CrowdFlower job, machine learning can take care of a lot of that for you, and for a fraction of the cost.

A little background before we jump in. We here at CrowdFlower get our fair share of tickets, especially from our contributors. They write in to ask us to look at test questions, to check on their account status, to report functional issues with jobs, and, of course, to make sure a payment is coming through. There are, naturally, plenty of other reasons contributors write in to CrowdFlower, but identifying those main buckets is really important for creating the job we used to train our model. In other words, knowing what you hear most often helps you train a smart model to deal with those frequent or persistent complaints.

It’s important to note here that we wanted to build a ticket tagging algorithm, not a model that would actually reply to our users. Even if two contributors have very similar issues, each deserves a personal response that addresses the substance of their email. Traditionally, tagging is done manually: a member of the team scans the ticket and notes the issues therein. That doesn’t take too long, call it ten to fifteen seconds, but when you’re dealing with thousands of tickets a day, the time really adds up. Even at just 1,000 tickets a day and ten seconds a ticket, you’re looking at close to three hours of repetitive, fairly thankless work. That’s time that could be spent actually answering our contributors.
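Here’s a quick back-of-the-envelope check on those numbers, using only the figures quoted above:

```python
# Back-of-the-envelope cost of manual tagging, using the figures above.
tickets_per_day = 1_000
seconds_per_ticket = 10

hours_per_day = tickets_per_day * seconds_per_ticket / 3600
print(f"~{hours_per_day:.1f} hours of tagging per day")  # ~2.8 hours
```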

In other words, it makes sense to automate this process. This is exactly the sort of thing we were talking about a few weeks ago when we wrote about automation: don’t think of automation as something that will take your job, but as something that will take over some of the more monotonous parts of your job.

So how do you do that? It’s actually fairly simple. First, we made a CrowdFlower data categorization job in which contributors read and tagged tickets for us. Since tickets can sometimes contain sensitive information, we ran this job through our NDA channel, but the job itself wasn’t too complicated. Basically, we presented our contributors with the text of a ticket and asked them to note the following:

  • What kind of ticket is it? A complaint? A compliment? 
  • What is the mood of the contributor? Upset? Pleasant? Neutral? 
  • What’s the ticket about? This is the most important bucket for us. We kept it simple with four categories, but two of the options had subcategories so our model could become more exacting (see the schema sketch after this list).
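As a rough sketch, that schema might look like the data structure below. The “kind” and “mood” options come straight from the questions above, and the four topic buckets are the ones named earlier; the individual subcategories are assumptions for illustration, except “account has been flagged,” which comes up later in the post:

```python
# A sketch of the tagging schema as plain data. The subcategories marked
# "assumed" are hypothetical: the post only confirms "account_flagged".
TICKET_SCHEMA = {
    "kind": ["complaint", "compliment", "question"],
    "mood": ["upset", "pleasant", "neutral"],
    "about": {
        "test_questions": [],
        "account": ["account_flagged", "account_status"],   # second one assumed
        "job_issues": ["page_wont_load", "cant_submit_work"],  # assumed
        "payments": [],
    },
}
```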

In the end, here’s what our contributors saw:

[Screenshot: the ticket tagging job as contributors saw it]

Simple, right? Now, here’s where things get interesting. After tagging just 2,000 tickets, we ran the job through our beta CrowdFlower AI tool. With just this one quick job, our model could confidently classify 68% of tickets. In other words, about two out of every three tickets could now be pre-tagged for our team! Instead of combing through 2,000 tickets, we’re left with only about 675. That’s substantial, and if you extrapolate to 1,000 tickets daily for the rest of the year, at ten seconds each, that’s over 100 man-hours saved simply by tying the model into our ticket tagging tool. All this for a job that cost us less than $100.
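CrowdFlower AI is a hosted tool, so we won’t show its internals here, but a toy open-source stand-in gives a feel for what’s happening under the hood. The sketch below uses scikit-learn with a handful of made-up tickets; the threshold just echoes the 68% figure above and everything here is illustrative, not our actual pipeline:

```python
# A minimal, illustrative stand-in for confidence-gated ticket tagging.
# This is NOT the CrowdFlower AI tool; it's a toy scikit-learn pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the 2,000 crowd-tagged tickets (all hypothetical).
texts = [
    "my account was flagged and I cannot work",
    "payment for last week's tasks has not arrived",
    "this test question marks my correct answer as wrong",
    "the job page will not load in my browser",
]
labels = ["account", "payments", "test_questions", "job_issues"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Auto-tag only when the model is confident; route the rest to a human.
THRESHOLD = 0.68  # echoes the figure above; tune on real data
for ticket in ["my account got flagged yesterday"]:
    probs = model.predict_proba([ticket])[0]
    best = probs.argmax()
    if probs[best] >= THRESHOLD:
        print(ticket, "->", model.classes_[best])
    else:
        print(ticket, "-> route to a human tagger")
```

With only four training examples the model will rarely clear the threshold; with thousands of real tagged tickets, confident predictions become the common case.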

Going back to our example above, where a contributor’s account had been flagged, our model was 98% sure of both the category of complaint (about the user’s account) and the subcategory (account has been flagged). Why? Because that contributor used very typical language to describe her issue. Here are the major predictors for this type of ticket:

[Screenshot: the top predictive terms for flagged-account tickets]

In other words, if a contributor says “flagged,” well, we know what the problem is.
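Continuing the toy scikit-learn sketch from above, here’s one way you might peek at which words a linear model leans on for a given category; for the real model, that ranked list is what the screenshot shows:

```python
# Peeking at which words drive the "account" tag, continuing the
# pipeline above. For a linear model, per-class weights live in coef_.
import numpy as np

vec = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
terms = np.array(vec.get_feature_names_out())

account_row = list(clf.classes_).index("account")
top = np.argsort(clf.coef_[account_row])[::-1][:5]
print(terms[top])  # on real data, words like "flagged" rank near the top
```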

Ticket tagging works so well because, in the end, a lot of our contributors have the same issues. But to return to the example above: tagging the ticket doesn’t mean the contributor gets a canned response. It just means we know this account has been flagged and we can deal with it accordingly.

If you’re a company that gets tons of customer tickets, this is a great use of your data science team’s time. They can create a simple job and a simple model and save your customer service team a ton of time and energy. Not only that: by tagging tickets, you’ll be able to bubble up the most urgent ones so your team can take care of them immediately rather than in the order they were received. In other words, a model like this can surface the scariest complaints, like a threatening store employee or a safety issue or anything that needs to be taken care of A.S.A.P., while tagging the less dire issues at the same time.
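That triage step is simple once tickets carry tags. Here’s a sketch; the severity ordering is an assumption for illustration, not our actual ranking:

```python
# A hypothetical triage pass: hand the scariest tickets to the team first.
SEVERITY = {"safety": 0, "account": 1, "payments": 2,
            "job_issues": 3, "test_questions": 4}

def triage(tagged_tickets):
    """Sort (text, tag) pairs so the most urgent categories come first."""
    return sorted(tagged_tickets, key=lambda pair: SEVERITY.get(pair[1], 99))

queue = triage([("job page broken", "job_issues"),
                ("my account was flagged", "account")])
print([text for text, _ in queue])  # the account issue jumps the line
```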

This is one of those machine learning applications that isn’t all that sexy and that you’re unlikely to read about in trade publications and the like. But it’s a way for companies to save a lot of money without investing much in the way of time. And it’s the sort of thing you’re going to start seeing in all sorts of industries. Namely: machine learning taking care of the easy stuff and people taking care of the rest.