
What Your Customer Satisfaction Research Isn’t Telling You and Why You Should Care

Why most customer experience management surveys aren’t very useful

 

Most of your customers, hopefully, are not unhappy with you. But if you’re relying on traditional customer satisfaction research—or Customer Experience Management (CXM) as it’s come to be known—to track your performance in the eyes of your customers, you’re almost guaranteed not to learn much that will enable you to make a meaningful, business-impacting change.

That’s because the vast majority of companies are almost exclusively listening to happy customers. And this is a BIG problem.

[Figure: Customer satisfaction distribution, illustrating the misconception that most customer feedback is negative]

To understand what’s going on here, we first need to recognize that the notion that most customer feedback is negative is a widespread myth. Most of us incorrectly assume that unhappy customers are proportionately far more likely than satisfied customers to give feedback.


In fact, the opposite is true. Results from the average customer satisfaction survey skew heavily toward the satisfied end of the scale. Indeed, most customers who respond in a customer feedback program are likely to be very happy with the company.

Generally speaking, for OdinText users who conduct research using conventional customer satisfaction scales and the accompanying comments, about 70-80% of the scores from their customers land in the Top 2 or 3 boxes. In other words, on a 10-point satisfaction scale or an 11-point likeliness-to-recommend scale (i.e., Net Promoter Score), customers are giving either a perfect or a very good rating.

That leaves only 20% or so of customers, of which about half are neutral and half are very dissatisfied.
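
If you want to see where your own data falls, the tally is trivial. Here is a minimal sketch in Python (pandas, with an invented ltr_score column; this is not OdinText’s pipeline) of computing that top-box distribution from a 0-10 rating export:

```python
# A minimal sketch (not OdinText's pipeline): tally the top-box distribution
# from a hypothetical export of 0-10 likeliness-to-recommend scores.
import pandas as pd

df = pd.DataFrame({"ltr_score": [10, 9, 9, 8, 10, 7, 5, 2, 9, 10, 6, 9]})

top_box = (df["ltr_score"] >= 8).mean()          # perfect or very good ratings
neutral = df["ltr_score"].between(5, 7).mean()   # middling scores
dissatisfied = (df["ltr_score"] <= 4).mean()     # the unhappy tail

print(f"Top box: {top_box:.0%}, neutral: {neutral:.0%}, "
      f"dissatisfied: {dissatisfied:.0%}")
```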

So My Survey Says Most of My Customers Are Pretty Satisfied. What’s the Problem?

Our careful analyses of both structured (Likert scale) satisfaction data and unstructured (text comment) data have revealed a couple of important findings that most companies and customer experience management consultancies seem to have missed.

We first identified these issues when we analyzed almost one million Shell Oil customers using OdinText over a two-year period (view the video or download the case study here), and since then we have seen the same trends again and again, which frankly left us wondering how we could have missed these patterns in earlier work.

1.  Structured/Likert scale data is duplicative and nearly meaningless

We’ve seen that there is very little real variance in structured customer experience data. Variance is what companies should really be looking for.
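
You can verify this on your own tracker with a quick check: duplicative Likert items show near-zero spread within each item and near-perfect correlations between items. A minimal sketch, using hypothetical column names and invented scores:

```python
# A minimal sketch, assuming a pandas DataFrame of Likert responses with
# hypothetical column names and invented scores. Duplicative items show up
# as low per-item variance and near-1.0 pairwise correlations.
import pandas as pd

likert = pd.DataFrame({
    "overall_sat":  [9, 9, 10, 8, 9, 3, 9, 10],
    "recommend":    [9, 10, 10, 8, 9, 2, 9, 10],
    "speed":        [8, 9, 10, 8, 9, 3, 9, 9],
    "friendliness": [9, 9, 10, 9, 9, 4, 9, 10],
})

print(likert.var())   # little spread within each item
print(likert.corr())  # near-perfect correlations across items
```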

The goal, of course, is to better understand where to prioritize scarce resources to maximize ROI, and to use multivariate statistics to tease out more complex relationships. Yet we hardly ever tie this data to real behavior or revenue. If we did, we would probably discover that it usually does NOT predict real behavior. Why?

2.  Satisficing: Everything gets answered the same way

The problem is that customers look at surveys very differently than we do. We hope our careful choice of which attributes to measure is going to tell us something meaningful. But the respondent has either had the pleasant experience she expected with you or, in some (hopefully) rare instances, a not-so-pleasant one.


In the former case her outlook will be generally positive. This outlook will carry over to just about every structured question you ask her. Consider the typical set of customer sat survey questions…

  • Q. How satisfied were you with your overall experience?
  • Q. How likely to recommend the company are you?
  • Q. How satisfied were you with the time it took?
  • Q. How knowledgeable were the employees?
  • Q. How friendly were the employees? Etc…

Jane’s Experience: Jane, who had a positive experience, answers the first two or three questions with some modicum of thought, but because they really ask the same thing in slightly different ways, they get very similar ratings. Very soon the questions—none of which is especially relevant to Jane—dissolve into one single, increasingly boring exercise.

But since Jane did have a positive experience and she is a diligent and conscientious person who usually finishes what she starts, she quickly completes the survey with minimal thought, giving you the same Top 1, 2 or 3 box scores across all attributes.

John’s Experience: Next is John, who belongs to the fewer than 10% of customers who had a dissatisfying experience. He basically straightlines the survey just as Jane did, only he checks the lower boxes. But he really wishes he could just tell you in a few seconds what irritated him and how you could improve.

Instead, he is subjected to a battery of 20 or 30 largely irrelevant questions until he finally gets an opportunity to tell you his problem in the single text question at the end. If he gets that far and has any patience left, he’ll tell you what you need to know right there.

Sadly, many companies won’t do much if anything with this last bit of crucial information. Instead they’ll focus on the responses from the Likert scale questions, all of which Jane and John answered with a similar lack of thought and differentiation between the questions.
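
If you suspect satisficing in your own data, straightlining is easy to flag by computing the spread of each respondent’s ratings across the attribute battery. A rough sketch, with hypothetical columns and an arbitrary cutoff:

```python
# A rough straightlining check, assuming hypothetical per-respondent rating
# columns. Respondents whose answers barely vary across attributes (like Jane
# and John) contribute little information beyond their overall tone.
import pandas as pd

ratings = pd.DataFrame({  # one row per respondent
    "q1": [9, 2, 9, 7],
    "q2": [9, 2, 8, 4],
    "q3": [9, 2, 9, 8],
    "q4": [9, 2, 9, 5],
})

row_spread = ratings.std(axis=1)   # spread of each respondent's answers
straightliners = row_spread < 0.5  # arbitrary threshold for this sketch
print(f"{straightliners.mean():.0%} of respondents effectively straightlined")
```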

3.  Text Comments Tell You How to Improve

So, structured data—that is, again, the aggregated responses from Likert-scale-type survey questions—won’t tell you how to improve. For example, a restaurant customer sat survey may help you identify a general problem area—food quality, service, value for the money, cleanliness, etc.—but the only thing that data will tell you is that you need to conduct more research.

For those who really do want to improve their business results, no other variable in the data can be used to predict actual customer behavior (and ultimately revenue) better than the free-form text response to the right open-ended question, because text comments enable customers to tell you exactly what they feel you need to hear.
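
To make that claim concrete, here is a toy illustration (not the method behind the Shell analysis) of tying open-end text directly to an observed behavior, using a simple bag-of-words model against an invented churn flag:

```python
# A toy illustration, not the method used in the Shell analysis: fit a simple
# bag-of-words model on open-end comments against an observed behavior flag.
# Every comment and the "churned" labels below are invented for the sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "great service, fast and friendly",
    "waited 40 minutes and nobody apologized",
    "love this place",
    "wrong order twice, gave up and left",
]
churned = [0, 1, 0, 1]  # hypothetical behavior outcome per customer

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, churned)
print(model.predict(["waited forever and nobody cared"]))  # likely flags a churner
```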

4.  Why Most Customer Satisfaction or NPS Open-End Comment Questions Fail

Let’s assume your company appreciates the importance of customer experience management and you’ve invested in the latest text analytics software and sentiment tools. You’ve even shortened your survey because you recognize that the most predictive answers come from the text questions, not from the structured data.

You’re all set, right? Wrong.

Unfortunately, we see a lot of clients make one final, common mistake that can be easily remedied. Specifically, they ask the recommended Net Promoter Score (NPS) or Overall Satisfaction (OSAT) open-end follow-up question: “Why did you give that rating?” And they ask only this question.

There’s nothing ostensibly wrong with this question, except that you get back what you ask for. So when you ask the 80% of customers who just gave you a positive rating why they gave you that rating, you will at best get a short positive comment about your business. The fewer than 10% who slammed you will certainly give you a problem area, but this leaves you very little to work with beyond a few pronounced problems that you probably already knew were important.

What you really need is information that you didn’t know and that will enable you to improve in a way that matters to customers and offers a competitive advantage.

An Easy Fix

The solution is actually quite simple: Ask a follow-up probe question like, “What, if anything, could we do better?”

This can then be text analyzed separately or, better yet, combined with the comment from the original open end, which as mentioned earlier usually reads, “Why did you give the satisfaction score you gave?” and, due to the heavily skewed (Poisson-like) distribution of customer satisfaction scores, yields almost only positive comments with few ideas for improvement. Text analyzed together, this one-two question combination gives a far more complete picture of how customers view your company and how you can improve.
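
Mechanically, the combination step is nothing more than concatenating the two comment fields before they go to analysis. A minimal sketch, assuming hypothetical column names:

```python
# A minimal sketch of the one-two combination: concatenate the "why that
# rating?" comment with the "what could we do better?" probe before text
# analysis. Column names here are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "why_rating": ["Everything was great.", "Staff were friendly."],
    "improve":    ["Open earlier on weekends.", "Shorter checkout lines."],
})

df["combined_text"] = (df["why_rating"].fillna("") + " " +
                       df["improve"].fillna("")).str.strip()
print(df["combined_text"].tolist())
```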

Final Tip: Make the comment question mandatory. Everyone should be able to answer this question, even if it means typing an “NA” in some rare cases.
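
A practical corollary: if the question is mandatory, strip those “NA”-style placeholder answers before text analysis so they don’t pollute your results. An illustrative sketch:

```python
# If the comment is mandatory, filter placeholder answers ("NA", "n/a",
# "none") before analysis so they don't pollute the text model.
# The patterns below are illustrative, not exhaustive.
import re

def is_placeholder(text: str) -> bool:
    return bool(re.fullmatch(r"\s*(na|n/?a|none|nothing|-+)\s*", text, re.IGNORECASE))

comments = ["NA", "Open earlier on weekends.", "n/a", "Shorter lines please"]
real_comments = [c for c in comments if not is_placeholder(c)]
print(real_comments)  # placeholders removed
```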

Good luck!

P.S. To learn more about how OdinText can help you learn what really matters to your customers and predict real behavior, please contact us or request a Free Demo here >

 

[NOTE: Tom H. C. Anderson is Founder of Next Generation Text Analytics software firm OdinText Inc. Click here for more Text Analytics Tips ]

7 Responses

  1. @marsattacks here (academic validation of NPS). Your point on the follow-up question is well made and is actually key to the Net Promoter System follow-up question: ‘What’s the one thing we could change that would most improve the rating you’ve just given?’
    The point of asking for a rating first is to give customers a personal reference point and focus for suggesting a priority for improvement…

  2. Love this article Tom! Right in line with our own experience and approach. A challenge is that the structured data gives those who are not particularly invested in CX an opportunity to “tick the box” and produce KPIs suggesting that complacency (and inactivity) is justified.
    The comments and their analysis are the gold (and in the verbal feedback that we focus on at BigEars, even more so), but they generate action and change (which can be a challenge for some). If you are C-level, though, it is excellent, and the whole point.

  3. @Paul our users run a lot of NPS data through OdinText, and we rarely see open ends other than the “Why did you give that rating?” question. I wonder how someone who gave the company a perfect 10 would answer your suggested question. That said, I do like it, and it’s certainly an improvement over what we typically see. And again, this isn’t an NPS thing alone; Overall Satisfaction (OSAT) or any other metric you are using works exactly the same way. Also, much of what was said above in the post really covers a lot of other ad-hoc survey market research as well. Long 15-minute surveys with Likert-scale attribute batteries produce bad data, period!
    @Mark Thanks so much. You guys really have a great product also. There is so much more that can be done with both text and voice-to-text data than is being done now. So much more…

  4. That is why it is so important to encourage happy customers to leave positive reviews for your business. Unhappy customers almost always will – and that is not what you want.

  5. Hey Tom, liked every point you have shared here, but my personal favorite way to understand an audience is to lean toward first-party data by letting visitors authenticate themselves using existing social identities. It not only makes the login simpler but also makes returning desirable. The reliable first-party data fetched meanwhile can bring your business to the top. What do you think?
    Sophia Briggs

  6. @Nate, no that is the opposite of what I’m saying
    @Sophia, sounds interesting, would love to learn more

  7. People do not enjoy taking long, boring surveys with similarly worded questions. When surveys are short and to the point and then followed by a text comment, the customer will be more likely to spend the necessary time explaining their problem instead of rushing through because they are bored of the survey. People have short attention spans and enjoy being entertained at every second of the day; they don’t want to be bothered with menial questions.
