I have been called a glutton for punishment because I have called a contact center every day of my career. No way. The lessons I’ve learned calling other contact centers have proven invaluable. I wouldn’t have had the career I’ve had without those years of “competitive analysis”.

The real punishment I put myself through, however, is taking surveys. Although I’m not able to take one every day, I actively seek out opportunities to take them and have completed hundreds over the past several years. I have also worked for some companies where I have learned a great deal about customer feedback programs.

Survey History

But before we go into those lessons, it’s important to know from where we came. From my vantage point, here is the evolution of customer surveys.

In the early days of formal customer surveys, many different departments wanted feedback on “their part” of the customer experience, even before we called it such. Companies created elaborate mail-in surveys (which eventually translated into long post-call IVR surveys) that contained several questions around service, product, employee responsiveness, etc. These surveys often took the shape of a balanced Likert scale, a five-point scale that asked customers to rate their agreement with statements about certain aspects of the company. The balanced five-point scale would go from “Strongly Agree” to “Strongly Disagree” with a neutral “Neither Agree nor Disagree” in the middle.

The questions could often be changed, as could the scale (from unbalanced to balanced, for example, where the unbalanced version had no neutral option) and the method for calculating the score. Early in my career I learned that industries, companies, and sometimes even divisions within the same company had no consistency when it came to the scoring methodology. Some companies would calculate the score using the average numeric value of the responses (1 through 5), while other companies would calculate a “top box” or even “top two box” score, which is, as the name implies, the percentage of overall survey responses that were a 5, or a 4 or 5, respectively (on a 5-point scale).
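
To make that difference concrete, here is a minimal sketch in Python of the three scoring methodologies described above. The function name and the sample responses are mine for illustration; no company’s actual method is implied.

```python
from statistics import mean

def likert_summary(responses):
    """Summarize 5-point Likert responses (1 = Strongly Disagree ... 5 = Strongly Agree)
    three common ways: simple average, top box, and top-two box."""
    total = len(responses)
    return {
        "average": mean(responses),                              # mean of the 1-5 values
        "top_box": sum(r == 5 for r in responses) / total,       # share that answered 5
        "top_two_box": sum(r >= 4 for r in responses) / total,   # share that answered 4 or 5
    }

# The same ten responses produce three very different-looking "scores":
print(likert_summary([5, 5, 4, 4, 4, 3, 3, 2, 5, 4]))
# {'average': 3.9, 'top_box': 0.3, 'top_two_box': 0.7}
```

Same responses, three different numbers, which is exactly why two teams can look at identical surveys and believe they are measuring different things.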

This confusion often led to front-line supervisors and team members not having a clue what the data meant, and much like with AHT, they were simply told to “increase” the survey scores.

CSAT

Then came the customer satisfaction (CSAT) survey. Companies were still using the balanced Likert scale, but the questions were more around the customer’s satisfaction with the company, the product, or a specific person during a call to customer service. These surveys asked customers to rate their satisfaction on a scale from “Extremely Satisfied” to “Extremely Dissatisfied” with “Neither Satisfied nor Dissatisfied” in the middle. And again, there was little consensus on the best way to ask the question or calculate the results.

FCR

After the rush of CSAT started to wear off, contact center leaders began focusing on First Contact Resolution (FCR). FCR quickly became the rallying cry of contact center leaders everywhere. With more demanding customers and, oftentimes, shrinking budgets, it didn’t take long for leaders to see the value in reducing the number of calls it took to resolve a customer’s issue. Process improvement teams popped up in contact centers of all sizes and attacked the low-hanging fruit with a vengeance. Improvement in the FCR rate was also shown to drive a substantial improvement in CSAT. Makes perfect sense, right? The more times a customer has to contact a company, the less satisfied they will likely be. As intuitive as FCR is, there wasn’t consensus on how to measure it, and there weren’t great benchmarks on what the number should be or even how the results should be used.
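
For illustration only, here is one way (of many, since there is no consensus definition) an FCR rate might be computed: treat a contact as resolved on first contact if the same customer does not reach out again about the same issue within a chosen window. The field names, the seven-day window, and the sample data below are assumptions for the sketch, not a standard.

```python
from datetime import datetime, timedelta

def fcr_rate(contacts, window_days=7):
    """Illustrative FCR measure: a contact counts as 'resolved on first contact'
    if the same customer does not contact us again about the same issue within
    `window_days`.  `contacts` is a list of dicts with 'customer_id', 'issue',
    and 'timestamp' (datetime), sorted by timestamp."""
    window = timedelta(days=window_days)
    resolved = 0
    for i, c in enumerate(contacts):
        repeat = any(
            later["customer_id"] == c["customer_id"]
            and later["issue"] == c["issue"]
            and later["timestamp"] - c["timestamp"] <= window
            for later in contacts[i + 1:]
        )
        if not repeat:
            resolved += 1
    return resolved / len(contacts)

calls = [
    {"customer_id": 1, "issue": "billing", "timestamp": datetime(2023, 5, 1)},
    {"customer_id": 1, "issue": "billing", "timestamp": datetime(2023, 5, 3)},  # repeat contact
    {"customer_id": 2, "issue": "routing", "timestamp": datetime(2023, 5, 2)},
]
print(fcr_rate(calls))  # 2 of 3 contacts had no repeat within 7 days -> ~0.67
```

Note that this naive definition counts the repeat call itself as “resolved” if nothing follows it, which is exactly the kind of ambiguity that kept the industry from agreeing on a single FCR measure.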

NPS

Around the same time I first started hearing about FCR, I was beginning to hear about Net Promoter Score (NPS), the newest survey methodology at the time. NPS, in a nutshell, came from the Fred Reichheld book The Ultimate Question and is a one-question survey designed to give companies a measure of their overall customer loyalty. The single question is, “On a scale of zero to 10, how likely are you to recommend this company (or service) to a friend or colleague?” The score is derived by simply subtracting the percentage of detractors (customers who gave a score of six or below) from the percentage of promoters (customers who gave a score of nine or 10). One of the most valuable elements of this system was that there was a standard to follow. Since there was only one question (plus a follow-up question designed to get more specifics) and one clear way to calculate the score, it is easy to see why it became one of the most widely used customer feedback measurements and an effective way to benchmark results across different businesses and industries.
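
Because the arithmetic is standardized, NPS fits in a few lines of code. Here is a small sketch; the sample ratings are invented for illustration.

```python
def net_promoter_score(ratings):
    """NPS from 0-10 'likelihood to recommend' ratings: the percentage of
    promoters (9-10) minus the percentage of detractors (0-6)."""
    total = len(ratings)
    promoters = sum(r >= 9 for r in ratings) / total
    detractors = sum(r <= 6 for r in ratings) / total
    return round((promoters - detractors) * 100)

# 5 promoters, 3 passives (7-8), and 2 detractors out of 10 responses:
print(net_promoter_score([10, 10, 9, 9, 9, 8, 7, 7, 6, 3]))  # 50% - 20% = 30
```
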
Is NPS the final step in the evolution of customer surveys? If the past 20 years is any indication, it is not. NPS is just another step in the process of businesses trying to learn from their customers.

Effort

The newest kid on the survey block is the Customer Effort Score, developed by Matt Dixon while he led the Customer Contact Practice at CEB (now part of Gartner). In their ground-breaking book, The Effortless Experience, Dixon and his co-authors set out to prove that a greater positive impact on customer loyalty could be made by asking customers how much effort they had to put forth to handle an issue, using the question, “To what extent do you agree or disagree with the following statement: The company made it easy for me to handle my issue.” Have a conversation with any contact center leader and the topic of the effortless experience is bound to come up. Although the topic is on a lot of our minds, many companies still haven’t begun asking the question. Even though I believe the future of customer feedback is going to be effort scoring, it has a long way to go before hitting the critical mass that NPS has reached. Anecdotally, I rarely come across the effort question in the wild; it comes up in less than 1% of the surveys I take.

Lessons Learned

Throughout my career I have learned a lot about what makes a successful customer survey program. Some of what I learned came from my own mistakes and some from seeing how other companies did it better. Here are my tips for a successful customer feedback program:

  1. Everyone must buy into the fact that the survey program is part of the customer experience. Surveys, no matter the form they take, are part of the experience itself. Don’t get fooled into thinking that you are getting feedback from customers after they have experienced your company’s product or service; getting feedback is part of the experience.
  2. Get all stakeholders to agree on a methodology that you can stick with for many years. Your best source of benchmark data will come from your own customers and your own surveys. If you are constantly changing the questions, format, and scoring methodology you will lose the ability to see trends in your own results. Resist changing things because of the “flavor of the month”. This doesn’t mean that you can never try a new survey; you just need to understand what you’re giving up if you completely scrap the existing one.
  3. Get all stakeholders to agree on sampling criteria. Will you survey every customer? What will trigger the survey? How long will you rest a customer before sending them another survey? Settle on these things early in the process; it will save you many headaches in the future.
  4. Only survey as many customers as your employees can follow up with. Why bother asking your customers for feedback if you don’t have a process to follow up with them? You won’t need to follow up with every customer, but you should certainly contact the ones who aren’t satisfied. If you don’t have the resources to follow up with customers after they respond to your survey, you have no business doing surveys (or that many surveys). Always remember that while data are aggregate responses, the people behind them are individuals.
  5. Stop offering an incentive to customers to complete a survey. One of the most important lessons I have learned around customer surveys is that you need to have a consistent sampling methodology and you cannot goad customers into taking your survey.
  6. It’s never about the score. Seriously. It is never about the score. It is more important to look at the direction and velocity of the results. Stop putting the emphasis on the score. Be careful how you incentivize your employees around customer surveys. You may be sending the message that you only care about the score, which may cause employees to, believe it or not, cheat the system. Customer feedback programs should be a tool to drive better customer experiences and better business results; that is your end goal. If you are proud of your “high” score, you are focusing on the wrong thing. Use the score and the feedback to serve that mission.
  7. Stop asking so many questions! Every time I see a multi-page survey (I’m looking at you, chain restaurants of America) all I can think of is a dysfunctional leadership team where every single department in the company has a hand in the survey cookie jar. Long surveys can sometimes result in more negative than positive responses, which, in turn, can negatively impact the team members who are doing a great job! One thing is for sure: survey length negatively affects completion rates. Leaders should talk amongst themselves and come up with a few (at most) simple questions.
  8. The lack of response is a response; the absence of feedback is feedback. That is one of the most important lessons I have learned about customer surveys. When a customer doesn’t respond to a survey, it doesn’t mean they are happy with your company, nor does it necessarily mean they are not. Take the time to understand why customers don’t respond to your surveys, or why they abandon them – and even more importantly, where in the process they abandon them. Here’s a hint: if more than a single-digit percentage of your surveys are abandoned before completion, you have too many questions!
  9. Ask yourself if you even need a customer feedback program. Not every business needs a formal customer survey program. Again, the purpose of a customer feedback program should be to get insight that drives better results. Turned on its head, that also means your results ARE feedback: if your business results already tell you what you need to know, you may not need a formal survey at all.

What customer feedback lessons have you learned?

A self-proclaimed Contact Center Geek, Matt Beckwith brings a wealth of customer contact experience and a keen analytical outlook to his current role as Contact Center Director at Clark Pest Control. Matt’s previous positions include Division Manager, Card Services / Electronic Banking and Client Services and Call Center Manager at Fremont Bank; Operations Manager, Customer Service, Blue Shield of California; and Vice President, Call Center Operations, Washington Mutual Card Services. Read more from Matt at contactcentergeek.com.
