Have you ever met someone who will “boycott” a brand, or who plans never to purchase from them again? I’ve met many. The reasons range from a bad first experience, to loyalty to another brand, to personal views. I’m curious, though, in particular about those who had some meaningful experience with a brand and then decided to stop or slow their purchasing.
In Step by Step Season 5 we talked about the law of zero — the idea that certain processes and activities are limited by their zeros. In short, a 5-star dining experience can be ruined altogether by a sewage backup. Any 0 appearing in an equation nullifies the rest of the experience.
I think it could actually be worse. More often than a 0, we see a negative number dropped into the equation. A really bad experience doesn’t just cancel the others out; it detracts. That is to say, 10 × 10 × 2 = 200, but 10 × 10 × −2 = −200. And unlike the math, in real life two negative experiences don’t multiply into a positive; negative experiences compound!
Our key insight is that the higher the positive experience score, the more impactful a negative number is when it’s introduced. And, more obviously, the larger the negative number, the worse the damage. To say it plainly: the more trust you gain, the more that’s at stake. As you get better at serving your customers, it’s all the more important to make sure they don’t have a negative experience anywhere along the chain, or it will cause them to question, or even feel bitter about, the good experience they’re having.
Assuming the primary audience of this essay is the operator of an eCommerce or retail sales channel, take note: we believe that understanding what your multipliers are becomes your most important job as an experience leader.
In this piece we explore a potential framework for determining when a negative experience may have soured a customer relationship, and how to interject human interaction (good friction!) into that equation to right the ship.
Quality of Experience is a multiplier
This week, I visited NYC for the first time in 18 months. While concerned about the state of the city due to all the pandemic-related “New York is dead” doom-hype articles, I was pumped to get back to the greatest of American cities. The sights, sounds, food, and atmospheric weight of the city called my name. I had dinners ahead, and people to meet and reconnect with in person, finally. The schedule was tight and my hopes were high.
While I had two phenomenal dinners with talented, interesting people like my friends Kristen, Alex, and Nilla, my experience didn’t live up to my expectations. My hotel breakfast was a voucher at an oddly assorted market down the street. The rats were out at dark. There were enough people walking around that, while I wasn’t shoulder to shoulder, I felt like I was in a crowd. None of these are negative experiences, per se. Close to zero, but not enough to sour me on the new state of the City.
I had a bad haircut. Many of my favorite restaurants were still closed or even permanently closed. My flight was delayed due to the pandemonium that is the new post-Covid travel season. Parts of my experience were straight negatives, leaving me feeling like New York may never be the same again.
Introducing the Experience Impact Score
My mom once told me that you can’t make a withdrawal from an account if you’ve never made a deposit. Brand experiences add up over time, but that account doesn’t necessarily gain interest. To turn customers into evangelists, you have to keep the account topped off. Two bad experiences in a row will deplete a customer’s goodwill, and three will send them off for good.
We already have tools to measure the sentiment of the customer experience. NPS, or Net Promoter Score, is a very popular, and powerful, tool. NPS captures the sentiment of a customer at a particular moment. A net promoter score greater than 0 is table stakes, and >20 is what most brands aim for. Greater than 80? World class.
But not all customer journeys are created equal. NPS doesn’t take into account your sentiment as a customer over time.
This is where we believe the Experience Impact Score (EIS) could be beneficial. It takes into account a given customer’s last 5 NPS scores across many touchpoints. Those touchpoints can be varied, and the goal isn’t to measure group net-promoter sentiment; it’s to determine when a human interaction is required: when a customer encounters a problem, or when they need a nudge to cross a threshold. We’ll explain further below.
Why EIS? Because the impact of brand experience isn’t a single snapshot. It’s the sum of multiple parts. Looking at the same example, NYC has a history for me of delivering incredible experiences that have made a real difference in my life and have changed my view of the world. But not everything that’s ever happened in NYC is life-altering. A bad haircut in the city doesn’t carry the same weight in my view of NYC as the cab ride where I go the wrong way down a one-way street, or the world-altering dinner with wine that changes my view of what wine can be. And as New York changes post-pandemic, tradeoffs begin to emerge. My new expectations reset the prior goodwill I had. Eventually, my outlook will be shaped more by recent experiences than by the ones in the distant past, leaving me nostalgic for what the City “used to be.”
This is the story of too many brands that have come and gone. EIS addresses this.
Breaking Down the Experience Impact Score
Here is the breakdown of how we would see EIS work.
Major changes: we introduce a -1 into NPS. Customers angry enough to boycott a brand should be given a way to express that frustration. A -1 is a “screw you and the horse you rode in on!” That negative score will have a halo effect on the other NPS touchpoints.
EIS has a floor of 0 (a run of all-negative experiences) and a ceiling of 30. An EIS of 20 or above is a customer who is happy as a clam. In practice, scores rarely bottom out below 2, but any customer below 10 is at risk of telling their friends how much you suck.
The following formula represents a series of experiences based on five different touchpoints:
In English: the sum of your recency-weighted NPS scores from your last five experiences.
Let’s break that down:
- We measure the NPS of a given experience (either -1, or 1-10)
- We weight each experience by recency, from 5 for the most recent down to 1 for the least recent
- The sum of these scores gives us an index.
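Spelling that list out symbolically, under our reading of the stated bounds (a ceiling of 30 when all five scores are 10s), the index could be written as:

```latex
\mathrm{EIS} \;=\; \max\!\left(0,\; \frac{1}{5}\sum_{i=1}^{5} i \cdot s_i\right),
\qquad s_i \in \{-1,\, 1,\, 2,\, \ldots,\, 10\}
```

Here \(s_5\) is the most recent experience’s score: five perfect 10s give \(150/5 = 30\), and the outer max floors all-negative runs at 0. Treat this as a sketch consistent with the bounds described, not a definitive spec.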
A visualization of the index is as follows:
Let’s try it out on a customer interaction. A shopper visits a local DTC showroom and signs up for the email list. She then places an order and likes the product (the packaging was truly transcendent!). Only problem: the product didn’t work as advertised. She had no choice but to try to return it, but it was just outside the return period, so she had to call a human being. It took nearly two weeks to get the refund.
We calculate the EIS of this customer to be 20. Overall, the experiences have been decent to this point, but on the whole, the customer sits on the hump of being neutral.
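As a sketch, assuming the recency weights 1-5 and the divide-by-5 normalization implied by the 30-point ceiling, the calculation might look like the following. The per-touchpoint scores here are hypothetical, chosen to land on the 20 above:

```python
def eis(scores):
    """Experience Impact Score: recency-weighted sum of the last five
    per-experience NPS scores (-1, or 1-10), passed oldest first.
    Weights run 1 (oldest) to 5 (most recent); dividing by 5 caps the
    index at 30 when every score is a 10. Negative results floor at 0."""
    assert len(scores) == 5
    raw = sum(w * s for w, s in zip(range(1, 6), scores)) / 5
    return max(raw, 0.0)

# Hypothetical touchpoint scores for the showroom example, oldest first:
# visit/signup, order, unboxing, product failure, slow refund
print(eis([10, 10, 10, 5, 4]))  # → 20.0
```

Note how the recency weighting lets the two late negatives drag a run of perfect scores down to the neutral hump.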
An example worksheet can be found here. Customers “on the hump,” hovering around 10 or around 20, can be nudged higher with a human interaction. This good-friction opportunity is usually prompted by the customer (as in our example, where she called into the support team), but outlier interactions may prompt CX follow-up as well.
People keep an emotional record of how something impacts them. Erasing that record is difficult, and a negative number reflects that. With this in mind, I believe the law of negative numbers works at the micro level. It’s very personal, and it works as a personal aggregator while taking into account the prior work you’ve put into developing a lifelong relationship with the customer.
I like EIS for a lot of reasons:
- It’s not defeatist. This view suggests there is opportunity to overcome bad experiences by making changes and building goodwill, aka trust. While a bad experience could significantly offset or even destroy your brand’s standing, you have tools to improve and recover.
- It is relationship-oriented. You’re keeping track of previous interactions with the brand and factoring them into the current experience. It emphasizes the importance of the beginning of the relationship, because you might not yet have built up a sufficient EIS to stave off the impact of a bad experience.
- It creates a touchpoint flag. You can set thresholds in your CRM for when to intervene in a customer relationship, hopefully with a human connection (and not an automated flow ::eyeroll::).
- It’s meaningful over time. While it can be a point-in-time calculation, it’s most valuable when regularly measured.
- It’s flexible and scalable with the scope of your business (or part of the business you’re responsible for).
- It’s scalable with the level of data you have available, and it gives you the opportunity to identify key missing touchpoints that need to be tracked. This could spur an exercise in touchpoint documentation, and at minimum a discussion about how you track your customers’ experience with those touchpoints (if that’s even possible).
- It can be tracked in aggregate for your entire customer base as a gauge of trust and loyalty, but also at an individual or smaller group level to assist with specific customer retention.
- It can be reverse-engineered. If you track some kind of regular data on the temperature of your customers (such as NPS), understand your customer touchpoints, and plot EIS on a regular basis, you should be able to get a better understanding of where things fall down. “We know that touchpoint 1 (shipping) and touchpoint 2 (quality of product) worked as expected, but we still got a low NPS. By process of elimination it’s either touchpoint 3 (customer service) or touchpoint 4 (packaging). NPS dropped by 3 points, and the impact of packaging is only a 1, so it’s probably customer service; or perhaps we’re missing a touchpoint in our calculation.”
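The touchpoint flag could be sketched as a simple banding rule. The thresholds here are hypothetical, loosely drawn from the bands above (below 10 is at-risk; the hump runs up through the low 20s):

```python
def outreach_flag(eis_score):
    """Hypothetical CRM intervention rule keyed to EIS bands.

    Thresholds are illustrative, not part of the EIS definition:
    below 10 is at-risk, up through 22 is 'on the hump' and worth
    a human touch, and anything above needs no action.
    """
    if eis_score < 10:
        return "at-risk: escalate to a human"
    if eis_score <= 22:
        return "on the hump: nudge with good friction"
    return "healthy: no action"
```

A rule like this can run on every recalculation of EIS and push a task to the CX queue, rather than triggering yet another automated flow.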
It has its downsides, too:
- It’s laborious. It requires surveying the customer at a number of touchpoints, which can come across as nagging.
- It requires upkeep. There is a need for routine data hygiene to keep your customers happy.
- It doesn’t factor in lapsed time. Our model gives lots of weight to a recent interaction, but doesn’t account for a customer whose last interaction was long ago. A more sophisticated model would provide a time-decay term (theta) that ramps up the longer a customer has fallen out of relationship with the brand.
- It’s currently unproven. Unlike NPS there are no tools that automatically calculate and build into a workflow, CRM, or CDP.
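That time-decay idea could be sketched by discounting each recency weight by how stale its experience is. The half-life is an assumption for illustration, not part of EIS as described:

```python
def decayed_weights(days_since, half_life_days=90.0):
    """Hypothetical theta decay: each recency weight (1..5, oldest
    first) is halved for every `half_life_days` elapsed since that
    experience, so a lapsed customer's banked goodwill fades."""
    assert len(days_since) == 5
    return [w * 0.5 ** (d / half_life_days)
            for w, d in zip(range(1, 6), days_since)]

# A customer whose most recent touchpoint was 90 days ago:
# the weight-5 score now only counts as 2.5.
print(decayed_weights([400, 300, 200, 120, 90]))
```

The decayed weights would then replace the plain 1-5 weights in the EIS sum, shrinking the index toward 0 as the relationship goes quiet.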
A couple of final thoughts:
When exploring negative brand experiences in the past, we’ve seen negatives as an opportunity to create a positive through service. It would be interesting to add some sort of regression score to capture the changing view of an experience over time. Perhaps customer service interactions could have an impact score that is tied back to the original impact measure.
Perhaps there are factors specific to your industry or customer base that you can fold in as well: constants or multipliers that need to be accounted for, beyond the general formula.
EIS is a measure to help track your customers’ experiences as a continuum and give you insight into how to improve them; it’s not a silver bullet.
We’d love your feedback, and to help us workshop EIS. If you’re interested in working with us to pilot EIS as a CX team, let us know. Drop us a line at hello@futurecommerce.com
Written by Brian Lange and Phillip Jackson
- It’s currently unproven. Unlike NPS there are no tools that automatically calculate and build into a workflow, CRM, or CDP.
A couple of final thoughts:
When exploring negative brand experiences in the past, we’ve seen negatives as an opportunity to create a positive through service. It would be interesting to add some sort of regression score to capture the changing view of an experience over time. Perhaps customer service interactions could have an impact score that is tied to the originally occurring impact measure.
Perhaps that are factors about your specific industry or customer base you can factor in as well as - constants or multipliers that need to be accounted for (but not a general formula).
EIS is a measure to help track your customers’ experiences as a continuum, and give you insight into how to improve them - not be a silver bullet.
We’d love your feedback, and to help us workshop EIS. If you’re interested in working with us to pilot EIS as a CX team, let us know. Drop us a line at hello@futurecommerce.com
Written by Brian Lange and Phillip Jackson
Have you ever met someone who will “boycott” a brand? - or plan to never purchase from them again? I’ve met many. The reasons range from a bad first experience, to brand loyalty to another brand, to personal views. I’m curious though - in particular - about those who had some meaningful experience with a brand and then decided to stop or slow purchasing.
In Step by Step Season 5 we talked about the law of zero — the idea that certain processes and activities are limited by their zeros. In short, a 5-star dining experience can be ruined altogether by a sewage backup. Any 0 appearing in an equation nullifies the rest of the experience.
I think it could actually be worse. I think more often than a 0 we see a negative number dropped into the equation. A really bad experience doesn’t just cancel, it detracts. That is to say, 10*10*2=200, but 10*10*-2=-200. Although in real life, two negative experiences don’t equal a positive experience; negative experiences do compound!
Our key insight is that, the higher the positive experience score, the more impactful the poor outcome is if a negative number is introduced. And more obviously, a higher negative number has the same effect. To say it plainly, the more trust you gain, the more that’s at stake. As you get better at serving your customers, it’s all the more important to make sure they don’t have a negative experience anywhere along the chain, or it will cause them to question or even feel bitter about that good experience they’re having.
Assuming the primary audience of this essay is the operator of an eCommerce or retail sales channel, take note: we believe that understanding what your multipliers are becomes your most important job as an experience leader.
In this piece we explore a potential framework for determining when a negative experience may have soured a customer relationship, and how to interject human interaction (good friction!) into that equation to right the ship.
Quality of Experience is a multiplier
This week, I visited NYC for the first time in 18 months. While concerned about the state of the city due to all the pandemic-related “New York is dead” doom-hype articles, I was pumped to get back to the greatest of American cities. The sights, sounds, food, and atmospheric weight of the city called my name. I had dinners ahead, and people to meet and reconnect with in person, finally. The schedule was tight and my hopes were high.
While I had two phenomenal dinners with talented, interesting people like my friends Kristen, Alex, and Nilla, my experience didn’t live up to my expectations. My hotel breakfast was a voucher at an oddly assorted market down the street. The rats were out at dark. There were enough people walking around that, while I wasn’t shoulder to shoulder, I felt like I was in a crowd. None of these are negative experiences, per se. Close to zero, but not enough to sour me on the new state of the City.
I had a bad haircut. Many of my favorite restaurants were still closed or even permanently closed. My flight was delayed due to the pandemonium that is the new post-Covid travel season. Parts of my experience were straight negatives, leaving me feeling like New York may never be the same again.
Introducing the Experience Impact Score
My mom once told me that you can’t make a withdrawal from an account if you’ve never made a deposit. Brand experiences add up over time, but that account doesn’t necessarily gain interest. To turn customers into evangelists, you have to keep the account topped off. Two bad experiences in a row will deplete a customer’s goodwill, and three will send them off for good.
We already have tools to measure the sentiment of the customer experience. NPS, or Net Promoter Score, is a very popular, and powerful, tool. NPS captures the sentiment of a customer at a particular moment. Having a net promoter score of greater than 0 is table stakes, and >20 is what most brands aim for. Greater than 80? World class.
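For reference, classic NPS is the share of promoters minus the share of detractors, which is why it ranges from -100 to 100. A minimal sketch of the standard calculation (the function name is ours):

```python
def nps(ratings):
    """Classic Net Promoter Score: percentage of promoters (9-10)
    minus percentage of detractors (0-6), on a -100..100 scale.
    Passives (7-8) count in the total but in neither bucket."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(nps([10, 9, 8, 6, 3]))  # 2 promoters, 2 detractors of 5 -> 0.0
```

This is why a score above 0 is table stakes: it just means promoters outnumber detractors.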
But not all customer journeys are created equal. NPS doesn’t capture a customer’s sentiment over time.
This is where we believe the Experience Impact Score (EIS) could be beneficial. It takes into account a given customer’s last five NPS scores, gathered across many touchpoints. Those touchpoints can be many and varied; the goal isn’t to measure group net-promotion sentiment, it’s to determine when a human interaction is required: when a customer encounters a problem, or when they need a nudge to cross a threshold. We’ll explain further below.
Why EIS? Because the impact of brand experience isn’t a single snapshot; it’s the sum of multiple parts. Looking at the same example, NYC has a history of delivering incredible experiences for me, experiences that have made a real difference in my life and changed my view of the world. But not everything that’s ever happened in NYC is life-altering. A bad haircut in the city doesn’t have the same impact on my view of NYC as the cab ride where I went the wrong way down a one-way street, or as the world-altering dinner with a wine that changed my view of what wine can be. But as New York changes post-pandemic, tradeoffs begin to emerge. My new expectations reset the prior goodwill I had. Eventually, my new outlook will be shaped more by recent experiences than by the ones I had in the distant past, leaving me nostalgic for what the City “used to be.”
This is the story of too many brands who have come and gone. EIS addresses this.
Breaking Down the Experience Impact Score
Here is the breakdown of how we would see EIS work.
Major change: we introduce a -1 into the NPS scale. Customers angry enough to boycott a brand should have a way to express that frustration, and a -1 is a “screw you and the horse you rode in on!” This negative value has a halo effect on the other NPS scores in the series.
EIS has a lower bound of 0 (the floor when every experience is negative) and an upper bound of 30. An EIS of 20 or above is a customer who is happy as a clam; any customer below 10 is at risk of telling their friends how much you suck.
The following formula represents a series of experiences based on five different touchpoints:
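The formula graphic itself didn’t survive here, so as a sketch, here is one reconstruction consistent with the stated 0–30 bounds and the breakdown below (the notation is our assumption, not the original):

```latex
\mathrm{EIS} = \max\!\left(0,\ \frac{1}{5}\sum_{i=1}^{5} i \cdot s_i\right),
\qquad s_i \in \{-1\} \cup \{1, 2, \ldots, 10\}
```

where $s_5$ is the most recent experience’s score. All tens gives $150/5 = 30$; all $-1$s floors out at $0$.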
In English: the sum of the weighted NPS values from your last five experiences.
Let’s break that down:
- We measure the NPS of a given experience (either -1, or 1–10)
- We weight each experience, from the most recent (weight 5) down to the least recent (weight 1)
- The sum of these scores gives us an index.
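The steps above can be sketched in a few lines. This assumes the weighted sum is divided by the number of experiences (which matches the stated 0–30 range) and floored at zero; the function name and example scores are hypothetical:

```python
def eis(scores):
    """Experience Impact Score sketch: the last five per-touchpoint
    NPS scores (each -1, or 1-10), oldest first, weighted 1 through 5
    so the most recent experience counts most, divided by 5 and
    floored at 0. Bounds: all 10s -> 30, all -1s -> 0."""
    if len(scores) != 5:
        raise ValueError("EIS uses exactly the last five experiences")
    weighted = sum(w * s for w, s in zip(range(1, 6), scores))
    return max(0.0, weighted / 5)

# Hypothetical five-touchpoint history, oldest first.
print(eis([7, 8, 9, -1, 6]))  # (7 + 16 + 27 - 4 + 30) / 5 -> 15.2
```

Note how the single -1 drags the score down more than a low positive score would, and how a recent -1 would drag it down further still, since recency carries the heaviest weight.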
A visualization of the index is as follows:
Let’s try it out on a customer interaction. A shopper visits a local DTC showroom and signs up for the brand’s email list. They then place an order and like the product; the packaging was truly transcendent! Only problem: the product didn’t work as advertised. They had no choice but to return it, but it was just outside the return period, so they had to call a human being. It took nearly two weeks to get the refund.
We calculate the EIS of this customer to be 20. Overall, the experiences have been decent to this point, but on the whole, the customer sits on the hump of being neutral.
An example worksheet can be found here. Customers “on the hump,” hovering around 10 or around 20, can be nudged higher with a human interaction. This good-friction opportunity is usually prompted by the customer (as in our example, where they called into the support team), but outlier interactions may prompt CX follow-up as well.
People keep an emotional record of how something impacts them. Erasing that record is difficult, and a negative number reflects that. With this in mind, I believe the law of negative numbers works at the micro level. It’s very personal, and it works as a personal aggregator, taking into account the prior work you’ve put into developing a lifelong relationship with the customer.
I like EIS for a lot of reasons:
- It’s not defeatist. This view suggests that there is opportunity to overcome bad experiences by making changes and building goodwill aka trust. While a bad experience could significantly offset or destroy your brand, you have tools to improve and recover.
- It is relationship-oriented. You’re keeping track of previous interactions with the brand and factoring them into the current experience. It emphasizes the importance of the beginning of the relationship, because early on you might not have built up a sufficient prior EIS to stave off the impact of a bad experience.
- It creates a touchpoint flag. You can set thresholds in your CRM for when to intervene in a customer relationship, hopefully with a human connection (and not an automated flow ::eyeroll::).
- It’s meaningful over time. While it can be a point-in-time calculation, it’s most valuable when measured regularly.
- It’s flexible and scalable with the scope of your business (or part of the business you’re responsible for).
- It’s scalable with the level of data you have available and gives you the opportunity to identify key missing touchpoints that need to be tracked. This could spur an exercise in touchpoint documentation, and at minimum a discussion about how you track your customers’ experience at those touchpoints (if that’s even possible).
- It can be tracked in aggregate for your entire customer base as a gauge of trust and loyalty, but also at an individual or smaller group level to assist with specific customer retention.
- It can be reverse-engineered. If you track some kind of regular data on the temperature of your customers (such as NPS), understand your customer touchpoints, and plot EIS on a regular basis, you should be able to get a better understanding of where things fall down. “We know that touchpoint 1 (shipping) and touchpoint 2 (quality of product) worked as expected, but we still got a low NPS. By process of elimination it’s either touchpoint 3 (customer service) or touchpoint 4 (packaging). Oh wait: NPS dropped by 3 points and the impact of packaging is only a 1, so it’s probably customer service. Or perhaps we’re missing a touchpoint in our calculation.”
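The touchpoint-flag idea above could be as simple as a threshold check in your CRM. A sketch, using the “hump” values of 10 and 20 from earlier (the function name and labels are hypothetical):

```python
def intervention_flag(eis_score, hump_low=10.0, hump_high=20.0):
    """Bucket a customer's EIS into a suggested CX action.
    Below hump_low: at-risk and likely to tell friends how much
    you suck; between the hump values: a good-friction candidate
    for a human nudge; above: healthy, leave them be."""
    if eis_score < hump_low:
        return "at-risk: prioritize human outreach"
    if eis_score < hump_high:
        return "on the hump: nudge with a human interaction"
    return "healthy: no intervention needed"

print(intervention_flag(15.2))  # on the hump: nudge with a human interaction
```

The point of the flag is the handoff: the score decides *when* a human steps in, not *what* they say.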
It has its downsides, too:
- It’s laborious. It requires a number of touchpoints with the customer, which can often be seen as nagging.
- It requires upkeep. There is a need for routine data hygiene to keep your customers happy.
- It doesn’t factor in lapsed time. Our model gives lots of weight to a recent interaction but doesn’t account for a customer whose last interaction was long ago. A more sophisticated model would include a time decay (theta) that ramps up the longer a customer has fallen out of relationship with the brand.
- It’s currently unproven. Unlike NPS there are no tools that automatically calculate and build into a workflow, CRM, or CDP.
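One possible reading of the time-decay downside above: shrink the score toward zero as the lapse grows, so a long-dormant customer isn’t treated as warm. A sketch, where theta is a hypothetical decay rate (not part of the model as stated):

```python
import math

def eis_decayed(scores, days_since_last, theta=0.01):
    """Time-decayed variant of the EIS sketch: the base score is the
    weighted average of the last five NPS scores (oldest first,
    weights 1-5, floored at 0), multiplied by exp(-theta * days) so
    the decay effect ramps up the longer the customer has lapsed."""
    base = max(0.0, sum(w * s for w, s in zip(range(1, 6), scores)) / 5)
    return base * math.exp(-theta * days_since_last)

history = [7, 8, 9, -1, 6]
print(round(eis_decayed(history, days_since_last=0), 1))    # 15.2
print(round(eis_decayed(history, days_since_last=365), 1))  # decays toward 0
```

A team piloting this would need to fit theta to their own repurchase cycle; a grocery brand decays much faster than a mattress brand.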
A couple of final thoughts:
When exploring negative brand experiences in the past, we’ve seen negatives as an opportunity to create a positive through service. It would be interesting to add some sort of regression score to capture the changing view of an experience over time. Perhaps customer service interactions could have an impact score tied to the impact measure of the original incident.
Perhaps there are factors specific to your industry or customer base that you can factor in as well: constants or multipliers that need to be accounted for, but that don’t belong in a general formula.
EIS is a measure to help you track your customers’ experiences as a continuum and gain insight into how to improve them; it’s not a silver bullet.
We’d love your feedback and your help workshopping EIS. If you’re interested in piloting EIS with your CX team, let us know. Drop us a line at hello@futurecommerce.com
Written by Brian Lange and Phillip Jackson