Back in the eighties, TV’s G.I. Joe closed each cartoon appearance with a cheesy public service announcement ending in “Now you know. And knowing is half the battle.” For customer experience leaders today, he might add, “And the other half is how you know.”
How do you determine which journeys matter most in defining the customer experience? And how do you redesign each journey to maximize value by improving that experience? These questions can be addressed in three steps. While straightforward, they require knowing where to invest time in research and innovation, and where “just doing it” is not only good enough but necessary.
1. Swiftly—even if imperfectly—define the journeys and decide on one or two to focus on first.
Identifying the journeys that are most relevant to your customers requires quality thought, but little precision. Assume your team already has a good idea of what these are, even if they don’t yet know which to prioritize. If, for example, you’re a wireless provider, signing up for service, using devices, solving technical problems, resolving billing problems, changing a plan, and upgrading a device are the obvious journeys.
Don’t overthink. Don’t wait for detailed research. Make your list, and move on to quickly prioritize journeys by their potential impact, ranging from quick wins to building enduring momentum. Don’t reach for perfection. Marry quality thinking to evidence sufficient to identify one or two journeys to start with. Exercise leadership judgment, even if this means your choices are not obvious to the team. Think beyond the lift in customer experience to consider impact on operational costs (such as reducing customer care calls) and such immediate revenue drivers as quote-to-sale conversion rates.
Spend a half-day with the team laying out the main waypoints in the top one or two journeys. Begin to refine. For instance, does the on-boarding journey start after the sale or before? Does the problem resolution journey end when the issue is resolved on company systems or after the next billing cycle? Move toward holistic solutions by defining a journey’s beginning and end broadly rather than narrowly.
2. Design research to identify what matters within each journey and how to optimize these critical areas.
Step 1 gets you into action on your top-priority journeys. Step 2 takes you two levels deeper, even as it extends your understanding of all the journeys by linking the voice of the customer (VOC) to the drivers of performance. Here, precise research design makes the difference between “interesting” insights and those that actually drive improvement.
First, design research that quantifies VOC, turning your understanding of what drives customer experience and behavior into hard numbers. Use derived importance analysis (a type of multivariate regression) to rank the relative importance of each journey in determining customer preference for your company or brand. Emotional or intangible factors typically float to the top, with price, product, and service following.
Derived importance analysis can reveal new, non-obvious journeys to focus on. For example, a pay-TV provider identified the billing process as a powerful improvement lever. Bill design had a small impact per customer, but VOC analysis revealed its aggregate importance by virtue of the sheer number of customers it touched.
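For analytically minded teams, the core of derived importance analysis can be sketched in a few lines: regress overall satisfaction on standardized per-journey ratings, and read the standardized coefficients as relative importance. This is a minimal illustration, not a production method; the journey names and survey scores below are entirely hypothetical.

```python
# Derived importance analysis sketch: rank journeys by how strongly
# their ratings drive overall satisfaction, using standardized OLS
# regression coefficients. All data below is hypothetical.
import numpy as np

def derived_importance(journey_scores, overall_scores):
    """Return standardized regression coefficients (betas), one per
    journey, estimating each journey's relative importance."""
    X = np.asarray(journey_scores, dtype=float)
    y = np.asarray(overall_scores, dtype=float)
    # Standardize predictors and outcome (z-scores) so coefficients
    # are comparable across journeys.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    # Ordinary least squares fit.
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas

# Hypothetical survey: ratings (1-10) for three journeys per respondent,
# plus each respondent's overall satisfaction score.
journeys = ["sign-up", "billing", "tech support"]
X = [[8, 6, 7], [5, 4, 6], [9, 8, 8], [4, 3, 5], [7, 7, 6], [6, 5, 7]]
y = [7, 4, 9, 3, 7, 5]

for name, beta in sorted(zip(journeys, derived_importance(X, y)),
                         key=lambda pair: -abs(pair[1])):
    print(f"{name}: {beta:+.2f}")
```

A real study would use far larger samples and guard against multicollinearity among journey ratings, but the principle is the same: importance is derived statistically from the data rather than asked of customers directly.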
Having quantified VOC, dive two levels deeper. Identify the sub-journeys within each journey that are most important, along with their pain points. To address the pain points, descend another level by asking customers why they are dissatisfied (see exhibits 1 and 2):
The example directs questions to pay-TV customers unhappy with their first bill. The answers suggest the problem is not the bill but unclear messaging at on-boarding. From the get-go, customers are confused about charges. The fix? Improve communications during sign-up.
Finally, identify customer behaviors that impact cost and revenue. Discover how pain points elicit such expensive behavior as calls to customer care, reduced cross-sell, and—worst case—churn. Creating such insight requires linking survey results to operational data. For example, repeated calls to customer care from customers who use a specific product package point to the package as a problem.
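The linking step above amounts to a join between survey responses and operational records. As a hypothetical sketch, the snippet below joins pain-point survey flags to call-center logs by customer ID and aggregates call volume by product package; every ID, package name, and record is invented for illustration.

```python
# Link VOC survey results to operational data: join survey responses
# to call-center records on customer ID, then total care calls per
# product package among customers reporting a pain point.
# All customer IDs, packages, and records are hypothetical.
from collections import Counter, defaultdict

surveys = [  # customer_id, product package, reported a billing pain point?
    {"customer_id": 1, "package": "premium-sports", "pain_point": True},
    {"customer_id": 2, "package": "basic", "pain_point": False},
    {"customer_id": 3, "package": "premium-sports", "pain_point": True},
    {"customer_id": 4, "package": "basic", "pain_point": True},
]
care_calls = [  # one row per call to customer care
    {"customer_id": 1}, {"customer_id": 1}, {"customer_id": 3},
    {"customer_id": 3}, {"customer_id": 3}, {"customer_id": 4},
]

calls_per_customer = Counter(c["customer_id"] for c in care_calls)

# Aggregate call volume by package for customers reporting the pain
# point: repeated calls concentrated in one package point to that
# package as the problem.
calls_by_package = defaultdict(int)
for s in surveys:
    if s["pain_point"]:
        calls_by_package[s["package"]] += calls_per_customer[s["customer_id"]]

print(dict(calls_by_package))  # premium-sports draws 5 calls vs basic's 1
```

At scale this same join happens in a data warehouse rather than in memory, but the logic is identical: attach an operational cost signal (calls, churn, lost cross-sell) to each surveyed pain point.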
3. Get out in front to create an end-to-end vision.
Analytical approaches are potent, but spending time in call centers, field trucks, or retail outlets allows the team to map customer journeys end to end, uncovering opportunities and problems that research and operational data miss. Armed with a frontline map linked to VOC and operational data, you can prioritize those customer experience initiatives that move the needle on satisfaction and impact the bottom line directly. Track all metrics of your highest-priority initiatives at the executive level to build a business case for change beyond the primary motivator of doing the right thing for the customer. This is critical to sustaining change.
Additional thoughts for B2B leaders
Are B2C customer journeys relevant in the B2B context? Yes, but a few things are different.
Each B2B “customer” actually consists of multiple stakeholders. Depending on the size of your customer organizations, you may need to collect VOC data from one or several of these stakeholders. If you have few large customers, you need both qualitative customer research and insight from your frontline employees to understand the journeys. If, however, you have many smaller B2B customers, you can use most of the B2C toolkit with little modification.
Now you know
Define journeys quickly and then devote time to designing research that reveals which journeys matter most. Dig down to actionable insights that move the needle on customer satisfaction. Link insights to operational data to build a business case for change that reaches beyond customer satisfaction alone. While research is important, get out in front to map journeys end to end, identifying opportunities and pain points that research may miss.
If change is good, sustainable change is better. Make change sustainable by tracking end-to-end customer experience metrics in routine executive-level reporting.