
The 3 Levels of Metrics: From Driving Cars to Solving Crimes

You can’t manage what you don’t measure. That’s a long-time business mantra espoused frequently by my good friend Larry Freed. And it’s certainly true. But in an e-commerce world where we can effectively measure our customers’ every footstep, we can easily become overwhelmed with all that data. Because while we can’t manage what we don’t measure, we also can’t manage everything we can measure.

I’ve found it’s best to break our metrics down to three levels in order to make the most of them.

1. KPIs
The first and highest level of metrics contains the Key Performance Indicators or KPIs. I believe strongly there should be relatively few KPIs — maybe five or six at most — and the KPIs should align tightly with the company’s overall business objectives. If an objective is to develop more orders from site visitors, then conversion rate would be the KPI. If another objective is about maximizing the customer experience, then customer satisfaction is the right metric.

In addition to conversion rate and customer satisfaction, a set of KPIs might include metrics like average order value (AOV), market share, number of active customers, task completion rate, or others that appropriately measure the company’s key objectives.

I’ve found the best KPI sets are balanced so that the best way to drive the business forward is to find ways to improve all of the KPIs, which is why businesses often have balanced scorecards. The reality is, we could find ways to drive any one metric at the expense of the others, so finding the right balance is critical. Part of that balance is ensuring that the most important elements of the business are considered, so it’s important to have some measure of employee satisfaction (because employee satisfaction leads to customer satisfaction) and some measure of profitability.  Some people look at a metric like Gross Margin as the profitability measure, but I prefer something deeper down the financial statement like Contribution Margin or EBITDA because they take other cost factors like ad spend, operational efficiencies, etc. into account and can be affected by most people in the organization.
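To make the distinction concrete, here’s a minimal sketch with entirely invented figures showing how contribution margin captures costs (like ad spend and fulfillment) that gross margin ignores, which is why more of the organization can affect it:

```python
# Hypothetical monthly figures for comparing two profitability metrics.
revenue = 1_000_000
cogs = 600_000         # cost of goods sold
ad_spend = 150_000     # marketing costs the team can influence
fulfillment = 100_000  # variable operational costs

# Gross margin stops at cost of goods sold.
gross_margin = revenue - cogs

# Contribution margin also nets out ad spend and operational costs.
contribution_margin = gross_margin - ad_spend - fulfillment

print(f"Gross margin:        ${gross_margin:,} ({gross_margin / revenue:.0%} of revenue)")
print(f"Contribution margin: ${contribution_margin:,} ({contribution_margin / revenue:.0%} of revenue)")
```

With these made-up numbers, a team that cuts wasted ad spend moves contribution margin without touching gross margin at all, which is exactly the behavior a deeper profitability KPI is meant to reward.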

It’s OK for KPIs to be managed at different frequencies. We often talk about metrics dashboards, and a car’s dashboard is the right metaphor. Car manufacturers have limited space to work with, so they include only the gauges that most help the driver operate the car. The speedometer is managed frequently while operating the car. The fuel gauge is critically important, but it’s monitored only occasionally (and more frequently when it’s low). Engine temperature is a hugely important measure for the health of the car, but we don’t need to do much with it until there’s a problem. Business KPIs can be monitored in a similarly varied frequency, so it’s important that we don’t choose them based on their likelihood to change over some specific time period. It’s more important to choose the metrics that most represent the health of the business.

2. Supporting Metrics
I call the next level of metrics Supporting Metrics. Supporting Metrics are tightly aligned with KPIs, but they are more focused on individual functions or even individual people within the organization. A KPI like conversion rate can be broken down by various marketing channels pretty easily, for example. We could have email conversion rate, paid search conversion rate, direct traffic conversion rate, etc. I also like to look at True Conversion Rate, which measures conversion against intent to buy.
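As a rough illustration (the session data and the intent flag here are entirely hypothetical), per-channel conversion rates and an intent-based “true” conversion rate might be computed like this:

```python
# Hypothetical session log: (channel, had_purchase_intent, converted)
sessions = [
    ("email", True, True), ("email", False, False), ("email", True, False),
    ("paid_search", True, True), ("paid_search", False, False),
    ("direct", False, False), ("direct", True, True), ("direct", True, True),
]

def conversion_rate(rows):
    """Share of sessions in `rows` that ended in a purchase."""
    return sum(r[2] for r in rows) / len(rows)

# Supporting metric: conversion rate broken down by marketing channel.
for channel in ("email", "paid_search", "direct"):
    subset = [r for r in sessions if r[0] == channel]
    print(f"{channel}: {conversion_rate(subset):.0%}")

# "True" conversion rate: conversions among visitors who intended to buy.
with_intent = [r for r in sessions if r[1]]
print(f"true conversion rate: {conversion_rate(with_intent):.0%}")
```

The point of the intent filter is that browsers with no intention of buying drag down the headline conversion rate; measuring against the intending subset gives a cleaner read on how well the site serves would-be buyers.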

Supporting metrics should be an individual person’s or functional area’s scorecard to measure how their work is driving the business forward. Ensuring supporting metrics are tightly aligned with the overall company objectives helps to ensure work efforts throughout the organization are tightly aligned with the overall objectives.

As with KPIs, we want to ensure any person or functional area isn’t burdened with so many supporting metrics that they become unmanageable. And this is an area where we frequently fall down because all those metrics and data points are just so darn alluring.

The key is to recognize the all-important third level of metrics. I call them Forensic Metrics.

3. Forensic Metrics
Forensic Metrics are just what they sound like. They’re those deep-dive metrics we use when we’re trying to solve a problem we’re facing in KPIs or Supporting Metrics. But there are tons of them, and we can’t possibly manage them on a day-to-day basis. In the same way we don’t dust our homes for prints every day when we come home from work, we can’t try to pay attention to forensic metrics all the time. If we come home and find our TV missing, then dusting for prints makes a lot of sense. If we find out conversion rate has dropped suddenly, it’s time to dig into all sorts of forensic metrics like path analysis, entry pages, page views, time on site, exit links, and the list goes on and on.

Site analytics packages, data warehouses and log files are chock full of valuable forensic metrics. But those forensic metrics should not find their way onto daily or weekly managed scorecards. They can only serve to distract us from our primary objectives.

—————————————————–

Breaking down our metrics into these three levels takes some serious discipline. When we decide we’re only going to focus on a relatively small number of metrics, we’re doing ourselves and our businesses a big favor. But it’s really important we’re narrowing that focus on the metrics and objectives that are most driving the business forward. But, heck, we should be doing that anyway.

What do you think? How do you break down your metrics?

 

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had regarding the post sparked some great discussions about the difficulties in achieving FAME. Initially, the focus of those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin at the beginning. Hire analysts, not reporters.
    This isn’t a slam on reporters; it’s just recognition that the mindset and skill set needed for gathering and reporting on data is different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need strong left-brain orientation, and analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than use those moments as opportunities to get on the same page with each other. Web Analytics Demystified‘s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of averages and of using them in particular situations, particularly when stacking them on top of each other. Just the first chapter of the book The Flaw of Averages does an excellent job explaining the general problems.
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time passes from our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie with, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns, but on a more positive note it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the heads of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As we stated in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matter. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case and every point prior to this one comes into play.

The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our businesses decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

Bought Loyalty vs. Earned Loyalty

Acquiring new customers is hard work, but turning them into loyal customers is even harder. The acquisition efforts can usually come almost solely from the Marketing department, but customer retention takes a village. And all those villagers have to march to the beat of a strategy that effectively balances the concepts of bought loyalty and earned loyalty.

I first heard the concepts of bought and earned loyalty many years ago in a speech given by ForeSee Results CEO Larry Freed, and those concepts stuck with me. They’re not mutually exclusive. In the most effective retention strategies I’ve seen, bought loyalty is a subset of a larger earned loyalty strategy.

So let’s break each down a bit and discuss how they work together.

Bought loyalty basically comes in the form of promotional discounts. We temporarily reduce prices in the form of sales or coupons in order to induce customers to shop with us right away.

Bought loyalty has lots of positives. It’s generally very effective at increasing top line sales immediately (especially in down economies), and customers love a good deal. It’s also pretty easy to measure the improvement in sales during a short promotional period, and sales growth feels good. Really good.

And those good feelings are mighty addictive.

But as with most addictions, the negative effects tend to sneak up on us and punch us in the face. The 10% quarterly offers become 15% monthly offers and then 20% weekly offers as customers wait for better and better deals before they shop. Top line sales continue to grow only at the cost of steadily reduced margins. Breaking the habit comes with a lot of pain as customers trained to wait for discounts simply stop shopping. Bought loyalty, by itself, is fickle.

But it doesn’t have to go down that way.

We can avoid a bought loyalty slippery slope when we incorporate bought loyalty tactics as part of a larger earned loyalty strategy.

We earn our customers’ loyalty when we meet not only their wants but their needs. After all, retail is a service business. We have to learn a lot about our customers to know what those wants and needs are so that we align our offerings to meet those wants and needs. Which, of course, is easy to say and much more difficult to do. But do it we must.

To earn loyalty, we have to provide great service and convenience for our customers. But we have to know how our customers define “great service” and “convenience” and ensure we’re delivering to those definitions. Earning loyalty means offering relevant assortments and personalized messaging, but it’s only by truly understanding our customers that we can know what “relevant” and “personalized” mean to them. And a little bit of bought loyalty through truly valuable promotions can provide an occasional kick start, but we have to know what “valuable promotion” means to our customers.

We earn loyalty when the experience we provide our customers meets or even exceeds their expectations. As such, our earned loyalty retention strategies have to start before we’ve even acquired the customer. If we over-promise and under-deliver, we significantly reduce our ability to retain customers, much less move them through the Customer Engagement Cycle we’ve discussed here previously.

But earned loyalty can’t just be the outcome of a marketing campaign. It’s much bigger than that, and it doesn’t happen without the participation of the entire organization. Clearly, front line staff in stores, call center agents and those who create the online customer experience have to be on board. But so too do corporate staff, including merchants for assortment and marketers for messaging. And financial models for earned loyalty strategies inevitably look different than those built solely for bought loyalty.

Since customer expectations are in constant flux, we have to constantly measure how well we’re doing in their eyes. Those measures must be Key Performance Indicators held in as high a regard as revenue, margins, average order size and conversion rates. (Shameless plug: the best way I know to measure customer experience and satisfaction is the ACSI methodology provided by ForeSee Results). Our customers’ perceptions of our business are reality, and measuring and monitoring those perceptions to determine what’s working and what’s not is the best way to determine a path towards earning loyalty.

Earning loyalty requires clear vision, careful planning, a little bought loyalty, lots and lots of communication (both internally and externally), and some degree of patience to wait for its value to take hold. But when the full power of an earned loyalty Customer Engagement Cycle kicks in, its effects can be mighty. The costs of acquiring and retaining customers drop while sales and margins rise. That’s a nice equation.

What do you think? Have you seen effective retention strategies that build on both bought and earned loyalty? Or do you think it’s all just a crock?

Retail: Shaken Not Stirred by Kevin Ertell

