Posts tagged: metrics

The 3 Levels of Metrics: From Driving Cars to Solving Crimes

You can’t manage what you don’t measure. That’s a long-time business mantra espoused frequently by my good friend Larry Freed. And it’s certainly true. But in e-commerce, where we can effectively measure our customers’ every footstep, we can easily become overwhelmed by all that data. Because while we can’t manage what we don’t measure, we also can’t manage everything we can measure.

I’ve found it’s best to break our metrics down to three levels in order to make the most of them.

1. KPIs
The first and highest level of metrics contains the Key Performance Indicators or KPIs. I believe strongly there should be relatively few KPIs — maybe five or six at most — and the KPIs should align tightly with the company’s overall business objectives. If an objective is to develop more orders from site visitors, then conversion rate would be the KPI. If another objective is about maximizing the customer experience, then customer satisfaction is the right metric.

In addition to conversion rate and customer satisfaction, a set of KPIs might include metrics like average order value (AOV), market share, number of active customers, task completion rate or others that appropriately measure the company’s key objectives.

I’ve found the best KPI sets are balanced so that the best way to drive the business forward is to find ways to improve all of the KPIs, which is why businesses often have balanced scorecards. The reality is, we could find ways to drive any one metric at the expense of the others, so finding the right balance is critical. Part of that balance is ensuring that the most important elements of the business are considered, so it’s important to have some measure of employee satisfaction (because employee satisfaction leads to customer satisfaction) and some measure of profitability. Some people look at a metric like Gross Margin as the profitability measure, but I prefer something deeper down the financial statement, like Contribution Margin or EBITDA, because those metrics take other cost factors like ad spend and operational efficiencies into account and can be affected by most people in the organization.
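
To make that distinction concrete, here’s a minimal sketch with entirely hypothetical P&L figures (none of these numbers come from a real business) showing how contribution margin captures costs, like ad spend, that gross margin ignores:

```python
# Hypothetical P&L figures for illustration only
revenue = 1_000_000
cogs = 600_000          # cost of goods sold
ad_spend = 120_000      # marketing costs many teams can influence
fulfillment = 80_000    # variable operational costs

gross_margin = (revenue - cogs) / revenue
contribution_margin = (revenue - cogs - ad_spend - fulfillment) / revenue

print(f"Gross margin:        {gross_margin:.0%}")         # 40%
print(f"Contribution margin: {contribution_margin:.0%}")  # 20%
```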

It’s OK for KPIs to be managed at different frequencies. We often talk about metrics dashboards, and a car’s dashboard is the right metaphor. Car manufacturers have limited space to work with, so they include only the gauges that most help the driver operate the car. The speedometer is monitored frequently while operating the car. The fuel gauge is critically important, but it’s monitored only occasionally (and more frequently when it’s low). Engine temperature is a hugely important measure for the health of the car, but we don’t need to do much with it until there’s a problem. Business KPIs can be monitored at similarly varied frequencies, so it’s important that we don’t choose them based on their likelihood to change over some specific time period. It’s more important to choose the metrics that best represent the health of the business.

2. Supporting Metrics
I call the next level of metrics Supporting Metrics. Supporting Metrics are tightly aligned with KPIs, but they are more focused on individual functions or even individual people within the organization. A KPI like conversion rate can be broken down by various marketing channels pretty easily, for example. We could have email conversion rate, paid search conversion rate, direct traffic conversion rate, etc. I also like to look at True Conversion Rate, which measures conversion against intent to buy.
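
As a rough sketch of the difference (assuming intent to buy is estimated from something like a visitor survey, and using made-up numbers):

```python
# Hypothetical traffic figures for illustration only
visits = 50_000
orders = 1_500
visits_with_intent_to_buy = 10_000  # e.g., estimated from an intent survey

conversion_rate = orders / visits                          # orders per visit
true_conversion_rate = orders / visits_with_intent_to_buy  # orders per intending visitor

print(f"Conversion rate:      {conversion_rate:.1%}")       # 3.0%
print(f"True conversion rate: {true_conversion_rate:.1%}")  # 15.0%
```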

Supporting metrics should be an individual person’s or functional area’s scorecard to measure how their work is driving the business forward. Ensuring supporting metrics are tightly aligned with the overall company objectives helps to ensure work efforts throughout the organization are tightly aligned with the overall objectives.

As with KPIs, we want to ensure any person or functional area isn’t burdened with so many supporting metrics that they become unmanageable. And this is an area where we frequently fall down because all those metrics and data points are just so darn alluring.

The key is to recognize the all-important third level of metrics. I call them Forensic Metrics.

3. Forensic Metrics
Forensic Metrics are just what they sound like. They’re those deep-dive metrics we use when we’re trying to solve a problem we’re facing in KPIs or Supporting Metrics. But there are tons of them, and we can’t possibly manage them on a day-to-day basis. In the same way we don’t dust our homes for prints every day when we come home from work, we can’t try to pay attention to forensic metrics all the time. If we come home and find our TV missing, then dusting for prints makes a lot of sense. If we find out conversion rate has dropped suddenly, it’s time to dig into all sorts of forensic metrics like path analysis, entry pages, page views, time on site, exit links, and the list goes on and on.

Site analytics packages, data warehouses and log files are chock full of valuable forensic metrics. But those forensic metrics should not find their way onto daily or weekly managed scorecards. They can only serve to distract us from our primary objectives.

—————————————————–

Breaking down our metrics into these three levels takes some serious discipline. When we decide we’re only going to focus on a relatively small number of metrics, we’re doing ourselves and our businesses a big favor. But it’s really important we narrow that focus to the metrics and objectives that are most driving the business forward. And, heck, we should be doing that anyway.

What do you think? How do you break down your metrics?


The 4 Keys to a Customer-Centric Culture

Retail: Shaken Not Stirred reader Sarah submitted an interesting question for today’s post:

“What does it really mean to create a customer-centric culture? We hear companies say it all the time. I would wager that almost every retailer claims to have it. But what does it really mean and how do you know if you really have it?”

Culture is a powerful and interesting beast, and I certainly don’t claim to be an expert in developing corporate cultures. However, it’s a topic of great interest for me, and I’ve had the opportunity to observe and operate within many corporate cultures. I’ve learned that corporate cultures cannot be decreed from the top, because cultures get their power from all of the people within them. While CEOs and other leaders can be influential in culture development, they can also be completely enveloped by powerful cultures that are driven from all levels of the organization and formed over many, many years.

That said, I believe there are certain dynamics that drive cultures, and we can influence and shift cultures by focusing on these key areas.

Without further ado, here are what I believe are the four key facets of a truly customer-centric culture:

  1. Faith
    Customer-centric organizations believe in an almost religious way that sales and profits are the by-product of great customer experiences. They are unwavering in their belief that intense focus on creating the best possible experience for their customers is the best way to grow their businesses. Some of these organizations will go as far as saying sales don’t matter, but that’s not exactly accurate. All businesses need to create profits, but truly customer-centric organizations focus on the customer experience and not on directly “driving sales.” They believe the best way to improve sales is to view them as an outcome of great customer experiences rather than something that can be directly affected.

    I once had the opportunity to meet with Yahoo and Google in back-to-back meetings regarding potential partnerships with my company, and the two discussions could not have been more different. The Yahoo team was very focused on determining how the partnership would increase Yahoo’s revenues, while the Google team interrupted us immediately when we began to discuss revenue. They said they were only interested in opportunities that would enhance the Google experience for their users. Period. I didn’t take this to mean they weren’t interested in growing their business. They simply believed that Google’s purpose was to help people find all the world’s information, and they would maximize their revenue by delivering on their purpose in the best way possible for their users.

  2. Fortitude
    Relentless focus on the customer experience is not easy, particularly for public companies. Truly customer-centric organizations constantly have their faith tested by both external and internal forces who are looking for short-term sales or profits, even if those sales and profits might come at the expense of the customer experience. Customer-centric organizations focus on the value of a customer engagement cycle that relies on great customer experience as an engine that drives retention and positive word of mouth.

    There will always be pressure to run short-term promotions to goose sales. It’s not that customer-centric organizations don’t run promotions; it’s just that they run those promotions in the context of their larger purpose in service of their customers. They focus on earning sales and loyalty rather than buying sales and loyalty.

  3. Employees first (even before customers)
    It may seem counterintuitive to say customer-centric organizations put their employees before their customers, but in my experience this is true and this may actually be the most important of the four keys I’m discussing here. It’s a bit like when we’re instructed by flight attendants to secure our own oxygen masks before helping our children secure theirs. All employees play a part in the experiences we provide our customers. Some have direct contact with our customers and others make daily decisions that ultimately affect the experiences our customers have with us. Their attitudes about their jobs and the company can make or break the experience they provide for our customers. This is sort of obvious for front line staff like store associates and call center agents, but it’s also true for site developers, delivery truck drivers, mid-level managers, executives and, frankly, janitors. Even those not on the front lines are constantly making decisions that affect our customers’ experiences.

    Truly customer-centric organizations therefore provide absolutely great career experiences for their employees so their employees pass along the greatness to their customers. While decent salaries are certainly a factor, money alone is not enough. An “employees first” approach means employees are treated with great respect. They’re trusted with the authority to deliver on clearly defined accountabilities. They’re also given clear direction and guidelines, and they’re fully supported when they make decisions that improve the customer experience. Colleen Barrett, President Emeritus at Southwest Airlines (a customer-centric organization), also points out that the customer is not always right. There are scenarios where the customer is clearly out of bounds, and truly customer-centric organizations know when to support an employee over the customer. Watch a brief clip of her discussion at the recent Shop.org Annual Summit for some of her keen wisdom on empowering employees and defining an employee-first, customer-centric culture.

  4. They talk the talk and walk the walk
    As Sarah says in her question, most retail organizations profess to be customer-centric. Those that truly are customer-centric talk about customer experience internally exponentially more than they talk about it externally. Strategic and tactical discussions always center around improvements for the customer. These organizations measure the success of their businesses by metrics that represent the perceptions and voices of their customers. They spend a lot of time and effort ensuring these voice of customer metrics are credible, reliable and accurate, and they focus on them incessantly. These metrics are the first metrics that are discussed in weekly staff meetings from the executive level to the front line level. Bonuses are driven by these metrics, too, but the regular discussion of the voice of customer metrics and the drive to improve the experience on a daily basis is what separates customer-centric organizations from companies that discuss sales first and customer metrics later, if ever.

Are these attributes ideals for a perfect world that aren’t rooted in reality? I don’t think so. Organizations such as Google, Zappos and Southwest Airlines attribute their success to such thinking, and based on some of my experiences with them they seem to be living up to the promise. Is it easy? No way. While earning loyalty may not yield the immediate sales results buying loyalty can, the longer term efficiencies gained through providing great customer experiences can more than make up for the difference.

Those are my observations about customer-centric cultures. But as I said at the beginning of this post, I am not an expert. I’m very curious to hear from you.

What are your observations about customer-centric cultures? Have you worked for such an organization? Did true customer-centricity ultimately lead to solid financial results? What would you add to the keys I’ve listed?

(By the way, this is the first time I’ve had a reader submitted topic for discussion, but I would love to have more. Please email me at kevin.ertell@yahoo.com if you’ve got a topic that would be good for discussion in this space.)

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had regarding the post sparked some great discussions about the difficulties in achieving FAME. Initially, the focus of those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin in the beginning. Hire analysts not reporters.
    This isn’t a slam on reporters; it’s just recognition that the mindset and skill set needed for gathering and reporting on data are different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need a strong left-brain orientation, while analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than using those moments as opportunities to get on the same page with each other. Web Analytics Demystified‘s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved about what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of using averages in particular situations, particularly when stacking them on top of each other (see the short simulation after this list). Just the first chapter of the book The Flaw of Averages does an excellent job explaining the general problems.
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time has passed from our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie with, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns; on a more positive note, it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the heads of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real-time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As we stated in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matters. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case, and every point prior to this one comes into play.

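On the averages point in #7, here’s a minimal simulation (with made-up demand numbers) of the core problem The Flaw of Averages describes: when outcomes aren’t symmetric, the profit at the average demand is not the average profit.

```python
import random

random.seed(42)

# Hypothetical scenario: demand averages 100 units, we stock 100 units,
# each unit sold earns $5, and unsold stock is worthless.
stock = 100
unit_profit = 5.0

def profit(demand):
    # We can't sell more than we stocked (and never fewer than zero units)
    return unit_profit * max(0.0, min(demand, stock))

profit_at_average_demand = profit(100)  # planning on the average alone: $500

# Simulate the actual spread of demand around that average
demands = [random.gauss(100, 30) for _ in range(100_000)]
average_profit = sum(profit(d) for d in demands) / len(demands)

print(f"Profit at average demand: ${profit_at_average_demand:.0f}")  # $500
print(f"Average profit:           ${average_profit:.0f}")            # roughly $440
```
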
The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our business decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

The Missing Links in the Customer Engagement Cycle

The Customer Engagement Cycle plays a central role in many marketing strategies, but it’s not always defined in the same way. Probably the most commonly described stages are Awareness, Consideration, Inquiry, Purchase and Retention. In retail, we often think of the cycle as Awareness, Acquisition, Conversion, Retention. In either case, I think there are a couple of key stages that do not receive enough consideration given their critical ability to drive the cycle.

The missing links are Satisfaction and Referral.

Before discussing these missing links, let’s take a quick second to define the other stages:

Awareness: This is basic branding and positioning of the business. We certainly can’t progress people through the cycle before they’ve even heard of us.

Acquisition: I’ve always thought of this as getting someone into our doors or onto our site. It’s a major step, but it’s not yet profitable.

Conversion: This one is simply defined as making a sale. Woo hoo! It may or may not be a profitable sale on its own, but it’s still a significant stage in the cycle.

Retention: We get them to shop with us again. Excellent! Repeat sales tend to be more profitable and almost certainly have lower marketing costs than first purchases.

Now, let’s get to those Missing Links

In my experience, the key to a strong and active customer engagement cycle is a very satisfying customer experience. And while the Wikipedia article on Customer Engagement doesn’t mention Satisfaction as often as I would like, it does include this key statement: “Satisfaction is simply the foundation, and the minimum requirement, for a continuing relationship with customers.”

In fact, I think the quality of the customer experience is so important that I would actually inject it multiple times into the cycle: Awareness, Acquisition, Satisfaction, Conversion, Satisfaction, Retention, Satisfaction, Referral.

Of course, it’s possible to get through at least some of the stages of the cycle without an excellent customer experience. People will soldier through a bad experience if they want the product badly enough or if there’s an incredible price. But it’s going to be a lot harder to retain that type of customer, and if you get a referral, it might not be the type of referral you want.

I wonder if Satisfaction and Referral are often left out of cycle strategies because they are the stages most out of marketers’ control.

A satisfying customer experience is not completely in the marketer’s control. For sure, marketing plays a role. A customer’s satisfaction can be defined as the degree to which her actual experience measures up to her expectations. Our marketing messages are all about expectations, so it’s important that we are compelling without over-hyping the experience. And certainly marketers can influence policy decisions, website designs, etc. to help drive better customer experiences.

In the end, though, the actual in-store or online experience will determine the strength of the customer engagement.

Everyone plays a part in the satisfaction stages. Merchants must ensure advertised product is in stock and well positioned. Store operators must ensure the stores are clean, the product is available on the sales floor and the staff are friendly, enthusiastic and helpful. The e-commerce team must ensure advertised products can be easily found, the site is performing well, product information is complete and useful, and the products are shipped on time and in good condition.

We also have to ensure our incentives and metrics are supporting a quality customer experience, because the wrong metrics can incent the wrong behavior. For example, if we measure an online search engine marketing campaign by the number of visitors generated or even the total sales generated, we can absolutely end up going down the wrong path. We can buy tons of search terms that by their sheer volume will generate lots of traffic and some degree of increased sales. But if those search terms link to the home page or some other page that is largely irrelevant to the search term, the experience will likely be disappointing for the customer who clicked through.

In fact, I wrote a white paper a few months ago, Online Customer Acquisition: Quality Trumps Quantity, that delved into customer experience by acquisition source for the Top 100 Internet Retailers. We found that those who came via external search engines were among the least satisfied customers of those sites with the least likelihood to purchase and recommend. Not good. These low ratings could largely be attributed to the irrelevance of the landing pages from those search terms.

Satisfaction breeds Referral

Referrals or Recommendations are truly wonderful. As I wrote previously, the World’s Greatest Marketers are our best and most vocal customers. They are more credible than we’ll ever be, and the cost efficiencies of acquisition through referral are significantly better than our traditional methods of awareness and acquisition marketing. In my previously mentioned post, I discussed some ways to help customers along on the referral path. But, of course, customers can be pretty resourceful on their own.

We’ve all seen blog posts, Facebook posts or tweets about bad customer experiences. But plenty of positive public commentary can also be found. Target’s and Gap’s Facebook walls have lots of customers expressing their love for those brands. Even more powerful are blog posts some customers write about their experiences. I came across a post yesterday entitled Tales of Perfection that related two excellent experiences the blogger had with Guitar Center and a burger joint called Arry’s. Both stories are highly compelling and speak to the excellent quality of the employees at each business. Nice!

————————————————–

Developing a business strategy, not just a marketing strategy, around the customer engagement cycle can be extremely powerful. It requires the entire company to get on board to understand the value of maximizing the customer experience at every touch point with the customer, and it requires a set of incentives and metrics that fully support strengthening the cycle along the way.

What do you think? How do you think about the customer engagement cycle? How important do you feel the customer experience is in strengthening the cycle? Or do you think this is all hogwash?


Wanna be better with metrics? Watch more poker and less baseball.

Both baseball and poker have been televising their World Series championships, and announcers for both frequently describe strategies and tactics based on the statistics of the games. Poker announcers base their commentary and discussion on the probabilities associated with a small number of key metrics, while baseball announcers barrage us with numbers that sound meaningful but that are often pure nonsense.

Similarly, today’s web analytics give us the capability to track and report data on just about anything, but just because we can generate a number doesn’t mean that number is meaningful to our business. In fact, reading meaning into meaningless numbers can cause us to make very bad decisions.

Don’t get me wrong, I am a huge believer in making data-based decisions, in baseball, poker, and on our websites. But making good decisions is heavily dependent on using the right data and seeing the data in the right light. I sometimes worry that constant exposure to sports announcers’ misreading and misappropriation of numbers is actually contributing to a misreading and misunderstanding of numbers in our business settings.

Let’s consider a couple of examples of misreading and misappropriating numbers that have occurred in baseball over the last couple of weeks:

  1. Selection bias
    This one is incredibly common in the world of sports and nearly as common in business. Recently, headlines here in Detroit focused on the Tigers “choking” and blowing a seven-game lead with only 16 games to go. In a recent email exchange on this topic, my friend Chris Eagle pointed out the problems with the sports announcers’ hyperbole:

    “They’re picking the high-water mark for the Tigers in order to make their statement look good.  If you pick any other random time frame (say end-of-August, which I selected simply because it’s a logical break point), the Tigers were up 3.5 games.  But it doesn’t look like much of a choke if you say the Tigers lost a 3.5 game lead with a month and change to go.”

    Unfortunately, this type of analysis error occurs far too often in business. We might find that our weekend promotions are driving huge sales over the last six months, which sounds really impressive until we notice that non-sale days have dropped significantly as we’ve just shifted our business to days when we are running promotions (which may ultimately mean we’ve reduced our margins overall by selling more discounted product and less full-price merchandise).

    In a different way, Dennis Mortensen addressed the topic in his excellent blog post “The Recency Bias in Web Analytics,” where he points out the tendency to give undue weight to more recent numbers. He included a strong example about the problems of dashboards that lack context. Dashboards with gauges look really cool but are potentially dangerous as they are only showing metrics from a very short period of time. Which leads me to…

  2. Inconsistency of averages over short terms
    Baseball announcers and reporters can’t get enough of this one. Consider this article on the Phillies’ Ryan Howard after Game 3 of the World Series that includes, “Ryan Howard‘s home run trot has been replaced by a trudge back to the dugout. The Phillies’ big bopper has gone down swinging more than he’s gone deep…He’s still 13 for 44 overall in the postseason (.295) but only 2 for 13 (.154) in the World Series.” Actually, during the length of the season, he had three times as many strikeouts as home runs, so his trudges back to the dugout seem pretty normal. And the problem with the World Series batting average stat is the low sample size. A sample of thirteen at-bats is simply too small to match against his season-long average of .279. Do different pitchers or the pressures of the situation have an effect? Maybe, but there’s nothing in the data to support such a conclusion. Segmenting by pitcher or “postseason” suffers from the same small-sample-size problems, where the margin of error expands significantly. Furthermore, and this is really key, knowing an average without knowing the variability of the original data set is incomplete and often misleading.

    These problems with variability and sample sizes arise frequently in retail analysis when we either run a test with too small a sample size and assume we can project it to the rest of the business, or we run a properly sized test but assume we’ll automatically see those same results in the first day of a full application of the promotion. Essentially, the latter point is what is happening with Ryan Howard in the postseason. We often hear the former as well when a player is all of a sudden crowned a star because he outperforms his season averages over a few games in the postseason.

    In retail, we frequently see this type of issue when we’re comparing something like the average order value of two different promotions or two variations in an A/B test. Say we’ve run an A/B test of two promotions. Over 3,100 iterations of test A, we have an average order size of $31.68. And over 3,000 iterations of test B, we have an average order size of $32.15. So, test B is the clear winner, right? Wrong. It turns out there is a lot more variability in test B, which has a standard deviation of 11.37 compared with test A’s standard deviation of 7.29. As a result, the margin of error on the comparison expands to +/- 48 cents, which means the difference between the two averages falls within the margin of error, and we cannot say with 95% confidence that there is any real difference between the tests. Therefore, it would be a mistake to project an increase in transaction size if we went with test B. (See the short sketch after this list for the calculation.)

    Check out that example using this simple calculator created by my fine colleagues at ForeSee Results and play around with your own scenarios: Download Test difference between two averages.

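For anyone who wants to check the A/B math from #2 without the spreadsheet, here’s a minimal sketch of the same margin-of-error calculation (using the standard formula for the difference between two independent means and 1.96 for 95% confidence):

```python
import math

# Figures from the A/B example above
n_a, mean_a, sd_a = 3100, 31.68, 7.29
n_b, mean_b, sd_b = 3000, 32.15, 11.37

# Standard error of the difference between two independent means
se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
margin = 1.96 * se        # 95% confidence level
diff = mean_b - mean_a

print(f"Difference: ${diff:.2f}, margin of error: +/- ${margin:.2f}")
# Difference: $0.47, margin of error: +/- $0.48 -> not statistically significant
```
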
Poker announcers don’t seem to fall into all these statistical traps. Instead, they focus on a few key metrics like the number of outs and the size of the pot to discuss strategies for each player based largely on the probability of success in light of the risks and rewards of a particular tactic. Sure, there are intangibles like “poker tells” that occur, but even those are considered in light of the statistical probabilities of a particular situation.
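
To illustrate that kind of thinking, here’s a small sketch of the classic outs-and-pot-odds calculation (a textbook flush-draw scenario with made-up pot sizes, not tied to any particular broadcast):

```python
# Flush draw on the flop: 9 outs among 47 unseen cards, two cards to come
outs = 9
p_hit = 1 - ((47 - outs) / 47) * ((46 - outs) / 46)  # 1 - P(miss turn AND river)

pot = 100         # chips already in the pot
bet_to_call = 20  # cost of continuing in the hand
break_even = bet_to_call / (pot + bet_to_call)  # pot odds as a probability

print(f"P(hit draw): {p_hit:.1%} vs. break-even: {break_even:.1%}")
# ~35.0% chance to hit vs. 16.7% break-even -> calling is profitable
```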

Retail is certainly more complicated than poker, and the number of potential variables to deal with is immense. However, we can be much more prepared to deal with the complexities of our situations if we take a little more time to view our metrics in the right light. Our data-driven decisions can be far more accurate if we ensure we’re looking at the full data set, not a carefully selected subset, and we take the extra few minutes to understand the effects of variability on averages we report. A little extra critical thinking can go a long way.

What do you think? Are there better ways to analyze key metrics at your company? Do you consider variability in your analyses? Do you find the file for testing two averages useful?



Related posts:

How retail sales forecasts are like baby due dates

Are web analytics like 24-hour news networks

True conversion – the on-base percentage of web analytics

How the US Open was like a retail promotion analysis

The Right Metrics: Why keeping it simple may not work for measuring e-retail performance (Internet Retailer article)
