How the US Open was like a retail promotion analysis

Last week’s US Open golf tournament had a surprise leader going into the final round in Ricky Barnes, who came out of relative obscurity to record the best 36-hole score in US Open history, beating out golf greats like Tiger Woods and Phil Mickelson.

The media played it up, talking about Barnes as finally coming into his own and really blossoming. Was he the next big golf star? From CBS Sports:

“Until this week at the 109th U.S. Open, keeping up with the big boys has always been a difficult prospect (for Barnes), full of disappointment, figurative bloody noses and scabby knees.

‘I know he hates losing,’ brother Andy said. ‘Maybe because he did a lot of it when he was younger.’

And more than a bit as a young adult, too, which is what made his record-setting start at Bethpage Black all the more surprising. In a field full of the household names with whom Barnes has been so desperately trying to compete, he’s finally atop the leaderboard.”

As Barnes faltered during the final rounds, the story became all about who was handling the pressure well and who wasn’t. Barnes’ scores worsened significantly over the final two rounds while the bigger names improved.

Was it the pressure? I would argue that what we really saw was what statisticians call regression toward the mean (or average). Basically, Woods and Mickelson began the tournament with rounds that were well below their usual standards, but with each round they scored closer to what they would normally be expected to score. Barnes did essentially the opposite. Measured continuously over time, Tiger Woods is still clearly the world’s top golfer, as the world golf rankings show: Woods is #1 and Barnes is #153.
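To see how regression toward the mean can manufacture a surprise leader, here’s a small simulation. The scoring averages and the round-to-round variability below are made-up, illustrative numbers, not real statistics for either player; this is a sketch of the statistical effect, not a model of actual golf.

```python
import random

random.seed(42)

# Hypothetical per-round scoring averages (lower is better in golf);
# the numbers are illustrative only.
TRUE_MEAN = {"Woods": 69.0, "Barnes": 72.0}
SIGMA = 3.0  # round-to-round variability for both players

def simulate_round(player):
    return random.gauss(TRUE_MEAN[player], SIGMA)

# Look only at simulated tournaments where Barnes beats Woods over the
# first two rounds, then see what each player does in round 3.
barnes_late, woods_late = [], []
for _ in range(100_000):
    b12 = simulate_round("Barnes") + simulate_round("Barnes")
    w12 = simulate_round("Woods") + simulate_round("Woods")
    if b12 < w12:  # Barnes leads at the halfway mark
        barnes_late.append(simulate_round("Barnes"))
        woods_late.append(simulate_round("Woods"))

avg_b = sum(barnes_late) / len(barnes_late)
avg_w = sum(woods_late) / len(woods_late)
print(f"Barnes round 3 average after leading:  {avg_b:.1f}")
print(f"Woods  round 3 average after trailing: {avg_w:.1f}")
```

Even though we selected only the tournaments where the weaker player led at the halfway point, his later rounds revert to his own true average, and the stronger player’s do too. The early lead was luck; the later rounds reveal the underlying ability.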

So, why is this like a retail promotion analysis?
Because we retailers have a tendency to look at each short-term promotion result in isolation, draw firm conclusions and kick off immediate modifications. Come Monday morning, we’re looking to see how the weekend sale did, and we’re ready to change next weekend’s sale if this past one didn’t perform to expectations. We don’t take into account the possibility that we witnessed an outlier result that is not really indicative of the promotion’s actual effectiveness but is simply the result of random luck, good or bad. After a single test, we could be ready to declare the promotion equivalent of Ricky Barnes the world’s greatest and the promo Tiger Woods an also-ran. And the next time we run the Barnes promotion and it’s a dog, we’ll revert.

An old colleague of mine used to call this the “full accelerator, full brake” syndrome. The net effect of all this short-term measurement and immediate reaction is a steady reduction in the average effectiveness of our promotions.

Instead, we should measure the effectiveness of promotions over a much longer period of time and across many instances. Because of the huge number of variables that can affect a promotion (obvious, visible ones like weather and road construction, and less obvious, invisible ones like an unusual number of people happening to plan family picnics the same weekend and therefore not shopping as they normally would), we simply cannot count on a short-term measurement to provide the accuracy we need to make a wise decision. In the short term, sometimes a promotion will show improvement and sometimes it won’t, just as Tiger Woods does not win every golf tournament he enters. Over time, though, we will come closer and closer to determining its true value.
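The same point can be sketched with a toy simulation: a promotion with a modest true lift, measured each weekend under heavy noise from all those visible and invisible variables. Every number here is invented for illustration; the point is only how single readings behave versus a long-run average.

```python
import random

random.seed(7)

# Illustrative numbers: a promotion with a true +5% sales lift,
# observed each weekend with noise that swamps the true effect.
TRUE_LIFT = 0.05
NOISE_SD = 0.15

def observed_lift():
    """One weekend's measured lift: true effect plus random noise."""
    return TRUE_LIFT + random.gauss(0, NOISE_SD)

# A single weekend's reading can easily look great or terrible...
one_weekend = observed_lift()

# ...but a running average over many runs of the promotion
# converges on the true value.
results = [observed_lift() for _ in range(1000)]
running_avg = sum(results) / len(results)

print(f"one weekend:             {one_weekend:+.1%}")
print(f"average over 1,000 runs: {running_avg:+.1%}")
```

A single weekend can even show a negative lift for a genuinely good promotion; only the accumulated average separates the promotion’s true value from the noise.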

This requires patience and courage, which can be difficult in a fast-paced retail environment, especially for public companies. However, it will produce a lot less churn and increase efficiency and effectiveness overall. And in an economic climate where we’re trying to maximize the effectiveness of the staff we have left, less churn can go a long way.

What do you think? How are promotion analyses handled in your company? Do you measure over the long haul?


  • By Mark Evans, June 30, 2009 @ 4:38 pm

    Knee-jerk reactions to incomplete data are a pitfall for many businesses and managers. The flip side can be “the paralysis of analysis”. One of the key strengths of a great manager or executive is knowing how to find that balance – when is there enough data, coupled with solid business judgment and customer understanding, to reach a verdict and move on? What is the downside to waiting for more data? This applies to promotional analyses and many other business decisions.

  • By Kevin Ertell, June 30, 2009 @ 5:15 pm

    Excellent point, Mark. Paralysis by analysis is just as bad as making rash decisions. I completely agree it’s important to find the right balance. I would suggest that running continuous analysis, rather than simply waiting for more data, might be the proper way to balance. To keep up with the analogy, we wouldn’t rank Tiger Woods at the top of the list based on a single tournament win, but he would earn his place on the list through the effect of continued success over multiple tournaments. Might promotions similarly earn their place through repeated successes over time?

  • By Mark Schneyer, June 30, 2009 @ 11:45 pm

    This discussion reminds me of something I read on Paul Krugman’s blog yesterday,
    and the chart in the previous post. So I guess the US Open is like climate change (except we’re not reverting to the mean)…

  • By Kevin Ertell, July 1, 2009 @ 8:45 am

    Good comment, Mark. Krugman’s point is the same: that measuring continuously is a better way to pull out true trends than simply looking at short-term results. There are too many random variables in just about any measurement to be sure of the accuracy of any one reading. Thanks for pointing out Krugman’s article.

  • By Andy Orr, July 1, 2009 @ 11:59 am

    I think the knee-jerk reactions are usually a result of either a poor strategy or, more often, the lack of a strategy altogether. All too often the strategy seems to be to “drive more sales this weekend”. That’s not a strategy; it’s a desired result. A strategy is about how you intelligently expect to do that over the long run.
    When you don’t have a thoughtful strategy, it’s far too easy to change the tactics week after week, based on short-term results which, as Kevin points out, can be misleading.
    A thoughtful strategy identifies the target audience and the specific behavior you are trying to change. For example, a grocery store might have a core group of customers who shop regularly but never visit the deli or pharmacy. A thoughtful strategy to increase share of wallet from these customers might involve tactics designed to introduce them to your great pharmacy service or fresh-cut deli meats at great introductory prices.
    The success of these tactics is easier to measure (over the long term), is more likely to drive incremental purchases and is less vulnerable to knee-jerk reactions.
    With a thoughtful strategy, sometimes tactics need to be tweaked for optimal effectiveness. Outside of a thoughtful strategy, tactics tend to get thrown out altogether and you start from scratch each week.

  • By Kevin Ertell, July 2, 2009 @ 4:25 pm

    You make some fine points, Andy. It’s not only important to define a clear strategy, it’s also important to give that strategy a chance to work by measuring its effectiveness over a reasonable period of time. Along the way, tweaks could certainly be necessary, but frequent wholesale changes in direction can have a detrimental long term effect.


Retail: Shaken Not Stirred by Kevin Ertell
