Posts tagged: Analysis

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had about it sparked great discussions about the difficulties in achieving FAME. Initially, those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin at the beginning: hire analysts, not reporters.
    This isn’t a slam on reporters; it’s just recognition that the mindset and skill set needed for gathering and reporting on data is different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need strong left-brain orientation, while analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than using those moments as opportunities to get on the same page with each other. Web Analytics Demystified’s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved about what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of averages and of using them in particular situations, particularly when stacking them on top of each other. Just the first chapter of the book The Flaw of Averages does an excellent job explaining the general problems.
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time has passed since our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie with, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns, but on a more positive note it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the heads of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As noted in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matters. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case, and every point prior to this one comes into play.

The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our business decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

How to achieve FAME in analysis

In retail, and in web retail in particular, we are drowning in data. We can and do track just about everything, and we’re constantly poring over the numbers. But I sometimes worry that the abundance of data is so overwhelming that it often leads to a shortage of insight. All that data is worthless (or worse) if we don’t produce thoughtful analysis and then carefully craft communication of our findings in ways that enable decision makers to react to the data rather than try to analyze it themselves.

The most effective analyses I’ve seen have remarkably similar attributes, and they happen to work into a nice, easy-to-remember acronym — F.A.M.E.

Here, in my experience, are the keys to achieving FAME in analysis:

Focused

Any finding should be fact based and clear enough that it can be stated in a succinct format similar to a newspaper headline. It’s OK to augment the main headline with a sub-headline that adds further clarification, but anything more complicated is not nearly focused enough to be an effective finding.

For example, an effective finding might be, “Visitors arriving from Google search terms are converting 23% lower than visitors arriving from email.” An accompanying sub-heading might further clarify the statement with something like, “Unclear value proposition, irrelevant landing pages and high first time visitor counts are contributing factors.”

All subsequent data presented should support these headlines. Any data that is interesting but irrelevant to the finding should be excluded from the analysis. In other words, remove the clutter so the main points are as clear as possible.

Actionable

Effective findings and their accompanying recommendations are specific enough in focus and narrow enough in scope that decision makers can reasonably develop a plan of action to address them. The finding mentioned above regarding Google search visitors fits the bill, and a recommendation that focuses on modifying landing pages to match search terms would be appropriate. Less appropriate would be a vague finding like “customers coming from Google search terms are viewing more pages than customers coming from email campaigns” accompanied by an equally vague recommendation to “consider ways to reduce pages clicked by Google search campaign visitors.” Is viewing more pages good or bad? Why? The recommendation in this case insinuates that it’s bad, but it’s not clear why. What’s the benefit of taking action in quantifiable terms?

Truly actionable analysis doesn’t burden decision makers with connecting the data to executable conclusions. In other words, the thought put into the analysis should make the diagnosis of problems clear so that decision makers can get to work on determining necessary solutions.

Manageable

The number of findings in any set of analyses should be contained enough that the analyst and anyone in the audience can recite the findings and recommendations (but not all the supporting details) in 30 seconds. Sometimes, less is more. This constraint helps ease the subsequent communication that will be necessary to reasonably react to the findings and plan and execute a response. Conversely, information overload obscures key messages and makes it difficult for teams to coalesce around key issues.

Enlightening

Last, but most certainly not least, effective findings are enlightening. Effective analyses should present — and support with clear, credible data — a view of the business that is not widely held. They should, at the very least, elicit a “hmmm…” from the audience and ideally a “whoa!” They should excite decision makers and spur them to action.

————————————–

The FAME attributes are not always easy to achieve. They require a lot of hard thought, but the value of clear, data-supported insight to an organization is immense.

The most effective analysts I’ve seen achieve FAME on a regular basis. They have a thorough understanding of the business’ objectives, and they develop their insights to help decision makers truly understand what’s working and what’s not working. And then they lay out clear opportunities for improvement. That’s data-driven business management at its best.

What do you think? What attributes do you find key in effective analyses?

Why most sales forecasts suck…and how Monte Carlo simulations can make them better

Sales forecasts don’t suck because they’re wrong. They suck because they try to be too right. They create an impossible illusion of precision that ultimately does a disservice to the managers who need accurate forecasts for planning. Even meteorologists — who are scientists with tons of historical data, incredibly high powered computers and highly sophisticated statistical models — can’t forecast with the precision we retailers attempt. And we don’t have nearly the data, the tools or the models meteorologists have.

Luckily, there’s a better way. Monte Carlo simulations run in Excel can transform our limited data sets into statistically valid probability models that give us a much more accurate view into the future. And I’ve created a model you can download and use for yourself.

There are literally millions of variables involved in our weekly sales, and we clearly can’t manage them all. We focus on the few significant variables we can affect as if they are 100% responsible for sales, but they’re not, and they’re not 100% reliable either.

Monte Carlo simulations can help us emulate real world combinations of variables, and they can give us reliable probabilities of the results of combinations.

But first, I think it’s helpful to provide some background on our current processes…

We love our numbers, but we often forget some of the intricacies about numbers and statistics that we learned along the way. Most of us grew up not believing a poll of 3,000 people could predict a presidential election. After all, the pollsters didn’t call us. How could the opinions of 3,000 people predict the opinions of 300 million people?

But then we took our first statistics classes. We learned all the intricacies of statistics. We learned about the importance of properly generated and significantly sized random samples. We learned about standard deviations and margins of errors and confidence intervals. And we believed.

As time passed, we moved on from our statistics classes and got into business. Eventually, we started to forget a lot about properly selected samples, standard deviations and such and we just remembered that you can believe the numbers.

But we can’t just believe any old number.

All those intricacies matter. Sample size matters a lot, for example. Basing forecasts, as we often do, on limited sets of data can lead to inaccurate forecasts.
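To see why those intricacies matter, the standard margin-of-error calculation behind a 3,000-person poll can be sketched in a few lines. This is a simplified illustration, assuming a 95% confidence level and the worst-case proportion of 50%:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n random responses."""
    return z * math.sqrt(p * (1 - p) / n)

# A properly drawn sample of 3,000 carries roughly a +/-1.8-point margin of
# error -- regardless of whether the population is 300 thousand or 300 million.
print(round(margin_of_error(3000) * 100, 1))  # 1.8
```

Note that the population size doesn't appear in the formula at all; only the sample size does. That's why 3,000 well-chosen respondents can speak for 300 million people, and why a forecast built on a handful of data points can't.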

Here’s a simplified explanation of how most retailers that I know develop sales forecasts:

  1. Start with base sales from last year for the same time period you’re forecasting (separating out promotion-driven sales)
  2. Apply the current sales trend (which is maybe determined by an average of the previous 10 week comps). This method may vary from retailer to retailer, but this is the general principle.
  3. Look at previous iterations of the promotions being planned for this time period. Determine the incremental revenue produced by those promotions (potentially through comparisons to control groups). Average the incremental results of previous iterations of the promotion, and add that average to the amount determined in steps 1 and 2.
  4. Voilà! This is the sales forecast.
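The steps above boil down to a single-number calculation. Here's a minimal sketch, with hypothetical function names and figures purely for illustration:

```python
def point_forecast(base_sales, recent_comps, past_promo_lifts):
    """Naive single-number forecast: last year's base, adjusted by the
    average recent trend, plus the average lift of prior promo runs."""
    trend = sum(recent_comps) / len(recent_comps)              # e.g. avg of 10 weekly comps
    promo_lift = sum(past_promo_lifts) / len(past_promo_lifts)  # avg incremental revenue
    return base_sales * (1 + trend) + promo_lift

# Hypothetical inputs: $500k base, comps averaging +2%, promos averaging +$40k
print(round(point_forecast(500_000, [0.01, 0.02, 0.03], [35_000, 45_000]), 2))  # 550000.0
```

The output looks authoritative down to the dollar, which is exactly the problem: nothing in the calculation says anything about how likely that number actually is.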

Of course, this number is impossibly precise and the analysts who generate it usually know that. However, those on the receiving end tend to assume it is absolutely accurate and the probability of hitting the forecast is close to 100% — a phenomenon I discussed previously when comparing sales forecasts to baby due dates.

As most of us know from experience, actually hitting the specific forecast almost never happens.

We need accuracy in our forecasts so that we can make good decisions, but unjustified precision is not accuracy. It would be far more accurate to forecast a range of sales with accompanying probabilities. And that’s where the Monte Carlo simulation comes in.

Monte Carlo simulations

Several excellent books I read in the past year (The Drunkard’s Walk, Fooled by Randomness, The Flaw of Averages, and Why Can’t You Just Give Me the Number?) all promoted the wonders of Monte Carlo simulations (and Sam Savage of The Flaw of Averages even has a cool Excel add-in). As I read about them, I couldn’t help but think they could solve some of the problems we retailers face with sales forecasts (and ROI calculations, too, but that’s a future post). So I finally decided to try to build one myself. I found an excellent free tutorial online and got started. The result is a file you can download and try for yourself.

A Monte Carlo simulation might be most easily explained as a “what if” model and sensitivity analysis on steroids. Basically, the model allows us to feed in a limited set of variables about which we have some general probability estimates and then, based on those inputs, generate a statistically valid set of data we can use to run probability calculations for a variety of possible scenarios.

It turns out to be a lot easier than it sounds, and this is all illustrated in the example file.
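For those who prefer code to spreadsheets, here's a minimal sketch of the same idea in Python. Every distribution and parameter below is an illustrative assumption, not a figure from my actual model:

```python
import random

def simulate_week(trials=100_000):
    """Monte Carlo sketch of a weekly sales forecast.

    Instead of one number per input, each input is a distribution,
    and we sample all of them together many thousands of times."""
    results = []
    for _ in range(trials):
        base = random.gauss(500_000, 25_000)               # last year's base: mean, std dev
        trend = random.gauss(0.02, 0.03)                   # weekly comp trend estimate
        promo = random.triangular(20_000, 60_000, 40_000)  # promo lift: low, high, most likely
        results.append(base * (1 + trend) + promo)
    return results

sims = sorted(simulate_week())
p10, p50, p90 = (sims[int(len(sims) * q)] for q in (0.10, 0.50, 0.90))
print(f"10th pct: {p10:,.0f}   median: {p50:,.0f}   90th pct: {p90:,.0f}")
print(f"P(week below 500k): {sum(s < 500_000 for s in sims) / len(sims):.0%}")
```

Instead of one impossibly precise number, the output is a range with probabilities attached, which is exactly what a planning conversation needs.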

The results are really what matters. Rather than producing a single number, we get probabilities for different potential sales that we can use to more accurately plan our promotions and our operations. For example, we might see that our base business has about a 75% chance of being negative, so we might want to amp up our promotions for the week in order to have a better chance of meeting our growth targets. Similarly, rather than reflexively “anniversarying” promotions, we can easily model the incremental probabilities of different promotions to maximize both sales and profits over time.

The model allows for easily comparing and contrasting the probabilities of multiple possible options. We can use what are called probability weighted “expected values” to find our best options. Basically, rather than straight averages that can be misleading, expected values are averages that are weighted based on the probability of each potential result.
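The expected-value calculation itself is simple. A sketch, with hypothetical probabilities and outcomes:

```python
def expected_value(outcomes):
    """Probability-weighted expected value of (probability, result) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * r for p, r in outcomes)

# Hypothetical promotion: 25% chance of a big week, 50% typical, 25% weak
promo_a = [(0.25, 620_000), (0.50, 540_000), (0.25, 480_000)]
print(expected_value(promo_a))  # 545000.0
```

Running the same calculation for each candidate promotion lets us compare options on a consistent, probability-weighted basis rather than on straight (and potentially misleading) averages.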

Of course, probabilities and ranges aren’t as comfortable to us as specific numbers, and using them really requires a shift in mindset. But accepting that the future is uncertain and planning based on the probabilities of potential results puts us in the best possible position to maximize those results. Understanding the range of possible results allows for better and smarter planning. Sometimes, the results will go against the probabilities, but consistently making decisions based on probabilities will ultimately earn the best results over time.

One of management’s biggest roles is to guide our businesses through uncertain futures. As managers and executives, we make the decisions that determine the directions of our companies. Let’s ensure we’re making our decisions based on the best and most accurate information — even if it’s not the simplest information.

What do you think? What issues have you seen with sales forecasts? Have you tried my example? How did it work for you?

Wanna be better with metrics? Watch more poker and less baseball.

Both baseball and poker have been televising their World Series championships, and announcers for both frequently describe strategies and tactics based on the statistics of the games. Poker announcers base their commentary and discussion on the probabilities associated with a small number of key metrics, while baseball announcers barrage us with numbers that sound meaningful but that are often pure nonsense.

Similarly, today’s web analytics give us the capability to track and report data on just about anything, but just because we can generate a number doesn’t mean that number is meaningful to our business. In fact, reading meaning into meaningless numbers can cause us to make very bad decisions.

Don’t get me wrong, I am a huge believer in making data-based decisions, in baseball, poker, and on our websites. But making good decisions is heavily dependent on using the right data and seeing the data in the right light. I sometimes worry that constant exposure to sports announcers’ misreading and misappropriation of numbers is actually contributing to a misreading and misunderstanding of numbers in our business settings.

Let’s consider a couple of examples of misreading and misappropriating numbers that have occurred in baseball over the last couple of weeks:

  1. Selection bias
    This one is incredibly common in the world of sports and nearly as common in business. Recently, headlines here in Detroit focused on the Tigers “choking” and blowing a seven-game lead with only 16 games to go. In a recent email exchange on this topic, my friend Chris Eagle pointed out the problems with the sports announcers’ hyperbole:

    “They’re picking the high-water mark for the Tigers in order to make their statement look good.  If you pick any other random time frame (say end-of-August, which I selected simply because it’s a logical break point), the Tigers were up 3.5 games.  But it doesn’t look like much of a choke if you say the Tigers lost a 3.5 game lead with a month and change to go.”

    Unfortunately, this type of analysis error occurs far too often in business. We might find that our weekend promotions are driving huge sales over the last six months, which sounds really impressive until we notice that non-sale days have dropped significantly as we’ve just shifted our business to days when we are running promotions (which may ultimately mean we’ve reduced our margins overall by selling more discounted product and less full-price merchandise).

    In a different way, Dennis Mortensen addressed the topic in his excellent blog post “The Recency Bias in Web Analytics,” where he points out the tendency to give undue weight to more recent numbers. He included a strong example about the problems of dashboards that lack context. Dashboards with gauges look really cool but are potentially dangerous as they are only showing metrics from a very short period of time. Which leads me to…

  2. Inconsistency of averages over short terms
    Baseball announcers and reporters can’t get enough of this one. Consider this article on the Phillies’ Ryan Howard after Game 3 of the World Series that includes, “Ryan Howard’s home run trot has been replaced by a trudge back to the dugout. The Phillies’ big bopper has gone down swinging more than he’s gone deep…He’s still 13 for 44 overall in the postseason (.295) but only 2 for 13 (.154) in the World Series.” Actually, over the course of the season, he had three times as many strikeouts as home runs, so his trudges back to the dugout seem pretty normal. And the problem with the World Series batting average stat is the low sample size. A sample of thirteen at bats is simply too small to match against his season-long average of .279. Do different pitchers or the pressures of the situation have an effect? Maybe, but there’s nothing in the data to support such a conclusion. Segmenting by pitcher or “postseason” suffers from the same small sample size problems, where the margin of error expands significantly. Furthermore, and this is really key, knowing an average without knowing the variability of the underlying data set is incomplete and often misleading.

    These problems with variability and sample size arise frequently in retail analysis when we either run a test with too small a sample size and assume we can project it to the rest of the business, or we run a properly sized test but assume we’ll automatically see those same results in the first day of a full application of the promotion. Essentially, the latter is what is happening with Ryan Howard in the postseason. We often hear the former as well when a player is all of a sudden crowned a star because he outperforms his season averages over a few games in the postseason.

    In retail, we frequently see this type of issue when we’re comparing something like average order value of two different promotions or two variations in an A/B test. Say we’ve run an A/B test of two promotions. Over 3,100 iterations of test A, we have an average order size of $31.68. And over 3,000 iterations of test B, we have an average order size of $32.15. So, test B is the clear winner, right? Wrong. It turns out there is a lot more variability in test B, which has a standard deviation of 11.37 compared with test A’s standard deviation of 7.29. As a result, the margin of error on the comparison expands to +/- 48 cents, which means the difference falls within the margin of error and we can’t conclude with 95% confidence that there is any real difference between the tests. Therefore, it would be a mistake to project an increase in transaction size if we went with test B.

    Check out that example using this simple calculator created by my fine colleagues at ForeSee Results and play around with your own scenarios. Download Test difference between two averages.
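If you'd rather see the arithmetic directly, the margin of error for the difference between two independent averages works out like this (a standard two-sample calculation at 95% confidence, using the numbers from the example above):

```python
import math

def diff_margin_of_error(sd_a, n_a, sd_b, n_b, z=1.96):
    """95% margin of error for the difference between two independent means."""
    return z * math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)

moe = diff_margin_of_error(7.29, 3100, 11.37, 3000)
diff = 32.15 - 31.68
print(f"difference: {diff:.2f}, margin of error: +/-{moe:.2f}")
print("significant at 95%?", diff > moe)  # False -- the two tests are a wash
```

Note how test B's larger standard deviation widens the margin of error enough to swallow the 47-cent difference; with less variable data, the same difference might well have been significant.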

Poker announcers don’t seem to fall into all these statistical traps. Instead, they focus on a few key metrics like the number of outs and the size of the pot to discuss strategies for each player based largely on the probability of success in light of the risks and rewards of a particular tactic. Sure, there are intangibles like “poker tells” that occur, but even those are considered in light of the statistical probabilities of a particular situation.

Retail is certainly more complicated than poker, and the number of potential variables to deal with is immense. However, we can be much more prepared to deal with the complexities of our situations if we take a little more time to view our metrics in the right light. Our data-driven decisions can be far more accurate if we ensure we’re looking at the full data set, not a carefully selected subset, and we take the extra few minutes to understand the effects of variability on averages we report. A little extra critical thinking can go a long way.

What do you think? Are there better ways to analyze key metrics at your company? Do you consider variability in your analyses? Do you find the file to test two averages useful?



Related posts:

How retail sales forecasts are like baby due dates

Are web analytics like 24-hour news networks?

True conversion – the on-base percentage of web analytics

How the US Open was like a retail promotion analysis

The Right Metrics: Why keeping it simple may not work for measuring e-retail performance (Internet Retailer article)

Are web analytics like 24-hour news networks?

We have immediate access to loads of data with our web sites, but just because we can access lots of data in real time doesn’t mean we should access our data in real time. In fact, accessing and reporting on the numbers too quickly can often lead to distractions, false conclusions, premature reactions and bad decisions.

I was attending the web-analytics-focused Semphonic X Change conference last week in San Francisco (which, by the way, was fantastic) where lots of discussion centered around both the glories and the issues associated with the mass amount of data we have available to us in the world of the web.

Before heading down for the conference breakfast Friday morning (September 11), I switched on CNN and saw — played out in all their glory on national TV — the types of issues that can occur with reporting too early on available data.

It seems CNN reporters “monitoring video” from a local TV station saw Coast Guard vessels in the Potomac River apparently trying to keep another vessel from passing. They then monitored the Coast Guard radio and heard someone say, “You’re approaching a Coast Guard security zone. … If you don’t stop your vessel, you will be fired upon. Stop your vessel immediately.” And, for my favorite part of the story, they made the decision to go on air when they heard someone say “bang, bang, bang, bang” and “we have expended 10 rounds.” They didn’t hear actual gun shots, mind you, they heard someone say “bang.” Could this be a case of someone wanting the data to say something it isn’t really saying?

In the end, it turned out the Coast Guard was simply executing a training exercise it runs four times a week! Yet CNN’s premature, erroneous and nationally broadcast report distracted Coast Guard and White House leadership, sent FBI agents to the waterfront unnecessarily, and grounded planes at Washington National airport for 22 minutes. It also prompted reactionary demands from law enforcement agencies that they be alerted of such exercises in the future, even though the exercises run four times per week and those alerts will likely be ignored once they become routine.

In the days when we only got news nightly, reporters would have chased down the information, discovered it was a non-issue, and the report would never have aired. The 24-hour networks have such a need for speed that they’ve sacrificed accuracy and credibility.

Let’s not let such a rush negatively affect our businesses.

Later that same day, I was attending a conference discussion on the role of web analytics in site redesigns. Several analysts in the room mentioned their frustration at being asked by executives for a report on how a new design was doing only a couple of hours after its launch. They wanted to provide solid insight, but they knew they couldn’t deliver anything reliable so soon.

Even though a lot of data is already available a couple of hours in, that data lacks the context necessary to start drawing conclusions.

For one, most site redesigns experience an initial dip in key metrics as regular customers adjust to a new look and feel. In the physical retail world, we used to call this the “Where’s my stuff?” phenomenon. But even if we set the initial dip aside, there are far too many variables at play in the short term to make any reliable assessment of the new design’s effectiveness. As with any short-term measurement, the possibility that random outliers will unnaturally sway the measurement in one direction or the other is high. It takes time and an accumulation of data to be sure we have a reliable story to tell.
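To make the small-sample problem concrete, here’s a quick simulation sketch. All of the numbers are hypothetical: it assumes a site whose “true” conversion rate is 5%, compares five “early reads” of 200 visits each against five readings of 20,000 accumulated visits, and shows how wildly the early reads can swing on pure chance.

```python
import random

random.seed(42)

TRUE_CONVERSION = 0.05  # hypothetical "true" conversion rate of the new design


def observed_rate(visits):
    # Simulate `visits` independent visitors, each converting
    # with probability TRUE_CONVERSION, and report the observed rate.
    conversions = sum(random.random() < TRUE_CONVERSION for _ in range(visits))
    return conversions / visits


# Five "early reads" a couple of hours in vs. five readings after
# weeks of accumulated traffic (both sizes are illustrative).
early = [observed_rate(200) for _ in range(5)]
mature = [observed_rate(20_000) for _ in range(5)]

print("early reads: ", [f"{r:.1%}" for r in early])
print("mature reads:", [f"{r:.1%}" for r in mature])
```

Run it a few times with different seeds and the early reads bounce around the true rate by whole percentage points, while the mature readings cluster tightly around 5% — exactly why a two-hour-old report can’t support a verdict on a redesign.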

And even with time, web data collection is not perfect. Deleted cookies, missed connections, etc. can all cause some problems in the overall completeness of the data. For that matter, I’ve rarely seen the perfect set of data in any retail environment. Given the imperfect nature of the data we’re using to make key strategic decisions, we need to give our analysts time to review it, debate it and come to reasoned conclusions before we react.

I realize the temptation is strong to get an “early read” on the progress of a new site design (or any strategic issue, really). I’ve certainly felt it myself on many occasions. However, since just about every manager and executive I know (including myself) has a strong bias for action, we have to be aware of the risks associated with these “early reads” and honest about our own tendency to draw conclusions and react immediately. Early reads can lead to the bad decisions associated with the full accelerator/full brake syndrome I’ve referenced previously.

We can spend months or even years preparing for a massive new strategic effort and strangle it within days by overreacting to early data. Instead, I wonder if it’s better to determine, well in advance of the launch, when we’re thinking more rationally and the temptation to know something is low, when we’ll first analyze the success of our new venture. Why not make such reporting part of the project plan and publicly set expectations about when we’ll review the data and what types of adjustments we should plan to make based on what we learn?

In the end, let’s have our analysts strive for the credibility of the old nightly news rather than emulate the speed and rush to judgment that too often occur in this era of 24-hour news. Our businesses and our strategies are too important, and have taken too long to build, to sacrifice them to a short-term need for speed.

What do you think? Have you seen this issue in action? How do you deal with the balance between quick information and thoughtful analysis?

Photo credit: Wikimedia Commons




Retail: Shaken Not Stirred by Kevin Ertell