Category: Analysis

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had regarding it sparked some great discussions about the difficulties in achieving FAME. Initially, those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin at the beginning. Hire analysts, not reporters.
    This isn’t a slam on reporters, it’s just recognition that the mindset and skill set needed for gathering and reporting on data is different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need strong left-brain orientation and analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than use those moments as opportunities to get on the same page with each other. Web Analytics Demystified’s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved about what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of using averages in particular situations, particularly when stacking them on top of each other. Just the first chapter of the book Flaw of Averages does an excellent job explaining the general problems.
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time has passed from our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie with, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns, but on a more positive note it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the heads of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As we stated in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matters. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case, and every point prior to this one comes into play.

The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our business decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

How to achieve FAME in analysis

In retail, and in web retail in particular, we are drowning in data. We can and do track just about everything, and we’re constantly poring over the numbers. But I sometimes worry that the abundance of data is so overwhelming that it often leads to a shortage of insight. All that data is worthless (or worse) if we don’t produce thoughtful analysis and then carefully craft communication of our findings in ways that enable decision makers to react to the data rather than try to analyze it themselves.

The most effective analyses I’ve seen have remarkably similar attributes, and they happen to work into a nice, easy-to-remember acronym — F.A.M.E.

Here, in my experience, are the keys to achieving FAME in analysis:

Focused

Any finding should be fact based and clear enough that it can be stated in a succinct format similar to a newspaper headline. It’s OK to augment the main headline with a sub-headline that adds further clarification, but anything more complicated is not nearly focused enough to be an effective finding.

For example, an effective finding might be, “Visitors arriving from Google search terms are converting 23% lower than visitors arriving from email.” An accompanying sub-heading might further clarify the statement with something like, “Unclear value proposition, irrelevant landing pages and high first time visitor counts are contributing factors.”

All subsequent data presented should support these headlines. Any data that is interesting but irrelevant to the finding should be excluded from the analysis. In other words, remove the clutter so the main points are as clear as possible.
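To make that example headline concrete, here is the arithmetic behind a finding like “converting 23% lower,” sketched in Python. The visit and order counts are entirely made up for illustration:

```python
# Hypothetical traffic and order counts by channel -- illustration only.
channels = {
    "google_search": {"visits": 50_000, "orders": 1_000},
    "email":         {"visits": 20_000, "orders": 520},
}

# Conversion rate per channel: orders / visits
rates = {name: c["orders"] / c["visits"] for name, c in channels.items()}

# Relative gap: how much lower search converts versus email
gap = 1 - rates["google_search"] / rates["email"]

print(f"search converts {gap:.0%} lower than email")  # → search converts 23% lower than email
```

Note the finding is stated as a relative gap between segments, not as two raw rates, which is exactly what makes the headline succinct.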

Actionable

Effective findings and their accompanying recommendations are specific enough in focus and narrow enough in scope that decision makers can reasonably develop a plan of action to address them. The finding mentioned above regarding Google search visitors fits the bill, and a recommendation that focuses on modifying landing pages to match search terms would be appropriate. Less appropriate would be a vague finding like “customers coming from Google search terms are viewing more pages than customers coming from email campaigns” accompanied by an equally vague recommendation to “consider ways to reduce pages clicked by Google search campaign visitors.” Is viewing more pages good or bad? Why? The recommendation in this case insinuates that it’s bad, but it’s not clear why. What’s the benefit of taking action in quantifiable terms?

Truly actionable analysis doesn’t burden decision makers with connecting the data to executable conclusions. In other words, the thought put into the analysis should make the diagnosis of problems clear so that decision makers can get to work on determining necessary solutions.

Manageable

The number of findings in any set of analyses should be contained enough that the analyst and anyone in the audience can recite the findings and recommendations (but not all the supporting details) in 30 seconds. Sometimes, less is more. This constraint helps ease the subsequent communication that will be necessary to reasonably react to the findings and plan and execute a response. Conversely, information overload obscures key messages and makes it difficult for teams to coalesce around key issues.

Enlightening

Last, but most certainly not least, effective findings are enlightening. Effective analyses should present — and support with clear, credible data — a view of the business that is not widely held. They should, at the very least, elicit a “hmmm…” from the audience and ideally a “whoa!” They should excite decision makers and spur them to action.

————————————–

The FAME attributes are not always easy to achieve. They require a lot of hard thought, but the value of clear, data-supported insight to an organization is immense.

The most effective analysts I’ve seen achieve FAME on a regular basis. They have a thorough understanding of the business’ objectives, and they develop their insights to help decision makers truly understand what’s working and what’s not working. And then they lay out clear opportunities for improvement. That’s data-driven business management at its best.

What do you think? What attributes do you find key in effective analyses?

Social, mobile and other bright, shiny objects

It’s official. Social media and mobile commerce are this year’s bright, shiny objects. I recently attended a couple of industry conferences where those two topics dominated the agendas, and the trade mags and email newsletters are full of articles on everything social and mobile.

Heck, I’ve also written a white paper and blogged about social media.

Don’t get me wrong. I think social and mobile are important opportunities for us to improve our businesses. I just don’t think we should focus on them to the exclusion of some of the core aspects of our sites and businesses that still need a lot of work.

The level of our success with any of these new technologies is going to be limited by the effectiveness of our core site capabilities and constrained by any internal organizational challenges we might have.

Here are some topics I’d love to see get a little more press and conference content time:

  • Usability
    From my vantage point at ForeSee Results, where I can see customer perceptions at many different retailers, it’s clear that our sites have not come close to solving all of our usability issues. In fact, I’ll go as far as saying improving usability is the #1 way to increase conversion. I’m currently reading a book called “The Design of Everyday Things” by Don Norman. The book was written in the ’80s (I think) so there’s no mention of websites. Instead, he talks a lot about the design of doors, faucets and other everyday objects and, most interestingly, the psychology of us humans who interact with these things. The principles he discusses are absolutely relevant to web page design. Other books, such as “Don’t Make Me Think” by Steve Krug and anything by Jakob Nielsen, are also great sources of knowledge. I’d sure love to see us cover these types of topics a little more in our conferences and trade mags. Also, how do different retailers approach finding and solving usability issues? In the end, if the experiences we create aren’t usable, our social and mobile strategies won’t reach their potential.
  • Organizational structure
    How often do we come back from a conference with great new ideas about implementing some new strategies (say, a new social media or mobile commerce strategy) only to run into competing agendas, lack of resources or organizational bureaucracies? Discussing and writing about organizational structure doesn’t have the panache of social media or other exciting new frontiers, but there’s little doubt in my mind that the structure of our organizations can make or break the success of our businesses. When we were first setting up the organization for the new Borders.com, we spent a LOT of time studying the structures of other companies, learning about the pros and the cons from those who lived through different schemes. It was hugely useful and more interesting than you might think. Mark Fodor, CEO of Cross View, just wrote an excellent piece for Online Strategies magazine that discussed the hurdles involved in going cross-channel and included a very good discussion about the need for mindset shifts. I’d love to see these topics further explored in interactive environments at industry conferences.
  • Incentives
    Books like Freakonomics make strong cases for the fact that incentives drive our behaviors. I’d love to hear how other companies set up their internal incentive structures. And there are multiple types of incentives. Certainly, there are financial incentives that come in the form of bonuses. But there are also the sometimes more powerful social incentives. What gets talked about all the time? How do those topics of discussion influence people’s behaviors? How do all those incentives align with the needs generated by new strategies to maximize the power of social media or mobile commerce?
  • Data/analytics storytelling
    We have so much data available to us, and we all talk about being data driven. But how do we get the most from that data? How do we use that data to form our strategies, support our strategies and communicate our strategies? John Lovett of Web Analytics Demystified wrote an excellent piece on telling stories with data recently. There are also several great blogs on analytics like MineThatData, Occam’s Razor, and the aforementioned Web Analytics Demystified. I’d love to see more discussions in trade mags and conferences about how to get the most from our data, both in analyzing it and relating the findings to others.
  • International expansion
    We used to talk a lot about international, but it doesn’t seem to be a big topic lately. Yet the opportunities to grow our businesses internationally are immense. So, too, are the challenges. Jim Okamura and Maris Daugherty at the JC Williams Group wrote an absolutely excellent white paper late last year on the prizes and perils of international expansion. Jim did have a breakout session at last year’s Shop.org Annual Summit, but I’d love to see more discussion from retailers who have gone or are going international. It would also be good to hear from those who simply ship internationally, or those who have decided to stay domestic, about their decision-making processes.
  • Leadership
    Leading lots of people and convincing big, disparate groups to do new things is hard. I just read the book Switch: How to Change Things When Change is Hard by Dan and Chip Heath. There are some amazing tips in that book about implementing change in organizations (and in other parts of life, for that matter). I would love to see more discussion of these types of leadership topics that help us all implement the changes we know we need to make to take advantage of new opportunities like social media and mobile commerce.

I know a lot of these topics are more business basics than retail or e-commerce specific. But the reality is we need to be our absolute best at these business basics in order to implement any of our new ideas and strategies. I personally always enjoy talking to other retailers about some of these basics, and I certainly never tire of reading books that expand my horizons. I’d love to see more about these topics in our conferences and trade mags.

But these are just my opinions. I’d really love to know what you think. As a member of the executive content committee for Shop.org, I’m actually in a position to influence some of the excellent content that my good friend Larry Joseloff regularly puts together. But I’d love to know if you agree or not before I start banging the drum. Would you mind dropping me a quick comment or an email letting me know if you agree or disagree? A simple “Right on” if you agree or a “You’re nuts” if you don’t is plenty sufficient, although I certainly appreciate your expanded thoughts if you’d like to share them.

Please, let me know what you think of my little rant.


The Missing Links in the Customer Engagement Cycle

The Customer Engagement Cycle plays a central role in many marketing strategies, but it’s not always defined in the same way. Probably the most commonly described stages are Awareness, Consideration, Inquiry, Purchase and Retention. In retail, we often think of the cycle as Awareness, Acquisition, Conversion, Retention. In either case, I think there are a couple of key stages that do not receive enough consideration given their critical ability to drive the cycle.

The missing links are Satisfaction and Referral.

Before discussing these missing links, let’s take a quick second to define the other stages:

Awareness: This is basic branding and positioning of the business. We certainly can’t progress people through the cycle before they’ve even heard of us.

Acquisition: I’ve always thought of this as getting someone into our doors or onto our site. It’s a major step, but it’s not yet profitable.

Conversion: This one is simply defined as making a sale. Woo hoo! It may or may not be a profitable sale on its own, but it’s still a significant stage in the cycle.

Retention: We get them to shop with us again. Excellent! Repeat sales tend to be more profitable and almost certainly have lower marketing costs than first purchases.

Now, let’s get to those Missing Links

In my experience, the key to a strong and active customer engagement cycle is a very satisfying customer experience. And while the Wikipedia article on Customer Engagement doesn’t mention Satisfaction as often as I would like, it does include this key statement: “Satisfaction is simply the foundation, and the minimum requirement, for a continuing relationship with customers.”

In fact, I think the quality of the customer experience is so important that I would actually inject it multiple times into the cycle: Awareness, Acquisition, Satisfaction, Conversion, Satisfaction, Retention, Satisfaction, Referral.

Of course, it’s possible to get through at least some of the stages of the cycle without an excellent customer experience. People will soldier through a bad experience if they want the product badly enough or if there’s an incredible price. But it’s going to be a lot harder to retain that type of customer, and if you get a referral, it might not be the type of referral you want.

I wonder if Satisfaction and Referral are often left out of cycle strategies because they are the stages most out of marketers’ control.

A satisfying customer experience is not completely in the marketer’s control. For sure, marketing plays a role. A customer’s satisfaction can be defined as the degree to which her actual experience measures up to her expectations. Our marketing messages are all about expectations, so it’s important that we are compelling without over-hyping the experience. And certainly marketers can influence policy decisions, website designs, etc. to help drive better customer experiences.

In the end, though, the actual in-store or online experience will determine the strength of the customer engagement.

Everyone plays a part in the satisfaction stages. Merchants must ensure advertised product is in stock and well positioned. Store operators must ensure the stores are clean, the product is available on the sales floor and the staff are friendly, enthusiastic and helpful. The e-commerce team must ensure advertised products can be easily found, the site is performing well, product information is complete and useful, and the products are shipped on time and in good condition.

We also have to ensure our incentives and metrics are supporting a quality customer experience, because the wrong metrics can incent the wrong behavior. For example, if we measure an online search engine marketing campaign by the number of visitors generated or even the total sales generated, we can absolutely end up going down the wrong path. We can buy tons of search terms that by their sheer volume will generate lots of traffic and some degree of increased sales. But if those search terms link to the home page or some other page that is largely irrelevant to the search term, the experience will likely be disappointing for the customer who clicked through.

In fact, I wrote a white paper a few months ago, Online Customer Acquisition: Quality Trumps Quantity, that delved into customer experience by acquisition source for the Top 100 Internet Retailers. We found that those who came via external search engines were among the least satisfied customers of those sites, with the least likelihood to purchase and recommend. Not good. These low ratings could largely be attributed to the irrelevance of the landing pages from those search terms.

Satisfaction breeds Referral

Referrals or Recommendations are truly wonderful. As I wrote previously, the World’s Greatest Marketers are our best and most vocal customers. They are more credible than we’ll ever be, and the cost efficiencies of acquisition through referral are significantly better than our traditional methods of awareness and acquisition marketing. In my previously mentioned post, I discussed some ways to help customers along on the referral path. But, of course, customers can be pretty resourceful on their own.

We’ve all seen blog posts, Facebook posts or tweets about bad customer experiences. But plenty of positive public commentary can also be found. Target’s and Gap’s Facebook walls have lots of customers expressing their love for those brands. Even more powerful are blog posts some customers write about their experiences. I came across a post yesterday entitled Tales of Perfection that related two excellent experiences the blogger had with Guitar Center and a burger joint called Arry’s. Both stories are highly compelling and speak to the excellent quality of the employees at each business. Nice!

————————————————–

Developing a business strategy, not just a marketing strategy, around the customer engagement cycle can be extremely powerful. It requires the entire company to get on board to understand the value of maximizing the customer experience at every touch point with the customer, and it requires a set of incentives and metrics that fully support strengthening the cycle along the way.

What do you think? How do you think about the customer engagement cycle? How important do you feel the customer experience is in strengthening the cycle? Or do you think this is all hogwash?


Why most sales forecasts suck…and how Monte Carlo simulations can make them better

Sales forecasts don’t suck because they’re wrong. They suck because they try to be too right. They create an impossible illusion of precision that ultimately does a disservice to the managers who need accurate forecasts for planning. Even meteorologists — who are scientists with tons of historical data, incredibly high powered computers and highly sophisticated statistical models — can’t forecast with the precision we retailers attempt to forecast. And we don’t have nearly the data, the tools or the models meteorologists have.

Luckily, there’s a better way. Monte Carlo simulations run in Excel can transform our limited data sets into statistically valid probability models that give us a much more accurate view into the future. And I’ve created a model you can download and use for yourself.

There are literally millions of variables involved in our weekly sales, and we clearly can’t manage them all. We focus on the few significant variables we can affect as if they are 100% responsible for sales, but they’re not and they are also not 100% reliable.

Monte Carlo simulations can help us emulate real world combinations of variables, and they can give us reliable probabilities of the results of combinations.

But first, I think it’s helpful to provide some background on our current processes…

We love our numbers, but we often forget some of the intricacies about numbers and statistics that we learned along the way. Most of us grew up not believing a poll of 3,000 people could predict a presidential election. After all, the pollsters didn’t call us. How could the opinions of 3,000 people predict the opinions of 300 million people?

But then we took our first statistics classes. We learned all the intricacies of statistics. We learned about the importance of properly generated and significantly sized random samples. We learned about standard deviations and margins of errors and confidence intervals. And we believed.

As time passed, we moved on from our statistics classes and got into business. Eventually, we started to forget a lot about properly selected samples, standard deviations and such and we just remembered that you can believe the numbers.

But we can’t just believe any old number.

All those intricacies matter. Sample size matters a lot, for example. Basing forecasts, as we often do, on limited sets of data can lead to inaccurate forecasts.
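A quick simulation makes the point. The sketch below uses a hypothetical weekly comp trend (nothing from a real retailer) and compares how far a trend estimate wanders when it’s based on 10 weeks of data versus 100:

```python
import random

random.seed(42)

# Hypothetical "true" weekly comp trend: mean +2%, standard deviation 8%.
TRUE_MEAN, TRUE_SD = 0.02, 0.08

def trend_estimate(n_weeks):
    """Average the comps from a sample of n_weeks weeks."""
    comps = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n_weeks)]
    return sum(comps) / n_weeks

def estimate_spread(n_weeks, trials=10_000):
    """Standard deviation of the trend estimate across many repetitions."""
    estimates = [trend_estimate(n_weeks) for _ in range(trials)]
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / trials
    return var ** 0.5

print(estimate_spread(10))   # ~0.025: a 10-week average wanders widely
print(estimate_spread(100))  # ~0.008: ten times the data, about a third the error
```

The small-sample estimate routinely misses the true +2% trend by more than the trend itself, which is exactly the trap a 10-week comp average sets for a forecast.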

Here’s a simplified explanation of how most retailers that I know develop sales forecasts:

  1. Start with base sales from last year for the same time period you’re forecasting (separating out promotion driven sales)
  2. Apply the current sales trend (often determined by averaging the previous 10 weeks’ comps). This method may vary from retailer to retailer, but this is the general principle.
  3. Look at previous iterations of the promotions being planned for this time period. Determine the incremental revenue produced by those promotions (potentially through comparisons to control groups). Average the incremental results of previous iterations of the promotion, and add that average to the amount determined in steps 1 and 2.
  4. Voilà! This is the sales forecast.
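The recipe above can be sketched in a few lines. Every figure here is a hypothetical illustration (base sales, comp percentages, promo lifts are made up), but the arithmetic mirrors the four steps:

```python
# Step 1: last year's base sales for the period, promos separated out
base_sales = 100_000

# Step 2: current trend as the average of the previous 10 weeks' comps
recent_comps = [0.02, 0.01, -0.01, 0.03, 0.02,
                0.01, 0.00, 0.02, 0.01, 0.02]
trend = sum(recent_comps) / len(recent_comps)

# Step 3: average the incremental lift from past runs of the promotion
promo_lifts = [12_000, 9_500, 11_000]
promo_lift = sum(promo_lifts) / len(promo_lifts)

# Step 4: voila -- one impossibly precise number
forecast = base_sales * (1 + trend) + promo_lift
print(round(forecast))  # 112133
```

Note how the process collapses all the historical variation into single averages and hands back one exact figure.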

Of course, this number is impossibly precise and the analysts who generate it usually know that. However, those on the receiving end tend to assume it is absolutely accurate and the probability of hitting the forecast is close to 100% — a phenomenon I discussed previously when comparing sales forecasts to baby due dates.

As most of us know from experience, actually hitting the specific forecast almost never happens.

We need accuracy in our forecasts so that we can make good decisions, but unjustified precision is not accuracy. It would be far more accurate to forecast a range of sales with accompanying probabilities. And that’s where the Monte Carlo simulation comes in.

Monte Carlo simulations

Several excellent books I read in the past year (The Drunkard’s Walk, Fooled by Randomness, Flaw of Averages, and Why Can’t You Just Give Me a Number?) all promoted the wonders of Monte Carlo simulations (and Sam Savage of Flaw of Averages even has a cool Excel add-in). As I read about them, I couldn’t help but think they could solve some of the problems we retailers face with sales forecasts (and ROI calculations, too, but that’s a future post). So I finally decided to try to build one myself. I found an excellent free tutorial online and got started. The result is a file you can download and try for yourself.

A Monte Carlo simulation might be most easily explained as a “what if” model and sensitivity analysis on steroids. Basically, the model allows us to feed in a limited set of variables about which we have some general probability estimates and then, based on those inputs, generate a statistically valid set of data we can use to run probability calculations for a variety of possible scenarios.

It turns out to be a lot easier than it sounds, and this is all illustrated in the example file.
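To make the idea concrete outside of Excel, here is a minimal Monte Carlo sketch of the same forecast. It assumes, purely for illustration, that the comp trend and the promo lift are normally distributed around their historical averages; the means, standard deviations, and base sales are all hypothetical:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

base_sales = 100_000
trials = 10_000
outcomes = []
for _ in range(trials):
    # Sample each uncertain input instead of using its single average
    trend = random.gauss(0.013, 0.02)          # mean comp trend, est. spread
    promo_lift = random.gauss(10_833, 2_500)   # mean promo lift, est. spread
    outcomes.append(base_sales * (1 + trend) + promo_lift)

outcomes.sort()
p10 = outcomes[int(0.10 * trials)]
p90 = outcomes[int(0.90 * trials)]
prob_below_base = sum(o < base_sales for o in outcomes) / trials
print(f"80% of trials fall between {p10:,.0f} and {p90:,.0f}")
print(f"Chance of landing below last year's base: {prob_below_base:.0%}")
```

Instead of one number, the output is a range with probabilities attached, which is exactly the shift the simulation buys us.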

The results are really what matters. Rather than producing a single number, we get probabilities for different potential sales that we can use to more accurately plan our promotions and our operations. For example, we might see that our base business has about a 75% chance of being negative, so we might want to amp up our promotions for the week in order to have a better chance of meeting our growth targets. Similarly, rather than reflexively “anniversarying” promotions, we can easily model the incremental probabilities of different promotions to maximize both sales and profits over time.

The model allows for easily comparing and contrasting the probabilities of multiple possible options. We can use probability-weighted “expected values” to find our best options. Basically, rather than straight averages that can be misleading, expected values are averages weighted by the probability of each potential result.

Of course, probabilities and ranges aren’t as comfortable to us as specific numbers, and using them really requires a shift in mindset. But accepting that the future is uncertain and planning based on the probabilities of potential results puts us in the best possible position to maximize those results. Understanding the range of possible results allows for better and smarter planning. Sometimes, the results will go against the probabilities, but consistently making decisions based on probabilities will ultimately earn the best results over time.

One of management’s biggest roles is to guide our businesses through uncertain futures. As managers and executives, we make the decisions that determine the directions of our companies. Let’s ensure we’re making our decisions based on the best and most accurate information — even if it’s not the simplest information.

What do you think? What issues have you seen with sales forecasts? Have you tried my example? How did it work for you?

Retail: Shaken Not Stirred by Kevin Ertell

