Posts tagged: statistics

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had regarding the post sparked some great discussions about the difficulties in achieving FAME. Initially, the focus of those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin at the beginning. Hire analysts, not reporters.
    This isn’t a slam on reporters, it’s just recognition that the mindset and skill set needed for gathering and reporting on data is different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need strong left-brain orientation and analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than use those moments as opportunities to get on the same page with each other. Web Analytics Demystified‘s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved about what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of averages and of using them in particular situations, particularly when stacking them on top of each other. Just the first chapter of the book Flaw of Averages does an excellent job explaining the general problems (and there’s a short illustration of the idea right after this list).
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time has passed from our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie with, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns, but on a more positive note it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the heads of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As we stated in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matters. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case, and every point prior to this one comes into play.
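
Since item #7 can sound a bit abstract, here’s a rough illustration of the “flaw of averages” in code. The numbers are purely hypothetical (my own, not from Savage’s or Leach’s books): the point is that the profit you’d calculate at the average demand is not the same as the average profit you’ll actually earn, because missing high and missing low don’t cost the same.

```python
# A rough, hypothetical illustration of the "flaw of averages": planning on the
# average demand overstates average profit because overstocks and stockouts
# have asymmetric costs. All figures are invented for illustration.
import random

random.seed(42)

PRICE, COST, STOCK = 25.0, 10.0, 100      # sell price, unit cost, units stocked

def profit(demand, stock=STOCK):
    sold = min(demand, stock)             # can't sell more than we stocked
    return sold * PRICE - stock * COST    # unsold units were still paid for

demands = [max(0.0, random.gauss(100, 30)) for _ in range(100_000)]
avg_demand = sum(demands) / len(demands)

profit_at_avg_demand = profit(avg_demand)                    # the plan "on average"
avg_profit = sum(profit(d) for d in demands) / len(demands)  # what actually happens

print(f"profit at average demand: ${profit_at_avg_demand:,.0f}")
print(f"average profit:           ${avg_profit:,.0f}")       # noticeably lower
```

Stack several plans built on averages on top of each other, as most forecasts and ROI models do, and the gap only grows.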

The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our business decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

Why most sales forecasts suck…and how Monte Carlo simulations can make them better

Sales forecasts don’t suck because they’re wrong. They suck because they try to be too right. They create an impossible illusion of precision that ultimately does a disservice to the managers who need accurate forecasts to assist with their planning. Even meteorologists — who are scientists with tons of historical data, incredibly high powered computers and highly sophisticated statistical models — can’t forecast with the precision we retailers attempt. And we don’t have nearly the data, the tools or the models meteorologists have.

Luckily, there’s a better way. Monte Carlo simulations run in Excel can transform our limited data sets into statistically valid probability models that give us a much more accurate view into the future. And I’ve created a model you can download and use for yourself.

There are literally millions of variables involved in our weekly sales, and we clearly can’t manage them all. We focus on the few significant variables we can affect as if they are 100% responsible for sales, but they’re not, and they’re not 100% reliable either.

Monte Carlo simulations can help us emulate real world combinations of variables, and they can give us reliable probabilities of the results of combinations.

But first, I think it’s helpful to provide some background on our current processes…

We love our numbers, but we often forget some of the intricacies about numbers and statistics that we learned along the way. Most of us grew up not believing a poll of 3,000 people could predict a presidential election. After all, the pollsters didn’t call us. How could the opinions of 3,000 people predict the opinions of 300 million people?

But then we took our first statistics classes. We learned all the intricacies of statistics. We learned about the importance of properly generated and significantly sized random samples. We learned about standard deviations and margins of errors and confidence intervals. And we believed.

As time passed, we moved on from our statistics classes and got into business. Eventually, we started to forget a lot about properly selected samples, standard deviations and such and we just remembered that you can believe the numbers.

But we can’t just believe any old number.

All those intricacies matter. Sample size matters a lot, for example. Basing forecasts, as we often do, on limited sets of data can lead to inaccurate forecasts.

Here’s a simplified explanation of how most retailers that I know develop sales forecasts:

  1. Start with base sales from last year for the same time period you’re forecasting (separating out promotion-driven sales)
  2. Apply the current sales trend (perhaps determined by an average of the previous 10 weeks’ comps). This method may vary from retailer to retailer, but this is the general principle.
  3. Look at previous iterations of the promotions being planned for this time period. Determine the incremental revenue produced by those promotions (potentially through comparisons to control groups). Average the incremental results of previous iterations of the promotion, and add that average to the amount determined in steps 1 and 2.
  4. Voilà! This is the sales forecast.
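
To make that recipe concrete, here’s a toy version in code. Every figure below is invented for illustration, and real models have far more inputs, but the mechanics are the same:

```python
# A toy sketch of the single-number forecast recipe above (hypothetical figures).

base_sales_ly = 36_500_000          # step 1: same week last year, promos stripped out

recent_comps = [0.021, 0.034, -0.012, 0.018, 0.025,
                0.030, 0.008, 0.015, 0.027, 0.022]   # last 10 weeks' comp %
trend = sum(recent_comps) / len(recent_comps)        # step 2: average recent trend

past_promo_lifts = [1_150_000, 980_000, 1_240_000]   # step 3: incremental $ from prior runs
promo_lift = sum(past_promo_lifts) / len(past_promo_lifts)

forecast = base_sales_ly * (1 + trend) + promo_lift  # step 4: "the number"
print(f"forecast: ${forecast:,.0f}")
```

Note how every input that is really a distribution gets collapsed into a single average along the way.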

Of course, this number is impossibly precise and the analysts who generate it usually know that. However, those on the receiving end tend to assume it is absolutely accurate and the probability of hitting the forecast is close to 100% — a phenomenon I discussed previously when comparing sales forecasts to baby due dates.

As most of us know from experience, actually hitting the specific forecast almost never happens.

We need accuracy in our forecasts so that we can make good decisions, but unjustified precision is not accuracy. It would be far more accurate to forecast a range of sales with accompanying probabilities. And that’s where the Monte Carlo simulation comes in.

Monte Carlo simulations

Several excellent books I read in the past year (The Drunkard’s Walk, Fooled by Randomness, Flaw of Averages, and Why Can’t You Just Give Me the Number?) all promoted the wonders of Monte Carlo simulations (and Sam Savage of Flaw of Averages even has a cool Excel add-in). As I read about them, I couldn’t help but think they could solve some of the problems we retailers face with sales forecasts (and ROI calculations, too, but that’s a future post). So I finally decided to try to build one myself. I found an excellent free tutorial online and got started. The result is a file you can download and try for yourself.

A Monte Carlo simulation might be most easily explained as a “what if” model and sensitivity analysis on steroids. Basically, the model allows us to feed in a limited set of variables about which we have some general probability estimates and then, based on those inputs, generate a statistically valid set of data we can use to run probability calculations for a variety of possible scenarios.

It turns out to be a lot easier than it sounds, and this is all illustrated in the example file.
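
The example file itself is an Excel workbook, but the core idea fits in a few lines of code. Here’s a rough sketch of the same approach in Python; the distributions for the trend and the promotion lift are assumptions I’ve made up for illustration, not values from the downloadable model:

```python
# A minimal Monte Carlo sketch: simulate thousands of possible weeks by drawing
# each uncertain input from a distribution, then read probabilities off the results.
# All distributions and figures below are hypothetical.
import random

random.seed(1)
N = 50_000                                     # number of simulated weeks

base_sales_ly = 36_500_000                     # same week last year, ex-promo
trend_mu, trend_sd = 0.02, 0.025               # weekly comp trend: ~2% +/- 2.5%
promo_mu, promo_sd = 1_100_000, 400_000        # incremental promo lift, $

trends, sims = [], []
for _ in range(N):
    trend = random.gauss(trend_mu, trend_sd)
    promo = max(0.0, random.gauss(promo_mu, promo_sd))
    trends.append(trend)
    sims.append(base_sales_ly * (1 + trend) + promo)

sims.sort()
p10, p50, p90 = (sims[int(N * q)] for q in (0.10, 0.50, 0.90))
prob_base_negative = sum(t < 0 for t in trends) / N   # chance the base business comps down

print(f"10th / 50th / 90th percentile sales: ${p10:,.0f} / ${p50:,.0f} / ${p90:,.0f}")
print(f"chance the base business is negative: {prob_base_negative:.0%}")
```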

The results are really what matters. Rather than producing a single number, we get probabilities for different potential sales that we can use to more accurately plan our promotions and our operations. For example, we might see that our base business has about a 75% chance of being negative, so we might want to amp up our promotions for the week in order to have a better chance of meeting our growth targets. Similarly, rather than reflexively “anniversarying” promotions, we can easily model the incremental probabilities of different promotions to maximize both sales and profits over time.

The model allows for easily comparing and contrasting the probabilities of multiple possible options. We can use what are called probability weighted “expected values” to find our best options. Basically, rather than straight averages that can be misleading, expected values are averages that are weighted based on the probability of each potential result.
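
For example, a probability-weighted comparison of two hypothetical promotions might look like this (the probabilities and profit figures are invented for illustration):

```python
# A tiny sketch of probability-weighted expected value for two hypothetical promos.
# Each option is a list of (probability, incremental profit) outcomes summing to 1.
promo_a = [(0.30, 250_000), (0.50, 400_000), (0.20, 600_000)]
promo_b = [(0.60, 150_000), (0.30, 700_000), (0.10, 1_200_000)]

def expected_value(outcomes):
    return sum(p * value for p, value in outcomes)

print(f"Promo A expected value: ${expected_value(promo_a):,.0f}")   # $395,000
print(f"Promo B expected value: ${expected_value(promo_b):,.0f}")   # $420,000
```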

Of course, probabilities and ranges aren’t as comfortable to us as specific numbers, and using them really requires a shift in mindset. But accepting that the future is uncertain and planning based on the probabilities of potential results puts us in the best possible position to maximize those results. Understanding the range of possible results allows for better and smarter planning. Sometimes, the results will go against the probabilities, but consistently making decisions based on probabilities will ultimately earn the best results over time.

One of management’s biggest roles is to guide our businesses through uncertain futures. As managers and executives, we make the decisions that determine the directions of our companies. Let’s ensure we’re making our decisions based on the best and most accurate information — even if it’s not the simplest information.

What do you think? What issues have you seen with sales forecasts? Have you tried my example? How did it work for you?

Wanna be better with metrics? Watch more poker and less baseball.

Both baseball and poker have been televising their World Series championships, and announcers for both frequently describe strategies and tactics based on the statistics of the games. Poker announcers base their commentary and discussion on the probabilities associated with a small number of key metrics, while baseball announcers barrage us with numbers that sound meaningful but that are often pure nonsense.

Similarly, today’s web analytics give us the capability to track and report data on just about anything, but just because we can generate a number doesn’t mean that number is meaningful to our business. In fact, reading meaning into meaningless numbers can cause us to make very bad decisions.

Don’t get me wrong, I am a huge believer in making data-based decisions, in baseball, poker, and on our websites. But making good decisions is heavily dependent on using the right data and seeing the data in the right light. I sometimes worry that constant exposure to sports announcers’ misreading and misappropriation of numbers is actually contributing to a misreading and misunderstanding of numbers in our business settings.

Let’s consider a couple of examples of misreading and misappropriating numbers that have occurred in baseball over the last couple of weeks:

  1. Selection bias
    This one is incredibly common in the world of sports and nearly as common in business. Recently, headlines here in Detroit focused on the Tigers “choking” and blowing a seven-game lead with only 16 games to go. In a recent email exchange on this topic, my friend Chris Eagle pointed out the problems with the sports announcers’ hyperbole:

    “They’re picking the high-water mark for the Tigers in order to make their statement look good.  If you pick any other random time frame (say end-of-August, which I selected simply because it’s a logical break point), the Tigers were up 3.5 games.  But it doesn’t look like much of a choke if you say the Tigers lost a 3.5 game lead with a month and change to go.”

    Unfortunately, this type of analysis error occurs far too often in business. We might find that our weekend promotions are driving huge sales over the last six months, which sounds really impressive until we notice that non-sale days have dropped significantly as we’ve just shifted our business to days when we are running promotions (which may ultimately mean we’ve reduced our margins overall by selling more discounted product and less full-price merchandise).

    In a different way, Dennis Mortensen addressed the topic in his excellent blog post “The Recency Bias in Web Analytics,” where he points out the tendency to give undue weight to more recent numbers. He included a strong example about the problems of dashboards that lack context. Dashboards with gauges look really cool but are potentially dangerous as they are only showing metrics from a very short period of time. Which leads me to…

  2. Inconsistency of averages over short terms
    Baseball announcers and reporters can’t get enough of this one. Consider this article on the Phillies’ Ryan Howard after Game 3 of the World Series that includes, “Ryan Howard’s home run trot has been replaced by a trudge back to the dugout. The Phillies’ big bopper has gone down swinging more than he’s gone deep…He’s still 13 for 44 overall in the postseason (.295) but only 2 for 13 (.154) in the World Series.” Actually, during the length of the season, he had three times as many strikeouts as home runs, so his trudges back to the dugout seem pretty normal. And the problem with the World Series batting average stat is the low sample size. A sample of thirteen at bats is simply too small to match against his season-long average of .279. Do different pitchers or the pressures of the situation have an effect? Maybe, but there’s nothing in the data to support such a conclusion. Segmenting by pitcher or “postseason” suffers from the same small sample size problems, where the margin of error expands significantly. Furthermore, and this is really key, knowing an average without knowing the variability of the original data set is incomplete and often misleading.

    These problems with variability and sample sizes arise frequently in retail analysis when we either run a test with too small a sample size and assume we can project it to the rest of the business, or we run a properly sized test but assume we’ll automatically see those same results in the first day of a full application of the promotion. Essentially, the latter point is what is happening with Ryan Howard in the postseason. We often hear the former as well when a player is all of a sudden crowned a star because he outperforms his season averages over a few games in the postseason.

    In retail, we frequently see this type of issue when we’re comparing something like average order value of two different promotions or two variations in an A/B test. Say we’ve run an A/B test of two promotions. Over 3,100 iterations of Test A, we have an average order size of $31.68. And over 3,000 iterations of Test B, we have an average order size of $32.15. So, Test B is the clear winner, right? Wrong. It turns out there is a lot more variability in Test B, which has a standard deviation of 11.37 compared with Test A’s standard deviation of 7.29. As a result, the margin of error on the comparison expands to +/- 48 cents, which means the difference falls within the margin of error and we can’t say with 95% confidence that there is any real difference between the tests. Therefore, it would be a mistake to project an increase in transaction size if we went with Test B. (There’s a quick sketch of this calculation right after this list.)

    Check out that example using this simple calculator created by my fine colleagues at ForeSee Results and play around with your own scenarios.  Download Test difference between two averages.
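
If you’d rather see the arithmetic than use the spreadsheet, here’s a quick sketch of the comparison using the same numbers from the example above (a standard two-sample check at 95% confidence):

```python
# Comparing the two test averages from the example above: is the difference
# bigger than the margin of error at 95% confidence?
from math import sqrt

n_a, mean_a, sd_a = 3_100, 31.68, 7.29      # Test A
n_b, mean_b, sd_b = 3_000, 32.15, 11.37     # Test B

se_diff = sqrt(sd_a**2 / n_a + sd_b**2 / n_b)   # standard error of the difference
margin = 1.96 * se_diff                         # 95% margin of error (z = 1.96)
diff = mean_b - mean_a

print(f"observed difference: ${diff:.2f}")        # $0.47
print(f"margin of error:     +/- ${margin:.2f}")  # about $0.48
print("significant at 95%" if abs(diff) > margin else "not significant at 95%")
```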

Poker announcers don’t seem to fall into all these statistical traps. Instead, they focus on a few key metrics like the number of outs and the size of the pot to discuss strategies for each player based largely on the probability of success in light of the risks and rewards of a particular tactic. Sure, there are intangibles like “poker tells” that occur, but even those are considered in light of the statistical probabilities of a particular situation.

Retail is certainly more complicated than poker, and the number of potential variables to deal with is immense. However, we can be much more prepared to deal with the complexities of our situations if we take a little more time to view our metrics in the right light. Our data-driven decisions can be far more accurate if we ensure we’re looking at the full data set, not a carefully selected subset, and we take the extra few minutes to understand the effects of variability on averages we report. A little extra critical thinking can go a long way.

What do you think? Are there better ways to analyze key metrics at your company? Do you consider variability in your analyses? Do you find the file to test two averages useful?



Related posts:

How retail sales forecasts are like baby due dates

Are web analytics like 24-hour news networks

True conversion – the on-base percentage of web analytics

How the US Open was like a retail promotion analysis

The Right Metrics: Why keeping it simple may not work for measuring e-retail performance (Internet Retailer article)

How are retail sales forecasts like baby due dates?

Q. How are retail sales forecasts like baby due dates?

A. They both provide an improper illusion of precision and cause considerable consternation when they’re missed.

Our first child was born perfectly healthy almost two weeks past her due date, but every day past that less-than-precise due date was considerably more frustrating for my amazing and beautiful wife. While her misery was greater than anything most of us endure in retail sales results meetings, we nonetheless experience more misery than necessary when improperly specific forecast numbers create unrealistic expectations.

I believe there’s a way to continue to provide the planning value of a sales forecast (and baby due date) while reducing the consternation involved in the almost inevitable miss of the predictions generated today.

But first, let’s explore how sales forecasts are produced today.

In my experience, an analyst or team of analysts will pull a variety of data sources into a model used to generate their forecast. They’ll feed sales for the same time period over the last several years at least; they’ll look at the current year sales trend to try to factor in the current environment; they’ll take some guidance from merchant planning; and they’ll mix in planned promotions for the time period, which also includes looking at past performance of the same promotions. That description is probably oversimplified for most retailers, but the basic process is there.

Once all the data is in the mix, some degree of statistical analysis is run on the data and used to generate a forecast of sales for the coming time period — let’s say it’s a week. Here’s where the problems start. The sales forecast is a specific number, maybe rounded to the nearest thousand. For example, the forecast for the week might be $38,478k. From that number, daily sales will be further parsed out by determining the percentage of the week each day represents, and each day’s actual sales will be measured against those daily forecasts.

And let the consternation begin because the forecast almost never matches up to actual sales.

The laws of statistics are incredibly powerful — sometimes so powerful that we forget all the intricacies involved. We forget about confidence intervals, margins of error, standard deviations, proper sampling techniques, etc. The reality is we can use statistical methodologies to pretty accurately predict the probability we’ll get a certain range of sales for a coming week. We can use various modeling techniques and different mixes of data to potentially increase the probability and decrease the range, but we’ll still have a probability and a range.

I propose we stop forecasting specific amounts and start forecasting the probability we’ll achieve sales in a particular range.

Instead of projecting an unreliably specific amount like $38,478k, we would instead forecast a 70% probability that sales would fall between $37,708k and $39,243k. Looking at our businesses in this manner better reflects the reality that literally millions of variables have an effect on our sales each day, and random outliers at any given time can cause significant swings in results over small periods of time.
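
One straightforward way to get to a range like that is to band the point forecast with the spread of our past forecast errors (a fuller Monte Carlo simulation can do a more thorough job). Here’s a rough sketch; the error history below is hypothetical:

```python
# A rough sketch: turn a point forecast into a 70% range using the spread of
# past forecast errors. The error history below is hypothetical.
point_forecast = 38_478_000                       # this week's single-number forecast

# actual vs. forecast for prior weeks, as a fraction of forecast (hypothetical)
past_errors = [-0.031, 0.012, -0.008, 0.024, -0.017, 0.041, 0.003,
               -0.022, 0.015, -0.005, 0.028, -0.036, 0.009, 0.019]

errors = sorted(past_errors)
lo = errors[int(0.15 * len(errors))]              # roughly the 15th percentile error
hi = errors[int(0.85 * len(errors))]              # roughly the 85th percentile error

print(f"70% range: ${point_forecast * (1 + lo):,.0f} "
      f"to ${point_forecast * (1 + hi):,.0f}")
```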

Of course, that doesn’t mean we won’t still need sales targets to achieve our sales plans. But if we don’t acknowledge the inherent uncertainty of our forecasts, we won’t truly understand the size of the risks associated with achieving plan. And we need to understand the risks in order to develop the right contingency and mitigation tactics. The National Weather Service, which uses similar methods of forecasting, explains the reasons for their methods as follows:

“These are guidelines based on weather model output data along with local forecasting experience in order to give persons [an idea] as to what the statistical chance of rain is so that people can be prepared and take whatever action may be required. For example, if someone pouring concrete was at a critical point of a job, a 40% chance of rain may be enough to have that person change their plans or at least be alerted to such an event. No guarantees, but forecasts are getting better.”

Imagine how the Monday conversation would change when reviewing last week’s sales if we had the probability and range forecast suggested above and actual sales came in at $37,805k. Instead of focusing on how we missed a phantom forecast figure by 1.7%, we could quickly acknowledge that sales came in as predicted and then focus on the tactics we employed above and beyond what was fed into the model that generated the forecast. Did those tactics generate additional sales or not? How did those tactics affect or not affect existing tactics? Do we need to make strategic changes, or should we accept that, even though our strategy can be affected by millions of variables in the short term, it’s still on track for the long term?

Expressing our forecasts in probabilities and ranges, whether we’re talking about sales, baby due dates or the weather, helps us get a better sense of the possibilities the future might hold and allows us to plan with our eyes wide open. And maybe, just maybe, those last couple weeks of pregnancy will be slightly less frustrating (and, believe me, every little bit helps).

What do you think? Would forecasts with probabilities and ranges enhance sales discussions at your company? Do sales forecasts work differently at your company?



Is elitism the source of poor usability?

Most sites are still achieving single digit conversion rates even though customer intent-to-purchase rates are 20% or higher in most cases. Customers are continuing to run into obstacles to the purchase process that need to be eliminated. The good news is that during this time of limited capital investments, retailers can use low cost means to find and eliminate as many obstacles to purchase as possible.

The first step is to get into the right mindset and remove what I feel is the biggest disconnect with the customers that many retailers have: we’re way more comfortable and experienced with our own sites than our customers are. We use our sites every day, and we know exactly how they’re supposed to work. However, our customers are generally nowhere near as familiar with our sites as we are.

Two weeks ago, I was lucky to be able to attend GSI’s Connect conference for its clients. I was even luckier to attend a fantastic session by GSI’s Senior Director of Usability, Michael Summers. Michael got the audience’s attention pretty quickly by calling us all elitists…and he had a good point. He asked how many of us fit the demographic for today’s main Internet users and quickly made the point that we were more highly educated, higher paid and more Internet savvy — by a long shot — than the average site user in the marketplace. If that wasn’t enough, he showed some video of average Americans shopping online who had trouble with some of what we in the industry would consider among the most basic aspects of websites.

To solve this disconnect we need to see our sites through our customers’ eyes. There are a number of ways to do this that I’ve found to be effective.

  1. Use statistically significant customer satisfaction surveys to get trendable data that will point to the biggest problem areas of the site.
    The two key phrases here are “statistically significant” and “trendable.” Per my last post, continuous measurement is important to avoid random outliers and uncover the underlying truth. When done correctly, customer satisfaction surveys can be extremely reliable, accurate, and predictive and can tell you not only which areas of a site customers complain about most, but also which areas of the site will actually have the biggest impact on purchase intent and loyalty. This is critical information to provide some direction on where to focus your usability efforts. (There’s a quick sketch of what “statistically significant” means in practice right after this list.)
  2. Ask open-ended questions to add color to the quantitative information.
    Quantitative analysis is extremely useful, but numbers alone aren’t nearly enough. Numbers will certainly tell you the problem areas of the site, but to really get your arms around what the numbers are saying requires adding some color to them with some qualitative information. Open-ended questions like “If you could make one improvement to our site, what would it be?” are good starters to bring some of the numbers to life. If the numbers tell you that customers in general are having problems with navigation and you see that multiple customers say in open-ended comments they just want to see all the blue dresses in stock, you might start to consider adding color choice to your navigation. Or maybe you already have an option to navigate by color, but the customers aren’t seeing it and you’ll need to find a way to make it more apparent.
  3. Watch your customers use your site.
    The absolute best way to add color to the data is to actually watch customers use the site. In the past, I’ve seen great discoveries come from taking a laptop into a store and asking real customers to shop on the site while I or someone on my team watched silently. In these situations, it’s very important not to be too prescriptive in the tasks the customer is asked to do. Ask them to “find and buy a new pair of dress shoes” rather than “go to the men’s tab, then select dress shoes and find a pair of black, size 9 shoes.” It never fails to amaze me in this situation how many different avenues customers will take to accomplish the task, and they’ll frequently run into trouble. These trouble spots are the areas to find and eliminate. Some of the smallest fixes can often significantly improve conversion and customer satisfaction. If the logistics of getting into a store are too difficult or you don’t have physical stores, there are technology alternatives, like Tealeaf’s CX and ForeSee’s new CS Session Replay, that provide the ability to replay customers’ sessions on your screen.
  4. Have an expert conduct a usability audit.
    Even after discovering where customers are having trouble, it’s sometimes still very difficult to determine exactly what you should be doing differently to make the experience easier and more intuitive for your customers. In those cases, expert advice via a third party usability audit is an excellent solution. I’ve used trained usability experts in the past to identify specific improvements that led to tremendous business results. Third party usability auditors bring to the table both fresh and trained eyes that have likely seen problems similar to those on your site before and have come up with solutions for those problems or seen how other sites have solved those problems.
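
As promised in item #1 above, here’s a quick look at what “statistically significant” means in practice for a satisfaction survey. The sample size and satisfaction rate below are hypothetical:

```python
# A rough sketch of the margin of error behind a survey result: a 95% interval
# for a proportion. Sample size and satisfaction rate are hypothetical.
from math import sqrt

n = 400            # completed surveys in the period
p = 0.72           # share of respondents rating the site 8 or higher out of 10

margin = 1.96 * sqrt(p * (1 - p) / n)    # 95% margin of error for a proportion
print(f"{p:.0%} satisfied, +/- {margin:.1%} at 95% confidence")
```

If week-over-week movement in the score is smaller than that margin, it’s probably noise rather than trend, which is exactly why continuous, adequately sized measurement matters.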

Regardless of the mechanisms you choose to use, the key to better usability, better customer satisfaction and the resulting better conversion and sales is finding ways to see your site through your customers’ eyes.

Are you a usability elitist? Do you watch customers use your site? What have you learned in the process?



