Category: Analysis

The 3 Levels of Metrics: From Driving Cars to Solving Crimes

You can’t manage what you don’t measure. That’s a long-time business mantra espoused frequently by my good friend Larry Freed. And it’s certainly true. But in an e-commerce world where we can effectively measure our customers’ every footstep, we can easily become overwhelmed by all that data. Because while we can’t manage what we don’t measure, we also can’t manage everything we can measure.

I’ve found it’s best to break our metrics down to three levels in order to make the most of them.

1. KPIs
The first and highest level of metrics contains the Key Performance Indicators or KPIs. I believe strongly there should be relatively few KPIs — maybe five or six at most — and the KPIs should align tightly with the company’s overall business objectives. If an objective is to develop more orders from site visitors, then conversion rate would be the KPI. If another objective is about maximizing the customer experience, then customer satisfaction is the right metric.

In addition to conversion rate and customer satisfaction, a set of KPIs might include metrics like average order value (AOV), market share, number of active customers, task completion rate or others that appropriately measure the company’s key objectives.

I’ve found the best KPI sets are balanced so that the best way to drive the business forward is to find ways to improve all of the KPIs, which is why businesses often have balanced scorecards. The reality is, we could find ways to drive any one metric at the expense of the others, so finding the right balance is critical. Part of that balance is ensuring that the most important elements of the business are considered, so it’s important to have some measure of employee satisfaction (because employee satisfaction leads to customer satisfaction) and some measure of profitability.  Some people look at a metric like Gross Margin as the profitability measure, but I prefer something deeper down the financial statement like Contribution Margin or EBITDA because they take other cost factors like ad spend, operational efficiencies, etc. into account and can be affected by most people in the organization.

It’s OK for KPIs to be managed at different frequencies. We often talk about metrics dashboards, and a car’s dashboard is the right metaphor. Car manufacturers have limited space to work with, so they include only the gauges that most help the driver operate the car. The speedometer is managed frequently while operating the car. The fuel gauge is critically important, but it’s monitored only occasionally (and more frequently when it’s low). Engine temperature is a hugely important measure for the health of the car, but we don’t need to do much with it until there’s a problem. Business KPIs can be monitored in a similarly varied frequency, so it’s important that we don’t choose them based on their likelihood to change over some specific time period. It’s more important to choose the metrics that most represent the health of the business.

2. Supporting Metrics
I call the next level of metrics Supporting Metrics. Supporting Metrics are tightly aligned with KPIs, but they are more focused on individual functions or even individual people within the organization. A KPI like conversion rate can be broken down by various marketing channels pretty easily, for example. We could have email conversion rate, paid search conversion rate, direct traffic conversion rate, etc. I also like to look at True Conversion Rate, which measures conversion against intent to buy.

Supporting metrics should be an individual person’s or functional area’s scorecard to measure how their work is driving the business forward. Ensuring supporting metrics are tightly aligned with the overall company objectives helps to ensure work efforts throughout the organization are tightly aligned with the overall objectives.

As with KPIs, we want to ensure any person or functional area isn’t burdened with so many supporting metrics that they become unmanageable. And this is an area where we frequently fall down because all those metrics and data points are just so darn alluring.

The key is to recognize the all-important third level of metrics. I call them Forensic Metrics.

3. Forensic Metrics
Forensic Metrics are just what they sound like. They’re those deep-dive metrics we use when we’re trying to solve a problem we’re facing in KPIs or Supporting Metrics. But there are tons of them, and we can’t possibly manage them on a day-to-day basis. In the same way we don’t dust our homes for prints every day when we come home from work, we can’t try to pay attention to forensic metrics all the time. If we come home and find our TV missing, then dusting for prints makes a lot of sense. If we find out conversion rate has dropped suddenly, it’s time to dig into all sorts of forensic metrics like path analysis, entry pages, page views, time on site, exit links, and the list goes on and on.

Site analytics packages, data warehouses and log files are chock full of valuable forensic metrics. But those forensic metrics should not find their way onto daily or weekly managed scorecards. They can only serve to distract us from our primary objectives.

—————————————————–

Breaking down our metrics into these three levels takes some serious discipline. When we decide we’re only going to focus on a relatively small number of metrics, we’re doing ourselves and our businesses a big favor. But it’s really important that we narrow our focus to the metrics and objectives that most drive the business forward. But, heck, we should be doing that anyway.

What do you think? How do you break down your metrics?

 

The power of a little naiveté

Most of us are experts in something. Our expertise and experience are usually significant advantages that allow us to deal effectively with complex problems and situations. But they can occasionally be Achilles’ heels when they breed the type of overconfidence that causes us to overlook simple solutions in favor of more complex and costly ones. Injecting a little naiveté into some problem solving sessions can spur new thinking that results in more effective and efficient solutions.

In my experience, experts tend to skip right by the simple solutions to most problems. Groups of experts working to solve a problem are even more likely to head directly to the more complex solutions.

Consider this example from the excellent book I’m currently reading, CustomerCulture by Michael D. Basch (thanks to Anna Barcelos for the tip):

Hershey’s Chocolate Company had a problem on its Rollo production line. It had worked with teams of employees to improve quality and had raised the consciousness of their employees around service in all aspects of the operation. This example involves a problem where the candy went through an automatic wrapping machine, and the wrapped candy was dropped onto a conveyor that dumped it into boxes to be sold in retail stores. When the box reached the specified weight, it would be shifted to a new empty box, and the process would continue.

The problem was that, all too often, empty wrappers would come out of the wrapping machine and end up in the retail boxes. These boxes had cellophane windows where the consumer could see the empty wrappers, and, although the box was sold by weight, the customers’ perception was of poor quality and the feeling of being taken advantage of.

The company put a team of engineers on the problem, and a new wrapping machine was not cost justified. Therefore, the problem became “How to get the empty wrappers off the conveyor.” The engineers then designed an elaborate vibratory conveyor system. A vibratory conveyor vibrates, and heavy things tend to move with the force of gravity. In this way, they could vibrate the filled wrappers off the vibratory conveyor to the box filling conveyor. The cost would be about $10,000 to move equipment around and to install the new system. Of greater consequence was the time. This line was working 24 hours a day and 7 days a week and was still falling behind. A retrofit would stall production for a day and one-half.

Fortunately, part of the team inventing the new system was the production workers who worked the line every day. The engineers presented their solution for feedback. The next day, two production workers were discussing the problem just before lunch when one said, “I’ve got it.” The other asked, “What have you got?” “I’ll show you after lunch,” came a hasty reply as the man left the building. After lunch, he returned with a $15 fan he had purchased at Wal-Mart. He plugged in the fan. It blew the empty wrappers off the conveyor, and the problem was solved—no great cost, no stalled production.

In the end, the simple solution was both highly effective and highly efficient. I don’t know why expertise largely blinds us to these types of solutions, but maybe it’s because our training and our past experiences have been so focused on complex solutions that we just automatically go there. And when we’re discussing the problems with groups of experts, as was the case in the Hershey’s example, maybe we also just assume the others in the group have already considered more simple solutions.

Hence, the power of a little naiveté.

Too often, we associate naiveté with ineptitude, but the root of the word, naive, is really more about a lack of understanding or sophistication. And that lack of sophistication can be just what the doctor ordered in some problem solving situations. I can think of many conversations I’ve had over the years with hard-core technical folks where I asked a series of “dumb” questions that ultimately led to those highly trained experts developing simpler and ultimately more effective solutions.

Next time you have a complicated problem you’re trying to solve, rather than just gathering the best of the best (and only the best of the best) to discuss solutions, consider inviting a few “differently experienced” folks into the room. These don’t have to be inexperienced people in general, but rather people specifically inexperienced in the particular problem being solved. The main idea is to get some different thinking injected into the conversation. One of the main tenets of the Monkey Cage Sessions problem solving technique I’ve written about before is inviting people of different experience levels and backgrounds into a single session that allows views of the problem from multiple perspectives.

We need our assumptions to be questioned if we hope to find the absolute best solutions. Let’s tolerate a few “dumb” and “naive” questions and appreciate fresh perspectives on the problem. We might be surprised what solutions we come up with through the power of a little naiveté.

What do you think? Have you ever encountered the power of naiveté in problem solving situations? Or do you think letting lesser experienced folks into complicated solution finding sessions is a waste of time?

Why most project estimates suck…and how Monte Carlo simulations can make them better

Have you ever been part of a project that was late and over budget? I’d be surprised if you haven’t. We humans are famously bad at estimating the future, and project planning is heavily dependent on our ability to estimate the future. Most of us are optimists and some of us are pessimists, but very, very few of us are realists by nature. Monte Carlo simulations can be useful in our estimation process to help us become more realistic about our estimates, and that realism can significantly improve our ability to deliver results more in line with expectations.

We generally recognize our inability to accurately estimate large projects in one chunk, so we break them up into smaller milestones that are easier to estimate. While the work breakdown process is good, the confidence it gives us in our estimates can lead to larger problems. We don’t ask ourselves often enough how accurate we think those estimates are before stringing them together to determine project due dates. If we did, the conversation might go like this:

“How accurate do you think these milestone estimates are?”

“Pretty accurate. We certainly spent a lot of time discussing them and comparing them to past projects.”

“OK. But if you had to put a number on it, would you say they are 100% accurate?”

“Well, let’s not get crazy. I can’t be sure they’re 100% accurate.”

“So put a number on it. How confident are you that they’re accurate?”

“I still feel pretty good about them. I’d say conservatively that I’m at least 90% sure.”

At this point, we’re about to discover some pretty major problems with our assumptions. We typically string together a number of these milestones, which are dependent on each other, and call them the critical path. The end of the critical path is the project due date.

But if we’re only 90% confident our estimates for each milestone are correct, the likelihood of missing our date is pretty high. Let’s say we have five major milestones in our critical path, and we’re 90% sure each is accurate. To determine the probability that all five will come in as expected, we have to multiply .90 x .90 x .90 x .90 x .90. Even with these high confidence rates, we’re now looking at about a 59% chance of hitting our due dates and a 41% chance of missing them. And that’s with only five milestones and really high (and probably unwarranted) confidence in our estimates. The numbers only get worse from here.
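That multiplication is easy to sanity-check in a few lines of Python (the 90% confidence level and five milestones are the figures from the example above):

```python
# Probability that every milestone on the critical path comes in as
# estimated, assuming the estimates are independent and each carries
# the same confidence level.
def on_time_probability(confidence: float, milestones: int) -> float:
    return confidence ** milestones

p = on_time_probability(0.90, 5)
print(f"Chance of hitting the date: {p:.0%}")      # ~59%
print(f"Chance of missing it:       {1 - p:.0%}")  # ~41%
```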

So we start missing deadlines and inevitably either pump more money into the effort or start cutting scope. Our original business case and ROI justification for the effort are now inaccurate because it’s going to cost more and produce fewer benefits. Sound familiar?

Monte Carlo simulations can help us get a better handle on the probabilities of actually delivering on our timeline and budget estimates. Just as I previously demonstrated using Monte Carlo simulations for sales forecasting, a simulation focused on project estimates can essentially become a “what if” model and sensitivity analysis on steroids for project planning. Basically, the model allows us to feed in a limited set of variables about which we have some general probability estimates and then, based on those inputs, generate a statistically valid set of data we can use to run probability calculations for the entire project.

Great. So now we know how likely we are to miss our timeline and budget. So what?

Once we have a more realistic view of our project timeline and budget, we can do far more effective planning. We can develop contingency plans with full knowledge of the likelihood of needing any particular contingency. Having a better sense of potential budget increases or scope decreases in advance of the project start date will help us make better decisions about whether to start the project at all.

We’ll also be able to better plan our needs from other groups in the corporation who might be affected by the final project but not directly involved in building it. For example, we might need to fit a new product launch campaign into an already packed marketing schedule. Will new site functionality require training for customer service? We’ll need to plan time to pull agents off the phones for their training. Setting expectations with these external groups will greatly enhance at least the internally perceived success of our effort. And that certainly counts for something.

Why go through all this complication? Let’s just take all the estimates we get from the team and double them. That should help ensure we stay within the timeline.

The “double the estimates” approach is one I’ve seen used before. While it does help create timelines that won’t be exceeded, overestimation can also cause problems. Any coordination with external teams will still be a problem if we end up needing them before we originally planned. And over-allocating time, resources and budget can drive up opportunity costs and limit our ability to produce meaningful results over time.

Monte Carlo to the rescue

I created a free, sample Monte Carlo simulation you can download for use in project planning. It illustrates on a small scale some of the possibilities that can occur with even a minor project. We see that even a five-milestone effort with 85% confidence in the estimate of each milestone is expected to be more than 20% overdue. But we can also get a sense of the probabilities of various timelines and use it to refine overall estimates.
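For readers who’d rather see the mechanics than download a spreadsheet, here’s a minimal Monte Carlo sketch in Python. The milestone durations and the overrun behavior (when a milestone misses its estimate, it overruns by a uniformly random amount up to 100%) are illustrative assumptions, not values from the sample file:

```python
import random

def simulate_project(estimates, confidence=0.85, max_overrun=1.0, trials=100_000):
    """Return simulated total durations for a project whose milestones
    each hit their estimate with probability `confidence` and otherwise
    overrun by a uniformly random factor up to `max_overrun`."""
    totals = []
    for _ in range(trials):
        total = 0.0
        for est in estimates:
            if random.random() < confidence:
                total += est  # milestone came in as estimated
            else:
                total += est * (1 + random.uniform(0, max_overrun))  # overrun
        totals.append(total)
    return totals

milestones = [10, 15, 20, 10, 25]  # hypothetical estimates, in days
plan = sum(milestones)             # an 80-day critical path
totals = simulate_project(milestones)

p_late = sum(t > plan for t in totals) / len(totals)
p_10_late = sum(t > plan * 1.10 for t in totals) / len(totals)
print(f"P(any slip at all): {p_late:.0%}")
print(f"P(>10% over plan):  {p_10_late:.0%}")
```

Swap in your own milestone estimates and overrun distribution; triangular or lognormal distributions are common choices for modeling task durations, and the shape of that distribution matters at least as much as the point estimates.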

By understanding the probability of various delivery dates and project budgets, we can better plan scope, business models and contingency plans. We can better coordinate with other teams who will play a part in the ultimate success of the project once it’s complete. In short, we can become realists and, as a result, deliver much better business results.

What do you think? Would this sort of tool help in your planning? What other methods have you used to set better expectations and plan more accurately?

Blinded By Certainty

In reality, very little in our lives is absolutely certain. We can be certain the sun will rise in the east and set in the west. We can be certain death will follow life. And we can be pretty darn certain Steve Jobs will wear a black turtleneck and jeans at his next public appearance.

But we’re certain about a lot more things than we should be.

A recent University of Michigan study by Brendan Nyhan and Jason Reifler shows that the more certain we are about particular ideas or situations, the more blind we become to facts that discredit our certainty. In fact, in many cases opposing facts are not just ignored but actually strengthen our prior beliefs. A recent Boston Globe article provides an excellent summary of the research.

From the article:

Most of us like to believe that our opinions have been formed over time by careful, rational consideration of facts and ideas, and that the decisions based on those opinions, therefore, have the ring of soundness and intelligence. In reality, we often base our opinions on our beliefs, which can have an uneasy relationship with facts. And rather than facts driving beliefs, our beliefs can dictate the facts we chose to accept. They can cause us to twist facts so they fit better with our preconceived notions. Worst of all, they can lead us to uncritically accept bad information just because it reinforces our beliefs. This reinforcement makes us more confident we’re right, and even less likely to listen to any new information.

Both the research and the article focus primarily on our political viewpoints, but while reading I couldn’t help but think of people I’ve come across in the business world who were unbelievably certain about their viewpoints based on information or experiences that seemed less than obvious to me. I immediately thought of dozens of people, and I bet you’re thinking of many such people now.

In fact, it was so easy for me to think of other people that fit the bill that I couldn’t help but think the man in the mirror was not immune to this universal human fallacy.

In my experience in the business world, we often assume with undue certainty that past experiences will reflect future possibilities. We say things like, “We tried that before and it didn’t work” or “I know what our customers want.” While our past experiences are extremely valuable and are very important for informing future decisions, we simply don’t have enough of them to blindly ignore changes in circumstances, timing and other variables that could significantly alter results for a new effort.

So how do we overcome our natural instincts in order to make better business decisions?

  1. Be aware of the problems with certainty
    You’ve read this far, so maybe your awareness is already active. I know that I am reassessing all the things I “know” to try to truly separate what is fact and what is assumption. I very much value all my experience, and I know I make better decisions because of what I’ve seen and heard along the way. But I want to make doubly sure that assumptions I make based on past experiences are tested and validated before I turn them into absolute fact.
  2. Actively seek alternate points-of-view
    In my experience, the combination of multiple experiences provides a much more solid foundation for decision making than basing decisions on singular past experiences. Techniques I’ve used, like The Monkey Cage Sessions, are based on incorporating viewpoints from people in different functional areas and levels of the organization. While it can be tempting to discount data or opinions that oppose a decision I might make, I want to be sure I’m not simply rationalizing away opposing information or viewpoints solely because they differ from my biases.
  3. Envision alternate scenarios
    I addressed this some in a previous post, “Obscure and pregnant with conflicting meanings”, where I discussed a technique I called “Scenario Imagination.” I’ve since read an excellent interview with Daniel Kahneman and Gary Klein where they detail a similar and better technique they call “pre-mortem” (which is also a better name than mine). Whenever we make decisions, we have a tendency to assume our decisions are going to produce the best possible results. These pre-mortem techniques have us imagine worst case scenarios to try to dissect potential problems before they occur.
  4. Be flexible and plan for contingencies
    Once we admit we’re not 100% certain, we can move forward with plans that are flexible and able to react to changing conditions. To be clear, I’m not saying we should just be wishy-washy and not make clear decisions. What I’m saying is that we should be open to new facts and be sure we have created an environment that allows us to change course when warranted.

If we’re aware of our certainty biases and take active steps to address them, I believe we can significantly improve our decision-making in our businesses.

What do you think? Upon self-examination, have you turned beliefs into facts in your mind? How would you suggest addressing these biases? Or do you think this is all a load of hooey?

The Monkey Cage Sessions

I’ve seen a lot of strategies and “solutions” fail over the years, primarily because the solution was crafted before the problem being addressed was thoroughly understood.

Many times, the strategy or solution was the result of a brainstorming session filled with type A personalities (me included) ready to make things happen.

You may be familiar with the type of session I’m referencing. Usually, there’s a guru consultant leading the charge. He separates the group into teams and gives them Post-It notes and colored sticker dots. “Write down as many ideas as you can in the next 20 minutes. Don’t think too much. Be creative! No idea is dumb. Stick your ideas on the wall. Now go!” After 20 minutes, a leader from each group presents their best ideas to the rest of the room. Then each person in the room is allowed to vote for maybe six of his or her favorite ideas using the colored sticker dots. A few people are assigned the winning ideas and off we go.

Those types of sessions frustrate me. I’m concerned there’s too much action, too many unspoken assumptions, and not nearly enough serious thinking.

Over the years, I’ve developed a problem solving technique that I’ve found to work a lot better. I call it the Monkey Cage Sessions. The technique is all about thoroughly identifying the problems from all angles before developing carefully considered, thoughtful and collaborative solutions.

It’s got an intentionally silly name because the process should be fun.

Here’s how it works:

Step 1 – Define the problems

We start by gathering a group of cross-functional people – ideally from different levels of the organization – together in a room to talk about the problem or problems we’re trying to solve. This could be as simple as enhancing a Careers page on the corporate website or as complicated as building a complete company strategic plan. It’s important to define the general scope of the problem, but it should be defined fairly loosely so as not to stifle the discussion.

The rules of the meeting are fairly simple. We only discuss problems. No solutions. This is a license to bitch. Let it be cathartic.

I usually stand at the whiteboard, marker in hand, and write down everything everyone says. There is no need to be overly structured here, and anything anyone says is legitimate. We throw it all at the wall and we’ll sort it out later.

Sometimes people want to debate whether or not something another person says is really a problem. If someone said it, it’s at least a perceived problem. It’s legitimate. Also, there is often an attempt to offer an explanation for why a problem exists. That explanation usually describes another problem, so it should be written down too.

People are always tempted to offer solutions, even when they think they’re offering problems. For example, someone might say it’s a problem that we don’t have a content management system. Actually, a content management system might be the solution to a problem. What problem might a content management system solve? Beware of any problem statement that starts with “We need…” and be prepared to break down that need into the problems needing the solution.

Sometimes the problems offered up are very broad and vague. In those cases, it’s important to work with the group to dissect that broad problem into its component parts.

This first session generally uncovers a LOT of problems, but the problem is still usually not completely identified yet. Which leads to…

Step 2 – Categorize the problems

While the chaotic approach of the first session works well to get an initial set of problem descriptions, it’s important to create some order in preparation for the problem solving stage. So Step 2 involves writing down all of the problems and sorting them into logical categories. I don’t have any pre-determined set of categories. Instead, I prefer to let the problems listed dictate the categorization.

Step 3 – Widen the circle

We probably have a pretty good description of the problems now, but we’ve also still likely missed some. For Step 3 we send the typed and categorized list of problems to the original group as well as a widened circle of people. The original group will likely have thought of a couple more issues since the day of the meeting, and the new group of people will almost definitely add new problems to the list. Since this is the final stage of problem description, we want to give this step at least a few days to allow the team to think this through as completely as possible.

Step 4 – Develop the solutions

Finally, we can start solving the problems. Woo hoo!

Now it’s time to gather a subset of the original meeting to start working towards solutions. There should be at least a few days between Step 3 and Step 4. We want to give people some time to think over the full problem set. The group should enter the Step 4 meeting with at least some basic solution ideas. There is no need to come into the room with comprehensive solutions that solve every problem on the list, but the solutions considered should certainly attempt to solve as many problems as possible (without causing too many new problems).

I usually find that by this point many of the solutions are fairly obvious. But there should be good discussion about the relative merits of each suggested solution, and the solutions should be measured up against the problem list to determine how comprehensive they are.

I like to end the meeting by assigning people to lead each of the proposed solutions. Obviously, any suggested solution from this session will need to be fleshed out in a lot more detail, and the leader from this meeting is responsible for determining the viability of the solution and then potentially leading the development and ultimate execution to completion.

Subsequent progress is then handled via a separate execution process.

———————————-

I’ve had very good luck over the years using this technique. Some of the primary benefits I’ve found are:

  1. Better understanding of the problems
    As the initial meeting wraps up, most people are inevitably feeling enlightened about the problem. They’ve outwardly expressed their own assumptions (which sometimes even they didn’t know they were making) and they’ve understood the perspectives and assumptions of others. They’ve seen the problem in an entirely new light.
  2. More comprehensive solutions
    The heightened understanding of the problem, combined with the critically important time between steps, allows the team to be more thoughtful in their ideas. Those ideas are usually fairly all-encompassing solutions to start with, but the discussions in Step 4 lead the team to collectively choose the best of the solutions offered.
  3. Better execution
    Solutions are nothing but fancy ideas until they’re executed. And poor execution can cause even the best ideas to fail. The process of fully defining the problems and sharing that work with wide circles of people is an incredibly important stage that sets the foundation for success in execution. When the execution team provides input in the process and understands the basis for the solution, they are far more supportive in the effort. They are also far more prepared to make the daily, detailed decisions that are often the difference between success and failure.

So, that’s the Monkey Cage Sessions. I hope you find it helpful. If you try implementing the process in your business, I’d love to hear how it goes.

What do you think? Would this process work in your organization? Have you ever used a similar process?


Retail: Shaken Not Stirred by Kevin Ertell

