Category: Leadership

The Monkey Cage Sessions

I’ve seen a lot of strategies and “solutions” fail over the years, primarily because the solution was crafted before the problem being addressed was thoroughly understood.

Many times, the strategy or solution was the result of a brainstorming session filled with type A personalities (me included) ready to make things happen.

You may be familiar with the type of session I’m referencing. Usually, there’s a guru consultant leading the charge. He separates the group into teams and gives them Post-It notes and colored sticker dots. “Write down as many ideas as you can in the next 20 minutes. Don’t think too much. Be creative! No idea is dumb. Stick your ideas on the wall. Now go!” After 20 minutes, a leader from each group presents their best ideas to the rest of the room. Then each person in the room is allowed to vote for maybe six of his or her favorite ideas using the colored sticker dots. A few people are assigned the winning ideas and off we go.

Those types of sessions frustrate me. I’m concerned there’s too much action, too many unspoken assumptions, and not nearly enough serious thinking.

Over the years, I’ve developed a problem-solving technique that I’ve found works a lot better. I call it the Monkey Cage Sessions. The technique is all about thoroughly identifying the problems from all angles before developing carefully considered, thoughtful and collaborative solutions.

It’s got an intentionally silly name because the process should be fun.

Here’s how it works:

Step 1 – Define the problems

We start by gathering a group of cross-functional people – ideally from different levels of the organization – together in a room to talk about the problem or problems we’re trying to solve. This could be as simple as enhancing a Careers page on the corporate website or as complicated as building a complete company strategic plan. It’s important to define the general scope of the problem, but it should be defined fairly loosely so as not to stifle the discussion.

The rules of the meeting are fairly simple. We only discuss problems. No solutions. This is a license to bitch. Let it be cathartic.

I usually stand at the whiteboard, marker in hand, and write down everything everyone says. There is no need to be overly structured here, and anything anyone says is legitimate. We throw it all at the wall and we’ll sort it out later.

Sometimes people want to debate whether or not something another person says is really a problem. If someone said it, it’s at least a perceived problem. It’s legitimate. Also, there is often an attempt to offer an explanation for why a problem exists. That explanation usually points to another, underlying problem, so it should be written down too.

People are always tempted to offer solutions, even when they think they’re offering problems. For example, someone might say it’s a problem that we don’t have a content management system. Actually, a content management system might be the solution to a problem. What problem might a content management system solve? Beware of any problem statement that starts with “We need…” and be prepared to break down that need into the problems needing the solution.

Sometimes the problems offered up are very broad and vague. In those cases, it’s important to work with the group to dissect that broad problem into its component parts.

This first session generally uncovers a LOT of problems, but the problem is still usually not completely identified yet. Which leads to…

Step 2 – Categorize the problems

While the chaotic approach of the first session works well to get an initial set of problem descriptions, it’s important to create some structure in preparation for the problem-solving stage. So Step 2 involves writing down all of the problems and sorting them into logical categories. I don’t have any pre-determined set of categories. Instead, I prefer to let the problems listed dictate the categorization.

Step 3 – Widen the circle

We probably have a pretty good description of the problems now, but we’ve also still likely missed some. For Step 3 we send the typed and categorized list of problems to the original group as well as a widened circle of people. The original group will likely have thought of a couple more issues since the day of the meeting, and the new group of people will almost definitely add new problems to the list. Since this is the final stage of problem description, we want to give this step at least a few days to allow the team to think this through as completely as possible.

Step 4 – Develop the solutions

Finally, we can start solving the problems. Woo hoo!

Now it’s time to gather a subset of the original meeting to start working towards solutions. There should be at least a few days between Step 3 and Step 4. We want to give people some time to think over the full problem set. The group should enter the Step 4 meeting with at least some basic solution ideas. There is no need to come into the room with comprehensive solutions that solve every problem on the list, but the solutions considered should certainly attempt to solve as many problems as possible (without causing too many new problems).

I usually find that by this point many of the solutions are fairly obvious. But there should be good discussion about the relative merits of each suggested solution, and the solutions should be measured up against the problem list to determine how comprehensive they are.

I like to end the meeting by assigning people to lead each of the proposed solutions. Obviously, any suggested solution from this session will need to be fleshed out in a lot more detail, and the leader from this meeting is responsible for determining the viability of the solution and then potentially leading its development and ultimate execution to completion.

Subsequent progress is then handled via a separate execution process.

———————————-

I’ve had very good luck over the years using this technique. Some of the primary benefits I’ve found are:

  1. Better understanding of the problems
    As the initial meeting wraps up, most people are inevitably feeling enlightened about the problem. They’ve outwardly expressed their own assumptions (which sometimes even they didn’t know they were making) and they’ve understood the perspectives and assumptions of others. They’ve seen the problem in an entirely new light.
  2. More comprehensive solutions
    The heightened understanding of the problem, combined with the critically important time between steps, allows the team to be more thoughtful in their ideas. Those ideas are usually pretty all-encompassing solutions to start with, and the discussions in Step 4 lead the team to collectively choose the best of the best of the solutions offered.
  3. Better execution
    Solutions are nothing but fancy ideas until they’re executed. And poor execution can cause even the best ideas to fail. The process of fully defining the problems and sharing that work with wide circles of people is an incredibly important stage that sets the foundation for success in execution. When the execution team provides input in the process and understands the basis for the solution, they are far more supportive in the effort. They are also far more prepared to make the daily, detailed decisions that are often the difference between success and failure.

So, that’s the Monkey Cage Sessions. I hope you find it helpful. If you try implementing the process in your business, I’d love to hear how it goes.

What do you think? Would this process work in your organization? Have you ever used a similar process?


Click (not the one you think) to success

In my experience, the most important factor for success in business is the ability to interact well with other people. Leadership skills, financial skills and technical skills all matter a lot, but they don’t amount to a hill of beans without solid people skills.

The reality is none of us can be successful completely on our own. We need the help of other people — be they peers, staff, managers, vendors or business partners — to successfully accomplish our tasks and goals.

Human relationships are more complicated than Wall Street financial schemes, but we often take interpersonal skills for granted. We rarely study them to the degree we study financial or technical skills. After all, we’ve been talking to people all our lives. We’re experienced. But I’ll argue there are subtleties that make all the difference, and they’re worth studying.

In my opinion, the best business book ever written is How to Win Friends and Influence People by Dale Carnegie — and it’s actually not even classified as a business book. I’ve never read a better guide to the basics of interacting effectively with people.

But I just finished a book that will take its place nicely alongside the Carnegie classic on my bookshelf.

Click: The Magic of Instant Connections by Ori and Rom Brafman (authors of Sway, one of my favorite books from last year) explores the factors or “accelerators” that exist when people “click” with each other. We’ve all had those instant connections with people in our lives, and those types of connections generally lead to powerful and productive relationships. While the Brafmans dig into both the personal and business nature of those connections, for purposes of this post I’ll focus on the business benefits of understanding and fostering such connections.

The book covers a wide range of connection accelerators, more than I could ever cover in this space, so I’ll just address a few that really stood out to me:

Proximity
Simple physical proximity can make a huge difference in our ability to connect with others. A study of a large number of military cadets found that 9 out of 10 cadets formed close relationships with the cadets seated directly next to them in alphabetical seat assignments. Another study found that 40% of students living in randomly assigned dorms named their next-door neighbor as the person they most clicked with, but that percentage dropped by half when considering the student just two doors away. Perhaps more startling, the students who lived in the middle of a hall were considerably more likely to be popular than those living at the end of a hall.

Why?

The authors explain that these connections are often driven by “spontaneous conversation…Over time, these seemingly casual interactions with people can have long-term consequences.”

I think many of us have instinctively understood the value of placing working teams in close proximity to each other. I’ve personally always attributed that value to the working conversations that are overheard and allow various members of the team to better understand and communicate issues about the work. But maybe that close proximity is also allowing people to better connect with each other. Maybe those connections allow us to better relate to each other and give each other the benefit of the doubt. Looking back at my career, I can think of many instances where office moves have coincided with strengthening or straining my working relationships with people.

Proximity is more important than I ever thought. We should carefully consider office layouts to foster the right types of connections. If close proximity is not possible for certain teams or people, we should understand the negative effects of separation and look for other ways to foster the connection.

Resonance
Resonance “results from an overwhelming sense of connection to our environment that deepens the quality of our interactions.” Huh? For example, the book reports that we’re 30 times more likely to laugh at a joke in the presence of others than if we hear it alone. My friend and colleague Jeff Dwoskin moonlights as a stand-up comedian, and he once explained to me that the difference between a good comedy club and a bad comedy club is the arrangement of audience seating. When tables are close together, people laugh more. When there are lots of booths that separate the audience into tiny groups, it’s much harder to get a laugh and keep the funny going.

Many companies swear by their open seating arrangements. Rich Sheridan, founder of Ann Arbor-based Menlo Innovations, seats his agile development teams at open tables together. No cubes. No walls. He says it’s a huge key to their success. Does that work for every working team in all situations? I doubt it. But certainly working environments have an impact on working relationships and their resulting productivity, and resonance is a concept worth considering.

Similarity
“No matter what form it takes, similarity leads to greater likability…Once we accept people into our in-group, we start seeing them in a different light: we’re kinder to them, more generous.”

Kinder. More generous. Those sound like good bases for effective working relationships. It’s amazing how finding common ground can bring teams closer and help them work more effectively together. Sure, those of us working for the same company in the same industry all have industry and company in common, but it seems like the more personal similarities are more likely to bring people together. For that reason, we should encourage water cooler chats and other personal interactions in the work place. Everything in moderation, for sure, but a little personal time can actually end up improving productivity by reducing stress and misinterpretations that lead to unproductive miscommunications. The book reports that a “Finnish health survey conducted on thousands of employees between 2000 and 2003 revealed that those employees who had experienced a genuine sense of community at work were healthier psychologically.”

—————————————-

“Common bonds and that sense of community don’t just foster instant connections — they help to make happier individuals.” The Brafmans provide numerous examples of teams that performed significantly better than others primarily due to the interpersonal dynamics of their members. We simply cannot succeed in life without the support of other people. It’s worth taking the time to understand how to improve those relationships for the betterment of all parties. And pick up Click; it’s well worth the read.

What do you think? Is this all hogwash? Do you have stories of how personal relationships have led to success in your life?

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had regarding the post sparked some great discussions about the difficulties in achieving FAME. Initially, the focus of those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin at the beginning. Hire analysts, not reporters.
    This isn’t a slam on reporters; it’s just recognition that the mindset and skill set needed for gathering and reporting on data is different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need strong left-brain orientation and analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than use those moments as opportunities to get on the same page with each other. Web Analytics Demystified‘s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved about what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of averages and of using them in particular situations, particularly when stacking them on top of each other. Just the first chapter of the book The Flaw of Averages does an excellent job explaining the general problems.
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time has passed since our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie with, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns, but on a more positive note it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the heads of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As we stated in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matters. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case, and every point prior to this one comes into play.

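The averages problem in #7 is easy to see with a quick simulation. Here’s a minimal sketch in Python using an entirely hypothetical retail scenario (the demand range, capacity and margin numbers are made up for illustration): when the outcome depends nonlinearly on an uncertain input, the outcome calculated at the average input is not the average outcome.

```python
# A minimal sketch of the "Flaw of Averages" idea from point #7.
# The scenario (demand range, capacity, margin) is hypothetical.
import random

random.seed(7)

CAPACITY = 100   # units we can stock
MARGIN = 10      # profit per unit sold

def profit(demand):
    # Profit is nonlinear: we can never sell more than we stocked.
    return MARGIN * min(demand, CAPACITY)

# Uncertain demand: uniform between 50 and 150 units, so the average is 100.
demands = [random.uniform(50, 150) for _ in range(100_000)]
avg_demand = sum(demands) / len(demands)

# A plan built on "the number" (the single average input)...
profit_at_avg = profit(avg_demand)   # analytically ~1,000

# ...versus the average outcome across all the scenarios.
avg_profit = sum(profit(d) for d in demands) / len(demands)   # analytically 875

print(f"profit at average demand: {profit_at_avg:,.0f}")
print(f"average profit:           {avg_profit:,.0f}")
```

The plan based on the single average number overstates expected profit by roughly 12%, because downside scenarios (demand below capacity) hurt while upside scenarios can’t help. That asymmetry is exactly what ranges and probabilities capture and what a single number hides.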
The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our business decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

“We tried that before and it didn’t work”

“We tried that before and it didn’t work.”

Man, I’ve heard that phrase a lot in my life. And truth be told, I’ve spoken it more than I care to admit.

But when something fails once in the past (or even more than once) should it be doomed forever?

I was lucky enough to hear futurist Bob Johansen speak last week at Resource Interactive’s excellent iCitizen conference, and he said something that really stuck with me:

“Almost nothing that happens in the future is new; it’s almost always something that has been tried and failed in the past.”

It’s so true. Think about Apple’s recent successes. MP3 players floundered before the iPod came along. Smartphones existed in limited fashion before the iPhone changed the landscape. And tablet computers had been an unrealized dream for quite some time. In discussing the tablet computer in 2001, Bill Gates famously said that “within five years I predict it will be the most popular form of PC sold in America.” When that didn’t happen, it wasn’t hard to find people predicting the tablet’s failure: “The Tablet? It isn’t RIP. But it’s certainly never going to be the noise Bill Gates thought.” But then along came the iPad and its million units sold in the first month alone. And don’t get me started on e-books, which many loudly proclaimed were bound to fail. Jeff Bezos begs to differ.

We humans have this tendency to throw the baby out with the bathwater when something fails.

But the reality is that the success of any new idea — be it a product, a promotional idea, a merchandising technique, a sales tactic or website functionality —  is dependent on many different variables. Execution matters a lot. But we’re also dependent on many other situational contexts in the idea’s ecosystem, like timing, audience/customers, design, the economy, and the general randomness of life. Even slight tweaks to any of those variables can be the difference between success and failure.

In other words, we shouldn’t automatically assume a past failure of an idea means the idea was bad. To be clear, I’m not suggesting there aren’t bad ideas that deserve to remain in the trash heap. However, we should at least break down the failure of an idea that we must have considered worthy at one point. (Why else would we have tried it in the first place?) What went wrong and what went right? Was it the execution? The positioning? The audience? Did we even have enough data points in our measurement that our findings of failure are statistically significant? Did it really fail?

Once we’ve broken the failure of the idea down into its component parts, we’ll have a better sense of whether or not the idea itself was at fault. We’ll have a much better understanding of the problems we would face if we tried it again, and that better understanding will give us a better platform from which to base our next attempt if we so desire. We’ve all heard the stories of Thomas Edison’s thousands of failures before he finally got the incandescent light bulb right. Would we all be in the dark today if he had given up?

What do you think? Have you had good ideas junked because of past failures? Was it the idea or something else?

Bought Loyalty vs. Earned Loyalty

Acquiring new customers is hard work, but turning them into loyal customers is even harder. The acquisition efforts can usually come almost solely from the Marketing department, but customer retention takes a village. And all those villagers have to march to the beat of a strategy that effectively balances the concepts of bought loyalty and earned loyalty.

I first heard the concepts of bought and earned loyalty many years ago in a speech given by ForeSee Results CEO Larry Freed, and those concepts stuck with me.  They’re not mutually exclusive. In the most effective retention strategies I’ve seen, bought loyalty is a subset of a larger earned loyalty strategy.

So let’s break each down a bit and discuss how they work together.

Bought loyalty basically comes in the form of promotional discounts. We temporarily reduce prices in the form of sales or coupons in order to induce customers to shop with us right away.

Bought loyalty has lots of positives. It’s generally very effective at increasing top line sales immediately (especially in down economies), and customers love a good deal. It’s also pretty easy to measure the improvement in sales during a short promotional period, and sales growth feels good. Really good.

And those good feelings are mighty addictive.

But as with most addictions, the negative effects tend to sneak up on us and punch us in the face. The 10% quarterly offers become 15% monthly offers and then 20% weekly offers as customers wait for better and better deals before they shop. Top line sales continue to grow only at the cost of steadily reduced margins. Breaking the habit comes with a lot of pain as customers trained to wait for discounts simply stop shopping. Bought loyalty, by itself,  is fickle.

But it doesn’t have to go down that way.

We can avoid a bought loyalty slippery slope when we incorporate bought loyalty tactics as part of a larger earned loyalty strategy.

We earn our customers’ loyalty when we meet not only their wants but their needs. After all, retail is a service business. We have to learn a lot about our customers to know what those wants and needs are so that we can align our offerings to meet them. Which, of course, is easy to say and much more difficult to do. But do it we must.

To earn loyalty, we have to provide great service and convenience for our customers. But we have to know how our customers define “great service” and “convenience” and ensure we’re delivering to those definitions. Earning loyalty means offering relevant assortments and personalized messaging, but it’s only by truly understanding our customers that we can know what “relevant” and “personalized” mean to them. And a little bit of bought loyalty through truly valuable promotions can provide an occasional kick start, but we have to know what “valuable promotion” means to our customers.

We earn loyalty when the experience we provide our customers meets or even exceeds their expectations. As such, our earned loyalty retention strategies have to start before we’ve even acquired the customer. If we over-promise and under-deliver, we significantly reduce our ability to retain customers, much less move them through the Customer Engagement Cycle we’ve discussed here previously.

But earned loyalty can’t just be the outcome of a marketing campaign. It’s much bigger than that, and it doesn’t happen without the participation of the entire organization. Clearly, front line staff in stores, call center agents and those who create the online customer experience have to be on board. But so too do corporate staff, including merchants for assortment and marketers for messaging. And financial models for earned loyalty strategies inevitably look different than those built solely for bought loyalty.

Since customer expectations are in constant flux, we have to constantly measure how well we’re doing in their eyes. Those measures must be Key Performance Indicators held in as high a regard as revenue, margins, average order size and conversion rates. (Shameless plug: the best way I know to measure customer experience and satisfaction is the ACSI methodology provided by ForeSee Results.) Our customers’ perceptions of our business are reality, and measuring and monitoring those perceptions to determine what’s working and what’s not is the best way to determine a path towards earning loyalty.

Earning loyalty requires clear vision, careful planning, a little bought loyalty, lots and lots of communication (both internally and externally), and some degree of patience to wait for its value to take hold. But when the full power of an earned loyalty Customer Engagement Cycle kicks in, its effects can be mighty. The costs of acquiring and retaining customers drop while sales and margins rise. That’s a nice equation.

What do you think? Have you seen effective retention strategies that build on both bought and earned loyalty? Or do you think it’s all just a crock?

Retail: Shaken Not Stirred by Kevin Ertell
