Category: KPIs

The 3 Levels of Metrics: From Driving Cars to Solving Crimes

You can’t manage what you don’t measure. That’s a long-time business mantra espoused frequently by my good friend Larry Freed. And it’s certainly true. But in an e-commerce world where we can effectively measure our customers’ every footstep, we can easily become overwhelmed by all that data. Because while we can’t manage what we don’t measure, we also can’t manage everything we can measure.

I’ve found it’s best to break our metrics down to three levels in order to make the most of them.

1. KPIs
The first and highest level of metrics contains the Key Performance Indicators or KPIs. I believe strongly there should be relatively few KPIs — maybe five or six at most — and the KPIs should align tightly with the company’s overall business objectives. If an objective is to develop more orders from site visitors, then conversion rate would be the KPI. If another objective is about maximizing the customer experience, then customer satisfaction is the right metric.

In addition to conversion rate and customer satisfaction, a set of KPIs might include metrics like average order value (AOV), market share, number of active customers, task completion rate or others that appropriately measure the company’s key objectives.
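As a concrete illustration, the two most common of these KPIs reduce to simple ratios. Here's a minimal sketch with made-up numbers (the visit, order and revenue figures are purely hypothetical):

```python
# Hypothetical monthly site totals -- illustrative numbers only.
visits = 250_000          # total site visits
orders = 5_000            # completed orders
revenue = 425_000.00      # total order revenue in dollars

# Conversion rate: the share of visits that end in an order.
conversion_rate = orders / visits

# Average order value (AOV): revenue per completed order.
aov = revenue / orders

print(f"Conversion rate: {conversion_rate:.1%}")  # 2.0%
print(f"AOV: ${aov:.2f}")                         # $85.00
```

The simplicity is the point: a KPI should be a ratio anyone in the company can compute and understand at a glance.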

I’ve found the best KPI sets are balanced so that the best way to drive the business forward is to find ways to improve all of the KPIs, which is why businesses often have balanced scorecards. The reality is, we could find ways to drive any one metric at the expense of the others, so finding the right balance is critical. Part of that balance is ensuring that the most important elements of the business are considered, so it’s important to have some measure of employee satisfaction (because employee satisfaction leads to customer satisfaction) and some measure of profitability. Some people look at a metric like Gross Margin as the profitability measure, but I prefer something deeper down the financial statement like Contribution Margin or EBITDA because they take other cost factors like ad spend, operational efficiencies, etc. into account and can be affected by most people in the organization.

It’s OK for KPIs to be managed at different frequencies. We often talk about metrics dashboards, and a car’s dashboard is the right metaphor. Car manufacturers have limited space to work with, so they include only the gauges that most help the driver operate the car. The speedometer is managed frequently while operating the car. The fuel gauge is critically important, but it’s monitored only occasionally (and more frequently when it’s low). Engine temperature is a hugely important measure for the health of the car, but we don’t need to do much with it until there’s a problem. Business KPIs can be monitored in a similarly varied frequency, so it’s important that we don’t choose them based on their likelihood to change over some specific time period. It’s more important to choose the metrics that most represent the health of the business.

2. Supporting Metrics
I call the next level of metrics Supporting Metrics. Supporting Metrics are tightly aligned with KPIs, but they are more focused on individual functions or even individual people within the organization. A KPI like conversion rate can be broken down by various marketing channels pretty easily, for example. We could have email conversion rate, paid search conversion rate, direct traffic conversion rate, etc. I also like to look at True Conversion Rate, which measures conversion against intent to buy.
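That channel breakdown can be sketched in a few lines. The per-channel counts below are invented for illustration, and the "intent" figures are an assumption: a True Conversion Rate needs some measure of how many visitors actually came intending to buy, which would typically come from survey data rather than the analytics package itself:

```python
# Hypothetical per-channel traffic. The intent counts are assumed to come
# from survey data (visitors who said they arrived intending to purchase).
channels = {
    #               visits, orders, visits_with_intent
    "email":       ( 40_000, 1_600,  8_000),
    "paid search": ( 90_000, 1_800, 18_000),
    "direct":      (120_000, 3_600, 30_000),
}

for name, (visits, orders, intent) in channels.items():
    overall_cr = orders / visits   # standard per-channel conversion rate
    true_cr = orders / intent      # conversion measured against intent to buy
    print(f"{name:12s} conversion {overall_cr:.1%}   true conversion {true_cr:.1%}")
```

Notice how the two measures can tell different stories: a channel with a low overall conversion rate may still convert its purchase-minded visitors very well, which points to a traffic-mix issue rather than a site issue.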

Supporting metrics should be an individual person’s or functional area’s scorecard to measure how their work is driving the business forward. Ensuring supporting metrics are tightly aligned with the overall company objectives helps to ensure work efforts throughout the organization are tightly aligned with the overall objectives.

As with KPIs, we want to ensure any person or functional area isn’t burdened with so many supporting metrics that they become unmanageable. And this is an area where we frequently fall down because all those metrics and data points are just so darn alluring.

The key is to recognize the all-important third level of metrics. I call them Forensic Metrics.

3. Forensic Metrics
Forensic Metrics are just what they sound like. They’re those deep-dive metrics we use when we’re trying to solve a problem we’re facing in KPIs or Supporting Metrics. But there are tons of them, and we can’t possibly manage them on a day-to-day basis. In the same way we don’t dust our homes for prints every day when we come home from work, we can’t try to pay attention to forensic metrics all the time. If we come home and find our TV missing, then dusting for prints makes a lot of sense. If we find out conversion rate has dropped suddenly, it’s time to dig into all sorts of forensic metrics like path analysis, entry pages, page views, time on site, exit links, and the list goes on and on.

Site analytics packages, data warehouses and log files are chock full of valuable forensic metrics. But those forensic metrics should not find their way onto daily or weekly managed scorecards. They can only serve to distract us from our primary objectives.

—————————————————–

Breaking down our metrics into these three levels takes some serious discipline. When we decide we’re only going to focus on a relatively small number of metrics, we’re doing ourselves and our businesses a big favor. But it’s really important we’re narrowing that focus on the metrics and objectives that are most driving the business forward. But, heck, we should be doing that anyway.

What do you think? How do you break down your metrics?

 

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had regarding the post sparked some great discussions about the difficulties in achieving FAME. Initially, the focus of those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin in the beginning. Hire analysts not reporters.
    This isn’t a slam on reporters; it’s just recognition that the mindset and skill set needed for gathering and reporting on data is different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need strong left-brain orientation, and analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than use those moments as opportunities to get on the same page with each other. Web Analytics Demystified‘s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved about what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of using averages in particular situations, particularly when stacking them on top of each other. Just the first chapter of the book The Flaw of Averages does an excellent job explaining the general problems.
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time has passed from our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns, but on a more positive note it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the head of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As we stated in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matters. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case, and every point prior to this one comes into play.
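To make item 7 above concrete, here’s a tiny simulation of the “flaw of averages” that book describes: building a plan on the average input gives a different (and rosier) answer than averaging over the whole range of possible inputs. The demand, capacity and price numbers are invented for illustration:

```python
import random

random.seed(0)

capacity = 100     # units we can fulfill
price = 1.0        # revenue per unit sold

def revenue(demand):
    # We can't sell more units than we can fulfill.
    return min(demand, capacity) * price

# Demand is uncertain: uniform between 50 and 150 (average = 100).
demands = [random.uniform(50, 150) for _ in range(100_000)]
avg_demand = sum(demands) / len(demands)

# "Just give me the number": a plan built on the average input.
plan = revenue(avg_demand)                                    # ~100

# Averaging over the range of outcomes instead.
expected = sum(revenue(d) for d in demands) / len(demands)    # ~87.5

print(f"plan at average demand: {plan:.1f}")
print(f"true expected outcome:  {expected:.1f}")
```

Because revenue is capped at capacity, upside demand can’t offset downside demand, so the expected outcome comes in meaningfully below the single-number plan. That gap is exactly why ranges and probabilities beat point estimates.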

The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our business decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

How to achieve FAME in analysis

In retail, and in web retail in particular, we are drowning in data. We can and do track just about everything, and we’re constantly poring over the numbers. But I sometimes worry that the abundance of data is so overwhelming that it often leads to a shortage of insight. All that data is worthless (or worse) if we don’t produce thoughtful analysis and then carefully craft communication of our findings in ways that enable decision makers to react to the data rather than try to analyze it themselves.

The most effective analyses I’ve seen have remarkably similar attributes, and they happen to work into a nice, easy-to-remember acronym — F.A.M.E.

Here, in my experience, are the keys to achieving FAME in analysis:

Focused

Any finding should be fact based and clear enough that it can be stated in a succinct format similar to a newspaper headline. It’s OK to augment the main headline with a sub-headline that adds further clarification, but anything more complicated is not nearly focused enough to be an effective finding.

For example, an effective finding might be, “Visitors arriving from Google search terms are converting 23% lower than visitors arriving from email.” An accompanying sub-heading might further clarify the statement with something like, “Unclear value proposition, irrelevant landing pages and high first time visitor counts are contributing factors.”
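The 23% in that headline is a relative difference between the two channels’ conversion rates, not a difference in percentage points. A quick sketch, with hypothetical rates chosen to reproduce the figure:

```python
# Hypothetical channel conversion rates chosen to reproduce the headline.
email_cr = 0.0400     # 4.00% of email visitors convert
search_cr = 0.0308    # 3.08% of Google search visitors convert

# Relative drop: how much lower search converts compared to email.
relative_drop = (email_cr - search_cr) / email_cr

print(f"Search converts {relative_drop:.0%} lower than email")  # 23%
```

Stating the baseline (email, here) in the headline keeps the figure unambiguous; “23% lower” without a baseline invites the audience to do its own, possibly different, math.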

All subsequent data presented should support these headlines. Any data that is interesting but irrelevant to the finding should be excluded from the analysis. In other words, remove the clutter so the main points are as clear as possible.

Actionable

Effective findings and their accompanying recommendations are specific enough in focus and narrow enough in scope that decision makers can reasonably develop a plan of action to address them. The finding mentioned above regarding Google search visitors fits the bill, and a recommendation that focuses on modifying landing pages to match search terms would be appropriate. Less appropriate would be a vague finding like “customers coming from Google search terms are viewing more pages than customers coming from email campaigns” accompanied by an equally vague recommendation to “consider ways to reduce pages clicked by Google search campaign visitors.” Is viewing more pages good or bad? Why? The recommendation in this case insinuates that it’s bad, but it’s not clear why. What’s the benefit of taking action in quantifiable terms?

Truly actionable analysis doesn’t burden decision makers with connecting the data to executable conclusions. In other words, the thought put into the analysis should make the diagnosis of problems clear so that decision makers can get to work on determining necessary solutions.

Manageable

The number of findings in any set of analyses should be contained enough that the analyst and anyone in the audience can recite the findings and recommendations (but not all the supporting details) in 30 seconds. Sometimes, less is more. This constraint helps ease the subsequent communication that will be necessary to reasonably react to the findings and plan and execute a response. Conversely, information overload obscures key messages and makes it difficult for teams to coalesce around key issues.

Enlightening

Last, but most certainly not least, effective findings are enlightening. Effective analyses should present — and support with clear, credible data — a view of the business that is not widely held. They should, at the very least, elicit a “hmmm…” from the audience and ideally a “whoa!” They should excite decision makers and spur them to action.

————————————–

The FAME attributes are not always easy to achieve. They require a lot of hard thought, but the value of clear, data-supported insight to an organization is immense.

The most effective analysts I’ve seen achieve FAME on a regular basis. They have a thorough understanding of the business’ objectives, and they develop their insights to help decision makers truly understand what’s working and what’s not working. And then they lay out clear opportunities for improvement. That’s data-driven business management at its best.

What do you think? What attributes do you find key in effective analyses?

Beyond the Buy Button: The Huge Additional Value of Retail Websites

Sometimes, I think we focus so intensely on the e-commerce sales of our sites that we miss the overwhelming additional value they bring to our businesses. Retail websites, particularly for multi-channel retailers, are more multi-dimensional than any other channel and any other brand vehicle. We fail to recognize the value of these sites beyond the buy button at our own peril.

Some are starting to see the additional value. During her presentation at the Retail Innovation and Marketing conference in San Francisco last week, Express Chief Marketing Officer Lisa Gavales talked about her epiphany surrounding Express.com’s value to the brand. It was Express.com’s traffic numbers that sparked the light bulb in her head. She realized that Express.com got as much traffic in a week as all of the Express stores combined. In other words, half of Express brand interactions were occurring on Express.com. Lisa immediately understood the marketing value of such high levels of engagements from Express’ customers. So much so, in fact, that she came to a conclusion she deemed controversial during her presentation — Express.com should be a marketing vehicle first and a direct sales channel second.

After the presentation, my good friend Scott Silverman, Shop.org’s Executive Director, asked me if I agreed with Lisa’s positioning of Express.com. I rambled on a bit before essentially saying “yes and no.” I’ll now take this space for what I hope is a more coherent answer.

I completely agree with Lisa that retail websites are much more valuable to the overall business than their direct sales indicate. Applying resources and strategic importance to sites based only on their percentage of sales is a mistake that could prove very costly in the long run. Customers use our sites for many reasons beyond direct transactions and our failure to highly prioritize those intentions is a disservice to our customers that will affect our bottom lines. But the value of our sites goes well beyond just marketing and direct sales and simply switching priorities is not enough. Furthermore, I worry that prioritizing marketing higher than everything else will lead to the types of conversion problems I previously discussed in my post “Conversion tip: Don’t block the product with window signs.”

Let’s consider some of the many values a retail website provides for a multi-channel retailer:

  • Marketing vehicle
    As Lisa noted, the marketing value of our websites is immense. We are getting tons of traffic, and each engagement is an opportunity to enhance our brands. (Of course, if we’re not careful, the opposite is also true.) Websites are a highly efficient way to strengthen the Customer Engagement Cycle. Both online and offline marketing vehicles can direct customers to our sites to further enhance our messages. Our sites are also a great way to tell people about our stores on both a collective and an individual level.
  • Merchandising vehicle
    Customers come in droves to our sites to learn more about the products we sell, whether they intend to buy online, over the phone or in our stores. Our sites have to essentially be our best and most knowledgeable merchants. They have to lead customers to the right products for them and provide the right information for them to make a selection, regardless of the channel where the purchase takes place.  This is a huge, often untapped, opportunity for quality merchants to reach their customers and sell them the right products.
  • Customer research tool
    This is a bit of a double entendre. As mentioned above, our customers are certainly using our sites for their research. But we can also use our sites to learn more about our customers. There is a wealth of information to be had about what our customers are doing and what they desire. Not only can we see what they purchase, but we can also use web analytics to see what they look at. With tools like those provided by ForeSee Results (shameless plug), we can also know what they are thinking, what they are intending to do, and how they are perceiving our brands. All of this can be done fairly easily and inexpensively in ways that are either impossible or impossibly expensive in the physical world.
  • Customer relationship enabler
    We can continue to build relationships with our customers by applying what we’ve learned above to give them better experiences. The applied knowledge of our merchants combined with the long-lasting memory of our websites should allow us to constantly serve our customers better. As we focus on building those relationships with more personalized site experiences, more informed personal interactions via contact centers and in-store, and more relevant email and direct mail communications, we will build stronger loyalty with our customers.
  • Community builder
    Websites also give us ways to connect our customers with each other. Our brands can act as a central hub for like-minded customers to find each other and help each other find products that meet their needs or solve their problems. How great is that? We can make these connections both via our own sites and via social networks like Facebook. Either way, it’s another way for our brands to provide services for our customers. Our sites can also allow our brands to be more localized by providing additional vehicles for our stores to connect with their communities.
  • Sales driver — in-store and online
    And, of course, we can sell stuff. We can sell lots and lots of stuff online. Our sites are still not where they need to be for maximum usability, so we have plenty of opportunities to improve their ability to sell directly. But we also have lots and lots of opportunity to drive traffic into our stores. We can show inventory; we can let people buy or reserve online and pick up in-store; we can host coupons; we can help people find a store close to them; we can provide reviews and recommendations to people standing in our stores (whether via kiosks or mobile phones). The possibilities are endless.

These site values are not mutually exclusive. Their value in combination is exponentially higher than any one individual value. Therefore, it’s critically important to consider our sites holistically when determining their place and priority in our strategic plans. We need to consider their combined value when we determine allocation of resources and organizational structure.

Too often, though, resources and executive attention are not apportioned to the site according to this additional value. And we often don’t even measure these additional value points (which might explain the lack of resources and executive attention). If our most important measures of our sites revolve solely around direct sales, we will continue to minimize the importance of all other values of our sites.

I believe the multichannel retailers with the brightest futures in this new decade will be those who fully embrace and leverage the multi-dimensional value of their websites.

What do you think? How is your site valued in your organization? What retailers do you think are most recognizing the additional value of their sites?


The Missing Links in the Customer Engagement Cycle

The Customer Engagement Cycle plays a central role in many marketing strategies, but it’s not always defined in the same way. Probably the most commonly described stages are Awareness, Consideration, Inquiry, Purchase and Retention. In retail, we often think of the cycle as Awareness, Acquisition, Conversion, Retention. In either case, I think there are a couple of key stages that do not receive enough consideration given their critical ability to drive the cycle.

The missing links are Satisfaction and Referral.

Before discussing these missing links, let’s take a quick second to define the other stages:

Awareness: This is basic branding and positioning of the business. We certainly can’t progress people through the cycle before they’ve even heard of us.

Acquisition: I’ve always thought of this as getting someone into our doors or onto our site. It’s a major step, but it’s not yet profitable.

Conversion: This one is simply defined as making a sale. Woo hoo! It may or may not be a profitable sale on its own, but it’s still a significant stage in the cycle.

Retention: We get them to shop with us again. Excellent! Repeat sales tend to be more profitable and almost certainly have lower marketing costs than first purchases.

Now, let’s get to those Missing Links

In my experience, the key to a strong and active customer engagement cycle is a very satisfying customer experience. And while the Wikipedia article on Customer Engagement doesn’t mention Satisfaction as often as I would like, it does include this key statement: “Satisfaction is simply the foundation, and the minimum requirement, for a continuing relationship with customers.”

In fact, I think the quality of the customer experience is so important that I would actually inject it multiple times into the cycle: Awareness, Acquisition, Satisfaction, Conversion, Satisfaction, Retention, Satisfaction, Referral.

Of course, it’s possible to get through at least some of the stages of the cycle without an excellent customer experience. People will soldier through a bad experience if they want the product badly enough or if there’s an incredible price. But it’s going to be a lot harder to retain that type of customer, and if you get a referral, it might not be the type of referral you want.

I wonder if Satisfaction and Referral are often left out of cycle strategies because they are the stages most out of marketers’ control.

A satisfying customer experience is not completely in the marketer’s control. For sure, marketing plays a role. A customer’s satisfaction can be defined as the degree to which her actual experience measures up to her expectations. Our marketing messages are all about expectations, so it’s important that we are compelling without over-hyping the experience. And certainly marketers can influence policy decisions, website designs, etc. to help drive better customer experiences.

In the end, though, the actual in-store or online experience will determine the strength of the customer engagement.

Everyone plays a part in the satisfaction stages. Merchants must ensure advertised product is in stock and well positioned. Store operators must ensure the stores are clean, the product is available on the sales floor and the staff are friendly, enthusiastic and helpful. The e-commerce team must ensure advertised products can be easily found, the site is performing well, product information is complete and useful, and the products are shipped on time and in good condition.

We also have to ensure our incentives and metrics are supporting a quality customer experience, because the wrong metrics can incent the wrong behavior. For example, if we measure an online search engine marketing campaign by the number of visitors generated or even the total sales generated, we can absolutely end up going down the wrong path. We can buy tons of search terms that by their sheer volume will generate lots of traffic and some degree of increased sales. But if those search terms link to the home page or some other page that is largely irrelevant to the search term, the experience will likely be disappointing for the customer who clicked through.

In fact, I wrote a white paper a few months ago, Online Customer Acquisition: Quality Trumps Quantity, that delved into customer experience by acquisition source for the Top 100 Internet Retailers. We found that those who came via external search engines were among the least satisfied customers of those sites with the least likelihood to purchase and recommend. Not good. These low ratings could largely be attributed to the irrelevance of the landing pages from those search terms.

Satisfaction breeds Referral

Referrals or Recommendations are truly wonderful. As I wrote previously, the World’s Greatest Marketers are our best and most vocal customers. They are more credible than we’ll ever be, and the cost efficiencies of acquisition through referral are significantly better than our traditional methods of awareness and acquisition marketing. In my previously mentioned post, I discussed some ways to help customers along on the referral path. But, of course, customers can be pretty resourceful on their own.

We’ve all seen blog posts, Facebook posts or tweets about bad customer experiences. But plenty of positive public commentary can also be found. Target’s and Gap’s Facebook walls have lots of customers expressing their love for those brands. Even more powerful are blog posts some customers write about their experiences. I came across a post yesterday entitled Tales of Perfection that related two excellent experiences the blogger had with Guitar Center and a burger joint called Arry’s. Both stories are highly compelling and speak to the excellent quality of the employees at each business. Nice!

————————————————–

Developing a business strategy, not just a marketing strategy, around the customer engagement cycle can be extremely powerful. It requires the entire company to get on board to understand the value of maximizing the customer experience at every touch point with the customer, and it requires a set of incentives and metrics that fully support strengthening the cycle along the way.

What do you think? How do you think about the customer engagement cycle? How important do you feel the customer experience is in strengthening the cycle? Or do you think this is all hogwash?


Retail: Shaken Not Stirred by Kevin Ertell

