Category: metrics

The 3 Levels of Metrics: From Driving Cars to Solving Crimes

You can’t manage what you don’t measure. That’s a long-time business mantra espoused frequently by my good friend Larry Freed. And it’s certainly true. But in an e-commerce world where we can effectively measure our customers’ every footstep, we can easily become overwhelmed with all that data. Because while we can’t manage what we don’t measure, we also can’t manage everything we can measure.

I’ve found it’s best to break our metrics down to three levels in order to make the most of them.

1. KPIs
The first and highest level of metrics contains the Key Performance Indicators or KPIs. I believe strongly there should be relatively few KPIs — maybe five or six at most — and the KPIs should align tightly with the company’s overall business objectives. If an objective is to develop more orders from site visitors, then conversion rate would be the KPI. If another objective is about maximizing the customer experience, then customer satisfaction is the right metric.

In addition to conversion rate and customer satisfaction, a set of KPIs might include metrics like average order value (AOV), market share, number of active customers, task completion rate or others that appropriately measure the company’s key objectives.

I’ve found the best KPI sets are balanced so that the best way to drive the business forward is to find ways to improve all of the KPIs, which is why businesses often have balanced scorecards. The reality is, we could find ways to drive any one metric at the expense of the others, so finding the right balance is critical. Part of that balance is ensuring that the most important elements of the business are considered, so it’s important to have some measure of employee satisfaction (because employee satisfaction leads to customer satisfaction) and some measure of profitability. Some people look at a metric like Gross Margin as the profitability measure, but I prefer something deeper down the financial statement like Contribution Margin or EBITDA because they take other cost factors like ad spend, operational efficiencies, etc. into account and can be affected by most people in the organization.
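To make that distinction concrete, here’s a minimal sketch in Python. All of the revenue and cost figures are invented purely for illustration; the point is only that contribution margin subtracts variable costs that more of the organization can influence:

```python
# Hypothetical monthly figures for an e-commerce business (illustrative only).
revenue = 1_000_000
cost_of_goods_sold = 600_000
ad_spend = 150_000
fulfillment_costs = 80_000  # shipping, payment fees and other variable costs

# Gross margin considers only the cost of the goods themselves.
gross_margin = (revenue - cost_of_goods_sold) / revenue

# Contribution margin also subtracts variable costs like ad spend and
# fulfillment, so far more people in the organization can affect it.
contribution_margin = (
    revenue - cost_of_goods_sold - ad_spend - fulfillment_costs
) / revenue

print(f"Gross margin:        {gross_margin:.0%}")        # 40%
print(f"Contribution margin: {contribution_margin:.0%}")  # 17%
```

With these made-up numbers, a team that trims ad spend or fulfillment waste moves contribution margin without touching gross margin at all, which is exactly why the deeper metric is more broadly actionable.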

It’s OK for KPIs to be managed at different frequencies. We often talk about metrics dashboards, and a car’s dashboard is the right metaphor. Car manufacturers have limited space to work with, so they include only the gauges that most help the driver operate the car. The speedometer is monitored frequently while operating the car. The fuel gauge is critically important, but it’s monitored only occasionally (and more frequently when it’s low). Engine temperature is a hugely important measure for the health of the car, but we don’t need to do much with it until there’s a problem. Business KPIs can be monitored at similarly varied frequencies, so it’s important that we don’t choose them based on their likelihood to change over some specific time period. It’s more important to choose the metrics that best represent the health of the business.

2. Supporting Metrics
I call the next level of metrics Supporting Metrics. Supporting Metrics are tightly aligned with KPIs, but they are more focused on individual functions or even individual people within the organization. A KPI like conversion rate can be broken down by various marketing channels pretty easily, for example. We could have email conversion rate, paid search conversion rate, direct traffic conversion rate, etc. I also like to look at True Conversion Rate, which measures conversion against intent to buy.
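As a rough sketch of that channel breakdown, here’s how per-channel conversion rates might be computed. The session data is invented for illustration; real data would come from an analytics package or data warehouse:

```python
from collections import defaultdict

# Hypothetical session log: (marketing channel, did the visit convert?)
sessions = [
    ("email", True), ("email", False), ("email", True), ("email", False),
    ("paid_search", False), ("paid_search", True), ("paid_search", False),
    ("direct", True), ("direct", False), ("direct", False), ("direct", False),
]

visits = defaultdict(int)
orders = defaultdict(int)
for channel, converted in sessions:
    visits[channel] += 1
    orders[channel] += converted  # True counts as 1, False as 0

# Each supporting metric is the overall KPI narrowed to one channel.
for channel in visits:
    rate = orders[channel] / visits[channel]
    print(f"{channel:12s} conversion rate: {rate:.1%}")
```

The same slicing pattern applies to any KPI: the supporting metric is just the KPI restricted to one function, channel or person.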

Supporting metrics should be an individual person’s or functional area’s scorecard to measure how their work is driving the business forward. Ensuring supporting metrics are tightly aligned with the overall company objectives helps to ensure work efforts throughout the organization are tightly aligned with the overall objectives.

As with KPIs, we want to ensure any person or functional area isn’t burdened with so many supporting metrics that they become unmanageable. And this is an area where we frequently fall down because all those metrics and data points are just so darn alluring.

The key is to recognize the all-important third level of metrics. I call them Forensic Metrics.

3. Forensic Metrics
Forensic Metrics are just what they sound like. They’re those deep-dive metrics we use when we’re trying to solve a problem we’re facing in KPIs or Supporting Metrics. But there are tons of them, and we can’t possibly manage them on a day-to-day basis. In the same way we don’t dust our homes for prints every day when we come home from work, we can’t try to pay attention to forensic metrics all the time. If we come home and find our TV missing, then dusting for prints makes a lot of sense. If we find out conversion rate has dropped suddenly, it’s time to dig into all sorts of forensic metrics like path analysis, entry pages, page views, time on site, exit links, and the list goes on and on.

Site analytics packages, data warehouses and log files are chock full of valuable forensic metrics. But those forensic metrics should not find their way onto daily or weekly managed scorecards. They can only serve to distract us from our primary objectives.

—————————————————–

Breaking down our metrics into these three levels takes some serious discipline. When we decide we’re only going to focus on a relatively small number of metrics, we’re doing ourselves and our businesses a big favor. But it’s really important that we narrow that focus to the metrics and objectives that most drive the business forward. And, heck, we should be doing that anyway.

What do you think? How do you break down your metrics?

 

11 Ways Humans Kill Good Analysis

In my last post, I talked about the immense value of FAME in analysis (Focused, Actionable, Manageable and Enlightening). Some of the comments on the post and many of the email conversations I had regarding the post sparked some great discussions about the difficulties in achieving FAME. Initially, the focus of those discussions centered on the roles executives, managers and other decision makers play in the final quality of the analysis, and I was originally planning to dedicate this post to ideas decision makers can use to improve the quality of the analyses they get.

But the more I thought about it, the more I realized that many of the reasons we aren’t happy with the results of the analyses come down to fundamental disconnects in human relations between all parties involved.

Groups of people with disparate backgrounds, training and experiences gather in a room to “review the numbers.” We each bring our own sets of assumptions, biases and expectations, and we generally fail to establish common sets of understanding before digging in. It’s the type of Communication Illusion I’ve written about previously. And that failure to communicate tends to kill a lot of good analyses.

Establishing common understanding around a few key areas of focus can go a long way towards facilitating better communication around analyses and consequently developing better plans of action to address the findings.

Here’s a list of 11 key ways to stop killing good analyses:

  1. Begin in the beginning. Hire analysts not reporters.
    This isn’t a slam on reporters; it’s just recognition that the mindset and skill set needed for gathering and reporting on data are different from the mindset and skill set required for analyzing that data and turning it into valuable business insight. To be sure, there are people who can do both. But it’s a mistake to assume these skill sets can always be found in the same person. Reporters need strong left-brain orientation, while analysts need more of a balance between the “just the facts” left brain and the more creative right brain. Reporters ensure the data is complete and of high quality; analysts creatively examine loads of data to extract valuable insight. Finding someone with the right skill sets might cost more in payroll dollars, but my experience says they’re worth every penny in the value they bring to the organization.
  2. Don’t turn analysts into reporters.
    This one happens all too often. We hire brilliant analysts and then ask them to spend all of their time pulling and formatting reports so that we can do our own analysis. Everyone’s time is misused at best and wasted at worst. I think this type of thing is a result of the miscommunication as much as a cause of it. When we get an analysis we’re unhappy with, we “solve” the problem by just doing it ourselves rather than using those moments as opportunities to get on the same page with each other. Web Analytics Demystified’s Eric Peterson is always saying analytics is an art as much as it is a science, and that can mean there are multiple ways to get to findings. Talking about what’s effective and what’s not is critical to our ultimate success. Getting to great analysis is definitely an iterative process.
  3. Don’t expect perfection; get comfortable with some ambiguity
    When we decide to be “data-driven,” we seem to assume that the data is going to provide perfect answers to our most difficult problems. But perfect data is about as common as perfect people. And the chances of getting perfect data decrease as the volume of data increases. We remember from our statistics classes that larger sample sizes mean more accurate statistics, but “more accurate” and “perfect” are not the same (and more about statistics later in this list). My friend Tim Wilson recently posted an excellent article on why data doesn’t match and why we shouldn’t be concerned. I highly recommend a quick read. The reality is we don’t need perfect data to produce highly valuable insight, but an expectation of perfection will quickly derail excellent analysis. To be clear, though, this doesn’t mean we shouldn’t try as hard as we can to use great tools, excellent methodologies and proper data cleansing to ensure we are working from high quality data sets. We just shouldn’t blow off an entire analysis because there is some ambiguity in the results. Unrealistic expectations are killers.
  4. Be extremely clear about assumptions and objectives. Don’t leave things unspoken.
    Mismatched assumptions are at the heart of most miscommunications regarding just about anything, but they can be a killer in many analyses. Per item #3, we need to start with the assumption that the data won’t be perfect. But then we need to be really clear with all involved about what we’re assuming we’re going to learn and what we’re trying to do with those learnings. It’s extremely important that the analysts are well aware of the business goals and objectives, and they need to be very clear about why they’re being asked for the analysis and what’s going to be done with it. It’s also extremely important that the decision makers are aware of the capabilities of the tools and the quality of the data so they know if their expectations are realistic.
  5. Resist numbers for numbers’ sake
    Man, we love our numbers in retail. If it’s trackable, we want to know about it. And on the web, just about everything is trackable. But I’ll argue that too much data is actually worse than no data at all. We can’t manage what we don’t measure, but we also can’t manage everything that is measurable. We need to determine which metrics are truly making a difference in our businesses (which is no small task) and then focus ourselves and our teams relentlessly on understanding and driving those metrics. Our analyses should always focus around those key measures of our businesses and not simply report hundreds (or thousands) of different numbers in the hopes that somehow they’ll all tie together into some sort of magic bullet.
  6. Resist simplicity for simplicity’s sake
    Why do we seem to be on an endless quest to measure our businesses in the simplest possible manner? Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. While the allure of a simple metric is strong, I fear overly simplified metrics are not useful. Our businesses are complex. Our websites are complex. Our customers are complex. The combination of the three is incredibly complex. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless amounts of analysis trying to manage to a metric that doesn’t actually have a cause-and-effect relationship with our financial success. Great metrics might require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
  7. Get comfortable with probabilities and ranges
    When we’re dealing with future uncertainties like forecasts or ROI calculations, we are kidding ourselves when we settle on specific numbers. Yet we do it all the time. One of my favorite books last year was called “Why Can’t You Just Give Me the Number?” The author, Patrick Leach, wrote the book specifically for executives who consistently ask that question. I highly recommend a read. Analysts and decision makers alike need to understand the pros and cons of using averages in particular situations, particularly when stacking them on top of each other. Just the first chapter of the book The Flaw of Averages does an excellent job explaining the general problems.
  8. Be multilingual
    Decision makers should brush up on basic statistics. I don’t think it’s necessary to re-learn all the formulas, but it’s definitely important to remember all the nuances of statistics. As time passes from our initial statistics classes, we tend to forget about properly selected samples, standard deviations and such, and we just remember that you can believe the numbers. But we can’t just believe any old number. All those intricacies matter. Numbers don’t lie, but people lie with, misuse and misread numbers on a regular basis. A basic understanding of statistics can not only help mitigate those concerns, but on a more positive note it can also help decision makers and analysts get to the truth more quickly.

    Analysts should learn the language of the business and work hard to better understand the nuances of the businesses of the decision makers. It’s important to understand the daily pressures decision makers face to ensure the analysis is truly of value. It’s also important to understand the language of each decision maker to shortcut understanding of the analysis by presenting it in terms immediately identifiable to the audience. This sounds obvious, I suppose, but I’ve heard way too many analyses that are presented in “analyst-speak” and go right over the heads of the audience.

  9. Faster is not necessarily better
    We have tons of data in real time, so the temptation is to start getting a read almost immediately on any new strategic implementation, promotion, etc. Resist the temptation! I wrote a post a while back comparing this type of real time analysis to some of the silliness that occurs on 24-hour news networks. Getting results back quickly is good, but not at the expense of accuracy. We have to strike the right balance to ensure we don’t spin our wheels in the wrong direction by reacting to very incomplete data.
  10. Don’t ignore the gut
    Some people will probably vehemently disagree with me on this one, but when an experienced person’s gut says something is wrong with the data, we shouldn’t ignore it. As we stated in #3, the data we’re working from is not perfect, so “gut checks” are not completely out of order. Our unconscious or hidden brains are more powerful and more correct than we often give them credit for. Many of our past learnings remain lurking in our brains and tend to surface as emotions and gut reactions. They’re not always right, for sure, but that doesn’t mean they should be ignored. If someone’s gut says something is wrong, we should at the very least take another honest look at the results. We might be very happy we did.
  11. Presentation matters a lot.
    Last but certainly not least, how the analysis is presented can make or break its success. Everything from how slides are laid out to how we walk through the findings matters. It’s critically important to remember that analysts are WAY closer to the data than everyone else. The audience needs to be carefully walked through the analysis, and analysts should show their work (like math proofs in school). It’s all about persuading the audience and proving a case, and every point prior to this one comes into play.
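The warning in #7 about settling on specific numbers can be sketched with a small simulation. This is a hypothetical example in the spirit of The Flaw of Averages: the stock level, unit profit and demand range are all invented, but the phenomenon is general: when outcomes are capped, planning on the average demand overstates the profit you should actually expect.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical promotion: we stock 100 units at $20 profit each.
# Daily demand is uncertain: uniform between 50 and 150 units, averaging 100.
STOCK, UNIT_PROFIT = 100, 20

def profit(demand):
    # Profit is capped by stock, so it's a nonlinear function of demand.
    return min(demand, STOCK) * UNIT_PROFIT

# Plugging in the *average* demand suggests we always sell out:
plan_on_average = profit(100)  # $2,000

# Simulating the actual range of demand tells a different story,
# because high-demand days can't make up for low-demand days.
trials = [profit(random.randint(50, 150)) for _ in range(100_000)]
expected = sum(trials) / len(trials)

print(f"Profit at average demand:     ${plan_on_average:,}")
print(f"Average of simulated profits: ${expected:,.0f}")  # meaningfully lower
```

The single-number forecast is not just imprecise; it is biased in one direction, which is exactly why ranges and probabilities beat point estimates for decisions like this.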

The wealth and complexity of data we have to run our businesses is often a luxury and sometimes a curse. In the end, the data doesn’t make our businesses decisions. People do. And we have to acknowledge and overcome some of our basic human interaction issues in order to fully leverage the value of our masses of data to make the right data-driven decisions for our businesses.

What do you think? Where do you differ? What else can we do?

How to achieve FAME in analysis

In retail, and in web retail in particular, we are drowning in data. We can and do track just about everything, and we’re constantly poring over the numbers. But I sometimes worry that the abundance of data is so overwhelming that it often leads to a shortage of insight. All that data is worthless (or worse) if we don’t produce thoughtful analysis and then carefully craft communication of our findings in ways that enable decision makers to react to the data rather than try to analyze it themselves.

The most effective analyses I’ve seen have remarkably similar attributes, and they happen to work into a nice, easy-to-remember acronym — F.A.M.E.

Here, in my experience, are the keys to achieving FAME in analysis:

Focused

Any finding should be fact based and clear enough that it can be stated in a succinct format similar to a newspaper headline. It’s OK to augment the main headline with a sub-headline that adds further clarification, but anything more complicated is not nearly focused enough to be an effective finding.

For example, an effective finding might be, “Visitors arriving from Google search terms are converting 23% lower than visitors arriving from email.” An accompanying sub-heading might further clarify the statement with something like, “Unclear value proposition, irrelevant landing pages and high first time visitor counts are contributing factors.”

All subsequent data presented should support these headlines. Any data that is interesting but irrelevant to the finding should be excluded from the analysis. In other words, remove the clutter so the main points are as clear as possible.

Actionable

Effective findings and their accompanying recommendations are specific enough in focus and narrow enough in scope that decision makers can reasonably develop a plan of action to address them. The finding mentioned above regarding Google search visitors fits the bill, and a recommendation that focuses on modifying landing pages to match search terms would be appropriate. Less appropriate would be a vague finding like “customers coming from Google search terms are viewing more pages than customers coming from email campaigns” accompanied by an equally vague recommendation to “consider ways to reduce pages clicked by Google search campaign visitors.” Is viewing more pages good or bad? Why? The recommendation in this case insinuates that it’s bad, but it’s not clear why. What’s the benefit of taking action in quantifiable terms?

Truly actionable analysis doesn’t burden decision makers with connecting the data to executable conclusions. In other words, the thought put into the analysis should make the diagnosis of problems clear so that decision makers can get to work on determining necessary solutions.

Manageable

The number of findings in any set of analyses should be contained enough that the analyst and anyone in the audience can recite the findings and recommendations (but not all the supporting details) in 30 seconds. Sometimes, less is more. This constraint helps ease the subsequent communication that will be necessary to reasonably react to the findings and plan and execute a response. Conversely, information overload obscures key messages and makes it difficult for teams to coalesce around key issues.

Enlightening

Last, but most certainly not least, effective findings are enlightening. Effective analyses should present — and support with clear, credible data — a view of the business that is not widely held. They should, at the very least, elicit a “hmmm…” from the audience and ideally a “whoa!” They should excite decision makers and spur them to action.

————————————–

The FAME attributes are not always easy to achieve. They require a lot of hard thought, but the value of clear, data-supported insight to an organization is immense.

The most effective analysts I’ve seen achieve FAME on a regular basis. They have a thorough understanding of the business’ objectives, and they develop their insights to help decision makers truly understand what’s working and what’s not working. And then they lay out clear opportunities for improvement. That’s data-driven business management at its best.

What do you think? What attributes do you find key in effective analyses?

The Missing Links in the Customer Engagement Cycle

The Customer Engagement Cycle plays a central role in many marketing strategies, but it’s not always defined in the same way. Probably the most commonly described stages are Awareness, Consideration, Inquiry, Purchase and Retention. In retail, we often think of the cycle as Awareness, Acquisition, Conversion, Retention. In either case, I think there are a couple of key stages that do not receive enough consideration given their critical ability to drive the cycle.

The missing links are Satisfaction and Referral.

Before discussing these missing links, let’s take a quick second to define the other stages:

Awareness: This is basic branding and positioning of the business. We certainly can’t progress people through the cycle before they’ve even heard of us.

Acquisition: I’ve always thought of this as getting someone into our doors or onto our site. It’s a major step, but it’s not yet profitable.

Conversion: This one is simply defined as making a sale. Woo hoo! It may or may not be a profitable sale on its own, but it’s still a significant stage in the cycle.

Retention: We get them to shop with us again. Excellent! Repeat sales tend to be more profitable and almost certainly have lower marketing costs than first purchases.

Now, let’s get to those Missing Links

In my experience, the key to a strong and active customer engagement cycle is a very satisfying customer experience. And while the Wikipedia article on Customer Engagement doesn’t mention Satisfaction as often as I would like, it does include this key statement: “Satisfaction is simply the foundation, and the minimum requirement, for a continuing relationship with customers.”

In fact, I think the quality of the customer experience is so important that I would actually inject it multiple times into the cycle: Awareness, Acquisition, Satisfaction, Conversion, Satisfaction, Retention, Satisfaction, Referral.

Of course, it’s possible to get through at least some of the stages of the cycle without an excellent customer experience. People will soldier through a bad experience if they want the product badly enough or if there’s an incredible price. But it’s going to be a lot harder to retain that type of customer, and if you get a referral, it might not be the type of referral you want.

I wonder if Satisfaction and Referral are often left out of cycle strategies because they are the stages most out of marketers’ control.

A satisfying customer experience is not completely in the marketer’s control. For sure, marketing plays a role. A customer’s satisfaction can be defined as the degree to which her actual experience measures up to her expectations. Our marketing messages are all about expectations, so it’s important that we are compelling without over-hyping the experience. And certainly marketers can influence policy decisions, website designs, etc. to help drive better customer experiences.

In the end, though, the actual in-store or online experience will determine the strength of the customer engagement.

Everyone plays a part in the satisfaction stages. Merchants must ensure advertised product is in stock and well positioned. Store operators must ensure the stores are clean, the product is available on the sales floor and the staff are friendly, enthusiastic and helpful. The e-commerce team must ensure advertised products can be easily found, the site is performing well, product information is complete and useful, and the products are shipped on time and in good condition.

We also have to ensure our incentives and metrics are supporting a quality customer experience, because the wrong metrics can incent the wrong behavior. For example, if we measure an online search engine marketing campaign by the number of visitors generated or even the total sales generated, we can absolutely end up going down the wrong path. We can buy tons of search terms that by their sheer volume will generate lots of traffic and some degree of increased sales. But if those search terms link to the home page or some other page that is largely irrelevant to the search term, the experience will likely be disappointing for the customer who clicked through.

In fact, I wrote a white paper a few months ago, Online Customer Acquisition: Quality Trumps Quantity, that delved into customer experience by acquisition source for the Top 100 Internet Retailers. We found that those who came via external search engines were among the least satisfied customers of those sites, with the least likelihood to purchase and recommend. Not good. These low ratings could largely be attributed to the irrelevance of the landing pages from those search terms.

Satisfaction breeds Referral

Referrals or Recommendations are truly wonderful. As I wrote previously, the World’s Greatest Marketers are our best and most vocal customers. They are more credible than we’ll ever be, and the cost efficiencies of acquisition through referral are significantly better than our traditional methods of awareness and acquisition marketing. In my previously mentioned post, I discussed some ways to help customers along on the referral path. But, of course, customers can be pretty resourceful on their own.

We’ve all seen blog posts, Facebook posts or tweets about bad customer experiences. But plenty of positive public commentary can also be found. Target’s and Gap’s Facebook walls have lots of customers expressing their love for those brands. Even more powerful are the blog posts some customers write about their experiences. I came across a post yesterday entitled Tales of Perfection that related two excellent experiences the blogger had with Guitar Center and a burger joint called Arry’s. Both stories are highly compelling and speak to the excellent quality of the employees at each business. Nice!

————————————————–

Developing a business strategy, not just a marketing strategy, around the customer engagement cycle can be extremely powerful. It requires the entire company to get on board to understand the value of maximizing the customer experience at every touch point with the customer, and it requires a set of incentives and metrics that fully support strengthening the cycle along the way.

What do you think? How do you think about the customer engagement cycle? How important do you feel the customer experience is in strengthening the cycle? Or do you think this is all hogwash?


Are web analytics like 24-hour news networks?

We have immediate access to loads of data with our web sites, but just because we can access lots of data in real time doesn’t mean we should access our data in real time. In fact, accessing and reporting on the numbers too quickly can often lead to distractions, false conclusions, premature reactions and bad decisions.

I was attending the web-analytics-focused Semphonic X Change conference last week in San Francisco (which, by the way, was fantastic) where lots of discussion centered around both the glories and the issues associated with the mass amount of data we have available to us in the world of the web.

Before heading down for the conference breakfast Friday morning (September 11), I switched on CNN and saw — played out in all their glory on national TV — the types of issues that can occur with reporting too early on available data.

It seems CNN reporters “monitoring video” from a local TV station saw Coast Guard vessels in the Potomac River apparently trying to keep another vessel from passing. They then monitored the Coast Guard radio and heard someone say, “You’re approaching a Coast Guard security zone. … If you don’t stop your vessel, you will be fired upon. Stop your vessel immediately.” And, for my favorite part of the story, they made the decision to go on air when they heard someone say “bang, bang, bang, bang” and “we have expended 10 rounds.” They didn’t hear actual gun shots, mind you, they heard someone say “bang.” Could this be a case of someone wanting the data to say something it isn’t really saying?

In the end, it turned out the Coast Guard was simply executing a training exercise it runs four times a week! Yet, the results of CNN’s premature, erroneous and nationally broadcast report caused distractions to the Coast Guard leadership and White House leadership, caused the misappropriation of FBI agents who were sent to the waterfront unnecessarily, led to the grounding of planes at Washington National airport for 22 minutes, and resulted in reactionary demands from law enforcement agencies that they be alerted of such exercises in the future, even though the exercises run four times per week and those alerts will likely be quickly ignored because they will become so routine.

In the days when we only got news nightly, reporters would have chased down the information, discovered it was a non-issue and the report would have never aired. The 24-hour networks have such a need for speed of reporting that they’ve sacrificed accuracy and credibility.

Let’s not let such a rush negatively affect our businesses.

Later on that same day, I was attending a conference discussion on the role of web analytics in site redesigns. Several analysts in the room mentioned their frustrations when they were asked by executives for a report on how the new design was doing only a couple of hours after the launch of the new site design. They wanted to be able to provide solid insight, but they knew they couldn’t provide anything reliable so soon.

Even though a lot of data is already available a couple of hours in, that data lacks the context necessary to start drawing conclusions.

For one, most site redesigns experience a dip in key metrics initially as regular customers adjust to a new look and feel. In the physical retail world, we used to call this the “Where’s my stuff?” phenomenon. But even if we set the initial dip aside, there are way too many variables involved in the short term of web activity to make any reliable assessments of the new design’s effectiveness. As with any short-term measurement, the possibility that random outliers will unnaturally sway the measurement in one direction or another is high. It takes some time and an accumulation of data to be sure we have a reliable story to tell.
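A quick back-of-the-envelope calculation shows why early reads are so unreliable. Assuming a hypothetical 3% true conversion rate (the traffic figures below are invented for illustration), the 95% margin of error around an observed rate shrinks dramatically as visits accumulate:

```python
import math

# Assumed true conversion rate and hypothetical traffic levels.
true_rate = 0.03
early_visits = 200      # a couple of hours after launch
mature_visits = 50_000  # after the data has had time to accumulate

def margin_of_error(p, n, z=1.96):
    # 95% margin of error for an observed proportion (normal approximation).
    return z * math.sqrt(p * (1 - p) / n)

for label, n in [("early read", early_visits), ("mature read", mature_visits)]:
    moe = margin_of_error(true_rate, n)
    print(f"{label}: {true_rate:.1%} ± {moe:.2%} "
          f"({true_rate - moe:.2%} to {true_rate + moe:.2%})")
```

With only a couple hundred visits, an observed 3% rate is statistically consistent with anything from roughly 0.6% to 5.4%, a range spanning disaster and triumph; with tens of thousands of visits, the uncertainty collapses to a fraction of a point.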

And even with time, web data collection is not perfect. Deleted cookies, missed connections, etc. can all cause some problems in the overall completeness of the data. For that matter, I’ve rarely seen the perfect set of data in any retail environment. Given the imperfect nature of the data we’re using to make key strategic decisions, we need to give our analysts time to review it, debate it and come to reasoned conclusions before we react.

I realize the temptation is strong to get an “early read” on the progress of a new site design (or any strategic issue, really). I’ve certainly felt it myself on many occasions. However, since just about every manager and executive I know (including myself) has a strong bias for action, we have to be aware of the risks associated with these “early reads” and our own abilities or inabilities to make conclusions and immediately react. Early reads can lead to the bad decisions associated with the full accelerator/full brake syndrome I’ve referenced previously.

We can spend months or even years preparing for a massive new strategic effort and strangle it within days by overreacting to early data. Instead, I wonder if it’s better to determine well in advance of the launch — when we’re thinking more rationally and the temptation to know something is low — when we’ll first analyze the success of our new venture. Why not make such reporting part of the project plan and publicly set expectations about when we’ll review the data and what type of adjustments we should plan to make based on what we learn?

In the end, let’s let our analysts strive for the credibility of the old nightly news rather than emulate the speed and rush to judgment that too often occurs in this era of 24-hour news. Our businesses and our strategies are too important and have taken too long to build to sacrifice them to a short-term need for speed.

What do you think? Have you seen this issue in action? How do you deal with the balance between quick information and thoughtful analysis?

Photo credit: Wikimedia Commons




Retail: Shaken Not Stirred by Kevin Ertell

