How to achieve FAME in analysis

In retail, and in web retail in particular, we are drowning in data. We can and do track just about everything, and we’re constantly poring over the numbers. But I sometimes worry that the abundance of data is so overwhelming that it often leads to a shortage of insight. All that data is worthless (or worse) if we don’t produce thoughtful analysis and then carefully craft the communication of our findings in ways that enable decision makers to react to the data rather than try to analyze it themselves.

The most effective analyses I’ve seen have remarkably similar attributes, and those attributes happen to form a nice, easy-to-remember acronym: F.A.M.E.

Here, in my experience, are the keys to achieving FAME in analysis:


Focused

Any finding should be fact based and clear enough that it can be stated in a succinct format similar to a newspaper headline. It’s OK to augment the main headline with a sub-headline that adds further clarification, but anything more complicated is not nearly focused enough to be an effective finding.

For example, an effective finding might be, “Visitors arriving from Google search terms are converting 23% lower than visitors arriving from email.” An accompanying sub-heading might further clarify the statement with something like, “Unclear value proposition, irrelevant landing pages and high first time visitor counts are contributing factors.”

All subsequent data presented should support these headlines. Any data that is interesting but irrelevant to the finding should be excluded from the analysis. In other words, remove the clutter so the main points are as clear as possible.
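To see what sits behind a headline like the Google-vs-email example, here is a minimal sketch of the underlying arithmetic. The visit and order counts are invented for illustration; real numbers would come from your analytics data:

```python
# Hypothetical channel data; visit and order counts are invented for illustration.
channels = {
    "google_search": {"visits": 50_000, "orders": 900},
    "email":         {"visits": 20_000, "orders": 468},
}

def conversion_rate(stats):
    """Orders divided by visits for one channel."""
    return stats["orders"] / stats["visits"]

google = conversion_rate(channels["google_search"])   # 0.0180
email = conversion_rate(channels["email"])            # 0.0234

# Relative gap: how much lower Google search converts than email.
relative_gap = (email - google) / email               # ~0.23

print(f"Google search visitors convert {relative_gap:.0%} lower than email visitors")
```

The point of the headline format is that this one number, not the table of visits and orders behind it, is what the audience hears first.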


Actionable

Effective findings and their accompanying recommendations are specific enough in focus and narrow enough in scope that decision makers can reasonably develop a plan of action to address them. The finding mentioned above regarding Google search visitors fits the bill, and a recommendation that focuses on modifying landing pages to match search terms would be appropriate. Less appropriate would be a vague finding like “customers coming from Google search terms are viewing more pages than customers coming from email campaigns” accompanied by an equally vague recommendation to “consider ways to reduce pages clicked by Google search campaign visitors.” Is viewing more pages good or bad? Why? The recommendation in this case insinuates that it’s bad, but it’s not clear why. What’s the benefit of taking action in quantifiable terms?

Truly actionable analysis doesn’t burden decision makers with connecting the data to executable conclusions. In other words, the thought put into the analysis should make the diagnosis of problems clear so that decision makers can get to work on determining necessary solutions.


Manageable

The number of findings in any set of analyses should be contained enough that the analyst and anyone in the audience can recite the findings and recommendations (but not all the supporting details) in 30 seconds. Sometimes, less is more. This constraint helps ease the subsequent communication that will be necessary to reasonably react to the findings and plan and execute a response. Conversely, information overload obscures key messages and makes it difficult for teams to coalesce around key issues.


Enlightening

Last, but most certainly not least, effective findings are enlightening. Effective analyses should present — and support with clear, credible data — a view of the business that is not widely held. They should, at the very least, elicit a “hmmm…” from the audience and ideally a “whoa!” They should excite decision makers and spur them to action.


The FAME attributes are not always easy to achieve. They require a lot of hard thought, but the value of clear, data-supported insight to an organization is immense.

The most effective analysts I’ve seen achieve FAME on a regular basis. They have a thorough understanding of the business’ objectives, and they develop their insights to help decision makers truly understand what’s working and what’s not working. And then they lay out clear opportunities for improvement. That’s data-driven business management at its best.

What do you think? What attributes do you find key in effective analyses?


  • By Vijay Kaundal, May 24, 2010 @ 3:27 pm

    Very well put, Kevin! We can apply the FAME principles when the data capture is done and we are doing the analysis of the data we have. But how do you handle situations at the time of requirements definition, when you have business users who, when asked, “What do you want to track?” come back with a simple, straight answer: “Everything!!” 🙂

  • By John Lovett, May 27, 2010 @ 8:25 am

    Great construct, Kevin! I love the FAME acronym, though I agree that it’s a lofty goal, because delivering analysis in this manner is difficult, particularly the Enlightening part; many recipients of analysis don’t get as excited about it as you and I do 😉

    But all of us can aspire to FAME! Great stuff.

    John Lovett

  • By Kevin Ertell, May 27, 2010 @ 12:48 pm

    Thanks for your comments, Vijay and John.

    Vijay: I think you make a very good point with your question. I definitely agree that decision makers who ask for analysis have important responsibilities towards the quality of the analysis, starting with requirements for data collection and continuing on through delivery and reaction to the analysis. In fact, in addition to your comment, I’ve had a number of email conversations about this post that will likely spark a follow-up post with some more of my thoughts on executive/manager responsibilities regarding quality of analysis. Stay tuned. 🙂

    John: I agree that recipients absolutely play a role in the quality of the analysis they get, whether they realize it or not. Getting to quality analysis is definitely a team effort, and if FAME can be delivered everyone will find great value in the insights.

    Thanks again for your comments.

  • By Tim Wilson, June 1, 2010 @ 11:51 am

    I like it! My only possible quibble is with the “E,” which, I think, requires limiting the scope of FAME to hypothesis-driven analysis. Data that is reported as metrics and in the service of *performance measurement* may not enlighten in some cases — it may simply show, “We’re delivering exactly what we expected.” The reason you’re reporting it is that there was a campaign or initiative that rolled out, and you need to know whether it’s working as intended (this could be your web site overall). In those situations, you still want the information to be Focused, Actionable (although “no action needed” is an acceptable “action”), and Manageable… but “Enlightening” might be a stretch.

  • By Kevin Ertell, June 1, 2010 @ 1:43 pm

    Thanks for your comment, Tim.

    I think you make an excellent distinction. When I wrote the post, I was thinking more about full analyses as opposed to metrics reporting, and I should have been clearer about that. I think your point about metrics reporting is right on.

  • By Charlie Ballard, June 16, 2010 @ 8:21 am

    Fairly decent points, and I liked the acronym. That said, I’m not sure I’d describe “Visitors arriving from Google search terms are converting 23% lower than visitors arriving from email” as an actionable, effective finding.

    What do you do with this? If they’re all going to the same landing page, okay, maybe you try landing pages customized to each. But if people are already enrolled in your email program, they’re potentially just already more interested in your product than people searching for the first time. Different audiences simply have different existing propensities for converting, and this needs to be acknowledged.

    The biggest factor so many stakeholders seem to miss is “Okay, that group converted less, but were they still *efficient*?” In other words, if Email converts at a high rate, great, but if the Cost Per Acquisition on the lower-converting Google visitors is still acceptable, isn’t that okay? And shouldn’t both groups get new landing pages to try to bring down the overall program CPA?

    Finally — and I know this is a simple meme meant just to keep people focused — I think we as an analytics community are doing too many simple “Top 4/10/15 Analytics Basics” posts and not enough thinking about the future, which to me is attribution.

    It’s this simple question: “How many of my Google visitors didn’t convert but did sign up for my high-converting Email program? And how do I therefore allocate credit for later conversions between these two channels?”
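    The credit-allocation question above can be sketched in code. The touchpoint paths below are invented, and the model shown is linear (equal-credit) attribution, just one of several common schemes; it is an illustration, not a recommendation of any particular model:

```python
# Sketch of linear (equal-credit) attribution across a visitor's touchpoints.
# The paths below are hypothetical; real paths would come from analytics data.
from collections import defaultdict

def linear_attribution(converting_paths):
    """Split each conversion's credit equally across its touchpoint channels."""
    credit = defaultdict(float)
    for path in converting_paths:
        share = 1.0 / len(path)
        for channel in path:
            credit[channel] += share
    return dict(credit)

paths = [
    ["google_search", "email"],   # searched first, later converted via email
    ["email"],                    # email-only conversion
    ["google_search", "email"],
]
print(linear_attribution(paths))  # {'google_search': 1.0, 'email': 2.0}
```

    Under a last-touch model, by contrast, email would receive all three conversions and Google search none, which is exactly the over-crediting Charlie is pointing at.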

  • By Kevin Ertell, June 16, 2010 @ 8:59 am

    Thanks for your comments, Charlie.

    You make some excellent points, and I certainly won’t disagree. Further segmentation on the Google visitor finding to determine differences between new and existing customers would absolutely be in order to make the finding more actionable and effective.

    I also agree with your points about efficiency. Although, I would say what’s acceptable today may not be tomorrow in our never-ending quest for improved profitability. So finding ways to increase efficiency is always a plus. Of course, limited resources require us to prioritize all improvements to those with the most upside.

    I also completely agree with you on attribution. It’s a really difficult issue, to be sure, but a critically important one. I haven’t heard a great solution yet, unfortunately.

Other Links to this Post

  1. 11 Ways Humans Kill Good Analysis | Retail: Shaken Not Stirred by Kevin Ertell — June 8, 2010 @ 9:43 am

  2. FAME! You’re gonna live forever, you’re gonna learn how to fly! | — June 14, 2010 @ 3:51 pm
