5 Dos and 5 Don’ts for Great Survey Reports

By Tim Parker

Surveys have an important role to play in thought leadership marketing. If they answer questions your clients are pondering in a field in which you have expertise and provide services, they create interest, credibility and leads. Also, if the results are novel and believable, journalists will often report on them as they’re easy to turn into stories. Journalists love easy, and their editors love numbers as they indicate that the author isn’t just opining.

But surveys are also easy to do badly. Many survey reports get no traction in the marketplace because they don’t say anything anyone wants to read.

Here are five things to do to make sure your survey will be a hit:

1. Do make sure you pass the “so what?” test

Here are two key findings from recent surveys on social media that will catch people’s attention: “Companies with the greatest benefits from social media have multiple functions working together on it”; “The most effective consulting marketers train their consultants in how to use social media.” These are findings the audience is unlikely to know already, and they tell readers something they can do to improve their performance.

And here are some key findings (their claim, my italics) that no one is going to care about: “71% of online adults use Facebook”; “Social media is a critical data source”; “Everyone is a mobile consumer.” These findings are irrelevant, or obvious, or both.

For your survey to grab people’s attention, you have to pick something interesting to investigate at the outset.

2. Do nail your hypotheses before you write the questions

A good survey should be easy for the respondent to answer. Perhaps because of this, people often think that good surveys are easy to write. They’re not. A good survey asks questions that confirm or refute individual hypotheses (e.g. companies that manage risks well in emerging markets do certain things that others don’t), which in turn sit under an overall hypothesis (e.g. emerging markets present a unique and potent combination of risks) that aligns with services the company can provide and addresses a problem its clients have. These questions can’t be written in an afternoon. In fact, you have to work that whole sequence back the other way: client problem > firm’s services > overall hypothesis > individual hypotheses > questions. You have to do it that way, that is, if you want the data you collect to tell a story. And, of course, you do.

3. Do make it multidimensional

A single set of questions directed at an undifferentiated population can lead to interesting results. If, however, you can compare those results across industries, or geographies, or between companies that use CRM and those that don’t, or between those who are selling and those being sold to, you can often make unexpected discoveries. Questions that let you separate the results to make those comparisons are called banner questions. All surveys should have some.
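
To make that concrete, here’s a minimal sketch in Python (pandas) of how a banner question lets you cut the results. The data and the column names (uses_crm, saw_benefit) are hypothetical stand-ins for your own questionnaire fields.

```python
import pandas as pd

# Hypothetical respondent data; in practice you'd load your fielded results.
responses = pd.DataFrame({
    "industry":    ["Tech", "Retail", "Tech", "Finance", "Retail", "Finance"],
    "uses_crm":    [True, False, True, True, False, False],
    "saw_benefit": [True, False, True, False, False, True],
})

# Cut the finding by the banner question: what share of CRM users
# versus non-users reported a benefit?
print(pd.crosstab(responses["uses_crm"], responses["saw_benefit"],
                  normalize="index"))
```

The same crosstab works for any banner question – industry, geography, or seller versus buyer.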

4. Do distinguish the leaders from the laggards

Executives don’t really care what percentage of companies use an enterprise risk management system, or whether that percentage is growing (though of course ERM vendors do). They might care, however, what companies that manage risk well do differently from those that don’t, and whether they are more likely to use an ERM system. You can answer that question if you build in one or more banner questions to distinguish the leaders from the laggards. For instance, ask how many incidents have caused losses over the past year, and then (perhaps normalizing for revenue or industry) compare the habits of the quartile with the least losses (your leaders) against the quartile with the most (those poor laggards). As a double check, you can ask respondents how well they think they manage risk and divide them that way, too.
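
As a rough illustration of that quartile cut – with made-up numbers, and assuming losses have already been normalized for revenue – a sketch might look like this:

```python
import pandas as pd

# Hypothetical respondents: losses (already normalized for revenue)
# and one habit we want to compare across quartiles.
df = pd.DataFrame({
    "loss_per_revenue": [0.002, 0.010, 0.004, 0.030,
                         0.001, 0.020, 0.008, 0.015],
    "uses_erm":         [True, False, True, False,
                         True, False, True, False],
})

# Quartile the respondents by losses: the lowest-loss quartile are the
# leaders, the highest-loss quartile the laggards.
df["quartile"] = pd.qcut(df["loss_per_revenue"], 4,
                         labels=["leaders", "Q2", "Q3", "laggards"])

# Share of each quartile that uses an ERM system.
print(df.groupby("quartile", observed=True)["uses_erm"].mean())
```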

5. Do make sure it’s statistically valid

Most surveys ask a sample of people (say 100 IT managers) and draw conclusions about the whole population (tens of thousands of IT managers in the US alone). How reliable the findings are depends critically on how big the sample is. A sample of 1,000 has a margin of error of about ±3%, but a sample of 100 has a margin of error of about ±10%. A sample of 20 has a margin of about ±22% (unless there are only 20 in the population altogether; for example, CEOs of large auto companies, in which case the error is ± zero, presuming the CEOs aren’t lying). So make sure you have enough respondents that your conclusions will be reliable, especially if you plan to cut the data with banner questions – cutting always gives smaller subsamples, each with a bigger margin of error.
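
If you want to check the arithmetic, the usual worst-case formula for the margin of error on a proportion is z·√(p(1−p)/n), with p = 0.5 and z = 1.96 at 95% confidence. A quick Python sketch:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a proportion at 95% confidence."""
    return z * sqrt(p * (1 - p) / n)

for n in (1000, 100, 20):
    print(f"n = {n:4d}: ±{margin_of_error(n):.1%}")
# n = 1000: ±3.1%
# n =  100: ±9.8%
# n =   20: ±21.9%
```

Note this assumes a small sample from a big population; surveying all 20 CEOs is a census, where a finite-population correction drives the error to zero.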

And here are five things to avoid:  

6. Don’t ask questions we already know the answers to

If you care how many people use LinkedIn for work, ’tis but the work of a moment to find the answer on Google. Or do a mental review of 10 friends; I am sure the answer will be 80%, or as close as makes no difference. For several years now we have seen a veritable torrent of surveys asking people whether they use LinkedIn, Twitter, and Facebook. We don’t need to augment the flood.

7. Don’t ask them what they think

One of the weaknesses of surveys is that they are records of what people think is going on. Even if you ask folks something as concrete as how many customer complaints they had last month, you can only reasonably expect an estimate – no one is going to rummage through the files to find an exact number for your survey. That’s OK so long as you recognize that’s what you’re getting. But you will exacerbate this problem if you ask people what they think about something for which they have little or no data, such as how important social media will be next year.

It’s cheaper and just as informative to ask your kids.

8. Don’t ask self-serving questions

My pet peeve in this department is questions that ask people which way their budget is trending and whether they will outsource more next year. These questions are staples of companies that sell technology or augment clients’ staffs with their consultants. Recent examples: “CIOs at midmarket and large companies in Europe and the U.S. will spend 4.5 percent more on IT products and services this year,” and “The expected future state of all business functions will show an increase in outsourcing.” Vendors ask these questions to support their own budgeting processes. In addition, they’ll flaunt the results in their reports if the numbers show an upward trend, both to please their own investors and to make clients feel like laggards if they don’t spend more. I once heard a CIO say that these kinds of findings have an aroma he calls “vendor stink.”

9. Don’t rely on the data alone

A commentary based entirely on survey data makes pretty dry reading. That’s because about the best a survey can do is tell you what is going on. It can tell you that consulting firms are cutting back on their social media spending, but it can’t tell you why. For that you have to pick a few respondents who did cut back and ask them. You might also support your discovery of a reversing trend with historical data, and for that, if you don’t have a five-year history of doing the same survey, you might need secondary research. Combining all three – the survey, the interviews, and some public data – you could pull together a pretty good story. Don’t cop out of the story and substitute a faddish infographic. Dull data doesn’t suddenly become interesting because a graphic designer got all creative with it. Check out this fine example if you don’t believe me.

10. Don’t give up ownership

Maybe you’d like to run a survey but you don’t have the resources, so you’re thinking of engaging a third party to do it for you. There are several that can, but they will insist on “co-branding” the report: CFO Magazine and the EIU are two. But as Chris Koch of SAP said in an interview he did with us earlier this year, “The problem with that is that it only builds credibility for CFO or EIU. It creates goodwill . . . but not demand-generation because it’s not your survey, it’s the EIU’s; it’s not your thinking, it’s theirs.” Instead of a company that’s going to plaster its own name on your survey, use a panel research company such as ResearchNow or SSI that can take your survey design, build and field the survey, and return the results for you to analyze and report on. Their name need not appear on the report. Nor ours, if you work with us. Thought leadership should make you look good, not your paid help.

In this age of SurveyMonkey and email, surveys are easy to construct and to field. But they are still hard to do well. People often hope or presume that they will discover insights when they trawl through their data. It doesn’t work that way. To get interesting insights you have to anticipate them, or at least have an idea where they might be lurking, when you design your survey.

There’s nothing in these 10 Dos and Don’ts that has to do with how you analyze, present or communicate the results once you have them. This is all about the prep work. As Abe Lincoln once said, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” Sharpen well and you’ll not only fell a tree, you’ll build a first-rate bridge to your customers, too.
