
5 Circles Research


Questionnaire


Survey Tip: Pay Attention to the Details

Survey creators need to pay more attention to the details of wording, question types, and other matters that affect not only results but also how customers view the company. A recent survey from Sage Software had quite a few issues, and gives me the opportunity to share some pointers.

The survey was a follow-up satisfaction survey, sent after some time with a new version of ACT!. Call me a dinosaur, but after experimenting with various online services, I still prefer a standalone CRM. Still, this post isn’t really about ACT! – I’m just giving a little background to set the stage.

  • The survey title is ACT! Pro 2012 Customer Satisfaction Survey. Yet one of the questions asks the survey taker to compare ACT! 2011 with previous versions. How dumb does this look?
    Image: Survey title doesn't match question
  • This same question has a text box for additional comments. The box is too small to be of much use, and worse, the box can’t even be filled with text. All the text boxes in the survey have the same problem.
    Image: Comment boxes should be big enough
  • If you have a question that should be multiple choice, set it up correctly.
    Image: Use multiple choice properly
    Some survey tools may use radio buttons for multiple choice (not a good idea), but this isn’t one of them. This question should either be reworded along the lines of “Which of these is the most important social networking site you use?”, or – probably better – be set up with a multiple choice question type.
  • Keep up to date.
    Image: Keep up to date with versions
    What happened to QuickBooks 2008, or more recent versions? It would have been better to simply have QuickBooks as an option (none of the other products had versions). If the version of QuickBooks was important (I know that integration with QuickBooks is a focus for Sage), then a follow-up question asking for the date/version would work, and would make the main question shorter.
  • There were a couple of questions about importance and performance for various features. I could nitpick the importance question (more explanation of the features, or an option like “I don’t know what this is”, would have been nice), but my real issue is with the performance question. Twenty different features were included in both importance and performance. That’s a lot to keep in mind, so it’s good to make the survey taker’s life easier by keeping the order consistent between importance and performance. The problem was that the order of the performance list didn’t match the importance list. I thought at first that the two lists were randomized separately, instead of randomizing the first list and using the same order for the second (see the sketch after this list). That is a common mistake, and sometimes the survey software doesn’t support doing it the right way. But after trying the survey again, I discovered that both lists were in fixed orders, different between importance and performance. Be consistent. Note: if your scales are short enough, and if you don’t have a problem with the survey taker adjusting their responses as they think about performance and importance together (that’s a topic of debate among researchers), you might consider showing importance and performance together for each option.
  • Keep up to date – really! The survey asked whether I used a mobile computing device such as a smartphone. But the next question asked about the operating system for the smartphone without including Android. Unbelievable!
    Image: Why not include Android in smart phone OS list?
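
Randomizing once and reusing the order is straightforward if your survey tool supports it, or if you generate the questionnaire programmatically. Here is a minimal sketch in Python of the right approach (shuffle once, reuse the order); the feature names and function are illustrative, not Sage’s actual list:

```python
import random

# Hypothetical stand-ins for the 20 features rated in the Sage survey.
FEATURES = ["Contact management", "Calendar", "Email integration",
            "Opportunity tracking", "Reporting"]

def build_rating_sections(features):
    """Shuffle the feature order once per respondent, then reuse that
    order for both the importance and performance sections."""
    order = random.sample(features, k=len(features))  # one shuffle only
    importance = [f"Importance: {f}" for f in order]
    performance = [f"Performance: {f}" for f in order]  # same order, not reshuffled
    return importance, performance

imp, perf = build_rating_sections(FEATURES)
print("\n".join(imp + perf))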

There were a few other problems that I noted, but they are more related to my knowledge of the product and Sage’s stated directions. But similar issues to those above occur on a wide variety of surveys. Overall, I score this survey 5 out of 10.

These issues make me, as a customer, wonder about the competence of the people at Sage. A satisfaction survey is designed to learn about customers, but it should also create an opportunity to make customers feel better about the product and the company. However, if you don’t pay attention to the details, you may do more harm than good.

Idiosyncratically,

Mike Pritchard

Filed Under: Methodology, Questionnaire, SurveyTip Tagged With: Survey Tips, Surveys


Poor question design means questionable results: A tale of a confusing scale

I saw the oddest question in a survey the other day. The question itself wasn’t that odd, but the options for responses were very strange to me.

  • 1 – Not at all Satisfied
  • 2 – Not at all Satisfied
  • 3 – Not at all Satisfied
  • 4 – Not at all Satisfied
  • 5 – Not at all Satisfied
  • 6 – Not at all Satisfied
  • 7 – Somewhat Satisfied
  • 8 – Somewhat Satisfied
  • 9 – Highly Satisfied
  • 10 – Highly Satisfied

What’s this all about? As a survey taker, I’m confused. The question has a 10 point scale, but why does every numeric point have text (anchors)? What’s the difference between 1, 2, 3, 4, 5 and 6, which all have the same anchoring text? Don’t they care about the difference between 3 and 5? Oh, I get it: this is really a 3 point scale disguised as a 10 point scale.

With these and other variations on the theme of “what were the survey authors thinking?” on my mind, I talked to a representative from the sponsoring company, AOTMP. I was told that the question design was well thought out and appropriate, being modeled on the well-known Net Promoter Score. Well, of course it is – like an apple is based on an orange (both grow on trees). But not really:

  • The Net Promoter question is for Recommendation, not Satisfaction.  There were a couple of other similar questions in the short survey, but nothing about Recommendation. Frederick Reichheld’s contention is that recommendation is the important measure and also incorporates satisfaction; you won’t recommend unless you are satisfied.
  • The NPS question uses descriptive text only at the end points (Extremely Unlikely to Recommend and Extremely Likely to Recommend).  It is part of the methodology to avoid text anywhere in the middle in order to give the survey taker the maximum flexibility.  That’s consistent with survey best practices.
  • The original NPS scale is from 0 to 10, not 1 to 10. Maybe that’s a small point, although the 0 to 10 scale does allow for a midpoint, which was part of the NPS philosophy. (For reference, a quick sketch of how NPS is actually scored follows this list.)
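
As an aside, the arithmetic behind NPS is simple. A minimal sketch using the standard cut-points (promoters score 9 or 10, detractors 0 through 6, passives 7 or 8); the sample responses are made up:

```python
def net_promoter_score(responses):
    """NPS = % promoters (9-10) minus % detractors (0-6) on the 0-10
    'likelihood to recommend' scale; passives (7-8) count only in the base."""
    if not responses:
        raise ValueError("no responses to score")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))  # 3 promoters, 2 detractors: ~14.3
```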

Other than the fact that this survey question isn’t NPS, what’s the big deal? Well, this pseudo 10 point scale really doesn’t work. The survey taker is likely to be confused about whether there is any difference between “3, Not at all Satisfied” and “4, Not at all Satisfied”. Perhaps the intention was to make things easier for survey takers, but either they’ll take more time worrying about the meaning or they’ll just give an unthinking answer, and the survey administrator has no way of knowing which.

Why not just use the 3 point scale instead? I suppose you could, but then it would be even less like NPS. Personally, I like the longer scale for NPS. I don’t use NPS on its own very much, but the ability to combine it with other satisfaction measures that use longer scales (Overall Satisfaction and Likelihood to Reuse) means that I’ve got the option of doing more powerful analysis as well as the simple NPS. More importantly, I don’t have to try to persuade a client to stop using NPS as long as I include other questions using the same scale. Ideally, I’d prefer a 7 or 5 point scale instead, but 10 or 11 points works fine – as long as only the end-points are anchored. For more on combining Net Promoter with other questions for more powerful analysis, check out “Profiting from customer satisfaction and loyalty research”.

There’s no justification for this type of scale in my opinion.  If you disagree, please make a comment or send me a note.   If you want to use a scale with every point textually anchored, use the Likert scale with every point identified (but no numbers). Including both numbers and too many anchors will make the survey takers scratch their heads – not the goal for a good survey.

Perhaps the people who created this survey had read economist J.K. Galbraith’s comment without realizing it was sarcastic: “It is a far, far better thing to have a firm anchor in nonsense than to put out on the troubled seas of thought.”

Idiosyncratically,
Mike Pritchard

Many thanks to Greg Weber of Priorities Research for clarifying the practice and the philosophy of the Net Promoter Score.

Filed Under: Methodology, Questionnaire Tagged With: Net Promoter, NPS, Questionnaire, Surveys


SurveyTip: Think about the number of pages in your survey

Have you seen surveys where every question, no matter how trivial, is on a different page?  Or how about surveys that are just a single long page with many questions?

Neither approach is optimal.  They don’t look great to your primary customer — the survey taker — perhaps reducing your response rate. What’s more, you may be limiting your options for effective survey logic.

Every question on a new page

The survey taker has to click the “Next” button too many times, with each click giving an opportunity to think about quitting. Each new page requires additional information to be downloaded from the survey host, adding delay. If the survey taker is using dialup, or your survey uses lots of unique graphics, the additional delay is likely to be noticeable, but in any case you create an unnecessary risk of looking stupid.

One reason surveys get created like this is a hangover from the early days of online surveying, when such limitations were common; as a result, surveyors may think it is a best practice. Another possibility is leaving a default setting in the online survey design tool that places each question on a new page. But rather than just programming without thinking, try to put yourself in the mind of the survey taker and consider how they might react to the page breaks.

Most surveys have enough short questions that can be easily combined to reduce the page count by 20% or more.

It is generally easy to save clicks at the end of the survey, by combining demographic questions, and this is a great way of reducing fatigue and early termination.  However, try hard to make improvements at the beginning also, to minimize annoyances before the survey taker is fully engaged.  If you have several screening questions there should be opportunities to combine questions early on.

Be careful that combining pages doesn’t cause problems with survey logic. Inexpensive survey tools often require a new page to use skip patterns. Even if you are using a tool with the flexibility to show or hide questions based on responses earlier in a page, this usually requires more complex programming.
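
To make the trade-off concrete, here is a minimal sketch of how page-based skip logic typically works; the survey definition, question IDs, and rule format are made up for illustration, not any particular tool’s format. The skip rule can only fire at a page boundary, so merging pages 1 and 2 here would leave the rule nowhere to act:

```python
# Hypothetical page-based survey: each page lists its questions, and a
# skip rule routes to a later page based on an answer on that page.
SURVEY = [
    {"page": 1, "questions": ["q1_own_crm", "q2_company_size"],
     "skip": {"q1_own_crm": {"No": 3}}},  # non-owners jump past page 2
    {"page": 2, "questions": ["q3_crm_satisfaction"]},
    {"page": 3, "questions": ["q4_email", "q5_industry"]},
]

def next_page(current, answers, survey=SURVEY):
    """Return the next page number, honoring any skip rule on the current page."""
    page_def = survey[current - 1]
    for question, routes in page_def.get("skip", {}).items():
        if answers.get(question) in routes:
            return routes[answers[question]]
    return current + 1

print(next_page(1, {"q1_own_crm": "No"}))   # -> 3: skips the satisfaction page
print(next_page(1, {"q1_own_crm": "Yes"}))  # -> 2
```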

Everything on one long page

People who create surveys on a single long page seem to be under the impression that they are doing the survey taker a favor, as their invitations generally promote a single page as if that means the survey is short.  Surveys programmed like this tend to look daunting, with little thought given to engaging with the survey taker.  There might be issues for low bandwidth users (although generally these surveys are text heavy with few graphics, so the page loading time shouldn’t be much of an issue).

Single page surveys rarely use any logic, even when it would be helpful. As described above, it may be more difficult to use logic on a single page. I often recommend that survey creators build a paper document for review before starting programming, but single page surveys often look as if they started from a questionnaire designed for paper administration (even down to “if you answered X to question 2, please answer question 3”), which misses the benefits of surveying online. One benefit of surveying online that isn’t always well understood is being able to pause in the middle of a survey and return to it later. This feature is helpful when you are sending complex surveys to busy people who might be interrupted, but it only works for pages that have already been submitted.

One of the most extreme examples of overloading questions on pages I’ve seen recently printed out as 9 sheets of paper!  It also included numerous other errors of questionnaire design, but I’ll save them for other posts.

In the case of long pages, consider splitting up the questions to keep just a few logically related questions together. For some reason, these long page surveys are usually (overly) verbose, so it may be best to use one question per page or, more productively, to have other people review the questionnaire and distill it to the most important elements with clear and concise wording.

To finish on a positive note, one of the best online surveys I’ve seen recently was a long page survey from the Dutch Gardens company.  There were two pages of questions, one with 9 questions and the second with 6, plus a half-page of demographics.  The survey looked similar to a paper questionnaire in being quite dense, but it didn’t look overwhelming because it made effective use of layout and varied question types to keep the interest level high.  None of the questions were mandatory, refreshing in itself.  And the survey was created with SurveyMonkey — it just goes to show what a low-end tool is capable of.  This structure was possible because the survey was designed without needing logic.

I hope that you’ll get some useful ideas from this post to build surveys with page structure that helps increase the rapport with your survey takers.

Idiosyncratically,
Mike Pritchard

Filed Under: Questionnaire, SurveyTip


SurveyTip: Randomizing question answers is generally a good idea

Showing question answers in a random order reduces the risk of bias from position.

To understand this, think of what happens when a telephone interviewer asks you a question. When the choices for a single choice question are read out, you might think of the first option as more of a fit, or perhaps the last option is top-of-mind. The problem is even more acute when the person answering the survey has to comment on each of several attributes, for example when rating how well a company is doing on time taken to answer the phone, courtesy, quality of the answer, and so on. As survey creators, we don’t know exactly how the survey taker will react to the order, so the easiest way is to eliminate the potential for problems by presenting the options in a random order. Telephone surveys with reasonable sample sizes are almost always administered with question options randomized for this reason, using CATI systems (computer assisted telephone interviewing).

When we create a survey for online delivery, a similar problem exists. It could be argued that the survey taker can generally see all of the options, so why is a random order needed? But the fact is that we can’t predict how survey takers will react to the order of the options. Perhaps they give more weight to the option nearest the question, or perhaps to the one at the bottom. If they are filling out a long matrix or battery of ratings, perhaps they will change their scheme as they move down the screen. They might be thinking something like “too many highly rated, that doesn’t seem to fit how I feel overall, so I’ll change, but I don’t want to redo the ones I already did”. Often there can be an effect from one option being next to another, which randomizing will minimize by separating them (randomly). The results from these options being next to each other would likely be very different:

  • Has a good return policy
  • Has good customer service
  • Items are in stock
  • Has good customer service

Some question types and situations are not appropriate for random ordering (a sketch of randomization that handles these exceptions follows this list). For example:

  • Where the option order is inherent, such as education level or a word based rating question (Likert scale)
  • Where there is an ‘Other’ or ‘Other – please specify’ option.  It is often a good idea to offer an ‘Other’ option for a list of responses such as performance measures in case the survey taker believes that the list provided isn’t complete, but the ‘Other’ should be the last entry.
  • A very long list, such as a list of stores, where a random order is likely to confuse or annoy the survey taker.
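
Most decent survey tools have a built-in setting for this, but if you script the option order yourself, the rule is simple to implement. A minimal sketch in Python; the option text is illustrative:

```python
import random

def randomized_options(options, pinned=("Other", "Other - please specify")):
    """Shuffle answer options per respondent, keeping any 'Other'-style
    option anchored at the bottom of the list."""
    tail = [o for o in options if o in pinned]
    head = [o for o in options if o not in pinned]
    random.shuffle(head)  # fresh order for each respondent
    return head + tail

options = ["Has a good return policy", "Has good customer service",
           "Items are in stock", "Other - please specify"]
print(randomized_options(options))  # 'Other' always appears last
```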

As with other aspects of questionnaire development, think about whether randomization will be best for the questions you include.

Idiosyncratically,
Mike Pritchard

Filed Under: Questionnaire, Surveys, SurveyTip


Today’s tortured questionnaire wording

I just have to share this in the hope that a reader will be able to enlighten me.  What could this possibly mean?

Not a provider that I would think of at first, but I probably would not consider it

OK, let me give some context. This is from a survey on business internet services. The researcher wants to know how likely I would be to consider each of several providers if I were choosing a new one. The choices are as follows:

  • The only provider I’d ever consider
  • One of the providers I’d consider above others
  • Not a provider that I would think of at first, but I might consider it
  • Not a provider that I would think of at first, but I probably would not consider it
  • A provider I would never consider

If I think about it, especially with the ordering they’ve offered, I guess the research company wants to know if I would be unlikely to consider it (somewhere between “might consider” and “would never consider”).  But was there an actual phrase that they were trying to come up with?  Beats me.

It’s hard to tell whether they are losing any useful data from this poor question wording, beyond running the risk of respondents abandoning the survey out of confusion.

I saw this issue 11% of the way through the survey, so I wondered how bad the rest would be.  Fortunately there were no other major problems.

Idiosyncratically,
Mike Pritchard

Filed Under: Questionnaire


When Validation Backfires

I just came across an interesting issue with validation in an online survey using a Van Westendorp pricing model. Van Westendorp is one of the common ways to test pricing by directly questioning prospective purchasers. This post isn’t about Van Westendorp itself, also known as the Price Sensitivity Meter (you can find plenty of references online, including a starting point on Wikipedia), but you need to know a little to understand the issue. Survey respondents are asked a series of questions about price perceptions, as follows:

  • At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would still consider buying it? (Expensive/High Side)
  • At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)
  • At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)
  • At what price would you consider the product to be a bargain—a great buy for the money? (Cheap/Good Value)

There is some debate about the order of the questions, but in this example they were asked in the order shown, with slightly different wording. Researchers are sometimes concerned about whether respondents understand the questions correctly, especially since the wording is so similar (the Expensive, Cheap, etc. designations are usually not included in the question as seen by a survey taker). One way to address this concern is to highlight the differences. Or you might point out that the questions are slightly different and encourage the respondent to read carefully.

The other approach is to apply validation that tests the numerical relationship.   Correctly entered numbers should be Too Cheap < Good Value  < Expensive < Too Expensive. (We usually ask these questions on separate pages so as to get independent thoughts from the respondents as far as possible, rather than letting them see the group of questions as one and making them consistent or nicely spaced).
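
For anyone programming this validation themselves, here is a minimal sketch of what the check should look like; the function name and error message are mine, not the vendor’s:

```python
def validate_vw(too_cheap, good_value, expensive, too_expensive):
    """Enforce the Van Westendorp ordering:
    Too Cheap < Good Value < Expensive < Too Expensive.
    Returns an error message, or None if the answers are consistent."""
    prices = [("too cheap", too_cheap), ("good value", good_value),
              ("expensive", expensive), ("too expensive", too_expensive)]
    for (low_name, low), (high_name, high) in zip(prices, prices[1:]):
        if low >= high:
            return (f"Your '{high_name}' price should be higher than "
                    f"your '{low_name}' price - please check both answers.")
    return None

print(validate_vw(5, 10, 20, 30))  # None: consistent answers
print(validate_vw(5, 10, 30, 20))  # flags the expensive/too expensive pair
```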

In this case, the research vendor chose to validate, but messed up big-time. When I entered a value for ‘Too Expensive’ that was higher than the value for ‘Expensive’, I was told to “make sure your answer is smaller or equal to the previous answer”. Yes, they forced me to provide an invalid response! I hope they caught the problem before the survey had gathered all the completes, but maybe they didn’t – given how fast online surveys often fill. They probably had to field the survey again, because the pricing questions were integral to the research objectives.

Why did this happen, and how can you prevent a similar problem in your surveys?

My guess is that the underlying cause was that debate about question order that I mentioned earlier.  The vendor probably had the questions switched when the validation was tested, and then changed the order before the survey was launched.

But the real message is that proper testing could have identified the issue in time to correct a very expensive error. There is no excuse for what happened. This doesn’t even fall into the class of problems that a pilot or soft launch would be needed to catch.
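
A few assertions run before launch would have caught the inverted comparison. A minimal sketch of such a check, written against the validate_vw sketch above (or whatever validator your survey tool generates):

```python
def test_price_validation(validate):
    """Smoke tests for a Van Westendorp validator: consistent answers must
    pass, and each inverted pair must be rejected."""
    assert validate(5, 10, 20, 30) is None, "consistent answers were rejected"
    assert validate(5, 10, 30, 20) is not None, "inverted expensive/too expensive accepted"
    assert validate(10, 5, 20, 30) is not None, "inverted too cheap/good value accepted"
    assert validate(5, 20, 10, 30) is not None, "inverted good value/expensive accepted"
    print("all validation checks pass")

# test_price_validation(validate_vw)  # run against your validator before launch
```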

So, test, test, and test again.   In particular, test using people who aren’t research professionals or experienced survey takers.

If you are creating your own surveys, don’t let this kind of problem stop you.  You can do just as good a job of testing as the big companies, and big companies aren’t immune.  This survey was delivered by one of the top 10 U.S. market research firms.  I won’t publish the company name here, but I’ll probably tell you if you catch me at one of my workshops (coming soon).

Idiosyncratically,

Mike Pritchard

Filed Under: Methodology, Questionnaire, Surveys, Testing

