You’re ready to start a new research project, but how can you ensure you get the best and most accurate results? In this article, we tackle the most common types of survey bias, and provide best-practice advice on how to address them to ensure you get honest, accurate answers from your research.
When it comes to understanding your audience at scale, few options are as good as a market research report or offline or online survey.
Having information on customer preferences, behaviours, likes and dislikes, income, and demographics — the list goes on — helps businesses to create more tailored products and services, as well as captivating experiences.
But even if you create and distribute the surveys, how can you ensure that they’re fair, unbiased, and contain questions that are easy for respondents to answer? This is where ‘bias’ comes in — and reducing it is key to creating great surveys that gather data, encourage honest responses, and benefit your business.
Bias is defined as a “deviation of results or inferences from the truth, or processes leading to such a deviation” and it occurs in every survey. It’s impossible to eradicate bias as each person’s opinion is subjective. This includes the researcher, who thinks up the questions and plans the research, and the participants, who answer the questions and share their thoughts.
There are several ways survey bias can influence the accuracy and integrity of interviews, as well as the answers provided by participants.
In this guide, we’ll share examples of the most common survey response biases and how you can reduce sampling bias and survey bias in your own research. First, how can survey bias influence your survey data, response rate, and survey results?
Survey bias can cause a plethora of problems for researchers, including:
Data issues: The data produced doesn’t accurately reflect the opinions of participants, as there are less than truthful responses, extreme responses, or inaccurate answers. This data won’t help you reach your goals.
Poor strategies and investment: As management and senior leaders base future business decisions on market research and survey insights, survey bias can affect how they invest money, time, and resources — potentially taking the wrong course of action.
Low return on investment: Poor insights lead to poor product performance. When you’re targeting the wrong customers as a result of survey biases, e.g. leading questions, you won’t get the information you need to improve your offerings and the overall experience.
Dissatisfaction: Stakeholders and investors will be dissatisfied with performance levels and may reduce market research budgets over the long term.
Inconclusive research: Surveys may need to be repeated to test whether the data or the researchers are at fault, which takes time, money, and resources.
Ultimately, good actions, progress, and innovation are built on good data quality. If management can’t rely on or trust the accuracy of the research results, it’s a lose-lose situation for everyone involved.
There is no single survey type that experiences more bias than another; bias can affect all survey types.
Survey bias is a universal issue that researchers should be aware of and plan for before every research project. The best thing to do is to think about survey design and use the right survey tools to empower respondents to answer honestly. This way, you can get accurate, valuable survey results.
Selection bias produces inaccurate or unrepresentative data because the sample is gathered in a way that undermines the accuracy and goals of the research.
For example, a non-random sample, a sample that leaves a crucial market segment unaccounted for, or a sample that doesn’t engage can all distort your results by providing too much, too little, or the wrong kind of feedback. You could also choose to focus on samples that validate your own viewpoints and perspectives (confirmation bias), offering no new insights for your teams to act on.
Sampling bias, also known as selection bias, is when your sample is unrepresentative and will not provide the right feedback to support the goals of the survey research.
Some samples omit the right target customer market segment, which can lead to inaccurate data. For example, if your brand makes toys for children and you want to know about their aesthetic appeal, surveying a sample made up of parents would tell you why parents buy the toy, but not why children find the toy appealing.
Sampling bias can also occur when the researcher creates a one-sided sample because they believe they know who the survey should target.
But while they might be correct, creating a one-sided sample may overstate the importance of respondents’ feedback, as well as miss the diverse viewpoints of other non-customer segments that may want to use your product or service.
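One practical way to reduce this kind of one-sided sampling is to draw a simple random sample from the full contact list rather than hand-picking respondents. A minimal sketch in Python (the customer list and sample size here are hypothetical, for illustration only):

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw a simple random sample of n contacts from the full population list.

    Every member of the list has an equal chance of selection, which avoids
    the researcher's own assumptions shaping who gets surveyed.
    """
    rng = random.Random(seed)  # seeded for reproducibility of the draw
    return rng.sample(population, n)

# Hypothetical contact list of 1,000 customers
customers = [f"customer_{i}" for i in range(1000)]
sample = simple_random_sample(customers, 50, seed=42)
```

A fixed seed makes the draw reproducible for auditing; in production you would typically omit it so each wave draws a fresh sample.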
Examples of sampling bias
Even with a perfect sample selection (no sampling bias), respondents may not answer the survey. But why?
Well, they may not like filling in surveys, or their email could be inactive (so make sure to keep your data up to date). They might not like your brand or don’t understand the purpose of your survey. Or they might just hang up the phone, or throw the survey in the bin.
Whatever the reason, your results won’t be indicative of the full sample. This means that, because of unresponsive sample members, you may miss out on crucial data that would help you analyse trends or identify correlations.
For every survey, there will be those who don’t answer. The idea is to keep this to a minimum, ideally a small percentage of the full sample survey size. If the percentage of unresponsive sample members is higher than average, you know that your results have a non-response bias.
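As a rough health check, you can compute the non-response rate for each survey wave and flag it when it exceeds a threshold you set. A small sketch; the 20% threshold is an illustrative assumption, not a research standard:

```python
def non_response_check(invited, completed, max_non_response=0.2):
    """Return the non-response rate and whether it exceeds the chosen threshold.

    max_non_response is an assumed, illustrative cut-off; pick a value that
    matches your own typical response rates.
    """
    non_response_rate = 1 - completed / invited
    return non_response_rate, non_response_rate > max_non_response

# Hypothetical wave: 500 invitations, 320 completed responses
rate, at_risk = non_response_check(invited=500, completed=320)
```

If `at_risk` is true, the missing respondents may differ systematically from those who answered, so treat the results with extra caution.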
Examples of non-response bias
What if your sample is in itself filled with the wrong kind of people, just because the right kind of people are no longer available to speak to (e.g. you’ve no longer got their details)? Survivorship bias is where you target the right customer market segment, but due to natural turnover, you only have the people who are left — the ‘survivors’.
These ‘survivors’ are more likely to give favourable, biased answers. But to get the full picture, you also need to hear from the people who are no longer around, as their experiences are an essential part of it.
Example of survivorship bias
A brand wants to understand why employee turnover is so high, so it surveys its current employees. However, the people who can best explain why employees leave are those who have already left the organisation. As they aren’t part of the research sample, the results will suffer from survivorship bias.
Response bias is when your sample provides responses based on the survey questions, but the answers they provide aren’t what they really believe or think. Instead, participants’ survey responses are based on the structure and language of the questions, leading them to answer in a particular way.
Some examples of response bias in action are:
With extreme response bias, some participants choose an extreme value on a question that uses an answer scale (e.g. a Likert scale). The bias increases if the question is phrased in a way that suggests the right answer is at one end of the scale.
This is more common where the researcher has failed to make the question neutral, or where a ‘closed’ yes or no response forces an all-or-nothing answer.
Examples of extreme bias
This type of response bias occurs when the researcher creates questions that are not specific enough, or don’t evoke a strong enough response for respondents to pick an extreme either way.
As a result, participants pick a neutral position on a Likert answer scale. This doesn’t help the overall results of the research, as you would like to have a mixture of extreme and neutral responses that tell you more about your participants’ varied views.
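A quick way to spot this clustering in collected data is to measure what share of responses sit exactly at the scale midpoint. A minimal sketch, assuming a 1-to-5 Likert scale and a hypothetical list of responses:

```python
from collections import Counter

def midpoint_share(responses, scale_max=5):
    """Return the share of responses sitting exactly at the scale midpoint.

    Assumes an odd-length 1..scale_max scale, so the midpoint is a single
    value (3 on a 1-5 scale).
    """
    midpoint = (1 + scale_max) // 2
    counts = Counter(responses)
    return counts[midpoint] / len(responses)

# Hypothetical responses to one Likert question
share = midpoint_share([3, 3, 4, 3, 2, 3, 3, 5])
```

A high midpoint share doesn’t prove neutral bias on its own, but it’s a useful prompt to revisit whether the question was specific enough to evoke a real opinion.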
Example of neutral bias
On a scale of 1 to 5, how do you feel about these animals?
Dog, cat, bear, lion, goldfish
(Scale of 1-5, where 1 is Hate and 5 is Love)
Whether or not respondents own pets, the answers will tend to be neutral: without experience of owning each animal, it’s unlikely they’ll hold an extreme view either way.
Acquiescence bias (also known as the yes bias or friendliness bias) tends to be one of the response biases most commonly recognised by researchers.
This bias is the tendency for survey respondents to agree with the survey questions, without their response being a true reflection of their own position or beliefs. This is because it’s easier to say yes and agree — to please a researcher or complete a survey — than to hold a disagreeable position.
This occurs when the question is phrased in a way that asks the participant to confirm a statement, or when the question is answered with opposing pairs, such as ‘Agree / disagree’, ‘True / false’, and ‘Yes / no’.
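A common mitigation is balanced keying: mixing positively and negatively worded statements, then reverse-scoring the negatively keyed items so a respondent who agrees with everything doesn’t score uniformly high. A minimal sketch, assuming a 1-to-5 agreement scale:

```python
def reverse_score(value, scale_max=5):
    """Reverse-score a negatively keyed Likert item on a 1..scale_max scale.

    After reversing, 'strongly agree' (5) with a negative statement counts
    the same as 'strongly disagree' (1) with the positive version.
    """
    return scale_max + 1 - value
```

With balanced keying, a habitual yes-sayer’s answers partially cancel out instead of inflating the overall score.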
Examples of acquiescence bias
Question order bias, or order-effects bias, occurs when related questions are placed in a certain order. For example, once participants answer one question positively or negatively, they may feel they have to answer any related follow-up questions the same way.
This is a bias based on the participant’s desire to be consistent with their answers, whereas, in reality, there could be different answers to a set of questions on one topic.
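One standard mitigation is to randomise the order of related questions for each respondent, so order effects average out across the sample. A minimal sketch; the question labels are placeholders:

```python
import random

def randomised_question_order(questions, seed=None):
    """Return a shuffled copy of the question list for one respondent.

    The original list is left untouched; each respondent gets their own
    independently shuffled order.
    """
    rng = random.Random(seed)
    shuffled = list(questions)
    rng.shuffle(shuffled)
    return shuffled

questions = ["Q1", "Q2", "Q3", "Q4"]
order = randomised_question_order(questions, seed=7)
```

Most survey platforms offer question or block randomisation as a built-in setting, which achieves the same effect without custom code.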
Examples of question order bias
Survey takers may want to appear more socially desirable or attractive to the interviewer as people are careful about how they appear to others. From a survey perspective, this could be respondents answering uncharacteristically or lying to appear in a positive light.
The researcher’s choice of topic could be the source of the issue, or it could be the participant’s insecurity or comfort with the topic that affects their answers.
Examples of social desirability bias
The last type of survey bias is created by the actions of the interviewer. The way that a question is asked, or the way the interviewer makes a participant feel in the survey, can impact what results they receive back.
As the reliability of the data is on the line, interviewers should do their best to remove bias, even though they may not realise they’re introducing it.
When a participant is doing a survey, they are aware that they’re in an interview setting and may act differently because of that. If you recall how nerve-wracking interviews are for a new job, you’ll see why you might say something that wasn’t accurate or wholly true because of the pressure on you.
As such, researchers will get biased responses from surveys that are overly formal or hosted in an uncomfortable setting. To help respondents, and to gather accurate, valuable data, researchers need to help participants forget that they are being interviewed and asked survey questions.
Examples of demand characteristic bias
Reporting bias arises when the research team decides whether to publish the research based on whether the outcome of the data analysis is positive or negative.
Examples of reporting bias
A healthcare research team finds that it can’t make the case that its medical painkiller cream decreases pain when used on test participants. If the brand chooses not to publish these results, that is reporting bias: it’s unethical and misrepresents what the research found.
Given how prevalent bias in surveys is, what can you do to protect your survey work and make sure you get honest answers back from survey takers, answers based on their beliefs, needs, and views?
The practices covered above, from neutral question wording and representative, random samples to careful question ordering, will all help prevent survey response bias.
You might have sampling bias in your marketing list, or you might have inadvertently created questions that lead the participant to a specific answer. But why take the risk when you can see the hundreds of questions on offer with our free survey templates?
But you can go one step further. With our integrated all-in-one solutions, you can get all your surveys, customer and participant data in one place.
With the ability to improve your survey quality using AI and create research surveys by just dragging and dropping the right modules, you have everything you need. What’s more, you’ll benefit from analytics and dashboard reporting, giving you both an at-a-glance and comprehensive view of responses.
Our survey and panel management tools can help you reach the right audiences around the world, right when you need them. And with the inbuilt intelligence assisting with personalisation, you can boost response rates and show the customer that they’re front of mind.
After all, the 13,000 of the world’s best brands that use our software can’t all be wrong!
But if you want to know where to start with your survey questions, we can help. Find out what kind of questions our experts have created for use in your surveys to reduce the risk of survey bias.
eBook: The Qualtrics Handbook of Question Design