As a communications measurement consultant, I always hear:
- “I want to do a survey, but I have no idea how many employees I need to survey and what a good response rate would look like.”
- “How do I know if enough people completed my survey for it to be valid?”
- “What in the world is a stratified sample? Help!”
Here are answers to the top three questions about survey samples:
1. How large does my sample need to be?
In survey research, a “sample” is a portion or subset of a larger group called a population. A population is the “universe” to be sampled. As most researchers know, a good sample is one that is representative of the population and exhibits similar characteristics—basically a miniature version of the population.
Conducting research with a sample is quick, efficient and much less expensive than with a total population (i.e., every employee in your company). Of course, if you have the capability and the budget, it is always best to do a “census” and survey everyone in the population.
This method gives everyone in the population an opportunity to respond to the survey. In today’s real research world, however, most surveys are distributed to a random sample. One common and easy way to draw one is a “systematic sample,” in which every “nth” person in the population is surveyed. The interval “n” is determined by the total population size and how big you want your sample to be. For example, if you had a total population of 1,000 and wanted 100 people in your sample, you would pick a random starting point and then select every 10th person to receive a survey.
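The every-nth-person approach described above can be sketched in a few lines of Python. (The function and variable names here are illustrative, not part of any standard library.)

```python
import random

def systematic_sample(population, sample_size):
    """Draw a systematic sample: pick a random start, then every nth person.

    The interval n is the population size divided by the desired
    sample size, e.g. 1000 // 100 -> every 10th person.
    """
    n = len(population) // sample_size       # sampling interval
    start = random.randrange(n)              # random start within the first interval
    return population[start::n][:sample_size]

employees = [f"employee_{i}" for i in range(1, 1001)]
sample = systematic_sample(employees, 100)
print(len(sample))  # 100
```

Note that the random starting point matters: without it, the first person on the list would be surveyed every time.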
“But how big should my sample be in order to give a valid estimate of the total population?” you ask. The answer to this important question is that it depends on how precise you want your results to be. Two factors affect precision: margin of error and confidence level. Margin of error is how close your sample estimate is likely to be to the true population number and is usually stated as a plus/minus figure, such as +/- 3%; the resulting range around your estimate is the “confidence interval.” Margin of error is tied directly to sample size—the bigger the sample, the smaller the margin of error and the more precise the results.
Confidence level refers to how certain you can be that your results fall within the margin of error. In survey research it is usually set at 95%, meaning that if you repeated the survey 100 times, about 95 of those samples would produce results within the margin of error of the true population value. Statistical calculations are needed to determine what sample size you’d need for your population, margin of error and confidence level.
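The calculation referred to here is commonly done with Cochran’s sample-size formula plus a finite-population correction; a minimal Python sketch (function name and defaults are mine) looks like this:

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Cochran's formula with a finite-population correction.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
    most conservative assumption about how varied the answers will be.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # correct for finite population
    return math.ceil(n)

print(sample_size(1000))  # 278 employees for +/- 5% at 95% confidence
```

As the formula suggests, tightening the margin of error (say, from +/- 5% to +/- 3%) raises the required sample size sharply.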
2. What is stratified sampling, and do I need to do it?
Simple random sampling is easy and useful, but sometimes you need to be sure that you have adequate proportions of people with certain characteristics in your sample. For example, let’s say you want to compare how different age groups feel about a new social media network you’ve just launched within your company. To compare the groups most accurately, roughly the same number of respondents should be in each age group. In cases like these, a “stratified sample” can be used. A stratified random sample is one in which the population is first divided into subgroups (“strata”) and then a random sample is selected from each subgroup.
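The divide-then-sample procedure just described can be sketched in Python as follows (the helper name and the example data are illustrative):

```python
import random
from collections import defaultdict

def stratified_sample(population, strata_key, per_stratum):
    """Split the population into strata, then randomly sample each one."""
    strata = defaultdict(list)
    for person in population:
        strata[strata_key(person)].append(person)   # divide into subgroups
    sample = []
    for group in strata.values():
        # random sample from each stratum (capped at the stratum's size)
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

# 300 employees, 100 in each of three age groups
employees = [{"name": f"emp_{i}", "age_group": ["<30", "30-50", "50+"][i % 3]}
             for i in range(300)]
sample = stratified_sample(employees, lambda e: e["age_group"], 20)
print(len(sample))  # 60: 20 respondents from each of the three age groups
```

Sampling an equal number from each stratum, as here, is what makes group-to-group comparisons fair even when the groups differ in size.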
3. What’s a good response rate for a survey? Is there a rule of thumb I should use?
Although external surveys (with customers, external stakeholders or the general public) usually hover around a 10–12% response rate, employee surveys such as the ones used in communication audits are generally a bit higher. A “good” response rate is widely considered to be 20–25%. A response rate of more than 30% is deemed “excellent” (meaning that more than 30% of the employees you sent a survey to completed it). A return rate below 20% in employee research may be considered invalid due to self-selection bias and non-representative sampling. (Perhaps you’re hearing only from a select few of the survey recipients, and their responses might not generalize accurately to the majority who did not respond.)
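The arithmetic behind these thresholds is simple; here is a small Python sketch that computes a response rate and applies the rules of thumb above (function names and labels are mine):

```python
def response_rate(completed, distributed):
    """Fraction of distributed surveys that were completed."""
    return completed / distributed

def rate_quality(rate):
    """Rule-of-thumb labels for employee-survey response rates."""
    if rate > 0.30:
        return "excellent"
    if rate >= 0.20:
        return "good"
    return "questionable (possible self-selection bias)"

rate = response_rate(75, 250)
print(f"{rate:.0%} -> {rate_quality(rate)}")  # 30% -> good
```

So if you distribute 250 surveys and get 75 back, you are at 30%—right at the edge of “excellent” territory.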
Tips to improve your response rate:
- Promote the survey ahead of time. Let employees know it’s coming, why you’re doing it and what you’ll do with the results.
- Include a letter from a high-level executive or influential person (e.g., CEO, union leader in organized settings).
- Always share the results; if you don’t, employees won’t bother to take the time to fill out future surveys.
- Assure confidentiality—whether it’s a paper or an online survey, have the completed surveys collected by, and sent directly to, a third-party data-processing firm, if possible.
- Include clear instructions and objectives with a due date—14 to 21 days is usually enough time—and don’t forget to send reminders.
- Be careful with humor; if you don’t take the survey seriously, employees may not either.
- Try to ask only questions you’ll take action on, and assess only what can really change.
- Offer department or location incentives, such as a free lunch or happy hour for the department or location with the highest return rate.
- Recruit ambassadors at each location to promote the survey.
Katrina Gill is an affiliate consultant for Ragan Consulting Group. She has more than a decade of diverse research experience, from the planning and development of projects through the presentation of results and recommendations for action.