Every year we see multiple surveys conducted across various areas of digital marketing, but sometimes, looking at the number of respondents, we think: “Nah… this audience is just too small to be indicative of the overall trend!” or “The 350 people they surveyed can’t possibly be a large enough number to support the conclusions they’re drawing for the whole industry!”
In some cases those reactions may be justified, while in others a statistically significant sample size may actually be very different from what we imagine it should be.
Definition first:
Statistical significance — put simply, this is the likelihood that a finding or result is caused by something other than mere chance. The threshold is usually set at less than 5% probability (p < 0.05), meaning that such a result would be produced by chance no more than 5% of the time. [source]
Now let’s turn to our statistically significant sample size. How do we determine how large it has to be, so that our survey findings are more likely than not to be accurate?
The following table from a paper by Bartlett, Kotrlik & Higgins answers the question:
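The sample sizes in the Bartlett, Kotrlik & Higgins paper are derived from Cochran’s formulas. As a rough illustration (not the authors’ own code, and the defaults below — a 95% confidence level, a conservative p = 0.5, and a ±5% margin of error — are my assumptions), here is how the categorical-data version of the calculation can be sketched in Python:

```python
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's formula for categorical data: n0 = z^2 * p * (1 - p) / e^2.

    z -- z-score for the confidence level (1.96 for 95% confidence)
    p -- estimated proportion in the population (0.5 is the most conservative)
    e -- acceptable margin of error (0.05 = plus/minus 5%)
    """
    return (z ** 2) * p * (1 - p) / (e ** 2)

def finite_population_correction(n0, population):
    """Adjust the base sample size downward for a finite population."""
    return n0 / (1 + (n0 - 1) / population)

n0 = cochran_sample_size()                        # base size, large population
n = finite_population_correction(n0, 10_000)      # e.g. a population of 10,000
print(math.ceil(n0), math.ceil(n))                # 385 370
```

With these inputs, roughly 385 respondents suffice even for an arbitrarily large population (and fewer once the finite-population correction kicks in), which puts the 350-respondent survey from the opening paragraph in useful context.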
Read about the differences between continuous and categorical data here and here. Also, see the original Organizational Research: Determining Appropriate Sample Size in Survey Research manuscript (PDF file).