Here’s a common situation.
Your grant application or committee requires sample size estimates. It’s not the calculations that are hard (though they can be); it’s getting the information to plug into them.
Every article on the topic says you need either pilot data or a similar study to supply the values you enter into the software.
You have neither.
No similar studies have ever used the scale you’re using for the dependent variable.
And while you’d love to run a pilot study, it’s just not possible. There are too many practical constraints: time, money, distance, ethics.
What do you do?
Don’t Use Previous Research for the Effect Size
First, before you do anything else, make sure you know which steps you need data for.
One step you don’t need previous data for is the effect size. Base your effect size estimate on the minimal meaningful effect, not the effect others have gotten.
But you do need previous data for a few other estimates. A common one is the correlation among predictors, or the effect of control variables. Exactly which estimates you need depends on the statistical test you’re basing the sample size calculations on.
One estimate you always need for the power calculation is a standard deviation of the response variable. If you have no previous data for this estimate, your power calculations will be just guesses.
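To see why the standard deviation matters so much, here’s a minimal sketch using a normal-approximation formula for a two-group comparison of means. The 5-point meaningful difference and the SD guesses are hypothetical numbers for illustration:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison of means
    (normal approximation to the t-test).

    delta: smallest meaningful difference in means (raw units)
    sd:    guessed standard deviation of the outcome
    """
    z = NormalDist()
    z_total = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return ceil(2 * ((z_total * sd) / delta) ** 2)

# The same meaningful 5-point difference, under three SD guesses:
for sd in (8, 10, 12):
    print(sd, n_per_group(delta=5, sd=sd))  # 41, 63, 91 per group
```

Moving the SD guess from 8 to 12 more than doubles the required sample size, which is exactly why this is the one estimate you can’t do without.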
How similar does the previous data have to be?
Well, you can’t expect previous data to have been collected under the exact same conditions. If they had been, you wouldn’t need to run your study. The same dependent variable measured under different conditions or in a different population is likely to be the best you can find.
Keep in mind that the closer the conditions and population are to yours, the better the estimates will be. But even approximate values will be helpful.
Good Sample Size Estimates Are Important
Remember the point is not to just fill in a blank on the grant application.
That may solve your immediate problem, but it will come back to haunt you when nothing in your study is significant, not even the large, meaningful effects.
What about Standardized Effect Sizes?
You could cheat and use a standardized effect size — say a Goldilocks-style medium effect of d=.5.
Many researchers like this option because it eliminates having to come up with both a meaningful effect size and a standard deviation.
If you’re up against a wall, this may be your only choice, and I have advised researchers to do this in specific circumstances. But it’s not good practice. It’s pretty much random guessing.
Your sample size estimates should be better than random guesses. Just keep that in mind as you’re selecting your sample.
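For what it’s worth, here is what that Goldilocks guess implies, as a rough sketch. This uses the same normal approximation as above (an exact t-test calculation comes out slightly higher, around 64 per group for d = .5), and the conventional “small” and “medium” labels are Cohen’s:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_d(d, alpha=0.05, power=0.80):
    """Per-group n for a standardized effect size d (Cohen's d),
    two-sample comparison, normal approximation."""
    z = NormalDist()
    z_total = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return ceil(2 * (z_total / d) ** 2)

print(n_per_group_d(0.5))  # "medium" effect -> 63 per group
print(n_per_group_d(0.2))  # "small" effect  -> 393 per group
```

Notice how much rides on the guess: if the true effect is small rather than medium, the conventional d = .5 calculation leaves you with a sixth of the sample you actually need.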
What to Do Instead
If your estimates of standard deviations or other needed parameters are imprecise, it’s totally fine to report a range of sample size estimates or effects.
For example, you could say something like this:
Based on an alpha of .05, sample size estimates indicated that we will be able to detect a correlation between .28 and .41 with a sample size of 80, based on standard deviation estimates of 8 to 12 at a power of .80. As no previous research was available on which to base these standard deviation estimates, these estimates should be considered vague approximations. Since a correlation as low as .28 would be considered clinically relevant, 80 participants is a conservative estimate to detect a statistically significant result.
In the interest of transparency, state clearly that you don’t have any data on which to base the standard deviation, and that you are giving wide but plausible ranges.
This makes it clear that you aren’t confident about a precise value. But you’ve thought about it, looked at the best- and worst-case scenarios, and come up with some likely ranges.
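As a sketch of what that kind of range calculation looks like, here is the minimum detectable correlation across a few candidate sample sizes, using the standard Fisher z approximation. (This won’t reproduce the exact .28 to .41 range in the example above, which also folds in the standard deviation guesses; the sample sizes here are illustrative.)

```python
from math import sqrt, tanh
from statistics import NormalDist

def min_detectable_r(n, alpha=0.05, power=0.80):
    """Smallest correlation detectable with n pairs,
    via the Fisher z approximation."""
    z = NormalDist()
    z_total = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return tanh(z_total / sqrt(n - 3))

for n in (60, 80, 100):
    print(n, round(min_detectable_r(n), 2))  # 0.35, 0.31, 0.28
```

Presenting a small table like this in the proposal shows the reviewer the best- and worst-case scenarios rather than a single number you can’t actually defend.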