Deep breath…
@CaptainHarley and @Nullo, I am responding to you before I read the study. I do not know what its conclusions are, so I can promise you both that I am not taking an ideological stance here. These are just facts.
You both referred to the small sample size of the study, pointing out that it includes 90 people. In statistics, whether a sample is too small is not a matter of opinion, and 90 is generally not considered a small sample size. Let me explain why.
Suppose you are a scientist, and you have invented a new drug to treat cancer. You split your patients into two groups: one that will get the drug, and one that will not.
At first, you only have 1 person in each group. So you give the drug to that 1 person, and it doesn't work.
Can you conclude from this that the drug doesn't work? Probably not, because the sample size is too small. It can't tell you anything.
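You can even check this with a quick calculation. Here is a minimal sketch in Python (scipy and Fisher's exact test are my choice of illustration here, not anything from the study):

```python
# With 1 patient per group, even the most lopsided possible outcome
# cannot come anywhere near statistical significance.
from scipy.stats import fisher_exact

# Rows: drug group, no-drug group. Columns: recovered, not recovered.
# The treated patient recovers and the untreated one does not,
# the most extreme result a 1-vs-1 trial can produce.
table = [[1, 0], [0, 1]]
_, p_value = fisher_exact(table)
print(p_value)  # 1.0, as inconclusive as a result can possibly be
```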
So how many patients do you need to test it on? 2? 20? 200? How do you decide? Just random guessing?
Of course not. There is a statistical technique called a power analysis that lets you decide how many people you need to include in your study. A power analysis relates four quantities; if you fix three of them, you can solve for the fourth. The four quantities are:
1. Sample size. This is usually the one you don't already know, the one you use the power analysis to calculate.
2. Alpha level. This is an arbitrary but very important a priori value that sets your threshold for statistical significance (I can explain more about what that means if you are interested).
3. Effect size. In the cancer example, this is how well the drug works.
4. Statistical power. This is the probability that you detect the effect if it really exists.
The alpha level is usually a fixed value (almost always 0.05), and power is conventionally fixed too (usually 0.80). So the sample size you need to draw a conclusion goes up or down depending on the effect size.
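To make that concrete, here is a minimal sketch using Python's statsmodels package (my choice of tool, not necessarily what the study's authors used), solving for the sample size needed to detect a medium-sized effect:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for sample size per group, fixing the other three quantities:
# a medium effect (Cohen's d = 0.5), alpha = 0.05, and 80% power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(n_per_group)  # about 64 patients per group
```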
This means that if you have a very big effect size (your cancer drug works extremely well, or it works extremely NOT well), then you can draw a conclusion with a small sample size.
But if your cancer drug is only a little bit effective (or a little bit ineffective), then you need a larger sample size to draw a conclusion either way.
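Here is the same sketch run across small, medium, and large effect sizes (Cohen's conventional 0.2 / 0.5 / 0.8, with alpha and power fixed as above); watch the required sample size fall as the effect grows:

```python
import math

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # small, medium, large effects
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size {d}: {math.ceil(n)} patients per group")
# effect size 0.2: 394 patients per group
# effect size 0.5: 64 patients per group
# effect size 0.8: 26 patients per group
```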
What this means is that if you can draw a statistically significant conclusion from a small sample, the effect you found must be very large. Let me restate that: if a study says something conclusive with a small sample size, what it found is a very big difference between the groups.
A statistically significant finding from a small sample is therefore not a weaker result; it points to a larger effect. If anything, a study's conclusion is more striking when it can get there with a small sample.
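You can run the power analysis in reverse to see this. Fix a small sample (20 per group is just my illustrative number) and the usual alpha and power, then solve for the effect size a study that size could reliably detect:

```python
from statsmodels.stats.power import TTestIndPower

# With only 20 patients per group, what is the smallest effect
# the study could detect 80% of the time at alpha = 0.05?
analysis = TTestIndPower()
d = analysis.solve_power(nobs1=20, alpha=0.05, power=0.8)
print(round(d, 2))  # about 0.91, a large effect by Cohen's standards
```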
So the only time a sample size can be “too small” is when it is “too small” to give any statistically significant results. If a study has a statistically significant result, then by definition the sample was large enough.