I have three answers to this. First, we sometimes make hypotheses based on ample evidence that simply turn out to be wrong. For instance, the evidence that hormone replacement therapy would be beneficial was overwhelming, and then when it was finally tested in a double-blind, placebo-controlled, randomized clinical trial, this was found not to be the case.
Second, scientific research cannot deliver absolute certainty. We (usually) use parametric statistics to estimate how likely it is that an effect observed in our samples also exists in the general population. When that likelihood is statistically very high, we can tentatively conclude that the effect is real.
But sometimes, even when the statistical likelihood of an effect being real is very high, the effect is still not real. It is impossible to know with 100% certainty whether an effect is real, or whether it just looks real in the samples you happen to be working with. This is why we need to replicate findings before we can draw meaningful conclusions.
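To make this concrete, here is a minimal simulation (my own sketch, not part of the study designs discussed above). It runs thousands of experiments where there is, by construction, no real effect at all, yet a predictable fraction of them still come out "statistically significant." That fraction is exactly why replication matters.

```python
import random
import statistics

def welch_t(a, b):
    # Welch's t statistic for two independent samples
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

random.seed(42)
n_experiments = 2000
n_per_group = 30
false_positives = 0

for _ in range(n_experiments):
    # Both groups are drawn from the SAME population: no real effect exists
    group_a = [random.gauss(0, 1) for _ in range(n_per_group)]
    group_b = [random.gauss(0, 1) for _ in range(n_per_group)]
    # |t| > 2.0 roughly approximates the conventional p < 0.05 threshold
    # at these sample sizes
    if abs(welch_t(group_a, group_b)) > 2.0:
        false_positives += 1

print(f"'Significant' results with no real effect: "
      f"{false_positives}/{n_experiments} "
      f"({100 * false_positives / n_experiments:.1f}%)")
```

Roughly 5% of these no-effect experiments cross the significance threshold anyway, which is exactly what the threshold promises. A single significant result can always be one of those; several independent replications almost never are.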
Third, as technology gets better, we can make more accurate judgments about the things we investigate, and these might turn out to undermine things we believed previously.
I think there is a really big problem with the public perception of scientific research, and I'm not sure why that is the case (although I did once try to find out!).
Keep in mind that for every finding that is contradicted, there are dozens (hundreds? thousands?) more that are not. It seems unfair to damn all of science for doing precisely what it sets out to do: reject hypotheses that are ultimately not supported by the data.
I also want to try to address some of the misconceptions cited above:
1. “Corporate meddling.” It would be asinine for me to try to argue that no corporation has ever financed a research study, or that none has attempted to influence the outcome of such a study. However, this is extremely uncommon. A great deal of research is publicly funded, and privately funded research usually involves the granting agency handing you a pile of money and then asking what you found. They do not control whether you publish your findings, or what the study says when you do publish it.
There have been a few high-profile cases that involved drug companies doing research on their own drugs and “massaging” the data. This kind of thing is very strongly frowned upon by the scientific community, to the extent that a very prominent scientist in my field is basically shunned now because she was involved with ghostwriting an article for a pharmaceutical company. She will not be fired from her university, but she has lost all scientific credibility.
2. “Most health studies are simplistic and don’t take into account the complexity of human beings and how they interact with the environment. The experiments are often poorly thought out and the results inadequately analysed and so come to meaningless conclusions.”
I see absolutely no justification for this statement. I read probably a dozen scientific articles per day and in my life have come across fewer than 10 that I would describe this way. Research studies are designed by experts in their field. Each study is designed and executed over a period of months to years by people who have been studying that issue (and only that issue) for years to decades. How and why they would waste their time creating a poorly designed study, failing to analyze it correctly, submitting it for peer review, and somehow passing peer review with their poorly designed, incorrectly analyzed study is completely incomprehensible to me.
3. “Correlation does not imply causation.” Actually, correlations do imply causation; they just don’t prove it. Cigarette smoking correlates with lung cancer, and cigarette smoking also causes lung cancer. Correlational studies were avoided for a long time in scientific research precisely because of this issue, but performed correctly they can be very informative. That said, correlations are not often used in scientific research. The most commonly used statistical tests involve an analysis of variance, which tests whether the variance between groups is greater than the variance within each group.
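The between-versus-within comparison can be sketched in a few lines. This is my own illustration with made-up numbers, not data from any study: a one-way ANOVA F statistic is just the between-group variance divided by the within-group variance, and a large F means the group means differ by far more than the scatter inside each group can explain.

```python
import statistics

def one_way_anova_f(groups):
    # F = (between-group variance) / (within-group variance)
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_values)
    k = len(groups)
    n_total = len(all_values)
    # Between-group sum of squares: how far each group mean sits
    # from the grand mean, weighted by group size
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: scatter of each observation
    # around its own group mean
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    ms_between = ss_between / (k - 1)        # df between = k - 1
    ms_within = ss_within / (n_total - k)    # df within = N - k
    return ms_between / ms_within

# Hypothetical example: two tight groups with clearly different means
control = [5.1, 4.9, 5.0, 5.2, 4.8]
treated = [6.0, 6.2, 5.9, 6.1, 6.3]
f = one_way_anova_f([control, treated])
print(f"F = {f:.2f}")  # large F: between-group variance dominates
```

With these numbers the group means (5.0 vs 6.1) are far apart relative to the tiny spread within each group, so F comes out very large; if the two groups overlapped heavily, F would hover near 1.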
I do not think statistics are a lie. I also do not think they are the truth. They are a tool, and they are the very best tool we have for determining causal relationships in behavioral and health sciences.