“Science is broadly understood as collecting, analyzing, publishing, reanalyzing, critiquing, and reusing data.” (Wikipedia)

In other words, science is a process. More specifically, it is a self-correcting process for deepening our understanding of the world, one that comes with safeguards to minimize error. Data is the direct outcome of that process. Knowledge is a proposition about data that is widely accepted as true by some group; call them experts or scientists. To trust science with qualifications is to leave open the possibility that the experts might be wrong, or that there might be more to the story. This is an attitude of openness more than of doubt (i.e., skepticism): an acknowledgement of human fallibility and of the sheer difficulty of knowing all the relevant facts, and a willingness to question received wisdom. It’s an attitude that keeps the scientific process going.

With those thoughts in mind, more on this subject from my 2021 post, Trust in Science?:

The Open Science Collaboration, a group of researchers, conducted replications of 100 studies published in three top psychology journals. Of the original studies, 97% had significant results; of the replications, just 36% did. Per the study authors, “collectively these results offer a clear conclusion: A large portion of replications produced weaker evidence for the original findings” (Open Science Collaboration, 2015). Replication projects have also been run in economics, neuroscience, evolutionary biology, ecology, and organic chemistry. All arrived at the same clear conclusion: many published findings do not hold up on a second look.

The study “The prevalence of statistical reporting errors in psychology (1985–2013)” examined papers published in eight major psychology journals between 1985 and 2013. The authors found statistical reporting errors in half the papers, and one in eight papers contained an error serious enough to call its conclusions into question. Findings of statistical significance were more likely to be erroneous than findings of nonsignificance, raising the possibility of systematic bias in favor of significant results. However, the authors warn that an initial finding of statistical error is not enough to dismiss a paper’s conclusions; rather, “the final verdict on whether a result is erroneous should be based on careful consideration by an expert” (Nuijten et al., 2016).
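The kind of error the authors looked for can be sketched in a few lines of code. This is illustrative only: the numbers below are hypothetical, and the actual consistency checks behind the paper (implemented in the authors’ statcheck tool) are more involved. The idea is that a p-value reported alongside a test statistic can be recomputed from that statistic and compared against what the paper claims:

```python
import math

def two_tailed_p_from_z(z: float) -> float:
    """Two-tailed p-value for a standard-normal (z) test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# A hypothetical reported result: "z = 1.96, p = .03"
reported_p = 0.03
recomputed_p = two_tailed_p_from_z(1.96)  # roughly 0.05

# Flag the report as inconsistent when the recomputed p-value,
# rounded to the reported precision, disagrees with the reported one.
inconsistent = round(recomputed_p, 2) != reported_p
print(f"recomputed p = {recomputed_p:.3f}, inconsistent: {inconsistent}")
```

Here the reported p of .03 does not match the roughly .05 implied by z = 1.96, so the result would be flagged for expert review rather than automatically dismissed, in line with the authors’ warning.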

In another study, participants who reported trust in science were more likely to believe and disseminate false claims containing scientific references than false claims without them. The study authors conclude that “trust in science, although desirable in many ways, makes people vulnerable to pseudoscience” (O’Brien et al., 2021).

Retraction Watch has catalogued over 22,000 retracted articles in the scientific literature since the 1970s. Its database provides detailed information on retractions in a broad range of scientific fields, including biology, neuroscience, ecology, climatology, psychology, health, nutrition and more. Among the 100+ reasons given for retractions in the database: bias and lack of balance, conflict of interest, errors in analysis, plagiarism, falsification of data, and fake peer review.

We could trust science more if we could trust scientists more – that is, if scientists were less prone to bias, error, fraud, hype, and other sins. But trust in science is not reducible to trust in scientists. That’s because data can’t be made to tell the truth – at best, it can only suggest a possible truth. And the same data may lead to opposite conclusions. Case in point:

A German psychologist named Martin Schweinsberg gave 49 researchers a copy of a dataset consisting of 3.9 million words of text from nearly 8,000 comments made on an online forum for chatty intellectuals. He asked them to explore two seemingly straightforward hypotheses. In the end, 37 analyses were detailed enough to be included. As it turned out, no two analysts employed exactly the same methods, and no two got the same results. The problem was not that any of the analyses were “wrong” in any objective sense; the differences arose because the researchers chose different definitions of what they were studying and applied different statistical techniques. (“Data don’t lie, but they can lead scientists to opposite conclusions,” The Economist)

So if trusting science means believing that whatever some or many scientists conclude is the truth, then no, we shouldn’t trust science. But that doesn’t mean we should distrust science as a matter of principle. It does mean we need to get better at evaluating scientific evidence… and then draw our own tentative conclusions.

References:

Open Science Collaboration (2015). “Estimating the reproducibility of psychological science.” Science 349(6251): aac4716. https://science.sciencemag.org/content/349/6251/aac4716.abstract

“Methods and madness: Data don’t lie, but they can lead scientists to opposite conclusions.” The Economist, July 31, 2021.

Nuijten, M.B., Hartgerink, C.H.J., van Assen, M.A.L.M. et al. The prevalence of statistical reporting errors in psychology (1985–2013). Behav Res 48, 1205–1226 (2016). https://doi.org/10.3758/s13428-015-0664-2

O’Brien, T. C., Palmer, R., et al. (2021). “Misplaced trust: When trust in science fosters belief in pseudoscience and the benefits of critical evaluation.” Journal of Experimental Social Psychology 96: 104184. https://doi.org/10.1016/j.jesp.2021.104184

“Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science” by Stuart Ritchie. Metropolitan Books, 2020. Highly recommended!