
This paper is written from the perspective of "analytic epistemology and philosophy of language that concerns itself with the conditions under which particular claims are properly put forward."

Notably absent is any analysis from an ethical standpoint, which, as a medical ethicist, I find quite concerning. In addition, as Dr. Aftab rightly notes, the Dang & Bright paper is premised on the notion that there is a bright line between claims made in the scientific literature, aimed at other scientists ("public avowals"), and claims made as part of "public scientific testimony." The authors seem oblivious to the increasingly "porous boundaries" between these two idealized categories, and to the rapid dissemination of such "public avowals" to popular venues read widely by the general public.

This elision leads the authors to ignore the ethical consequences of false and misleading studies that are published in scientific and medical journals. A prime example is the now discredited and retracted article on the supposed link between autism and vaccination, published by Wakefield and colleagues. As Rao and Andrade note:

"In 1998, Andrew Wakefield and 12 of his colleagues[1] published a case series in the Lancet, which suggested that the measles, mumps, and rubella (MMR) vaccine may predispose to behavioral regression and pervasive developmental disorder in children. Despite the small sample size (n=12), the uncontrolled design, and the speculative nature of the conclusions, the paper received wide publicity, and MMR vaccination rates began to drop because parents were concerned about the risk of autism after vaccination.[2] Almost immediately afterward, epidemiological studies were conducted and published, refuting the posited link between MMR vaccination and autism.[3,4]

[Rao TS, Andrade C. The MMR vaccine and autism: Sensation, refutation, retraction, and fraud. Indian J Psychiatry. 2011 Apr;53(2):95-6. doi: 10.4103/0019-5545.82529. PMID: 21772639; PMCID: PMC3136032.]

It is disappointing to see Dang & Bright engaging, even if "playfully," in this sort of abstracted analysis without even a nod to the ethical ramifications of their thesis. A physician co-author might have pointed out that while recourse to "analytic epistemology and philosophy of language" is perfectly fine, a thesis utterly divorced from considerations of the public good does no great service to philosophy or science.

Ronald W. Pies, MD


The believe/disbelieve dichotomy is far too simplistic. A scientist can regard his results as supporting his hypothesis on a range from possible to probable to certain. A conclusion that is presented as merely possible is neither believed nor disbelieved. One may question whether Einstein regarded his theory of relativity as certain before its explanatory power was demonstrated through photographs of the 1919 eclipse.


Isn't the main problem with this idea that it assumes scientists are actually willing to change their minds? Scientists and academics often believe their hypothesis for quite personal reasons (source: total anecdote, with apologies), and continue to argue for its truth even in the face of contradictory evidence (I do have an example of someone in mind when I make this statement, but I'm going to keep that off the internet). With funding increasingly reliant on making wildly overblown predictions about how much your research will help the world, academics may think it's too high a cost, both financially and egotistically, to admit they were wrong.


I think the general idea is that a few scientists being stubborn and unwilling to change their minds shouldn’t prevent the relevant scientific community as a whole from updating their thinking. And when science works well, it does happen. But often a handful of scientists can wield a great deal of influence and authority that can hold back the whole scientific community.


Dang and Bright's central argument seems compatible with Adam Mastroianni's idea of "strong-link science," where you encourage wild experimentation and throwing lots of stuff at the wall to see what sticks, with the assumption that much of it will be wrong or useless but that's fine as long as we get some really good stuff too.

https://www.experimental-history.com/p/science-is-a-strong-link-problem

Perhaps the difference is that Mastroianni's vision is fairly anarchistic with researchers going off in all directions to do their own thing with fewer institutional barriers, whereas these authors envision an even *more* cohesive scientific community with stronger norms and deliberative institutions (but lots of diversity). And presumably in strong-link science the researchers still believe in their results; it's just the results might be really weird. But maybe it also means sharing weird stuff they don't even believe?

In a way, their paper turns the whole "file drawer problem" on its head: here the worry is not covering up null or disappointing results (even when they appear to be justified), so much as covering up results that don't appear to be justified! I'm inclined to think their paper works best as a thought experiment pushing us to consider what conditions and norms would need to *already* be in place in order to sustain such an ideally "dynamic, iterative process of collective inquiry," where wrong findings don't undermine science but advance it. Maybe it's like Taleb's antifragility idea: the more science can absorb wrong or unjustified findings without collapsing, the more resilient it is; the more it depends on findings being true, the more fragile it becomes.

I'm a little confused by your hedging comment though. You say you agree with Dethier that "it is indeed preferable for scientists to appropriately hedge and justify their claims." But I thought Dethier was saying scientists hedge too much with the public and should speak more definitively when their audience is non-experts. Did you mean scientists should hedge less with the public and more with scientists; or more with the public and less with scientists; or something else?


Thanks Chris! Great comments. I agree that there is a definite synergy with Adam Mastroianni's view of science, and also agree that Mastroianni so far hasn't emphasized collective norms of the sort championed by Helen Longino.

I didn’t read Dethier as saying that. I understood him as saying that scientists should assert what can be appropriately justified, even when talking to other scientists, and this would often involve hedging. This is from the conclusion of his paper:

“context-sensitivity explains why some scientific (and philosophical) conclusions seem to be appropriately asserted even though they are not known, believed, or justified on the available evidence: scientists who assume that their audience contains only scientists make very specific assumptions about the common ground, assumptions that are rational in that they allow for efficient scientific communication. I then showed these assumptions can go wrong when scientists end up with an audience that is composed partly of non-experts, with the result that the relevant assertions are no longer appropriate…”


I see - thanks for clarifying about Dethier. And I didn't mean to make Mastroianni's strong-link science sound so ad hoc and loosey-goosey; I could imagine it being very rigorous about truth standards and assessment of evidence, but less rigorous about the weirdness factor and some collective norms.
