I want to talk about “conspiracy theories” that preoccupy many psychiatric critics, and I want to do that by relying on some very helpful points made by Scott Alexander.
When I say “conspiracy theories,” I'm thinking of ideas that have been rejected by the mainstream scientific community and are not supported by the current scientific consensus, but are enthusiastically endorsed in some communities. People in these communities tend to think that the mainstream scientific community’s rejection of these ideas is due to corruption, incompetence, or ignorance.
Examples of this sort would be the belief that ivermectin is efficacious in the treatment or prophylaxis of COVID-19 infection, and various sorts of anti-vaccination attitudes, such as the notion that the COVID-19 vaccines are ineffective and harmful.
A belief related to psychiatry that I’d put in the same category would be the notion that psychiatric medications across the board actively make the psychiatric illness worse. That is, antipsychotics make schizophrenia worse, or antidepressants make depression worse, etc. And not just in some subset of patients but for the average patient or for the majority of patients. I would also include in this category the view that antidepressants have no clinically relevant efficacy in the treatment of depression, and the only reason they are still prescribed to anyone (and why they are still recommended by practice guidelines) is because the medical community is corrupt, incompetent, or ignorant.
[Post-publication clarification: I’m using the term “conspiracy theory” in a way that is broader than standard use. Because of this, some people think my use of the term is problematic. Joe Pierre pointed out that most definitions of conspiracy theories involve “the likes of a secret plot, malevolent intentions, and a cover-up,” and he doesn't think that ideas such as “ivermectin works” or “antidepressants don’t work” qualify. However, he elaborates, if the claim were that the psychiatric profession knew that antidepressants didn't work or caused harm, but recommended and prescribed them anyway to generate profits for Pharma, that would qualify as a conspiracy theory. He suggests that “misbelief” or “antipsychiatry beliefs” would be better characterizations of what I am describing here. I think he is basically right. It does seem to me that the term has acquired a more expansive meaning over time, which is reflected in how Scott Alexander used it. Since I was building on Alexander’s work, I retained it. But I am willing to accept that some other term for this phenomenon would be more appropriate.]
I want to highlight three points made by Alexander.
#1. Conspiracy theories are supported by facts that can seem compelling even to many smart people, and conspiracy theories are fueled by the same sorts of biases that other people, including mainstream scientists, are vulnerable to.
From Contra Kavanagh On Fideism:
“my conclusion is that conspiracy ecosystems fall prey to the exact same biases that all of us have, including experts and correct people. But experts and correct people have slightly less of them, have better self-correction mechanisms, and manage to converge on truth, whereas conspiracy theorists have slightly more of them and shoot off into falsehood… We all struggle with the same tendencies. The trick is in understanding and controlling them.”
“These facts [that compellingly support conspiracy theories] have alternative explanations, but these alternatives are less compelling and harder to explain.”
From Trying Again On Fideism:
“… good conspiracy theories have lots of convincing-sounding evidence in their favor, and may sound totally plausible to a smart person reasoning normally. When people shrug off conspiracy theories easily, it’s either because the conspiracy theory isn’t aimed at them - the equivalent of an English speaker feeling smug for rejecting a sales pitch given entirely in Chinese - or because they’re biased against the conspiracy theory with a level of bias which would also be sufficient to reject true theories. Sure, everything went well this time - they were able to resist believing the theory - but one day they’ll encounter a sales pitch in English, on a topic where it accords with their biases. Then they’ll be extra-double-doomed because of how sure they are that they’re immune to propaganda.”
#2. “Trapped priors” ensure that believers remain stuck in conspiracy theories
From Trapped Priors As A Basic Problem Of Rationality:
In normal Bayesian updating of beliefs, someone biased can start off wrong, but with each new fact they learn, they move towards the correct position. But when someone has “trapped priors,” they hold on to their beliefs so rigidly that “they start out wrong, and each new fact they learn, each unit of effort they put into becoming more scientifically educated, just makes them wronger.”
and
“This pattern - increasing evidence just making you more certain of your preexisting belief, regardless of what it is - is pathognomonic of a trapped prior.”
#3. The response of the mainstream community makes it all worse
Scott Alexander writes in Contra Kavanagh On Fideism about how, as a teenager, he believed in a “conspiracy theory” about Atlantis (the lost continent that hosted an advanced civilization). He describes how the skeptical community’s response was basically to mock the people who believed the theory, without offering much of an explanation for the evidence (such as the existence of underwater “pyramids”) put forward by proponents such as Graham Hancock.
“I’m not angry at Graham Hancock. I see no evidence he has ever been anything but a weird, well-meaning guy who likes pyramids a little too much. But I feel a burning anger against anti-conspiracy bloggers, anti-conspiracy podcasters, and everyone else who wrote “lol imagine how stupid you would have to be to believe in Atlantis” style articles.
Either these people didn’t understand the arguments for and against Atlantis, or they did. If they didn’t, they were frauds, claiming expertise in a subject they knew nothing about. If they did, then at any moment they could have saved me from a five year wild-goose-chase - but chose not to, because it was more fun to insult me.”
and
“I agree that many people are unreasonable and don’t respond to reasonable explanations. I think sometimes this is genetic or something and can’t be helped, but other times it comes after a hundred different experiences where you want reasonable explanations and don’t get them and also people are jerks to you and you learn that the establishment can’t be trusted… If I had had to suffer through a few more skeptics calling me racist because I wanted to know why there were giant underwater pyramids, I probably would have believed in Atlantis even harder, out of spite, and never talked myself out of it. And then when ivermectin came along, I would have thought “Scientists? Experts? They’re the guys who are so dumb they can’t even figure out Atlantis existed when there are giant underwater pyramids right in front of their eyes. Screw them, I’m listening to Bret Weinstein.””
“My model of the PR here - of the overall milieu and psychological factors that turn people into conspiracists - is that they spot some giant underwater pyramids, compelling-seeming facts that appear to point toward conspiracy. These facts have alternative explanations, but these alternatives are less compelling and harder to explain. Realistically some people are going to get caught up in the conspiracy’s superior first-level compellingness and you can’t help them. But other people are on the fence and can be talked down. This is the job of the pro-mainstream-anti-conspiracy people. Instead of doing their job, these people:
ignore them
insult them
tell them there’s “no evidence” for their beliefs, when they have just gotten back from a scuba dive to see the giant underwater pyramids.
tell them that they shouldn’t look at any evidence, looking at evidence makes you a bad person.”
These same dynamics play out in the case of antipsychiatry conspiracies as well.
1. Many smart people are vulnerable to these ideas. In fact, evidence-based bros might be particularly susceptible: they already harbor a deep mistrust of the experience of clinicians and patients, and so are unable to use it for error-correction, and they are acutely aware of the flimsy nature of most scientific research.
Several things can make these ideas seem more compelling than they actually are: when these ideas are publicly endorsed by some prominent academics and high-profile journalists; when there are some studies, reviews, or meta-analyses that support these ideas; when these ideas are contextualized within a narrative of the ubiquity of shady scientific practices such as publication bias, p-hacking, misreporting outcomes, selective reporting, ghost writing, and even outright fraud; and when individuals are primed by experiences of or observations of iatrogenic harm.
2. The partisan and polarized nature of online discourse enculturates individuals to a thinking style that encourages confirmation bias and an attitude of condescension towards opponents that rapidly produces trapped priors.
3. The mainstream psychiatric community responds by ignoring or insulting them, and by failing to offer adequate explanations for “the conspiracy’s superior first-level compellingness” in a manner, and in venues, accessible to an average person trying to find them. And when the folks who are dissatisfied with this lack of response have to repeatedly suffer through mainstream figures calling them “antipsychiatrists,” they start believing in these ideas even harder, out of spite, and learn that the so-called experts cannot be trusted.
I experienced this first-hand in an attenuated way. My story never reached the intensity of Alexander’s, but it may be illustrative nonetheless.
For instance, as a psychiatry trainee and even later as a psychiatrist, I struggled to make sense of the data around the clinical efficacy of antidepressants. The research evidence suggesting a lack of efficacy seemed quite compelling to me, and when I’d discuss this with colleagues, the responses were highly inadequate and did little to explain the puzzling findings. Folks were content to shrug and continue with practice as usual. It was highly frustrating. I can easily imagine scenarios where I started off with priors that were just slightly off — say, if I had taken an SSRI in the past and found it to be ineffective and harmful, or if I had been a non-clinical academic with a contrarian mindset — and those priors then becoming trapped because of the repeatedly negative experiences. Fortunately for me, after several years of trying to figure this out on my own, I encountered some experts who understood these debates really well and pointed me in the right direction. (I wrote the post The Case for Antidepressants in 2022 as a way of helping others like me.)
In the case of psychiatric-medications-make-mental-illness-worse arguments, I had to do a lot more figuring out on my own. Again, I found the bulk of the mainstream response to be inadequate. Becoming deeply familiar with the research literature myself, engaging with some specific instances of it, discussing these issues with close colleagues, triangulating research with clinical experience, and later teaching trainees about these issues was all quite helpful. Part of the solution was appreciating the ways in which psychiatric medications can and do negatively affect the functioning of many individuals (the grain of truth in the argument), and distinguishing that question from the question of the worsening of the illness itself. And part of it was appreciating, on a gut level, the methodological issues that can produce a very compelling impression of worsening outcomes in longitudinal data. (As Scott Alexander writes, “I learned enough about geology to understand on a gut level how natural processes can sometimes produce rocks that are really really artificial-looking…”) (See: Making Sense of the Literature on Antipsychotics and Long-Term Functioning)
My relationship to these ideas was always marked by a strong ambivalence, and my clinical experience kept me grounded. As a result, unlike Alexander, I don’t quite feel a burning anger against psychiatrists who respond to folks grappling with these questions with “lol, how stupid you have to be to believe this,” but I do feel terribly annoyed and frustrated.
We can do better. We need to do better.
A challenge is that many folks already have trapped priors, and there is little to be gained by engaging with them endlessly; it only polarizes the discussion further, so I avoid doing that as much as possible. But for every person with trapped priors, I believe there are many more on the fence, silent but observing the interactions, and oftentimes it is our behavior that will determine the trajectory their beliefs take.
See also:
What a thought-provoking and clear piece of writing. Thank you! The rationalist account of belief change in the face of evidence, and of conspiracy theories as what happens when that change fails, is impressive. I suspect, though, that it is an incomplete one. It privileges a cognitive, mechanistic explanation over the emotional/social functions of conspiracy theories. The idea of "trapped priors" sounds quite similar to the "backfire effect," where protected values/beliefs get stronger in the face of contradicting evidence. Strongly held beliefs develop in a social matrix. There is a community of believers behind them. One's mind is embedded in a network of minds, so to speak. For this network or community, issues of deeply held values, social identity, and perceived power imbalances are important in fostering and maintaining "conspiracy theories." Years ago, I read somewhere that cognition/thinking developed as an almost ornamental function of the human mind. While the implications of this may seem less optimistic, it is important to take into account the social milieu in which conspiracy theories develop, if there is to be a meaningful chance of addressing them.
I agree that how we, as professionals, respond determines the outcome of these conversations. There should not be a safeguarded zone of "acceptable discourse" outside of which the response is confusion and ridicule.
On an only tangentially related note, I will take a trip down memory lane to share my own experience from my training days on how NOT to deal with conspiracy theories. Sometime in 2007/08, when I was a first-year resident, there was a grand rounds on HIV/AIDS at my academic training institution, and a certain doctor asked about the origins of the HIV virus and whether there is any truth to the allegation that the early epidemic in Africa coincided with the mass vaccination campaigns by the World Health Organization. Now, I did not know anything about this issue. I am still not sure what to make of these stories. I remember, however, how some of the professors and the presenter responded. They behaved as if the most ridiculous question had been asked, as if it was not worth their time. To this day, I can vividly picture their reaction: their confusion, irritation, and embarrassment. The message was, "do not ask such stupid questions."
Thanks for a very useful discussion of conspiracy theories (CTs), Awais. Dr. Joe Pierre and I discuss these in detail in several articles, including my recent posting on Psychiatric Times:
https://www.psychiatrictimes.com/view/false-flag-conspiracy-theories-psyche-society-and-the-internet
It's important to emphasize that CTs are not, in most cases, "delusions" in the sense most psychiatrists would use that term; rather, they usually represent over-valued and poorly evidenced beliefs that have been strongly reinforced by certain cultural subgroups, typically via social media and the internet.
It is also important to note that CTs can arise on either side of the political spectrum, and that some CTs (though probably a minority) eventually prove to be true.
Finally, it hardly needs emphasizing that psychiatrists who treat patients espousing CTs should avoid patronizing or disparaging the patients; rather, the psychiatrist should keep an open mind; understand and empathize with whatever fears may underlie the patient's CT; and gently provide corrective information or appropriate "counter-narratives" for the patient to consider.
For more on this, please see:
https://www.medscape.com/viewarticle/945290
Best regards,
Ron
Ronald W. Pies, MD