Machinery of Misbelief
A psychiatrist tackles the psychology of false beliefs in the misinformation age
This is a review of the book “False: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True” by Joe Pierre, MD, published by Oxford University Press, 2025.
If you’ve ever tried to reason with an uncle who swears the election was stolen, or felt that sinking dread when a friend shares a viral post about miracle cures that target root causes, you know the territory Joe Pierre is mapping in False. A professor of psychiatry at UCSF and an experienced academic clinician, Pierre takes apart the mechanics of false belief with the eye of a clinician and the curiosity of a social scientist. The result is an intellectually rigorous, urgent exploration of why human beings believe things that aren’t true. The book draws from psychology, psychiatry, cognitive science, and social theory, and it speaks directly to the uneasy question of the fragility of truth.
Pierre sets himself a formidable task: to trace the full spectrum of false belief, from clinical delusions to everyday distortions, and to explain why a species capable of scientific accomplishments such as putting men on the moon also struggles to agree on basic facts. He draws on cognitive psychology, psychiatry, and media studies to produce a diagnosis of the “post-truth” condition.
Successive chapters explore and explain the key psychological phenomena behind misbelief: overconfidence and the Dunning-Kruger effect, confirmation bias, motivated reasoning, identity-protective cognition, epistemic relativism, the erosion of authority in a fragmented media ecosystem, and the “disinformation industrial complex” through which state and corporate actors exploit psychological vulnerabilities.
Pierre is at his best when integrating clinical insights with cognitive science. His use of case vignettes, particularly his long-standing engagement with patients suffering from delusional disorder, grounds the text in real-world psychiatric experience. He also handles complex cognitive biases and behavioral economic theories (e.g., base rate neglect, heuristic-driven errors) with clarity and pedagogical skill.
Susceptibility to falsehood is an everyday feature of the normal human mind. The very mental faculties that make us brilliant, such as pattern recognition, intuitive inference, and emotional salience, also render us systematically prone to error. And in the attention economy, these design features can generate all sorts of problematic dynamics. From harmless quirks to violent delusions, from UFO enthusiasts to QAnon, from climate change denial to vaccine refusal, these are expressions along a continuum of false beliefs, with overconfidence at one end and psychosis at the other.
Pierre outlines a characterization of belief pathology in terms of degrees of conviction, preoccupation, extension, and distress. We all fall prey to cognitive distortions. We all tell ourselves stories that insulate identity from dissonance. What marks clinical delusions, he argues, is not their falsity per se, but their unshakeability and disruptive impact. On Pierre’s account, while both a QAnon adherent who withdraws from family and stockpiles supplies and a patient convinced they are under CIA surveillance might hold beliefs that are, in a loose sense, “delusion-like,” the two occupy qualitatively different territory. Delusions proper, Pierre argues, can be reliably distinguished from conspiracy-driven beliefs by features such as their self-referential structure and reliance on subjective rather than objective reasoning.
The first half of the book is dedicated to unpacking the psychological architecture that supports false belief. Pierre covers the usual suspects (confirmation bias, motivated reasoning, the Dunning-Kruger effect) with clinical texture and empirical grounding. He tackles concepts such as “identity protective cognition,” a term borrowed from legal scholar Dan Kahan, which refers to our instinct to reason in ways that protect group identity rather than objective accuracy.
The metaphor of the “flea market of opinion” is a memorable frame for the contemporary information ecosystem. Here, amid algorithmic amplification, epistemic gatekeeping collapses. Loudness, confidence, and tribal resonance outperform nuance. In this bazaar, everyone’s an expert and no one’s accountable. The result is a public sphere saturated with persuasive nonsense.
Chapters on disinformation, conspiracy theories, and “bullshit” (speech indifferent to truth) build momentum toward the book’s second half, which tackles remedies.
Pierre stresses structural and institutional reforms alongside personal virtues, but with regard to solutions addressing cognitive failings, he focuses on what he calls the “Holy Trinity of Truth Detection”: intellectual humility, cognitive flexibility, and analytical thinking. These, he argues, are the dispositions most protective against the adoption of “epistemically suspect beliefs.” They can be seen as epistemic virtues and habits cultivated through education, CBT-style metacognition, and guided dialogue.
The prescriptions are laid out across five levels: individual, educational, media, platform, and civic. At the individual level, Pierre advocates for the trinity of virtues. At the educational level, he recommends media literacy curricula modeled after Finland’s widely cited national strategy. For information systems, he endorses “prebunking” (inoculation against falsehoods before exposure) rather than slow and mostly ineffective debunking. For platforms and policy, he calls for more content moderation, accountability mechanisms, and the flagging of repeat misinformation spreaders. Pierre insists that moderation is not censorship, though he knows this assertion can be politically fraught. For civic life, he recommends deliberative engagement across divides, creating space for reasoned disagreement and re-humanization.
These remedies are grounded in a theoretical conceptualization of the mechanisms at play and draw support from empirical research, and Pierre does a good job highlighting what works (e.g., the modest but reproducible effects of prebunking) and what doesn’t (e.g., fact-checking alone). Because personal bias is entangled with structural dynamics, the book insists on both psychological and social reforms. Even so, the social reforms discussed tend to converge on the same end goal: developing epistemic humility and cognitive resilience in the individual.
Pierre’s account consistently situates misbelief at the intersection of normal cognition and environmental context. He rejects “mass psychosis” tropes and the tendency to pathologize individuals, instead stressing the interplay between personal cognitive habits and systemic factors like polarization, distrust, and media ecosystems. While “the Holy Trinity of Truth Detection” is aimed at individual epistemic resilience, Pierre acknowledges that deeper structural change is also needed, and that neither level of intervention can succeed without the other. In this way, False offers an integrated view of the cognitive terrain and the political and institutional landscapes in which misinformation thrives.
Pierre defines misinformation somewhat in passing: “misinformation—that is, information that’s wrong or false” (p 51). The definition appears in the context of differentiating between objective reporting and opinion. Opinions that are wrong or false are not necessarily misinformation in Pierre’s view, provided their status as opinions is acknowledged. The sense of misinformation Pierre uses is that of false factual claims, which is narrower than “any information that happens to be wrong.” Rather than defending a particular notion of “misinformation,” Pierre focuses on clear examples of false beliefs, such as the PizzaGate conspiracy theory and the false claims that COVID-19 vaccines caused sterility or contained microchips.
Pierre also emphasizes the difference between misinformation and disinformation, defining disinformation as “information that’s deliberately created and spread for its false counternarrative.” (p 65) Pierre recognizes that the line between misinformation and disinformation is blurred, since intentions and genuineness of beliefs are hard to judge, but maintains that this distinction is important, “amounting to the difference between believing things that are merely wrong and believing lies.” (p 65) Pierre devotes an entire chapter to discussing the “disinformation industrial complex” and sets up the metaphors as follows:
“Within the disinformation food chain, “apex predators” who are part of a “disinformation industrial complex” sit at the top, fomenting mistrust in institutions of epistemic authority and creating false information based on some underlying motive, usually to obtain either financial profit or political power… In the middle of the food chain are the “mesopredators” or “prosumers” who both consume disinformation and pass it on as misinformation while also creating novel disinformation themselves. Finally, the “prey” or “pure consumers” lie at the bottom of the food chain as passive receptacles of misinformation and disinformation alike, sharing it with other consumers.” (p 67; citations removed from quotation for readability.)
The apex predators Pierre discusses include Alex Jones and InfoWars (notorious in particular for Jones’s baseless claims that the 2012 Sandy Hook Elementary School shooting was a staged operation), the wellness and lifestyle company Goop, and Dr. Kelly Brogan. In Pierre’s analysis,
“From a societal standpoint, there’s good evidence that our current vulnerability to disinformation is more a result of mistrust than gullibility, ignorance, or stupidity as is often claimed.” (p 66)
and while some of this mistrust is a consequence of the failures of mainstream institutions, breeding mistrust in facts and experts has been a specific strategy weaponized by political actors.
One way of appreciating the position False occupies in the contemporary intellectual discourse around misinformation and the post-truth world is by comparison with the work of philosopher Dan Williams. I first encountered Williams through his work on the marketplace of rationalizations (Williams, 2023). I have enthusiastically followed his Substack blog “Conspicuous Cognition” since its launch in January 2024, and his writings have had a tremendous influence on how I personally think about these issues. It was natural, then, that while reading False, I often viewed Pierre’s arguments through the lens of Williams’s ideas.

Williams is dissatisfied with how “misinformation” and “disinformation” are typically defined and studied by scholars of misinformation. His argument is that clearly fabricated stories (à la Alex Jones) are relatively rare and their impact is confined to a small segment of society. On an expanded definition that includes any falsehood, or even true but misleading information, “misinformation” indeed has more societal impact, but this also makes objective characterization of misinformation far harder, since truth (especially in politics) is complex, contested, and often uncertain. This creates a dilemma: researchers can either target the politically significant but definitionally slippery forms of misleading communication or preserve relative objective impartiality, but not both.
I would add that I, too, find the definition of “misinformation” as “information that’s wrong or false” or “false factual claims” philosophically inadequate, primarily because the domain of assertions that are wrong or false is far greater than the domain of things we can reasonably call misinformation. Our scientific understanding of the world is constantly progressing, and many scientific explanations considered true today will be proven wrong tomorrow. There are ways to add further qualifications that bolster the definition, and to Pierre’s credit, he goes to great lengths to explain how scientific understanding changes over time (“science is always ready to be wrong,” p 158) and why trusting science is not an argument from authority. I completely agree with Pierre when he writes, “scientific consensus based on such evidence shouldn’t be as easily discounted as it often is. It’s one thing when a lone scientist or a handful of scientists makes a claim that’s debatable; it’s another when nearly the entire field is in agreement.” (p 159) At the same time, these insights do not translate into an updated definition of “misinformation.” Fundamentally, there is no escaping the messiness of delineating the domain of “false information” in contexts where truth is complex, contested, evolving, and uncertain, and the problem is all the more acute where an expert consensus does not exist.
This definitional problem, even if accepted, is not fatal to Pierre’s project. Pierre works with a relatively broad definition of misinformation, but his focus remains on clear instances of misinformation that are immune to boundary disputes. Whereas Williams maintains that unmistakable instances of misinformation and disinformation affect a relatively small portion of society, Pierre argues that such misinformation and disinformation can have and have had an outsized influence by decisively shaping outcomes such as political elections. Pierre writes,
“belief in modern conspiracy theories related to vaccines, the integrity of democratic elections, and climate change has a much greater practical relevance to our daily lives and a much greater potential for harm… belief in disinformation-promoting conspiracy theories about these pivotal issues of our time is already contributing to human lives lost through neglect of proper medical care and failure to take action on attempts to halt or reverse global warming.” (p 91)
Williams acknowledges that harmful disinformation exists and can sometimes be clearly identified, but stresses that misinformation research is not an objective or apolitical endeavor, and its experts share the same biases and fallibility as anyone else. As an example, he cites an exposé on the Global Disinformation Index’s disputed decision to label outlets like Unherd as disinformation sources. This doesn’t contradict or undermine Pierre per se, whose examples are unambiguous and who never claims that the field of disinformation is free of bias, but it shows how implicitly portraying judgments of misinformation and disinformation as obvious and infallible can erode trust in experts, and that the field must embrace the same humility and self-awareness that Pierre recommends generally.
Both Pierre and Williams recognize, in their own ways, that cognitive biases and naive realism are the human default. Both agree that disinformation is a symptom of societal dysfunctions such as institutional distrust and political sectarianism. A core point for Williams, however, is that most social epistemology frames the central puzzle as why people believe false things. Williams argues that just as poverty is the historical default and wealth the true puzzle, ignorance and misperception are the human default; the real epistemic puzzle is how and why people ever come to hold true beliefs. On this view, the epistemic default is distorted, locally adaptive, self-serving cognition; accurate beliefs about complex, non-local matters arise only through rare alignments of institutions, norms, and incentives.
False takes it for granted that we are living in a “post-truth” world, while Williams finds the very idea frustrating. “For the love of God, stop talking about ‘post-truth’” is the exasperated title of one of Williams’s posts, in which he argues,
“There was never a golden age of objectivity, and today’s epistemological problems result from competing visions of reality, not a conflict over the value of truth.” (Williams)
False was written before Williams’s critiques of misinformation studies on Substack took off. What I want to convey here is that my reading of False was shaped in part by Williams’s influence, and as a reviewer, that seems like important information to communicate. By the time I picked up Pierre’s work, concepts like “misinformation” and “post-truth” had already been thoroughly problematized in my mind (for better or worse) thanks to Williams, stripped of the solidity Pierre seems to assume. That Pierre’s project remains stable, and his thesis retains both vitality and viability, is a testament to his scholarly rigor and the excellence of this book.
A version of this review was published online in Psychiatric Times on Aug 29, 2025.