“Mixed Bag” is a series where I ask an expert to select 5 items to explore a particular topic: a book, a concept, a person, an article, and a surprise item (at the expert’s discretion). For each item, they have to explain why they selected it and what it signifies. — Awais Aftab
Philip R. Corlett, PhD, is an Associate Professor of Psychiatry and Psychology in the Wu Tsai Institute for Human Cognition at Yale University, where he directs the Belief, Learning, and Memory Lab. He trained as a cognitive neuroscientist, and his research focuses on delusions and hallucinations, in serious mental illnesses like schizophrenia but also in other psychiatric and neurological illnesses, as well as in attenuated forms in the non-psychotic general population. You can find him on Twitter: @PhilCorlett1.
Corlett: My career has involved a mixed bag of methods, including designing behavioral tasks that translate across species, computational explanations of behavior in human patients and experimental animals, psychopharmacological models of psychotic symptoms, and imaging meta-analyses.
Despite the array of methods, I am attracted to parsimony – the idea that we can deconstruct ineffable and profoundly human experiences into simple constituent building blocks whose biology we can interrogate with the tools of modern preclinical neuroscience.
I also strive for consilience: the unity of knowledge across fields and levels of analysis.
With a mixed bag, one can search for consilient findings elsewhere in the bag.
There is no better feeling than when that works out.
The account that I have contributed to most is the prediction error model of delusions, which suggests that people form delusions under the influence of an errant learning signal in their brains that drives them to attend to and learn about merely coincident stimuli, thoughts, and percepts, and to imbue them with delusional meaning and significance. More recently this has expanded into the predictive coding account of psychosis, which seeks to explain hallucinations too, not solely in terms of prediction errors but also in terms of errant perceptual predictions.
In the next phase of my career, I aim to make this work practically useful – we are trying to relate the signals and behaviors to treatment effects, both current and novel (for example, Julia Sheffield is testing whether volatility beliefs and prediction errors change with a CBT intervention that targets worry, with promising initial results), and, more broadly, trying to render psychotic symptoms more relatable and understandable by studying them alongside psychosis-like phenomena in the general population.
Book: “The Man Who Mistook His Wife for a Hat” by Oliver Sacks (1985)
Corlett: As an undergraduate, I started out studying Molecular Biology. I was not very good at it. I am not gifted at the bench, and, although I enjoyed learning about Francois Jacob and Jacques Monod, and their rebellious adventures in and out of the lab, Molecular Biology did not capture me.
I was living in halls, surrounded by social scientists. They would stay up deep into the night having conversations about expansive topics like death and time.
‘How might I, a scientist, have something to contribute?’, I wondered.
One floormate suggested I read Oliver Sacks. I did. I was enraptured. I loved how he wove the personal narratives of his cases together with extremely creative thinking about the challenges they were experiencing. This book provided a pathway from biology (specifically brain damage) to the social sciences, through the lens of cognitive neuropsychology. It started me wondering about delusions and how to explain them, in a manner that was intellectually satisfying, but that honored their deeply personal phenomenology. So began a quest to collect as many phenomenological descriptions of what it is like to experience mental symptoms as I could find. Between lectures I would sit in the Psychopathology Library and pore over texts on obscure neurological cases and psychiatric symptoms.
I was able to change tack and take up Experimental Psychology, under the mentorship of Mike Aitken. Mike, the material, the Department, and the Faculty felt like home, and I feel extremely grateful to have found that fit.
I am not a clinician, so much of what I learned about patients, prior to starting graduate school, was gleaned from the accounts I found in the library. I particularly enjoyed the First Person Accounts published monthly in Schizophrenia Bulletin.
There is, of course, no substitute for talking to people about their experiences. I am very lucky that I got to do that in graduate school, and I continue to be able to today.
Concept: Prediction Error
Corlett: My final year undergraduate project with Jeff Dalley, David Theobold, and Trevor Robbins, involved manipulating dopamine levels in the striatum of rats after a training session where they learned that a light predicted food rewards. Increasing dopamine levels made them learn that association more strongly, even if that learning caused behaviors that led the rewards to be withheld. It struck me, given what I was learning about schizophrenia (that it too was associated with excess dopamine), that this might explain why delusions were so strongly held. During that time, I went to a Departmental talk about dopamine and learning signals by Wolfram Schultz.
With Tony Dickinson and Pascal Waelti, Wolfram Schultz was demonstrating that the dopamine signal in the midbrain appeared to embody a particular learning mechanism hitherto only inferred from behavioral experiments. This quantity was prediction error: the mismatch between what you expect and what you experience, which can provide an impetus to new learning.
I wondered what would happen if prediction errors were signaled when they shouldn’t be. Might that drive learning, attention, and inference, toward things that ought to be discounted and ignored? Might it be the reason that delusions form? How might that inform how delusions are maintained? I have been trying to answer those questions ever since. I have been extremely lucky to have found a mentor in Paul Fletcher, who continues to inspire and guide me, with the clarity and creativity of his thinking and his careful and compassionate clinical ear.
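To make the concept concrete, here is a minimal, illustrative sketch of a delta-rule (Rescorla-Wagner-style) learner, with hypothetical parameters rather than the model from any of the studies mentioned: the prediction error, the outcome minus the expectation, drives updating of a cue's associative strength, and a spurious boost to that error inflates learning about a merely coincident cue.

```python
# A minimal delta-rule learner, for illustration only.
# Names and parameter values are hypothetical, not taken from any study.
import random

def cue_value(n_trials, reward_prob, learning_rate=0.2, aberrant_boost=0.0):
    """Return the associative strength V of a cue after n_trials of pairing.

    aberrant_boost is added to every prediction error, a crude stand-in for a
    learning signal that fires when it should not.
    """
    V = 0.0
    for _ in range(n_trials):
        reward = 1.0 if random.random() < reward_prob else 0.0
        prediction_error = (reward - V) + aberrant_boost  # mismatch between outcome and expectation
        V += learning_rate * prediction_error             # the error is the impetus to new learning
    return V

random.seed(0)
# A cue that is only coincidentally followed by reward normally acquires little value...
print(round(cue_value(200, reward_prob=0.1), 2))
# ...but with a spurious boost to the error signal it acquires undue significance.
print(round(cue_value(200, reward_prob=0.1, aberrant_boost=0.3), 2))
```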
Paul and I set out to test the prediction error hypothesis in humans using functional neuroimaging, drug models, and patient studies. We found a brain signal for prediction error during belief updating. We found that patients with first episode psychosis did indeed evince aberrant prediction errors which correlated with the severity of their delusions, and that acute ketamine induced psychosis and aberrant prediction errors.
At Yale, I have been fortunate to be able to continue that work with John Krystal, and to complete a full revolution of translation: from ideas inspired by animal studies, through patient work, and now back into animals, under the mentorship of Jane Taylor. I have also been fortunate to learn from and work with Ralph Hoffman (now sadly deceased) and Al Powers, extending the conditioning account to hallucinations with some success.
Person: Jeffrey Gray (and David Hemsley)
Corlett: I really like music. I am particularly fond of the Rolling Stones, and their fusion of rock and roll with country music on Exile on Main Street. In an interview about that record, Keith Richards describes how he searched his musical heritage and inspirations: “Find the cat that turned you on, listen to the things that turned him on, find who turned those people on.”
I tried to do that with prediction error theory.
There was a false start with Shitij Kapur’s incentive salience theory, which is not about prediction error at all.
However, the exchange of letters between Kapur and two British neuropsychologists led me to earlier manifestations of the prediction error theory.
Jeffrey Gray and David Hemsley and their colleagues had outlined a circuit model of prediction error theory in Behavioral and Brain Sciences in 1991. Jeffrey was a linguist and translator during World War II. He translated Pavlov from the original Russian and became interested in the clinical applications of Pavlovian theory. He worked with David, a clinical neuropsychologist, to apply Pavlovian theory to hallucinations and delusions. I, along with many colleagues and collaborators, have tried to continue that tradition, emboldened by the new experimental and analytic methods we now have at our disposal.
Article: “Schizophrenic psychology, associative learning and the role of forebrain dopamine” (1976)
Corlett: It is hard to limit myself to just one paper. I might have chosen the BBS paper highlighted above. I might have picked the wonderful work of Chris Frith, Karl Friston, Klaas Stephan, Philipp Sterzer, Katharina Schmack, Guillermo Horga, or Rick Adams, all of whom have inspired and guided the development of prediction error theory, and who, I am happy to say, have become my friends in the process.
Following the Keith Richards approach, I must note the earliest manifestation of a learning-based model of delusions, published by Robert Miller in Medical Hypotheses in 1976.
This paper is remarkable, not just because of its prescience, but because Robert wrote it to try to reconcile his own experience of psychosis through the lens of his understanding of the dopamine system as a neuroscientist.
Surprise Item: A Nemesis
Corlett: Given the topic, I wanted to keep my surprise surprising.
It’s a nemesis.
Having a nemesis has been immensely generative.
It has kept my focus on what really matters as we have tried to build the theory and its evidence base.
Max Coltheart is my nemesis.
Along with Martin Davies, Robyn Langdon, Ryan McKay, and others, Max is responsible for the two-factor theory of delusions. Two-factor theory is perspicuous and compelling. It argues that delusions are explicable in terms of two deficits: a problem in perception (which delivers the content of delusions), and a second, belief-evaluation deficit, which explains why delusions are so incorrigible. Two-factor theory is beloved by philosophers. I think that it is wrong. Likewise, two-factor theorists think I am wrong. I am sure most people wonder what all the fuss is about. People have tried to reconcile the prediction error and two-factor theories by positing that one factor is the prediction error and the other is its precision (the measure by which a prediction error is weighted in belief updating). These hybrid models are simply prediction error accounts and do not satisfy the tenets of two-factor theory.
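For readers unfamiliar with the term, a generic precision-weighted update of the sort these hybrid accounts invoke (illustrative notation, not drawn from any specific paper) looks like:

$$\mu_{t+1} = \mu_t + \frac{\pi_{\delta}}{\pi_{\delta} + \pi_{\mu}}\,\delta_t, \qquad \delta_t = o_t - \mu_t,$$

where $\mu_t$ is the current belief, $o_t$ the observation, $\delta_t$ the prediction error, and $\pi_{\delta}$ and $\pi_{\mu}$ the precisions (inverse variances) of the evidence and of the prior belief; the more precise the error relative to the prior, the more it moves the belief.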
There is more at stake than merely explaining delusions. The prediction error theory and two-factor theory have very different fundamental assumptions about how the mind maps to the brain, and about how to conceive of the interactions between mental operations more broadly.
For me, two-factor theory has provided a great foil for thinking about prediction errors and delusions, and I am grateful for the opportunity to have learned from (and argued and disagreed with) Max and others.
See previous posts in the “Mixed Bag” series