Lost and Found in Translation: From JAMA Psychiatry to Mad in America
“… rationalisation markets provide a helpful framework for understanding why certain information can often be so misleading even when it is accurate. To the extent that pundits or media organisations exist not to inform, but to rationalise, their insidious impact often lies not in the strict falsity of their content but in the way in which it is integrated and packaged to support appealing but misguided narratives.”
Daniel Williams, in a blog explaining the idea of rationalization markets (2022)
Let’s look at an instructive example of how information is transmitted from academic journals to online platforms.
A polished viewpoint by Kenneth E. Freedland and Charles F. Zorumski was published online in JAMA Psychiatry on March 22, 2023, titled “Success Rates in Psychiatry.”
They make the following points:
Treatment success rates, in RCTs as well as in clinical services, are "invaluable metrics for tracking and communicating progress toward better outcomes" in medicine.
“Unfortunately, success rate trends are rarely reported in psychiatric journals or in other mental health or behavioral medicine journals. This makes it difficult to determine whether psychiatric treatment outcomes are improving over time, stagnating, or perhaps even regressing.”
Cardiologists, oncologists, and other medical specialists can point to temporal trends in success rates. Similar data are hard to find for psychiatric disorders.
Specific success rates (SSRs): the proportion of patients who have a successful outcome after receiving a specific treatment for a certain condition.
Aggregate success rates (ASRs): the proportion of patients in a defined population with a certain condition who have a successful outcome regardless of the number or types of treatments they receive.
SSRs can be used “in clinical research to evaluate whether efforts to refine or optimize an existing intervention are improving its SSR, or in clinical practice to compare different treatments for the same condition.”
ASRs can be “useful for evaluating outcomes in clinical service settings in which different patients may receive different treatments for the same condition or in which the same patient may receive multiple treatments (simultaneously or sequentially) for the same problem.”
Funding agencies, journals, and scientific organizations should encourage tracking temporal trends in SSRs.
“It would be expensive to develop and maintain psychiatric success rate reporting systems, and their viability would depend on the cooperation and support of a variety of stakeholders. Nevertheless, it would be worth the effort and expense to develop and maintain these systems.”
“Such reporting systems would enable health services researchers to document the progress that has been made toward better psychiatric treatment outcomes, reveal areas in which more progress is needed, and show whether psychiatric research is translating into better clinical outcomes.”
The bottom line of their article is that success rates are important to track over time, and psychiatric research and psychiatric services have done a poor job of keeping track of them. Doing so will require institutional cooperation and resources, but it will be worth it: we'll be able to track the progress we have made, and we'll have a better idea of where progress is needed.
All well and good. It’s an excellent and well-written piece, and I am in agreement.
Now let's look at how this article was covered on Mad in America. The first thing to note is the headline:
JAMA Psychiatry: No Evidence that Psychiatric Treatments Produce “Successful Outcomes”
Umm, ok. Not quite what the article said, but let’s read further.
In a viewpoint article in JAMA Psychiatry, researchers reveal that psychiatry is unable to demonstrate improving patient outcomes over time.
We can immediately see a conflation in the title and subtitle here. "The psychiatric profession has not done a good job of tracking success rates over time" gets conflated with "There is no evidence to suggest that patient outcomes in psychiatry are improving over time, and we should be skeptical that they are." These are quite different assertions.
Then the report goes further. If we are talking about improving over time, why not go all the way back?
“Are mental health outcomes today—in this era of Prozac, ECT, CBT, and so forth—better than they were in the era of, say, insulin coma therapy and lobotomy? Or even better than in the early 1800s, when Quakers introduced “moral therapy”?”
This is a pretty wild extrapolation! A call to implement temporal tracking of psychiatric success rates is being interpreted here to suggest that we cannot say with any confidence that outcomes of psychiatric care are better now than they were in the 1800s. Just because we haven't tracked temporal outcomes in the specific manner suggested by Freedland and Zorumski doesn't mean that we cannot make reasonable inferences from existing RCTs, observational data, and clinical experience. Not only do we have multiple treatment modalities (pharmacotherapies, psychotherapies, neurostimulation, lifestyle modifications, community services, etc.) and multiple treatments within each modality that have demonstrated efficacy in RCTs; we can also combine these treatments, or use them sequentially, to increase response and remission rates. This is not disputed by Freedland and Zorumski, who state that "Stepwise approaches can produce cumulative success rates that are considerably higher than their constituent SSRs" and cite the results of STAR*D in support.
After suggesting that psychiatric treatments aren’t any better in terms of efficacy than moral therapy of the 1800s, the report then chides the authors for not acknowledging that psychiatric treatments are actually making mental illnesses worse. It says, “Antidepressants have been shown to increase the risk that depression will run a more chronic course…” and “In the long term, antipsychotics—on the whole—lead to worse outcomes for people diagnosed with schizophrenia and other psychotic disorders…”
There is, of course, no acknowledgement for the casual reader that these assertions, presented here as facts, are highly controversial claims with little acceptance in the scientific community.
And then the article concludes:
“As such, this paper highlights the fact that there is no evidence that psychiatric interventions do more good than harm.”
Actually, no, the paper highlights no such thing. Temporal tracking of success rates is not about the risk-benefit ratio of psychiatric interventions; that is something else entirely. We can use existing RCTs and observational studies to address the question of benefit vs harm. Nor do Freedland and Zorumski address this particular issue. This conclusion, as stated, is a misleading representation of the paper.
I have presented a comparison of the original paper and its coverage online, and I have chosen to keep my comments to a minimum. I think the comparison speaks for itself. On a final note, I want you to briefly consider two things as these are relevant to appreciating why popular critical mental health discourse is the way it is:
The original article in JAMA Psychiatry is not open access. You need access through an academic institution, or a personal subscription, or you need to pay to read it. The piece in Mad in America is free for everyone to read, and is promoted through their social media network.
There are likely many people out there for whom research reports such as the one above are the primary source of information about the current state of psychiatric practice and research.
Psychiatry at the Margins is a reader-supported publication.