This is such a fascinating discussion. One thing I wonder about as a clinical counselor is *core shame* as a variable. Those with an ADHD diagnosis seem to have more core shame (“Something is wrong with me”), and I wonder if this is tied to genetic brain sensitivity (i.e., a brain that is predisposed to emotional sensitivity). In the ADHD self-help literature for women, healing the shame underneath the dysfunction seems to be a big emphasis, and in my experience, it really seems to help decrease the severity of the symptoms. Is shame a “stickier” experience for a genetically sensitive brain? I guess I'm wondering: do stimulants present us with a drug that helps alleviate shame through increased feelings of self-efficacy and the feeling that “now my brain is like normal people’s brains”? If a child in the woods has ADHD but never has an environment that creates any friction for shame to take root, would the symptoms even express themselves at all? In other words, how much of this is environmental, the result of a brain that genetically tilts towards sensitivity/shame? How many of these individuals have internalized the belief “Something is wrong with me,” with the stimulants giving them relief through counter-evidence (“I can do things in the manner in which I perceive that normal people can”)?
This is such an important observation. I was diagnosed with ADHD as a 13-year-old male in middle school. The core shame that accompanied the diagnosis played a significant role in my initial refusal to accept the diagnosis; I was in a state of constant denial. The denial and shame were so strong that I avoided taking any medication, as this would imply acceptance of my condition. It wasn't until I reached college that I began to actually take my medication, and like you said, the stimulants provided me with a sense of "relief through counter evidence".
Yet another thoughtful and important piece, and I especially appreciate the emphasis on the Venn diagram of ADHD and stimulant benefit being overlapping but non-identical. I also love how you effectively communicate both the framing of ADHD as a distributed, heterogeneous, and idiographic process, and the idea that “attention” itself is a highly transdiagnostic and multifaceted construct rather than a unitary faculty.
I wonder, though, if there is a third clinical dilemma that sits alongside the one you describe. You frame the tension as being between clinicians who see genuine distress but feel constrained by rigid diagnostic boundaries, versus a more pragmatic stance that prioritizes current impairment and benefit over strict adherence to developmental criteria. That dichotomy may well reflect what you see in psychiatry.
From the neuropsychology side, what I encounter much more often is a slightly different situation: many of us see a lot of individuals who are genuinely distressed and sincerely believe they have ADHD, but for whom there is strong objective evidence that they do not currently have clinically significant, functionally impairing attentional or executive deficits. Not only is there often weak evidence for childhood ADHD, but present-day functioning is frequently average to well above average across cognitive testing, occupational functioning, health behaviors, and life outcomes, with little corroboration from objective collateral sources.
These individuals are often highly conscientious, high-achieving, and operating in very demanding environments. Their distress is real -- they feel exhaustion, shame, anxiety, a sense of underperforming relative to peers -- but the clinical picture looks less like a neuropsychological disorder and more like a collision between human limits and extreme expectations (plus stress, sleep, cannabis, social media, modern work demands, etc.).
In these cases, the dilemma is not whether to withhold help out of rigid diagnostic moralism. It’s that diagnosing and prescribing are not neutral acts. Telling someone who is objectively within the normal range of human functioning, “Yes, you have a medical disorder that explains your struggles, and you require ongoing professional intervention” (or even pragmatically just the last part of that statement) carries a lot of implicit messages. Messages about where the problem resides, what kinds of limits are acceptable, what counts as failure, and what sorts of relationships (with yourself, with your communities, with your purpose, with society) are encouraged.
I sometimes think the analogy is closer to cosmetic surgery vs. surgical repair for cleft palate. In both cases, there is genuine distress and genuine potential benefit, but the two situations are different, and come with very different ethical, cultural, and professional implications. There may well be a place for something like “cosmetic psychiatry,” where people pursue enhancements or supports they do not medically need but may want and value. What makes me uneasy is when that territory gets collapsed into medical diagnosis (or 'pragmatic diagnosis', if I may use that as shorthand for your pragmatic self-reported distress + potential benefit stance), and when clinicians who resist that move are framed as insufficiently empathic or nuanced.
So, I think I agree with almost everything you’re saying about the science and about the limits of current diagnostic categories. I just want to add that there is also a responsibility on the clinician’s side to hold the line between recognizing distress and reifying it as disorder, especially given the downstream effects on patients, professional norms, and society more broadly. (I'm resisting the temptation here to expand on all of these effects, as you've covered many of them on this very Substack, so I know they're in the background of this post even though the medical diagnosis vs pragmatic stance is being foregrounded).
I just want to gently offer that colleagues who push back against making a medical OR pragmatic diagnosis probably are not looking to deny suffering. They're finding the best answer they can to the question of what kind of story about that suffering we're ethically justified in telling.
Thank you! I totally agree with you that a distinction needs to be made between a disorder of focus/attention and something "like a collision between human limits and extreme expectations (plus stress, sleep, cannabis, social media, modern work demands, etc.)," and I think I say something to that effect in the blog post as well. Where it gets tricky in my view is that at present we don't seem to have a way to make that distinction based on neuropsychological testing. We seem to be stuck with a clinical judgment based on the overall pattern of symptoms, their longitudinal course, and relationship with other symptoms and with life context. Do you think the field is mistaken in not considering deficits on neuropsychological testing to be the gold standard for diagnosing ADHD?
Thank you for this question and for engaging so thoughtfully with my comment! It's surprisingly exciting to have a public dialogue with someone whose work I really admire.
Short answer: no, neuropsychological test results can't be the gold standard for diagnosing ADHD. But I also worry that we’ve gone too far in the opposite direction by implicitly treating self-report as the closest thing we have to a gold standard.
Certainly, neuropsychological tests are still far too crude for the job. They’re the metaphorical equivalent of feeling someone’s forehead to see if they have a fever: a very blunt, indirect proxy for something dynamic, distributed, and context-sensitive. Our tests are also historical artifacts of an era focused on localization and lesions, and they fit poorly with how we now understand networks, trade-offs, timing, synchrony, and connectivity.
That said, like feeling a forehead, neuropsychological test results are more sensitive than we give them credit for (it’s absolutely possible to have ADHD with average or even above-average scores, but that’s much rarer than popular or even professional discourse suggests), and vastly less specific than anyone would want for a decisive diagnostic tool (almost everything can depress test performance).
Where I get uneasy is that we’ve taken something true -- that test scores are neither sensitive nor specific enough -- and used it to quietly throw out the entire assessment process in favor of something even less constrained: self-report. Neuropsychological assessment, at its best, is so much more than “neuropsychological test results.” The assessment process is exactly what you describe: the overall pattern of symptoms, their longitudinal course, relationship to context, collateral information, ruling out other explanations, developmental history, incentives and constraints, and yes, also the forehead-feel.
I sometimes worry we’re drifting toward a position that sounds like: since some people still have a fever even when their forehead feels cool, and since some people feel better when you give them antipyretics (which have other effects beyond just lowering temperature), the *only* thing we can really do is ask them if they feel hot. This approach is very compassionate, but also risks defining “fever” as “any time someone reports feeling uncomfortably warm,” regardless of context, history, environment, or other objective signs.
And once you redefine the construct that way, of course the forehead test “loses sensitivity,” treatments start becoming less effective but easier to prescribe for edge cases, accommodation requests become more frequent and more intense, and everything starts to look like the same disorder. Meanwhile, the room itself is getting hotter (stress, sleep deprivation, social media, modern work demands), people are wearing heavier coats (higher expectations, constant self-monitoring), and we’re all debating whether everyone needs painkillers, while the kids with brutal 104-degree fevers risk getting lost in the background noise.
Like you, I don’t have clean answers. I vacillate a lot. I’m trying to hold two things at once: the reality of people’s distress, and the responsibility we have as clinicians not just to alleviate suffering, but to be careful about the stories we tell about what that suffering means. Especially when those stories shape professional norms, cultural expectations, and who ends up getting seen, diagnosed, and treated.
So I don’t think test numbers should be the gold standard. But the solution cannot be to abandon structured assessment in favor of self-report alone. The hard problem (and I think the one your Substack always does such a nice job of plumbing) is that we’re trying to practice in a world where human limits, social conditions, and diagnostic categories have become profoundly entangled, and our tools (conceptual and methodological) haven’t quite caught up. I try to comfort myself by remembering that this is ultimately a collective sense-making project, and that colleagues who see it differently are, like me, sincerely trying to be kind, empathic, pragmatic, and nuanced in the face of this genuinely hard problem.
As important as it is to recognize how much gray (not only gray matter :) there is in diagnosing ADHD and in deciding whether to medicate, I would love to hear more about what questions need to be asked of a patient that would better inform the decision to treat. Yes, we can go through the DSM list and give questionnaires, though these indirectly imply that all symptoms and responses have equal weight. Of course we have to respond to the patient's specific report about what's causing problems in their lives, but do these findings shed any useful light on what we need to be asking that we might not be now? Which questionnaire answers or DSM criteria deserve more weight in our decision making? I generally have focused on where the greatest functional impairment seems to be, though that doesn't always map neatly onto DSM symptoms.
Very interesting, thank you. It stimulated some thoughts.
"Stimulants increase effort, persistence, alertness, and perceived reward, and their beneficial clinical effects seem in part to arise from sustaining engagement with tedious or unrewarding tasks." This strikes me as a profoundly moral (educational) rather than medical exercise. When our eldest daughter started school (as she confirmed today), I specifically emphasised the need to show "concentration, application and effort" (I think I also added "perseverance" before "effort"). The question of reward is also profoundly moral; it has to do intimately with the development of values and the formation of character.
Related but separate is the problem of "boredom" (at school and in the contemporary capitalist work environment). This also relates to morality but on the social scale, i.e. the affordances (or their denial) for fulfilment and emancipation.
Do psychiatrists have sufficient training and continuing professional development and do audit and quality assurance systems exist to confirm mastery of such issues in discourse and practice, especially in light of questions like "what to do about situations where people are overwhelmed by the requirements of work in the post-Fordist era? What to do about “Mother’s Little Helper” type scenarios? What about people trying to survive in time-pressured work environments? Or those trying to use stimulants to cope with chronic sleep deprivation? Or those so depleted by anxiety and stress that they have no motivation left for unrewarding tasks on which their survival depends?"
It seems to me that we are moving further away from previously common assumptions about psychiatry as a medical discipline treating broadly conceived biomedical lesions. You state "And yet, amidst all these clinical dilemmas, we also show a certain hypocrisy: we do not withhold SSRIs from people who experience impairment from increased emotional demands. The diagnostic criteria for depression are oblivious to context." Similar but not identical issues arise in relation to "depression" and SSRIs and they have made many people uneasy.
"Stimulant medications make boring tasks (like math homework, spreadsheets, laundry) feel more important and worthwhile. This helps people stick with these tasks longer and put in more effort. Mundane tasks feel more worth doing, so people are less likely to abandon them for something more interesting."
We agree " But what I do not want to do is provide that help under false pretenses. I don’t want to say someone has a neurodevelopmental disorder when they don’t so that they can access a medication that they can benefit from".
As a Swede, I would like to give a bit more context to this. (I know it's an old post now, but I just saw this comment.)
Our school system is terrible. It's basically a wild west situation where anyone can start a school and then immediately be entitled to lots of taxpayer money to run it. There are no laws forbidding school owners from just pocketing as much tax money as they can, and getting rich themselves while running an extremely subpar school ... There ARE rules that, in theory, ascertain that schools don't fall below a certain quality threshold. But shutting down shit schools takes ages and ages in practice, and when that finally happens, the school capitalist is already rich and moving on.
Because of this general situation, "teacher" is hardly a coveted job either. Lots of people become teachers because their grades are too bad for any other university education. Best case scenario is that people who struggled in school themselves are well equipped to help struggling kids; worst case scenario is that struggling kids are taught by teachers who STILL struggle themselves with, e.g., basic reading comprehension.
Unsurprisingly, tons of kids have a really hard time in school. The law entitles all kids to the support they need to pass classes, regardless of whether they have a diagnosis. In practice, it's often hard enough to get proper support WITH a diagnosis, and completely impossible without. So, it's reasonable to assume that many more kids (or their parents) seek out an NPD diagnosis than would have been the case if we had an even half-way decent school system.
When it comes to de-diagnosis, I don't think there's proper research on why people pursue this, but there are lots of anecdotes from, e.g., interviews in newspaper articles about the phenomenon. And based on those, it seems like people often seek to de-diagnose as adults because they never wanted a diagnosis in the first place. As children, they complained about a horrible school environment, but their complaints weren't heard - instead, grown-ups around them said that THEY were the problem, THEY were dysfunctional and needed a psychiatric diagnosis.
As adults, they want to cast off a narrative that was forced on them in the first place.
As a Sweden-born Greek at the height of the former's Social Democratic glory, reading this makes me sad, even though our family returned to Greece when I was still very young.
Awais's exemplary clarity has helped me think things through further. The present response to you takes its cues mainly from his blog and the magisterial Koirala, S., Grimsrud, G., Mooney, M.A. et al. Neurobiology of attention-deficit hyperactivity disorder: historical challenges and emerging frontiers. Nat. Rev. Neurosci. 25, 759–775 (2024). https://doi.org/10.1038/s41583-024-00869-z. Written by advocates of the condition, it outlines in detail the dearth of reliable biomedical findings and the crucial therapeutic uncertainties relating to this condition. It is fair to say that Koirala et al. are far more reserved about the therapeutic efficacy of psychostimulants than Awais.
I want to stay here with some things that troubled me about Awais's piece, and my response is informed by, though not identical with, an extensive discussion with three senior clinical colleagues, one a leading psychiatric voice for ADHD in the UK, another an Emeritus Professor of Psychiatry from Imperial College and a long-standing close colleague.
The first point relates to your account. I think Awais underplayed the significance of "ADHD" as a social and political project. You highlight certain social disadvantages that underlie the clamour for the diagnosis of ADHD as a strategy to secure some sort of recompense. However, the same applies to the other end of our contemporary inequalities. The label is especially sought, it seems, by Ivy League students in the USA compared with others. Here it seems to be driven by ambition for extra performance. Indeed, performance is now driven into everyone from the very earliest age and throughout one's productive life. To me this has profound social/moral implications that cannot be dismissed as "moralising". And it is a matter of fact that many psychiatrists are professionally diligent but others are not. Furthermore, as stated previously, I am not sure what standards in this area my colleagues work to, in terms of having and demonstrating competence on wider social and moral issues. Again, some may be very good, but might too many be not?
Next we come to therapeutics. I was interested to read Awais's proposal that we free the prescription of psychostimulants from the necessity of fulfilment of diagnostic criteria. As someone who is guarded about, but not dismissive of, the utility of psychiatric diagnosis, I have some sympathy with that. However, this raises many issues. One is that it shifts psychiatry towards Joanna Moncrieff's advocacy of medication-focused rather than diagnosis-focused psychiatry, if I understand her proposal correctly. This may not be a bad thing in itself, but it merits highlighting. The second is that it raises the issue of the relation between psychiatry and evidence-based medicine, as it shifts the field markedly away from diagnosis-related prescribing, yet all the research has been carried out in relation to the latter. Third, and because of this, it risks exposing psychiatry to exactly the kind of potential abuse opened up by the issues discussed in the previous paragraph. One of the key anxieties of Michel Foucault in relation to psychiatry, right from its early origins, has been the overextension of its practice beyond evidence and expertise (J. Iliopoulos (2017), The History of Reason in the Age of Madness: Foucault's Enlightenment and a Radical Critique of Psychiatry, offers a clear exposition of this).
For the avoidance of doubt: the reason I repeat myself on the above is because, in my respectful view, these issues are too often glossed over. To some extent, it seems to me that ADHD has become one of Lacan's "master signifiers" whose "signified" remains elusive, in significant part because of its constant sliding through metonymic processes. However, this does not deny that people struggle, nor does it imply that prescription of psychostimulants is not permissible or even desirable in some cases. The question is one of boundaries and margins. Psychiatry's margins are very wide and, ultimately, the above questions raise issues about the ontology of the specialty and its social function. It tilts the specialty away from biomedicine and towards social utility and instrumentality. And, ultimately, utility and instrumentality for whom is the question.
Finally, though I understand that you are a philosopher and not a clinician, I wonder whether your experience and Swedish debates lead you to have a view on this paper:
Li L, Coghill D, Sjölander A, et al. Increased prescribing of ADHD medication and real-world outcomes over time. JAMA Psychiatry. Published online June 25, 2025. https://doi.org/10.1001/jamapsychiatry.2025.1281
Thank you again for taking the trouble to comment on my comment. And also to Awais for his persistent efforts.
I realise that this is an older post, but I would like to provide some perspective on the moral dimension of ADHD — I was raised in a family of teachers and engineers, as an only child and, for seven years, the only grandchild. I was not lacking for attention or moral instruction or good role models. I attribute the fact that I eventually started speaking to the efforts of my mother and paternal grandmother, both trained teachers.
All the application in the world was not sufficient when faced with tedious, unrewarding (or seemingly unrewarding) or simply uninteresting tasks. Forcing myself to focus on something I was not interested in resulted in distress so profound that I sometimes somatised it to pain. I was expending so much effort on staying "caught up" with what was expected of me that I had no energy left for emotional regulation. But here's the thing: I was not expected to do more than go to school and do decently at school. I did not have an afternoon and evening full of extracurriculars and I was not expected to help with household chores, look after siblings or anything like that. I was expected to do the bare minimum, and I still struggled.
Most of my teens are a grey fog in my memory. I could not finish reading a book, despite being an enthusiastic reader and not very interested in TV (too boring). That all only started to change once I started stimulant medication in my early 20s.
I feel like lately, in most spheres, the focus is on potential overprescribing of ADHD medication and on people who are able to get by with caffeine and some occupational therapy, or people whose functional impairments are ultimately significant to them, but comparatively not profound.
I am not one of those people. And I was brought up well. I really cannot emphasise how much support and attention I received.
But I still cannot simply force myself, through moral fibre, to do things that are just not "interesting", unless I am medicated. It is not an emotional response. It is not a decision to not care. It is a full-body wrenching sensation that sometimes shades into nausea. Forcing myself to do things without medication takes up pretty much all of my energy, which is already scant since I developed a post-viral condition in my mid-20s.
Medicated, I am able to make the choice to tolerate boredom, and tolerating boredom does not result in distress so major that I begin somatising it to nausea and dizziness (and chest pain that got my GP concerned enough to investigate me for asthma — it is not asthma). Unmedicated, I am sometimes not able to make that choice at all, and when I am, the repercussions are simply nasty. It does in fact feel like trying to use a broken limb, or trying to push through fatigue arising from fever, or trying to work through a migraine, as well as producing the kind of emotional symptoms usually ascribed to clinical depression.
It isn't about not caring, or not knowing to care — it really does feel like a physical limiter, one that is not attested as a normal part of day-to-day existence.
Necessary context, I suppose: I'm profoundly impaired, ADHD seems to be the only correct diagnosis I've gotten in my life, and I slipped through the cracks until my 20s because my Soviet-born parents stopped taking me to doctors for anything but routine vaccinations the moment someone in a white coat noticed I was developmentally delayed. If I wasn't medicated, I would probably be effectively institutionalised — living in supported accommodation at best.
A thing that kind of shocks me every time is how closely the empirical evidence on ADHD and ADHD treatment lines up with my lived experience — for example, even medicated, I do not experience an improvement in "attention", nor do I experience a "normalisation". I do experience a reduction in boredom, things outside a very narrow range of topics start seeming potentially interesting, and I'm just about able to redirect attention. It does in fact feel like I'm able to appreciate the importance — the salience — of tasks that otherwise seem vaguely abstract and unrelated to me. And I seem more "myself", at least from the inside — more able to act as I want to act, more able to dedicate attention and effort towards deciphering social interactions, more able to "let go" of something upsetting that's nonetheless providing some kind of stimulus.
For some reason, my heart rate also goes down. I had my meds adjusted in November — we added on instant release methylphenidate in the evenings (my clinician quizzed me thoroughly on how I feel in the evenings as the NHS-approved Concerta analogue begins to wear off, and was satisfied that I don't just need a higher initial dose), and my resting heart rate went down by 10 points. (It went back up after about six weeks for potentially unrelated reasons — I started experiencing POTS symptoms again this winter.)
I don't really have a broader point here. I am, however, relieved to see something on ADHD and stimulants that isn't dismissive of the effect of stimulants, and doesn't try to claim that "oh but you're just on speed, everyone performs better on speed".
Great questions! Shame does seem to be a common experience in ADHD and I think scientifically very understudied.
Certainly, neuropsychological tests are still far too crude for the job. They’re the metaphorical equivalent of feeling someone’s forehead to see if they have a fever: a very blunt, indirect proxy for something dynamic, distributed, and context-sensitive. Our tests are also historical artifacts of an era focused on localization and lesions, and they fit poorly with how we now understand networks, trade-offs, timing, synchrony, and connectivity.
That said, like feeling a forehead, neuropsychological test results are more sensitive than we give them credit for (it’s absolutely possible to have ADHD with average or even above-average scores, but that’s much rarer than popular or even professional discourse suggests), and vastly less specific than anyone would want for a decisive diagnostic tool (almost everything can depress test performance).
Where I get uneasy is that we’ve taken something true -- that test scores are neither sensitive nor specific enough -- and used it to quietly throw out the entire assessment process in favor of something even less constrained: self-report. Neuropsychological assessment, at its best, is so much more than “neuropsychological test results.” The assessment process is exactly what you describe: the overall pattern of symptoms, their longitudinal course, relationship to context, collateral information, ruling out other explanations, developmental history, incentives and constraints, and yes, also the forehead-feel.
I sometimes worry we’re drifting toward a position that sounds like: since some people still have a fever even when their forehead feels cool, and since some people feel better when you give them antipyretics (which have other effects beyond just lowering temperature), the *only* thing we can really do is ask them if they feel hot. This approach is very compassionate, but also risks defining “fever” as “any time someone reports feeling uncomfortably warm,” regardless of context, history, environment, or other objective signs.
And once you redefine the construct that way, of course the forehead test “loses sensitivity,” treatments start becoming less effective but easier to prescribe for edge cases, accommodation requests become more frequent and more intense, and everything starts to look like the same disorder. Meanwhile, the room itself is getting hotter (stress, sleep deprivation, social media, modern work demands), people are wearing heavier coats (higher expectations, constant self-monitoring), and we’re all debating whether everyone needs painkillers, while the kids with brutal 104-degree fevers risk getting lost in the background noise.
Like you, I don’t have clean answers. I vacillate a lot. I’m trying to hold two things at once: the reality of people’s distress, and the responsibility we have as clinicians not just to alleviate suffering, but to be careful about the stories we tell about what that suffering means. Especially when those stories shape professional norms, cultural expectations, and who ends up getting seen, diagnosed, and treated.
So I don’t think test numbers should be the gold standard. But the solution cannot be to abandon structured assessment in favor of self-report alone. The hard problem (and I think the one your Substack always does such a nice job of plumbing) is that we’re trying to practice in a world where human limits, social conditions, and diagnostic categories have become profoundly entangled, and our tools (conceptual and methodological) haven’t quite caught up. I try to comfort myself by remembering that this is ultimately a collective sense-making project, and that colleagues who see it differently are, like me, sincerely trying to be kind, empathic, pragmatic, and nuanced in the face of this genuinely hard problem.
As important as it is to recognize how much gray (not only gray matter :) there is in diagnosing ADHD and in deciding whether to medicate, I would love to hear more about what questions need to be asked of a patient that would better inform the decision to treat. Yes, we can go through the DSM list and give questionnaires, though these indirectly imply that all sx and responses have equal weight. Of course we have to respond to the patient's specific report about what's causing problems in their lives, but do these findings shed any useful light on what we need to be asking that we might not be now? Which questionnaire answers or DSM criteria deserve more weight in our decision making? I generally have focused on where the greatest functional impairment seems to be, though that doesn't always map neatly onto DSM sx.
Very interesting, thank you. It stimulated some thoughts.
"Stimulants increase effort, persistence, alertness, and perceived reward, and their beneficial clinical effects seem in part to arise from sustaining engagement with tedious or unrewarding tasks." This strikes me as a profoundly moral (educational) rather than medical exercise. When our eldest daughter started school, she has confirmed today, I specifically emphasised the need to show "concentration, application and effort" (I think I also added "perseverance" before "effort"). The question of reward is also profoundly moral, it has to do intimately with the development of values and formation of character.
Related but separate is the problem of "boredom" (at school and in the contemporary capitalist work environment). This also relates to morality but on the social scale, i.e. the affordances (or their denial) for fulfilment and emancipation.
Do psychiatrists have sufficient training and continuing professional development and do audit and quality assurance systems exist to confirm mastery of such issues in discourse and practice, especially in light of questions like "what to do about situations where people are overwhelmed by the requirements of work in the post-Fordist era? What to do about “Mother’s Little Helper” type scenarios? What about people trying to survive in time-pressured work environments? Or those trying to use stimulants to cope with chronic sleep deprivation? Or those so depleted by anxiety and stress that they have no motivation left for unrewarding tasks on which their survival depends?"
It seems to me that we are moving further away from previously common assumptions about psychiatry as a medical discipline treating broadly conceived biomedical lesions. You state "And yet, amidst all these clinical dilemmas, we also show a certain hypocrisy: we do not withhold SSRIs from people who experience impairment from increased emotional demands. The diagnostic criteria for depression are oblivious to context." Similar but not identical issues arise in relation to "depression" and SSRIs and they have made many people uneasy.
"Stimulant medications make boring tasks (like math homework, spreadsheets, laundry) feel more important and worthwhile. This helps people stick with these tasks longer and put in more effort. Mundane tasks feel more worth doing, so people are less likely to abandon them for something more interesting."
We need persistent caution and more debate. I note the move in Sweden towards de-diagnosis of ADHD (and autism): https://www.bmj.com/content/390/bmj.r2023/rapid-responses
We agree " But what I do not want to do is provide that help under false pretenses. I don’t want to say someone has a neurodevelopmental disorder when they don’t so that they can access a medication that they can benefit from".
As a Swede, I would like to give a bit more context to this. (I know it's an old post now, but I just saw this comment.)
Our school system is terrible. It's basically a wild west situation where anyone can start a school and then immediately be entitled to lots of taxpayer money to run it. There are no laws forbidding school owners from just pocketing as much tax money as they can and getting rich themselves while running an extremely subpar school ... There ARE rules that, in theory, ascertain that schools don't fall below a certain quality threshold. But shutting down shit schools takes ages and ages in practice, and when that finally happens, the school capitalist is already rich and moving on.
Because of this general situation, "teacher" is hardly a coveted job either. Lots of people become teachers because their grades are too bad for any other university education. Best case scenario is that people who struggled in school themselves are well equipped to help struggling kids; worst case scenario is that struggling kids are taught by teachers who STILL struggle themselves with, e.g., basic reading comprehension.
Unsurprisingly, tons of kids have a really hard time in school. The law entitles all kids to the support they need to pass classes, regardless of whether they have a diagnosis. In practice, it's often hard enough to get proper support WITH a diagnosis, and completely impossible without. So, it's reasonable to assume that many more kids (or their parents) seek out an NPD diagnosis than would have been the case if we had an even half-way decent school system.
When it comes to de-diagnosis, I don't think there's proper research on why people pursue this, but there are lots of anecdotes from, e.g., interviews in newspaper articles about the phenomenon. And based on those, it seems like people often seek to de-diagnose as adults because they never wanted a diagnosis in the first place. As children, they complained about a horrible school environment, but their complaints weren't heard - instead, grown-ups around them said that THEY were the problem, THEY were dysfunctional and needed a psychiatric diagnosis.
As adults, they want to cast off a narrative that was forced on them in the first place.
Thank you, Sonia.
As a Sweden-born Greek from the height of the former's Social Democratic glory, reading this makes me sad, even though our family returned to Greece when I was still very young.
Awais' exemplary clarity has helped me think things through further. The present response to you takes its cues mainly from his blog and the magisterial Koirala, S., Grimsrud, G., Mooney, M.A. et al. Neurobiology of attention-deficit hyperactivity disorder: historical challenges and emerging frontiers. Nat. Rev. Neurosci. 25, 759–775 (2024). https://doi.org/10.1038/s41583-024-00869-z Written by advocates of the condition, it outlines in detail the dearth of reliable biomedical findings and the crucial therapeutic uncertainties relating to this condition. It is fair to say that Koirala et al. are far more reserved about the therapeutic efficacy of psychostimulants than Awais.
I want to stay here with some things that troubled me about Awais' piece, and my response is informed by, though not identical with, an extensive discussion with three senior clinical colleagues, one a leading psychiatric voice for ADHD in the UK and another an Emeritus Professor of Psychiatry from Imperial College and a long-standing close colleague.
The first point relates to your account. I think Awais underplayed the significance of "ADHD" as a social and political project. You highlight certain social disadvantages that underlie the clamour for the diagnosis of ADHD as a strategy to secure some sort of recompense. However, the same applies to the other end of our contemporary inequalities. The label is especially sought, it seems, by Ivy League students in the USA compared with others. Here it seems to be driven by ambition for extra performance. Indeed, performance is now driven into everyone from the very earliest age and throughout one's productive life. To me this has profound social/moral implications that cannot be dismissed as "moralising". And it is a matter of fact that many psychiatrists are professionally diligent but others are not. Furthermore, as stated previously, I am not sure what standards in this area my colleagues work to, in terms of having and demonstrating competence on wider social and moral issues. Again, some may be very good, but might too many be not?
Next we come to therapeutics. I was interested to read Awais' proposal that we free the prescription of psychostimulants from the necessity of fulfilling diagnostic criteria. As someone who is guarded about but not dismissive of the utility of psychiatric diagnosis, I have some sympathy with that. However, this raises many issues. One is that it shifts psychiatry towards Joanna Moncrieff's advocacy of medication-focused rather than diagnosis-focused psychiatry, if I understand her proposal correctly. This may not be a bad thing in itself, but it merits highlighting. The second is that it raises the issue of the relation between psychiatry and evidence-based medicine, as it shifts the field markedly away from diagnosis-related prescribing, yet all the research has been carried out in relation to the latter. Third, and because of this, it risks exposing psychiatry to exactly the kind of potential abuse opened up by the issues discussed in the previous paragraph. One of the key anxieties of Michel Foucault in relation to psychiatry, right from its early origins, has been the overextension of its practice beyond evidence and expertise (J. Iliopoulos (2017), The History of Reason in the Age of Madness: Foucault's Enlightenment and a Radical Critique of Psychiatry, offers a clear exposition of this).
For the avoidance of doubt: the reason I repeat myself on the above is that, in my respectful view, these issues are too often glossed over. To some extent, it seems to me that ADHD has become one of Lacan's "master signifiers" whose "signified" remains elusive, in significant part because of its constant sliding through metonymic processes. However, this does not deny that people struggle, nor does it imply that prescription of psychostimulants is not permissible or even desirable in some cases. The question is one of boundaries and margins. Psychiatry's margins are very wide and, ultimately, the above questions raise issues about the ontology of the specialty and its social function. It tilts the specialty away from biomedicine and towards social utility and instrumentality. And, ultimately, utility and instrumentality for whom is the question.
Finally, though I understand that you are a philosopher and not a clinician, I wonder whether your experience and Swedish debates lead you to have a view on this paper:
Li L, Coghill D, Sjölander A, et al. Increased prescribing of ADHD medication and real-world outcomes over time. JAMA Psychiatry. Published online June 25, 2025. https://doi.org/10.1001/jamapsychiatry.2025.1281
Thank you again for taking the trouble to comment on my comment. And also to Awais for his persistent efforts.
I realise that this is an older post, but I would like to provide some perspective on the moral dimension of ADHD — I was raised in a family of teachers and engineers, as an only child and for seven years, the only grandchild. I was not lacking for attention or moral instruction or good role models. I attribute the fact that I eventually started speaking to the efforts of my mother and paternal grandmother, both trained teachers.
All the application in the world was not sufficient when faced with tedious, unrewarding (or seemingly unrewarding) or simply uninteresting tasks. Forcing myself to focus on something I was not interested in resulted in distress so profound that I sometimes somatised it to pain. I was expending so much effort on staying "caught up" with what was expected of me that I had no energy left for emotional regulation. But here's the thing: I was not expected to do more than go to school and do decently at school. I did not have an afternoon and evening full of extracurriculars and I was not expected to help with household chores, look after siblings or anything like that. I was expected to do the bare minimum, and I still struggled.
Most of my teens are a grey fog in my memory. I could not finish reading a book, despite being an enthusiastic reader and not very interested in TV (too boring). That all only started to change once I started stimulant medication in my early 20s.
I feel like lately, in most spheres, the focus is on potential overprescribing of ADHD medication and on people who are able to get by with caffeine and some occupational therapy, or people whose functional impairments are ultimately significant to them, but comparatively not profound.
I am not one of those people. And I was brought up well. I really cannot emphasise how much support and attention I received.
But I still cannot simply force myself, through moral fibre, to do things that are just not "interesting", unless I am medicated. It is not an emotional response. It is not a decision to not care. It is a full-body wrenching sensation that sometimes shades into nausea. Forcing myself to do things without medication takes up pretty much all of my energy, which is already scant since I developed a post-viral condition in my mid-20s.
Medicated, I am able to make the choice to tolerate boredom, and tolerating boredom does not result in distress so major that I begin somatising it to nausea and dizziness (and chest pain that got my GP concerned enough to investigate me for asthma — it is not asthma). Unmedicated, I am sometimes not able to make that choice at all, and when I am, the repercussions are simply nasty. It does in fact feel like trying to use a broken limb, or trying to push through fatigue arising from fever, or trying to work through a migraine, as well as producing the kind of emotional symptoms usually ascribed to clinical depression.
It isn't about not caring, or not knowing to care — it really does feel like a physical limiter, one that is not attested as a normal part of day-to-day existence.
Great read (it’s my second time lol)!
Necessary context, I suppose: I'm profoundly impaired, ADHD seems to be the only correct diagnosis I've gotten in my life, and I slipped through the cracks until my 20s because my Soviet-born parents stopped taking me to doctors for anything but routine vaccinations the moment someone in a white coat noticed I was developmentally delayed. If I wasn't medicated, I would probably be effectively institutionalised — living in supported accommodation at best.
A thing that kind of shocks me every time is how closely the empirical evidence on ADHD and ADHD treatment lines up with my lived experience — for example, even medicated, I do not experience an improvement in "attention", nor do I experience a "normalisation". I do experience a reduction in boredom, things outside a very narrow range of topics start seeming potentially interesting, and I'm just about able to redirect attention. It does in fact feel like I'm able to appreciate the importance — the salience — of tasks that otherwise seem vaguely abstract and unrelated to me. And I seem more "myself", at least from the inside — more able to act as I want to act, more able to dedicate attention and effort towards deciphering social interactions, more able to "let go" of something upsetting that's nonetheless providing some kind of stimulus.
For some reason, my heart rate also goes down. I had my meds adjusted in November — we added on instant release methylphenidate in the evenings (my clinician quizzed me thoroughly on how I feel in the evenings as the NHS-approved Concerta analogue begins to wear off, and was satisfied that I don't just need a higher initial dose), and my resting heart rate went down by 10 points. (It went back up after about six weeks for potentially unrelated reasons — I started experiencing POTS symptoms again this winter.)
I don't really have a broader point here. I am, however, relieved to see something on ADHD and stimulants that isn't dismissive of the effect of stimulants, and doesn't try to claim that "oh but you're just on speed, everyone performs better on speed".