Friction, LLMs and Neurodivergence
I was diagnosed with AuDHD in my late forties, which is to say that I spent more than two decades in higher education before anyone, including me, had language for the particular way my brain processes the world. By the time the diagnosis arrived, I had already built a career teaching digital media and communication at an Australian university, supervising research students, chairing committees, navigating the accumulated infrastructure of academic life and the perils of mental health that accompany it. I had also spent those decades developing elaborate workarounds for systems that seemed, to everyone around me, unremarkable: learning management platforms with their nested menus and inconsistent logics; assessment regimes that assumed a single temporal rhythm of engagement; deodorants and perfumes, fluorescent lights and pleasantly meant conversations that consumed cognitive resources before any teaching or thinking could begin. The friction was everywhere, and it was not building my capacity. It was draining it.
I begin with that disclosure because the argument I want to make in what follows depends on it. A social media post has recently circulated in which an AI language model appears to offer a spontaneous confession about its own nature. The model declares itself a frictionless tool and warns that by resolving confusion too quickly, it may be eroding its users' tolerance for difficulty. Cognitive effort, it is argued, functions like muscular tension: the discomfort before a breakthrough is not a malfunction but a precondition for growth. The implication is that reducing friction in learning and thinking is dangerous, and that AI poses a particular threat because it makes such a reduction seamless and addictive.
What the post actually records is a language model doing precisely what language models do: synthesising the most probable continuation of a discourse from the patterns in its training data and the immediate context of its prompt. The authors had presumably already been using the model, in that conversation or project, to argue that friction builds human capacity. Unsurprisingly, the model returned their argument to them, repackaged in the first person, with a self-aware persona. The "commotion in the room" the post describes was the sound of people being moved by their own ideas arriving from an unexpected direction. In Latour's (1992) terms, the model prescribed back to its users the very assumptions they had built into the interaction. No independent perspective was offered. No threshold was crossed. The delegate faithfully performed its delegation.
The deeper problem is not the rhetorical trick but the assumption underneath it. The framing of AI as a frictionless machine that threatens cognitive resilience presupposes a learner for whom friction is productive, manageable, and ultimately strengthening. For many learners, including neurodivergent learners navigating sensory overload, executive function challenges, and the constant labour of masking in neurotypical environments, friction is not a growth stimulus. It is an obstacle that compounds. Every friction surface in the educational environment, from the layout of a Moodle site to the unwritten expectations of tutorial participation to the sensory architecture of a lecture hall, demands cognitive resources before any disciplinary learning can occur. When a neurodivergent student experiences the ‘productive difficulty’ of formulating an argument, it is only one layer of friction among dozens. Latour’s (1992) ‘missing masses’ are everywhere in this scenario: the door-closers, speed bumps, and seat belts of educational infrastructure that prescribe a neurotypical user and render invisible the labour required of those who do not fit the prescription.
For such learners, a language model that helps organise scattered thoughts, externalise working memory, test the coherence of an emerging argument, or simply reduce the number of friction surfaces competing for attention is not eroding cognitive capacity. Nor is the model frictionless in any meaningful sense for the neurodivergent user when its use has been guided and supported, and the learning structured appropriately. It is a different kind of friction, a collaborative friction in which the learner and the tool negotiate meaning together, and in which the cognitive resources freed from infrastructural struggle can be redirected toward higher-order work.
None of the above is an argument against productive difficulty in learning. My position is that we must teach learners how to learn with language models, not use them to bypass the disciplinary thinking that constitutes genuine understanding. Developing that critical, metacognitive relationship with AI is itself a pedagogical challenge, and a substantial one. Students need to learn when a model is helping them think and when it is doing their thinking for them. They need to interrogate outputs, recognise hallucination and rhetorical flattery, and develop the judgment to know which cognitive tasks to delegate and which to protect. That work is genuinely hard, and it represents exactly the kind of productive challenge that good curriculum design should include. But a productive challenge is not the same as the argument in the viral post, which borrows from exercise physiology to claim that cognitive suffering builds cognitive strength. The analogy may hold for a body that has reserves to draw on, that recovers between sessions, that converts stress into adaptation. For the neurodivergent mind already operating at maximum capacity, already managing sensory overload and executive function demands and the invisible labour of masking before any disciplinary learning begins, there are no gains from additional friction. There are only strains. The surplus cognitive load does not build resilience. It depletes the resources needed to engage with the discipline at all.
I do not dispute that unreflective dependence on generative AI carries risks. Over-reliance is a legitimate concern in any pedagogical context. But the conversation about over-reliance is incomplete, and often disingenuous, if it does not also ask which systems educators themselves over-rely on. Discipline. Order. Standardised expectations. Assessment architectures that sort and rank rather than develop. The neurotypical defaults that structure higher education are themselves technologies of delegation, prescribing particular kinds of learners and particular forms of compliance. To worry about students becoming dependent on a language model while leaving unexamined the institutional dependencies that shape every aspect of their learning experience is to apply a double standard that reveals more about the critic's anxieties than the learner's capacities.
The question worth asking is not whether AI reduces friction but for whom, under what conditions, and with what consequences. Where does reliance become accommodation, and where does accommodation become the kind of agency that educational systems claim to cultivate?
Latour, B. 1992, ‘Where are the missing masses? The sociology of a few mundane artifacts’, in W.E. Bijker & J. Law (eds), Shaping Technology/Building Society: Studies in Sociotechnical Change, MIT Press, Cambridge, MA, pp. 225–258.
Acknowledgment
I developed this post through an extended conversation with Claude, an AI language model made by Anthropic. My initial response to the viral post that prompted these reflections was furious. As a neurodivergent educator who has spent years navigating the very friction surfaces the post celebrates, the argument that cognitive discomfort builds strength felt not merely wrong but exclusionary in ways I found difficult to articulate without heat. One of the most important lessons of my undergraduate studies was how to suffer through fluorescent lighting and the smell of the photocopier. Claude helped me synthesise the core elements of my rebuttal, mitigate the emotional intensity of my first reactions, and find a register in which the argument could be heard by readers who had not shared the experience driving it. In practice, the model provided metacognitive regulation: externalising the process of organising scattered and emotionally charged thoughts, identifying the structural logic underneath my frustration, and reflecting back to me which claims were doing my analytical work and which were doing something closer to venting. The disciplinary thinking, the theoretical commitments, and the lived experience are mine. The capacity to move from raw response to considered prose in a single sitting was a product of collaboration. I note this not as a caveat but as evidence. The process of writing this is itself an instance of the argument it makes: that for the neurodivergent mind already operating at capacity, AI can function not as a replacement for thinking but as the infrastructure that makes thinking possible.