The study, first published in June 2025 by MIT’s Media Lab, examined the brain activity of fifty-four subjects as they wrote an essay: one group used Large Language Model (LLM) AI, one used Google’s search engine, and one wrote independently. Electroencephalograms (EEGs) taken during the first stage of writing showed immediately that the LLM group exhibited lower attentional engagement and weaker executive control than both the Google and brain-only groups.
Even at this first stage of the study, MIT’s researchers demonstrated that the effects of LLM reliance are immediate and restrictive: the subjects who wrote using generative AI engaged significantly less with the writing process. Before even considering the subsequent implications for individual resilience, memory, and so forth, we should all be concerned that the integrity of writing is being eroded. Historian Lynn Hunt famously argued for the importance of writing as a key stage in the development of our brains, rejecting the idea that writing is simply the “transcription of thoughts already consciously present”. Rather, writing involves an active attempt to sort through embryonic ideas and develop them as the words take shape on the page. The physical act of writing is an extension of thinking, not a consequence of it, and so the rise of LLM-assisted writing already suggests that cognitive engagement is on the decline. MIT’s study shows this exact phenomenon: when asked by the researchers about their essays, the group who used LLMs for writing were unable to give substantiated, detailed explanations of the work they had produced.
This leads to the secondary cognitive impact of LLM use that the study set out to examine. When the subjects were instructed to make a second round of edits to their essays, the group that had used generative AI struggled significantly more to remember what they had written, exhibiting weaker alpha and theta brain waves than the non-AI groups; the researchers argue that this shows deep memory processes were bypassed. Here is where the research gets really interesting. In the editing stage of the study, the Google and brain-only groups were permitted to use LLMs to edit and adjust their essays, while the original AI group had to edit without any third-party assistance. While using AI to do the writing had significant effects on the subjects’ cognitive engagement, using AI for editing had surprisingly little impact. Once the subjects had done the actual writing themselves, they were able to maintain their engagement and brain function when using AI for more minor tasks.
So, amidst all the panic about the impact of generative AI and LLMs on cognitive function, what does the study actually show us?
It does remain true that using LLMs to perform entire tasks is linked to a decrease in cognitive engagement. Outside of scientific research, this phenomenon is being reported in droves by teachers, professors, and other professionals who witness the increasing use of generative AI to perform tasks meant for humans. While generative AI can be used to support, rather than complete, a task, teachers are reporting a rapid increase in schoolwork done entirely by LLMs. In a report published by the Turing Institute, fifty-seven per cent of teachers who were aware of student use of LLMs reported that students were turning in entirely AI-generated work as their own. This translates into concerns about student learning: in the same report, half (forty-nine per cent) of teachers agreed that student use of generative AI was leading to poorer-quality work, as well as poorer student behaviour and classroom engagement.
This idea is corroborated by researchers at Harvard, who concluded that when workplace tasks are completed by generative AI, workers subsequently experience a decrease in engagement: the workers studied demonstrated an average eleven per cent drop in intrinsic motivation after using AI.
These studies, however, show the problem with using generative AI models to complete tasks in their entirety. Teachers and researchers have accurately identified the danger of people relying on LLMs to write or produce work, but it does not follow that all generative AI use is necessarily damaging to brain function.
In the same Harvard study, productivity increased drastically when workers were aided by AI. Returning to the MIT Media Lab research, the subjects who used AI as an assistant to their own work, rather than as an author in its own right, were able to maintain their mental processing skills. Their ability to form deep memories was not inhibited by secondary use of AI; rather, once the subjects had done the hard work of writing or creating by themselves, or with the help of Google, using AI to perform minor edits or to proofread was not substantially harmful to their cognitive function.
So, how should we protect our brains going forward? Continuous engagement with our own ideas, known as metacognition, is essential to maintaining peak cognitive function. In any area of life where we are challenged, it is best to sit with those challenges before immediately turning to AI. In work such as writing, the ability to sit with and work through obstacles is what builds our mental resilience. By engaging in metacognition first, we create more neural pathways, which in turn strengthen our memory; immediately yielding to AI bypasses this active recall. Instead, AI should be used as a secondary tool to engage with, and when used as an assistant rather than a generator, we can reap the rewards of the technological age without sacrificing our cognitive abilities.