Before books became widespread, texts of importance were memorised. This is why certain scholars and monks were revered as walking libraries.

Socrates was famously wary of the written word.
Plato, his student, wrote in 370 BCE that “this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.”
New technology — printing, television, calculators, computers — has always raised the legitimate fear that we will lose something vital in the process, whether this be the ability to organise information, perform complex math, or simply tell a story directly to an audience.
How is that sense of risk playing out in the age of artificial intelligence? What changes should we prepare for, in a world where therapists use AI to spot trends in patient data; people use it to write love letters and apology notes; researchers, to summarise source material; and filmmakers seek to use it in place of actors, cinematographers, sets and editors?
What happens when a machine replaces not processes or mechanical tasks, but human faculties such as thought, creativity and problem-solving?
The biggest difference with AI is that it isn’t just helping the human do something faster or more efficiently, says Vitomir Kovanović, a professor of learning analytics at Adelaide University. It is largely replacing human effort in these areas, offering a completed product as good as or better than those a human might produce.
One might think: They said the same things about books, the written word, and the assembly line. Well, look around. For better or worse, we don’t memorise things any more; we make barely anything by hand.
What would the equivalent be, in a prompt-reliant age?
Well, this isn’t a story about employment or industry evolutions (though those stories have appeared and will continue to appear across HT, including in Wknd). Right now, we’re exploring the impact on your mind.
MIND GAMES
A growing body of research indicates that there are hidden cognitive costs to relying too heavily on AI.
As we delegate thought, cogitation and argument, we are incurring what researchers call “cognitive debt”. As with financial debt, the short-term ease of deferring payment of one’s dues (or in this case, putting off mental effort) comes with incremental costs. “Over time, the debt grows so large that it can alter how one thinks and analyses complex information,” says Kovanović.
About seven months ago, a study on such impacts was concluded at Massachusetts Institute of Technology (MIT) and released as a preprint. As part of this study, 54 people aged 18 to 39 were divided into three groups and asked to write essays on different subjects. The first group was asked to use the help of ChatGPT; the second was asked to draw information from Google; a third was asked to complete the assignment with no external help at all.
In EEG scans conducted over four months, the researchers found that the group using ChatGPT consistently showed lower levels of brain activity and “consistently underperformed at neural, linguistic, and behavioral levels”. Their essays were judged less creative and more formulaic by the English teachers conducting blind evaluations.
By the end of the experiment, the researchers found that this group was also doing less than it had at the start: where members had initially written essays based on information and perspectives offered by ChatGPT, they were now simply copying and pasting the chatbot’s output.
Meanwhile, where those with no access to AI could quote parts of their essays verbatim and discuss at length the points they had made, 83% of the ChatGPT group was unable to recall their arguments, or the words they had used to make them.
“What really motivated me to put it out now before waiting for a full peer review,” the paper’s main author, Nataliya Kosmyna, told Time magazine, “is that I am afraid in six to eight months, there will be some policymaker who decides, ‘Let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.”
NEURAL NET WORTH
The MIT study isn’t the only one to arrive at such findings.
Relying on AI instead of more traditional Google search can result in a more superficial understanding of topics, found a study by researchers at the Wharton School of the University of Pennsylvania (whose findings were published in PNAS Nexus in October).
Across seven studies involving more than 10,000 participants, the researchers found that, when asked for advice on subjects such as how to lead a healthier lifestyle, those who used AI to seek answers offered shorter, more generic tips than those who gathered the information via non-AI-led Google search.
Another Wharton study, whose findings were published in PNAS in June, found that high school students in Turkey who used a ChatGPT math tutor aced practice problems, but eventually struggled without the tutor, performing worse than students who didn’t use AI at all.
“The real question,” Kovanović says, “is whether such an evaluation, which involves working without AI, is actually meaningful, if the subjects will always have AI in their workplace in the future.”
Imagine a situation in which one person practises sprinting 100m for two months in running shoes, while another does the same thing barefoot, Kovanović adds. “Pit them against each other in a barefoot race, and of course Player 2 will win. But in a race that involves wearing shoes, the second player would have a hard time adjusting and could fall behind.”
WHEN AI CAN HELP
What, then, is the answer? Clues lie in an interesting twist to the MIT study.
Participants who previously wrote the essays without any tools exhibited a significant increase in brain connectivity when they were eventually allowed to use an LLM on a familiar subject.
Strategic timing of AI tool introduction is key, the researchers note. If one understands the subject matter and turns to AI for collaboration, outcomes, including engagement and comprehension, could improve.
More research is certainly needed into how AI shapes user habits. But if certain large language models (LLMs) could be programmed to serve as tutors and collaborators rather than churn out finished responses, the difference, particularly for young users, could be crucial. Such models, as Wknd columnist Kashyap Kompella has previously pointed out, could revolutionise learning.
After all, as the Lebanese-American poet Kahlil Gibran put it, about a century ago: “If a teacher is indeed wise, he does not bid you enter the house of his wisdom, but rather leads you to the threshold of your own mind.”