Could generative AI tools be ruining your writing skills?
On cognitive offloading, critical thinking and losing your unique voice

On a recent walk along the boardwalk by the sea, I listened to a brilliant podcast episode by the Center for Humane Technology, entitled Rethinking School in the Age of AI.
During the episode, the guests discussed cognitive offloading: handing mental tasks over to an external tool. In the context of schooling, a good example would be a child outsourcing their homework by letting ChatGPT write their essay instead of writing it themselves. The hosts went on to discuss how dangerous cognitive offloading can be for children who haven’t yet fully developed their analytical thinking and reasoning skills.
One of the guests, cognitive neuroscientist Maryanne Wolf, says:
“Now, the problem with AI for me is what we call cognitive offloading. That in the interest of efficiency, we can do all this faster and better if we're using these technological devices that augment and blah, blah, blah. The reality is what we need as learners are the efforts.”
That makes sense. If children don’t actively engage with educational materials but instead outsource the mental effort involved in writing an essay, for example, then they’re not really learning, and they’re not developing their own analytical and critical thinking skills.
But what about adults? By the time we’ve reached adulthood, we have developed sound analytical and reasoning skills – at least, we like to think so. Even so, could an over-reliance on AI tools and constant cognitive offloading actually lead to adults losing some of these skills?
And what about writing, specifically? Do people who rely heavily on generative AI tools like ChatGPT to write their emails, draft presentations, write company announcements or produce marketing content risk losing at least some of their ability to write well? Is there a tradeoff between doing things faster with technology and ultimately impoverishing our own writing skills?
Cognitive offloading is nothing new
In the podcast, global education expert Rebecca Winthrop argues:
“I would say we have over history as a species evolved through cognitive offloading. None of us I think could be dropped in the middle of the woods and know which berries are poisonous and which berries are not, which is something we would've known many, many years ago.”
She’s right, of course. We’ve all been outsourcing more and more mental tasks as new tools have become available. Just think of simple examples like using the calculator on your phone instead of doing sums in your head, letting your calendar app remind you of where you have to be and when, or navigating just about anywhere with the help of Google Maps.
Using search engines is another way of engaging in cognitive offloading. In 2011, Betsy Sparrow, Jenny Liu and Daniel M. Wegner published a study which looked at the effects of Google on memory and the cognitive consequences of having information at our fingertips:
“The results […] suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”
In other words, we’re cognitively offloading our memory, which makes us less able to recall information independently of technology. We enjoy instant access to vast amounts of information, and we’re constantly looking things up on Google (other search engines are, of course, available). And unless we think that we won’t be able to access the information again in the future, we don’t tend to commit that information to our long-term memory.*
With access to the internet and search engines we’ve become great at looking stuff up, as and when we need to. But take away that access, and we’re immediately less knowledgeable. Coming back to Winthrop’s example, we know that with a smartphone in our pocket, we can be dropped off in the middle of the woods and instantly look up how to build a shelter, collect water, make a fire, and navigate our way back to civilisation. But without that smartphone, we’d be pretty lost. (Unless we had acquired these essential skills and internalised important knowledge beforehand and can recall it from memory when needed.)
Being dropped off in the middle of the woods is an unlikely scenario, unless you’re Ed Stafford or Bear Grylls filming a survival programme. And yet… our dependency on technology – and our increasing inability to function without it – tends to become abundantly clear in situations when the tech isn’t working. When we’re in the middle of nowhere without any reception and Google Maps can’t show us the way. Or when there’s a national power cut (like we experienced in Portugal recently) and we can’t look up essential information on the internet.
All this is to say that, even well before the launch of generative AI tools, we had all been using technology for cognitive offloading. And in the process, we’ve become increasingly dependent on technology, and arguably less knowledgeable individually without access to our external library of knowledge that is the internet.
Generative AI: Cognitive offloading on steroids
And then, in late 2022, along came ChatGPT, bringing generative AI – deep-learning models that can generate high-quality text, images and other content based on the data they were trained on – to the masses.
People around the world started outsourcing all kinds of tasks to generative AI tools like ChatGPT, Gemini and Claude. What began as excitement about testing out the coolest new tool quickly turned, for many, into reliance on a trusted everyday companion.
Researching, writing, translating, coding, generating images and videos: these AI tools have become the go-to solution for getting answers, doing homework and getting work done – fast.
It’s cognitive offloading, but on steroids. Why? Because by using generative AI, we’re no longer just outsourcing our memory storage. We’re actually outsourcing our thinking as well – at least some of it.
The link between cognitive offloading and critical thinking
In January 2025, Michael Gerlich published the paper AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. The study, conducted with 666 participants across a diverse range of age groups and educational backgrounds, revealed a “significant negative correlation between frequent AI tool usage and critical thinking abilities.”
The paper states:
“As individuals increasingly offload cognitive tasks to AI tools, their ability to critically evaluate information, discern biases, and engage in reflective reasoning diminishes. This relationship underscores the dual-edged nature of AI technology: while it enhances efficiency and convenience, it inadvertently fosters dependence, which can compromise critical thinking skills over time.”
The paper also notes the correlation between trust in AI tools, cognitive offloading and decision-making, citing Gerlich’s 2024 study, Exploring Motivators for Trust in the Dichotomy of Human–AI Trust Dynamics:
“Gerlich provides valuable insights into how trust in AI tools influences cognitive offloading and decision-making. The findings suggest that as users develop greater trust in AI, they are more likely to delegate cognitive tasks to these tools, which aligns with our observation of increased cognitive offloading leading to reduced critical thinking. This trust creates a dependence on AI for routine cognitive tasks, thus reducing the necessity for individuals to engage deeply with the information they process. Increased trust in AI tools leads to greater cognitive offloading, which in turn reduces critical thinking skills. This cycle is exacerbated by the role of virtual influencers, who further reinforce the reliance on AI-generated content by acting as credible sources of information.”
The link between trust and critical thinking
The link between trust and critical thinking is particularly pertinent, because many users of generative AI don’t necessarily understand how these AI tools actually work behind the scenes. They might not know that these tools are basically predictive engines. They might not appreciate that the training data is not always up to date. They might not be aware that AI tools can hallucinate – confidently present plausible-sounding but false information as fact. There’s no conscious thinking going on inside the black box.
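If you’re curious what “predictive engine” means in practice, here’s a deliberately crude illustration – a toy bigram model in Python, my own sketch rather than anything resembling how ChatGPT is actually built. It produces fluent-looking continuations purely from word-frequency statistics, with no grasp of what the words mean; real LLMs operate at a vastly larger scale with far more sophisticated models, but the core task is the same: predict the next token.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": given the current word, predict the next
# word purely from frequency counts in the training text. Real LLMs are
# vastly more sophisticated, but the underlying task is the same:
# predict the next token from patterns in the training data.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word):
    """Return candidate next words, ranked by how often they followed `word`."""
    counts = transitions[word]
    total = sum(counts.values())
    return [(w, round(n / total, 2)) for w, n in counts.most_common()]

print(predict_next("the"))
# [('cat', 0.33), ('dog', 0.33), ('mat', 0.17), ('rug', 0.17)]
# Statistically plausible continuations – with zero understanding of cats.
```

The point isn’t the code; it’s that fluency falls out of statistics alone – which is exactly why form can be so convincing without any meaning behind it.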
Yet answers generated by AI tools like ChatGPT sound and look human, because they mimic the form of our language so well – after all, that’s what they’ve been trained on – and their output is easily believed. Edward Gibson, psycholinguistics professor at MIT, put it really well on the Lex Fridman podcast in 2024, explaining that large language models (LLMs) – the models that fuel generative AI tools – are really good at (language) form, which is what makes them so convincing:
“But I think that’s why these large language models are so successful, is because [they are] good at form, and form isn’t that hard in some sense. And meaning is tough still, and that’s why they don’t understand. We’re going to talk about that later maybe, but we can distinguish […] between language, which is a communication system, and thinking, which is meaning. So language is a communication system for the meaning. It’s not the meaning.”
When AI output looks good and sounds good, we’re more likely to trust it, less likely to evaluate it critically, and less likely to question it. This is particularly true for younger people, who, according to Gerlich’s study, “exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.”
What all this means for your writing skills
So, could generative AI tools be ruining your writing skills? They certainly have the potential to. But… a number of factors play a part in how likely generative AI tools are to affect your writing abilities: your age and associated life experience; the number of years you’ve spent in education and how that has contributed to your current critical thinking and analytical skills; and how good a writer you were before generative AI came along.
Copywriting agency owner Rin Hamburgh said in a recent episode of the Human(e) Language podcast:
“Fundamentally, the stuff I learned 25 years ago when I started my career, there's stuff in there that is still so important today, and people miss it. And I think the terrifying thing is that […] people coming into the industry, they're like, yes, I can use chat GPT. Yes, I know how to deal with TikTok or whatever the latest thing is. And that's great. But has anyone grounded you in the basics of persuasive language? Or how the psychology of buying works?”
In essence, your ‘starting point’ as a writer matters. How you structure your arguments logically, how you weave important information into your narrative, how you keep the reader interested in what you’ve got to say… these are all terrifically important writing skills that require not just language aptitude but also critical thinking and analytical skills.
And once those writing skills are acquired, they need to be used. Writing is a muscle that needs to be flexed – regularly – to stay in shape. If you constantly outsource writing to an AI tool, you spend less time practising the craft yourself. And this, in turn, might erode the efficiency and eloquence with which you can write independently.
How can you use generative AI tools without ruining your writing skills?
So, is there a way to use AI for writing without losing your thinking and writing skills along the way?
I turned to my good friend and seasoned copywriter Sarah Begley to help answer this question. Sarah has been actively integrating generative AI into her writing process, and shares the following advice:
“In my content writing role, I find AI highly valuable in the following areas:
Research: It literally saves me so much time gathering key information to help with the piece. BUT... the prompt is important here, as well as challenging AI with whatever it comes back with. Not everything ChatGPT tells you is Gospel! Ultimately, it's pulling data from information we've given it and A LOT of that information is incorrect. So I challenge sources. Ask it for links to those sources, and then verify manually.
Pinning down that all-important crux of the piece: If I have a piece of research, whitepaper, article, etc. I give that to AI (along with details on my TA [target audience]) and ask what the most valuable / surprising / useful / interesting point is from it. This can help me centre my piece around it to be the most appealing to my reader, saving me a bundle of time and - usually - ensuring a high click-through read rate.
Idea generation: I may have an idea on a certain topic but want to know if there's another angle to it. Again, AI is a huge time saver with this. It can present diverse perspectives or connections I hadn't considered which can lead to a more interesting piece of content.
But ultimately, it's a tool. Unless you're delivering a matter-of-fact grey report, AI should not be the writer. For writers, where's the fun in that anyway! And for clients, why would you want your brand to sound just like everyone else’s?”
Original thought, authenticity and your unique voice
A writer who doesn’t think for themself is like a chef who only ever uses a cookbook. Anyone can follow a recipe and mix ingredients together. In the same way, anyone can enter a prompt into an AI tool and create an article in seconds. There’s nothing special about it. Nothing exciting. Nothing creative. Nothing new.
Don’t outsource your thinking or your writing to AI tools, just because you can. While AI tools have their benefits when it comes to sifting through large amounts of data, generating ideas or acting as a sounding board, the actual writing is best left to you. Your creativity. Your authenticity. Your point of view.
Writing is all about sharing your own ideas and communicating your personal thoughts. The best writers are those who are able to share original views or bring a new perspective to a topic.
As Edward Gibson so eloquently put it, language is a communication system for meaning. In the process of writing, you naturally use your analytical skills to synthesise the information you want to share, you think about how best to present your thoughts and ideas, you spend time structuring your arguments so they logically follow on from one another. It’s not always a fast and easy process, but the end result is something that’s uniquely yours. Your meaning. Your reflections. Your conclusions. Your advice. Your personal take on things. And if written well, people will want to read it and engage with it.
If you outsource your critical thinking to generative AI, however, it’s unlikely that you’ll actually share anything original. As Sarah says, generative AI has been trained on everything that’s available on the internet – the good, the bad, and the ugly. If you don’t fact check and critically evaluate the AI output, you’re not creating anything new or meaningful. At best, you’re regurgitating what’s already been said before. At worst, you might be spreading falsehoods or perpetuating biases.
I leave you with this reflection, which Michael Gerlich posted on LinkedIn recently. I feel it’s the perfect note to end on:
*In 2021, Sandra Grinschgl, Frank Papenmeier and Hauke S. Meyerhoff published a study on the Consequences of cognitive offloading: Boosting performance but diminishing memory. The study explored how well people could recall information after completing a pattern copy task, depending on whether they knew they might later be tested on that information in a memory test. They concluded that “cognitive offloading increases immediate task performance but also diminishes subsequent memory performance for the offloaded information.” But they also found that “announcing subsequent testing could compensate for at least some of the detrimental effects of cognitive offloading on memory acquisition.” That is to say, if we look up information online with the explicit goal of recalling it later (rather than simply relying on it being available online again), we’re more likely to memorise it.