I summarized four recent research papers to show why and how AI erodes our critical thinking ability.
Look, I’ll admit it: I messed up.
AI outputs can disappoint, but it’s a two-way street. Yes, the models hallucinate and have their off days. But the quality of your prompts matters — and sometimes, I get lazy. Seduced by AI’s convenience, I’d rush through tasks, sending unchecked emails and publishing unvetted content.
I try my best to triple-check everything now. But those moments of exhaustion? Millions of years of evolution didn’t exactly equip humans with the robotic consistency AI can achieve.
This research from Microsoft sent a shockwave through the field, suggesting that frequent AI usage is actively reshaping our critical thinking patterns. And some groups will bear the brunt of this shift more than others.
A 2023 paper saw this coming, highlighting two skills that would become essential in the AI era. Take a guess.
Critical thinking and science.
Not coding. Not data analysis. Not even AI engineering. But the fundamental human capabilities that separate strategic thinking from mechanical execution.
In this piece, we’ll examine how Gen AI quietly reshapes our cognitive landscape, using the latest research to map this transformation. But more importantly, we’ll confront the second-order effects that nobody’s talking about.
Because in our profit-obsessed world, who’s thinking about the widening skills gap? Will business owners prioritize this issue? Or are we sleepwalking toward a future where we’re eroding the very capabilities that make us human?
Shall we?
Skills That Make You Irreplaceable
So, we’ve established that AI is shaking things up. But what does that actually mean for your job, your skills, and your future?
Researchers at OpenAI and the University of Pennsylvania decided to dig into this very question in their paper “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.”
They didn’t just guess, of course. They took a massive database of jobs and the tasks those jobs involve (called O*NET). Then, they asked both humans and GPT-4 to rate how much each task could be sped up by using AI.
They focused on evaluating individual tasks instead of an entire job. Think of it like this: could AI help you check grammar mistakes, even if it couldn’t write the whole report?
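To make the method concrete, here’s a minimal sketch in Python of how task-level ratings can roll up into a job-level score. The tasks and ratings are invented for illustration; the paper’s actual rubric and weighting are more involved:

```python
# Toy version of the task-level approach: rate each task's AI "exposure",
# then aggregate the ratings into a single job-level score.
# All tasks and ratings below are made up for illustration.
job_tasks = {
    "Draft client emails": 0.9,          # AI speeds this up a lot
    "Check grammar in reports": 0.8,
    "Negotiate contract terms": 0.2,     # heavy human judgment, little speedup
    "Design an experiment protocol": 0.1,
}

job_exposure = sum(job_tasks.values()) / len(job_tasks)
print(f"Job-level AI exposure: {job_exposure:.2f}")  # 0.50 for this toy job
```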
This table is where things get really interesting (and relevant to our topic today). Think of it like a cheat sheet revealing which skills will become less valuable and which will become your superpowers in the AI era.
Let’s break it down, plain and simple.

Think of the numbers in this table like this (we’ll focus on the “β” column, which is a good middle-ground estimate; there’s a schematic reading of it right after this list):
- Positive Number (like Writing’s 0.467): The more a task relies on this skill, the more likely AI is to impact it.
- Negative Number (like Science’s -0.230 in the β column): The more a job relies on this skill, the less likely AI will impact it. It’s like saying, “The more a day-to-day task requires scientific reasoning, the safer this task is from direct AI impact.”
- A Bigger Number (either positive or negative, just further away from 0): Indicates a stronger, more predictable relationship between how important a skill is to a job and how likely AI is to impact that job.
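Here’s that schematic reading as a quick sketch: each β is essentially the slope of a line relating skill importance to AI exposure. The data below is invented purely to show a positive slope (`linear_regression` requires Python 3.11+):

```python
# Toy illustration of reading a beta as a slope: the more a task leans on
# "writing", the higher its AI-exposure rating. All numbers are invented.
from statistics import linear_regression  # Python 3.11+

writing_importance = [0.1, 0.3, 0.5, 0.7, 0.9]        # how much each task needs writing
ai_exposure        = [0.20, 0.35, 0.50, 0.60, 0.80]   # rater's AI-exposure score

slope, intercept = linear_regression(writing_importance, ai_exposure)
print(f"beta ≈ {slope:.2f}")  # positive: more writing, more AI exposure
```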
Let’s look at some key skills and their scores:
- Writing (0.467): Big, positive number = a huge red flag. Tasks that involve a lot of writing are highly likely to be affected by AI. Think content creation, report writing, or crafting emails, i.e., tasks you have likely already handed off to AI.
- Programming (0.623): An even bigger positive number! If your job involves coding, well, you’ve probably been using GitHub Copilot or Cursor, so you know this best. This doesn’t mean programmers are obsolete; we’ll discuss this in the next section.
- Critical Thinking (-0.196): Negative number. Jobs requiring critical thinking — analyzing information, making judgments, and solving complex problems without clear-cut answers — are less susceptible to AI’s impact. As I said before, AI can generate text; it can’t (yet) truly think.
- Science (-0.230): Another negative number! Jobs relying heavily on scientific methodology, experimentation, and deep domain expertise are relatively safe. AI can help with data analysis, but it can’t replace the thinking bit.
It’s not about “high-skill” versus “low-skill” tasks but the skills that make humans human.
Meanwhile, skills that involve routine, repetitive tasks, even those requiring training (like basic coding or writing formulaic reports), are the ones most at risk.
Yet, there’s a brutal irony emerging. The very tools helping us work ‘smarter’ are quietly eroding our most valuable cognitive defenses.
Let’s examine the evidence.
Trading Brainpower for AI Efficiency
The skills landscape is shifting.
Yes, critical thinking, scientific reasoning, and complex problem-solving are becoming your armor in an AI-driven world.
But what does this actually mean in practice? How is Gen AI changing how our minds work, and what are the trade-offs?
Before we dive deeper, I want you to try something. Open up your favorite AI tool — ChatGPT, Gemini, DeepSeek, or whatever you use. Give it this prompt (tweaked for your specific role):
I need to analyze the critical thinking requirements of a [YOUR JOB TITLE] role.
First, generate a comprehensive list of typical daily and weekly tasks for this position, based on standard industry expectations.
Then, analyze each task and assign a "Critical Thinking Score" (0-100%) based on how much it requires:
- Analysis of complex information
- Independent judgment
- Problem-solving without clear solutions
- Strategic decision-making
Format output as CSV with columns: Task, Critical_Thinking_Score, Reasoning
Sort by Critical_Thinking_Score in descending order.
Go ahead, I’ll wait…
Does the result match your own assessment? Either way, it’s a useful sheet to keep around. This mini exercise highlights the core dilemma we’re about to explore: the double-edged sword of AI.
The Irony: AI Is Eroding Critical Thinking
ChatGPT launched two years ago. Since then, research labs have been mapping a troubling trade-off: efficiency vs. thinking capacity.
I’ve analyzed three key studies that expose this pattern.
Gen AI tools are undeniably powerful.
The “GPTs are GPTs” study, for example, found that with access to an LLM alone, about 15% of all worker tasks could be completed significantly faster at the same level of quality. With additional software and tooling built on top of LLMs, that share rises to between 47% and 56% of all tasks. That is a massive boost! The “AI Tools in Society” paper likewise concludes that AI offers “enhanced efficiency and unprecedented access to information.”
But there’s a catch.
Some studies identified an urgent issue.
They found a strong negative correlation (-0.68) between AI tool use and critical thinking skills, i.e., the more often you use AI tools, the less critical thinking is involved.
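If the -0.68 feels abstract, here’s what computing such a correlation looks like, as a minimal sketch with five invented respondents (the real study surveyed far more people; `correlation` requires Python 3.10+):

```python
# Toy illustration of a negative correlation between AI use and
# critical-thinking scores. All respondents and numbers are invented.
from statistics import correlation  # Python 3.10+

ai_use_hours_per_week   = [2, 5, 10, 20, 30]
critical_thinking_score = [85, 80, 70, 55, 40]

r = correlation(ai_use_hours_per_week, critical_thinking_score)
print(f"Pearson r = {r:.2f}")  # strongly negative: more use, lower scores
```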

The “Impact of Generative AI on Critical Thinking” paper highlights this in its conclusion:
Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work. It can potentially lead to long-term overreliance on the tool and diminished skills for independent problem-solving.
Take a moment to reflect:
- When was the last time you truly wrestled with a problem? The kind that forced you to take a deep breath and stay focused just to get through.
- How often do you verify the information AI provides?
This is less about AI itself and more about people like you and me, and our over-reliance on it.
Less About What You Do; More About How You Do It.
Forget the outdated idea of robots stealing jobs wholesale.
The shift is subtle yet profound.
It’s a change spreading as fast as Gen AI tools are being adopted, yet so gradual that many of us haven’t even noticed. We’ve been unconsciously adapting to a new way of working.
Think about your day-to-day.
- Are you spending more time editing AI-generated drafts, from emails to reports?
- Still building reports from the ground up, or are you focusing on refining AI’s analysis?
- Coding every line yourself, or are you verifying Copilot’s suggestions and integrating them into a larger project?
This isn’t about automation replacing workers entirely. A study calls this a move “from material production to critical integration.” You’re becoming less of a creator and more of a steward, a verifier, and a curator of AI-generated output.
The AI can generate text; it can’t (yet) apply the nuanced judgment needed to make that text truly effective and relevant. Hence, critical thinking comes into play. It allows you to evaluate the quality of AI’s output, identify biases, spot inaccuracies, and integrate that output into a larger, more complex context.
I reshuffled the order of the capabilities mentioned in the study into a mini framework you can check against your current workflow:
- Task Stewardship: Many get this wrong. Ask yourself: how often do you have a clear goal in mind when using Gen AI? Can you define AI’s limitations clearly and know when to take over?
- Information Verification: Can you distinguish between reliable information and AI-generated hallucination?
- Response Integration: How quickly and accurately can you take a piece of AI-generated content and seamlessly weave it into your own work?
Simply copy-pasting won’t cut it. You need to judge whether the output meets your goal and then adapt it to fit the final result.
Let’s take the software developers’ role as an example. I wrote a piece last year about whether AI boosts developers’ productivity: AI Code Assistants Boost 26% of Productivity? Read The Small Print.
Combine that article with the findings from the “Widening Gap” study. Most senior developers follow the same three steps I mentioned above: they understand the architecture and where a task fits in; they use Gen AI tools to help them complete a small piece of work; finally, they integrate it into the existing codebase.
Newbie programmers using GenAI, on the other hand, faced the following metacognitive difficulties:
- Interruption: Constant AI suggestions disrupted their thought process.
- Misleading: AI led them down the wrong path, providing incorrect or unhelpful code.
- Progression: They struggled to understand the underlying principles, even when AI provided a working solution.
So you see, the criteria for getting a job are already more demanding than ever.
But how confident are you that you aren’t over-dependent on AI? What about those who are early in their careers? Are they falling into a trap? Overconfidence in AI, fueled by inexperience and, yes, a bit of human laziness, is creating a widening gap.
Studies are already seeing the cracks.
More AI Usage = Less Thinking?
There’s a hidden danger lurking beneath the surface: a false sense of security. A dangerous disconnect between how good we think we are at using AI and how effectively we’re actually using it.
All these studies uncovered a chilling “confidence paradox.” The more confident people were in AI’s abilities, the less likely they were to engage in critical thinking.
Two tables from separate studies explained this paradox the best.
I want you to imagine that you’re driving a car with a highly advanced autopilot system. This system can handle almost all aspects of driving. However, you, the driver, are still ultimately responsible. I categorized ‘drivers’ into two groups: those with strong critical driving skills and those with weaker ones.

Table 4: Non-standardised coefficients of the mixed-effects regressions modeling — The Impact of Generative AI on Critical Thinking
Drivers WITH Strong Critical Thinking Skills:
- Experienced, reflective drivers, even with autopilot, constantly monitor the road and the system’s actions, ready to intervene. (0.52***, Tendency to reflect)
- Confident, skilled drivers, even with autopilot, remain engaged, ready to take over if their skills are needed. (0.26*, Confidence in self)
- Drivers who are confident in judging when autopilot might be wrong are more likely to step in and correct it. (0.31*, Confidence in evaluation)
Drivers WITHOUT Strong Critical Thinking Skills:
- Drivers who trust the autopilot and believe it can handle anything are less likely to pay attention, potentially missing crucial errors. (-0.69***, Confidence in AI)
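To see how those four coefficients pull against each other, here’s a toy calculation in Python. It treats Table 4 as a simple weighted sum, which is only a schematic reading (the study’s actual model is a mixed-effects regression with more terms), and the two driver profiles are invented:

```python
# Schematic reading of Table 4: combine the four coefficients into a
# single weighted sum predicting critical-thinking engagement.
COEFFS = {
    "tendency_to_reflect":      0.52,
    "confidence_in_self":       0.26,
    "confidence_in_evaluation": 0.31,
    "confidence_in_ai":        -0.69,
}

def engagement(driver: dict[str, float]) -> float:
    """Weighted sum of a driver's traits (each rated 0 to 1)."""
    return sum(COEFFS[trait] * value for trait, value in driver.items())

reflective_driver = {"tendency_to_reflect": 0.9, "confidence_in_self": 0.8,
                     "confidence_in_evaluation": 0.8, "confidence_in_ai": 0.3}
trusting_driver   = {"tendency_to_reflect": 0.2, "confidence_in_self": 0.3,
                     "confidence_in_evaluation": 0.2, "confidence_in_ai": 0.9}

print(f"reflective driver: {engagement(reflective_driver):+.2f}")  # about +0.72
print(f"trusting driver:   {engagement(trusting_driver):+.2f}")    # about -0.38
```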
Similarly, data from another study points to the exact same pattern.
Let’s continue with our car-and-autopilot analogy. This table explains the relationship between a driver using the autopilot overall (“AI Tool Use”) and how much they rely on it specifically for making driving decisions (“Cognitive Offloading”).

Table 5: Correlation matrix — AI Tools in Society
- AI Use ↑, Cognitive Offloading ↑ (r = 0.89): More autopilot use goes hand in hand with more reliance on the system; hence the very strong positive correlation of 0.89.
- AI Use ↑, Critical Thinking ↓ (r = -0.49): Frequent autopilot use is associated with a decline in core driving skills; the negative correlation of -0.49 reflects this.
In short: more AI use → more cognitive offloading → less critical thinking. In other words, the more drivers trusted the autopilot, the less attention they paid to the road. The toy simulation below makes this chain concrete.
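This is a minimal simulation of that chain; all relationships and noise levels are invented, and only the directions mirror the study’s correlations:

```python
# Toy simulation: AI use drives cognitive offloading, and offloading
# drags down critical thinking. All parameters are invented.
import random
from statistics import correlation  # Python 3.10+

random.seed(42)
ai_use = [random.uniform(0, 10) for _ in range(200)]
# Offloading closely tracks AI use (plus a little noise)...
offloading = [u + random.gauss(0, 1) for u in ai_use]
# ...and critical thinking declines as offloading rises.
critical_thinking = [10 - 0.5 * o + random.gauss(0, 2) for o in offloading]

print(f"r(AI use, offloading)        = {correlation(ai_use, offloading):.2f}")
print(f"r(AI use, critical thinking) = {correlation(ai_use, critical_thinking):.2f}")
```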
It’s not a surprise that we’re happy to outsource our thinking. We’re letting AI handle tasks that we could do ourselves but choose not to.
If AI is the GPS, are you learning the route or just following the turn-by-turn directions?
How Does Gen AI Widen the Skills Gap?

It turns out that experience is playing a bigger role than ever, and that’s creating a widening gap. Of course, experience alone is no guarantee; you still need to be able to think critically.
Anyway, imagine: you’ve just graduated, landed your first job, and you’re eager to prove yourself. But you’re also, understandably, short on experience. Your senior colleagues, on the other hand, have been there and done that.
They’ve seen things go right, and more importantly, they’ve seen shit hit the fan and done the late-night cleanup (not literally, of course). So the seniors have developed a gut feeling, an intuition, for what works and what doesn’t.
This is what Marvin Minsky called “negative expertise,” and it’s incredibly valuable.
Now, throw GenAI into the mix.