Is AI making us lazy thinkers?

"I'm worried AI is making me lazy with my thinking."

This wasn't a headline from a tech magazine or a doomsday pronouncement from a futurist. It was a quiet confession from a communications professional over lunch at an AI workshop I had facilitated — a sentiment expressed with equal measures of curiosity and dread.

The Hidden Cost of Cognitive Convenience

Her eyes told me a story I've been encountering more and more often: the growing tension between our reliance on AI and a palpable fear of what we might forfeit when the machine does too much of the "thinking" for us. For a moment, it reminded me of those who have only ever taken the lift and never ascended the stairs. The arrival remains the same, but there is a certain grit, a certain heft to the act of climbing that is lost when one pushes a button for instant ascent.

There is emerging research supporting these anxieties. It shows how generative AI streamlines our pursuit of facts—which is genuinely useful. However, there is a catch: when the path to information is seamless, a crucial element of "cognitive wrestling" vanishes. Educators and cognitive scientists refer to this as "desirable difficulty"—the productive struggle that strengthens neural pathways and deepens understanding.

The concept, pioneered by cognitive psychologists Robert and Elizabeth Bjork (Bjork & Bjork, 2011), posits that making learning temporarily more challenging leads to stronger long-term outcomes. Their research on "desirable difficulties" illustrates that conditions which create initial learning challenges often enhance long-term retention and transfer—a counterintuitive finding with profound implications for how we approach knowledge acquisition. It's akin to resistance training for your brain. Every time you swap hands-on mental labour for an AI-assisted shortcut, you skip a step in the learning dance. The momentary confusion, the aha! moments, the re-examination of our assumptions—these are not inconveniences, but the foundational blocks of critical thinking. When we circumvent these processes with the aid of AI, we may inadvertently undermine our cognitive resilience.

Does this mean that AI is inherently bad for our brains? Not necessarily. In fact, research suggests (Singh et al., 2025) that experts with well-structured knowledge can utilise AI to free up bandwidth for more strategic, high-level thinking. However, for novices—and that applies to most of us when we venture into new territory—there is a risk. Excessive shortcuts can undermine the mental faculties we require most.

Cognitive Efficiency: Our Natural Bias

What worries me most isn't AI itself but our evolutionarily wired tendency toward cognitive efficiency. Our brains, which remarkably consume approximately 20% of our body's energy despite constituting only 2% of our body weight (Raichle & Gusnard, 2002), have evolved sophisticated energy conservation mechanisms. This biological reality manifests in what psychologists refer to as the "cognitive miser" principle—our natural inclination to minimise mental effort whenever possible (Fiske & Taylor, 1991).

Much like water seeking the path of least resistance, our minds instinctively gravitate towards what feels easiest. When faced with a cognitively demanding task and an easier alternative promising similar outcomes, we predictably opt for the latter—a tendency extensively documented by Kahneman (2011) in his work on System 1 and System 2 thinking. Without conscious intervention, we risk outsourcing not only routine mental tasks to AI but also potentially compromising our capacity for deep, sustained thought—the kind that fuels innovation and wisdom.

This isn't mere speculation. Research on cognitive offloading—our tendency to delegate mental tasks to external tools—suggests that while it frees working memory for other purposes, excessive reliance may subtly yet significantly alter skill development (Risko & Gilbert, 2016). Just as dependence on GPS has been shown to weaken spatial navigation abilities (Dahmani & Bohbot, 2020), there is a need for measured concern about cognitive capacities that might gradually atrophy when consistently entrusted to artificial intelligence.

Balancing Automation with Intentional Engagement

So, where does that leave us in this AI-fuelled era? Perhaps we require a more nuanced framework that recognises the varied cognitive demands of our daily work.

Not every task we face as knowledge workers is a learning opportunity—nor should it be. The meeting summary you need to produce by noon, the data you must organise before Thursday's presentation, and the routine correspondence that fills your inbox—these tasks consume cognitive bandwidth without necessarily expanding your capabilities. For these, AI serves as a welcome ally, freeing mental resources for more consequential endeavours.

The true challenge of our era is not to resist AI entirely but to cultivate a more nuanced discernment regarding when to embrace cognitive efficiency and when to intentionally engage with productive struggle. It requires us to differentiate between tasks that gain from automation and moments that signify our learning edge—the frontier where growth occurs.

I often find myself posing a simple question before resorting to AI: "Is this a task I need to complete, or an opportunity to broaden my understanding?" This distinction has prompted me to adopt a contextual approach: for routine tasks, I utilise AI without hesitation; for growth opportunities, I first spend a few minutes reflecting and working through the problem independently. This brief period of unaided contemplation preserves, I hope, my cognitive faculties while still allowing me to benefit from AI's capabilities—like choosing to walk up the first few flights of stairs before taking the lift the rest of the way.

Ultimately, AI does not simply provide us with answers—it offers us choices about the type of thinkers we aspire to be. If we rely on it as a crutch, we sacrifice some of our mental agility. However, if we engage with it as a partner, we can enhance our curiosity and broaden our intellectual horizons.

Perhaps the real question isn't solely about AI's capabilities. Rather, it revolves around determining which cognitive muscles we intend to strengthen and which ones we are prepared to let technology develop for us. After all, it's in the decision itself—between the lift and the stairs—that we frequently discover what we are truly made of.

REFERENCES

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). Worth Publishers.

Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10(1). https://doi.org/10.1038/s41598-020-62877-0

Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). McGraw-Hill.

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Raichle, M. E., & Gusnard, D. A. (2002). Appraising the brain’s energy budget. Proceedings of the National Academy of Sciences, 99(16), 10237–10239. https://doi.org/10.1073/pnas.172399499

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002

Singh, A., Taneja, K., Guan, Z., & Ghosh, A. (2025). Protecting human cognition in the age of AI. arXiv preprint arXiv:2501.01537.

