Thinking Fast, Slow, and Artificial: The Hidden Trap — and the Road Back

We are no longer living in an era where we simply use technology.
By 2026, we have entered an era where we think with it.

But what happens to human judgment when cognition itself is outsourced to algorithms?

Psychologists have long distinguished between two modes of thinking:
System 1 — fast, intuitive, automatic.
System 2 — slow, analytical, deliberate.

Now, researchers Shaw and Nave introduce a third layer: System 3 — external, artificial intelligence. A form of cognition that operates outside our biology, yet increasingly feels like an extension of our own minds.

And this is where the risk begins.

The Cognitive Surrender

System 3 is seductive. It is fast, fluent, data-driven — and often impressively accurate.

But it carries a profound risk: cognitive surrender.

When AI delivers confident answers, humans tend to disengage their critical thinking. We stop verifying. We stop reasoning. We accept.

Judgment is not formed through answers but through exposure to uncertainty, error, and reflection. When AI removes parts of this struggle, performance may improve in the short term while the development of independent judgment quietly weakens over time.

Research shows a troubling pattern. When AI is correct, performance improves dramatically. But when AI is wrong — which still happens frequently — human accuracy drops below the level of those who never used AI at all.

Even more concerning, access to AI increases confidence regardless of correctness. We do not only become wrong — we become confidently wrong, because the authority of the machine substitutes for our own judgment.

The Paradox: The Expert Atrophies, the Beginner Blooms

AI is also reshaping how competence develops.

Anthropic’s 2026 economic analysis points to a growing deskilling phenomenon in white-collar work. Complex analytical tasks in areas such as travel planning and technical writing are increasingly automated, leaving humans with simpler tasks. Over time, this erodes the expertise required to judge whether AI outputs are sound.

Recent economic data suggests this shift is no longer theoretical. New productivity figures in the US indicate that AI may be moving from an investment phase into a productivity phase — what economists describe as the “J-curve” of general-purpose technologies. Early adoption requires heavy investment in reorganizing work, retraining people, and redesigning processes, often suppressing measurable productivity. Only later do gains become visible, as organizations begin to produce more output with less labour.

This transition helps explain why AI simultaneously empowers experienced workers while reducing entry-level opportunities. The technology does not simply automate tasks — it reshapes how expertise is formed.

Yet the opposite effect appears elsewhere.

Research by Steenkamp and Terblanche shows that AI coaching can act as a powerful equalizer for entrepreneurs and small business owners. Used well, AI becomes a non-judgmental thinking partner — helping users translate theory into action through reflection and experimentation.

Here, AI does not replace thinking.
It stimulates it.

The difference lies not in the technology itself, but in how it is used.

AI therefore affects people differently across a career lifecycle. Experienced professionals can use AI to extend judgment because they have internal reference points. Less experienced workers, however, may risk outsourcing the very processes through which judgment would normally develop.

The risk is not only loss of skills, but an interruption in the formation of judgment itself — a slower, less visible erosion that becomes apparent only when systems fail or ambiguity increases.

Purpose as the Antidote

In a world where answers are always one click away, the question “why?” becomes more valuable than ever.

Data from Lynxeye shows that companies with clearly defined purpose are significantly more resilient and up to 2.5 times more attractive to investors. Purpose acts as a filter — helping organizations decide not only what they can do with technology, but what they should do.

Several companies illustrate this well:

  • IKEA uses its purpose — “to create a better everyday life for the many people” — as a constant North Star. Technology is adopted not for novelty, but for affordability, accessibility, and improved everyday living.
  • LEGO operationalizes purpose through empowerment. Employees are encouraged to ask whether decisions truly inspire and develop the builders of tomorrow, ensuring digital tools strengthen rather than replace human creativity and play.
  • Siemens demonstrates how industrial scale and technological sophistication can remain human-centered when anchored in a clear societal mission.

When System 3 takes over logic, System 2 must champion values.
Without internal anchors, we risk becoming passive passengers — drifting wherever algorithms steer us.

Beyond Thinking: The Human Dimension AI Cannot Replace

There is another risk hidden beneath cognitive surrender — one that is less visible, but equally important.

As AI becomes faster at producing answers, human interaction risks becoming more transactional. Conversations shorten. Reflection is outsourced. Listening becomes less patient because the answer feels already available elsewhere.

But human intelligence has never only been about solving problems.

It is formed in relationship.

Judgment develops through disagreement. Insight emerges through dialogue. Meaning is shaped through shared experience — through tone of voice, hesitation, trust, and the subtle signals that no algorithm fully captures.

Empathy cannot be generated through prediction alone. It arises from presence. From the willingness to remain with uncertainty together rather than resolving it immediately.

In leadership and governance, this matters profoundly. Many of the most important decisions are not technical questions but human ones: when to wait, when to support, when to challenge, when to change direction. These decisions depend less on data than on understanding people — their fears, motivations, and unspoken concerns.

If System 3 accelerates answers, human leadership must protect spaces where answers are not immediate.

Where listening precedes judgment.
Where relationships precede efficiency.
Where understanding is allowed to unfold slowly.

Paradoxically, as artificial intelligence becomes more capable, the distinctly human capacities — attention, empathy, trust, and shared meaning — become more valuable, not less.

The future of intelligent organizations may therefore depend less on how well they integrate AI, and more on how deliberately they preserve human connection alongside it.

The Road Back – What You Can Do: Individual, Leader, and Board

The challenge is not whether to adopt System 3, but how to do so without surrendering human judgment. The goal is augmentation — not substitution.

For the Individual — Reject Passivity and Preserve Agency

AI increases speed, but it can quietly reduce cognitive engagement. Individuals must actively counter this tendency.

  • Activate System 2 deliberately. Do not accept AI’s first answer to complex or high-impact questions. Pause, reframe the problem, and test the reasoning. Research shows that individuals with a higher Need for Cognition are less prone to cognitive surrender because they remain mentally engaged.
  • Use AI as a coach, not an oracle. Instead of asking AI to produce outcomes, use it to interrogate your thinking:
    • What assumptions am I missing?
    • Where might this reasoning fail?
    • What alternative interpretations exist?
      In this mode, AI strengthens learning rather than replacing it.
  • Protect your learning cycle. Skills atrophy when thinking is skipped. Continue practicing core reasoning, writing, and analytical tasks without AI support to maintain cognitive depth. Complement this with cultural and artistic engagement and reflection.
  • Know your life anchors. As Kets de Vries suggests, internal drivers such as autonomy, mastery, creativity, or service provide orientation when external answers become abundant. Meaning cannot be automated; it must be chosen.

For the Leader — Design Work So Thinking Remains Active

Leaders shape whether AI becomes a tool for empowerment or a pathway to organizational complacency.

  • Design human-in-the-loop workflows. AI adoption should not be frictionless. Introduce moments where employees must justify why an AI recommendation was accepted or rejected. Shaw and Nave’s findings suggest that incentives and feedback loops significantly reduce cognitive surrender.
  • Map deskilling versus upskilling explicitly. AI removes administrative burden in some roles, enabling higher-value work. In others, it removes the very tasks through which expertise develops. Leaders must identify where the organizational “training ground” is disappearing and redesign learning paths accordingly.
  • Protect capability formation. Junior employees must still practice complex tasks to develop judgment. Rotation programs, AI-free exercises, or staged autonomy may be necessary to preserve long-term competence.
  • Democratize coaching at scale. AI coaching can extend reflective support to employees who would otherwise lack access. Used well, it strengthens agency and learning rather than dependency.

For the Board — Guide Strategic Workforce Planning and Cognitive Risk

Boards face a new category of risk: erosion of human capability — particularly as productivity gains from AI begin to materialize and organizations restructure work faster than new expertise can be developed.

  • Guide a strategic workforce plan beyond headcount. Boards should require management to analyze workforce evolution at the level of tasks and capabilities, not roles alone.
    Key questions include:

    • Which capabilities are being strengthened by AI?
    • Which are being hollowed out?
    • Where might future leaders lose the experience required to exercise sound judgment?

Strategic workforce planning must explicitly address deskilling risk and long-term capability renewal.

  • Treat human capital erosion as enterprise risk. Anthropic’s findings show that AI performs well on narrow tasks but struggles with long-horizon, ambiguous problems. If human expertise declines simultaneously, organizations become fragile. Boards should require risk assessments that consider dependency on AI alongside cybersecurity and operational risk.
  • Operationalize Purpose as a governance filter. AI initiatives should be evaluated against the organization’s core purpose and value creation logic. Purpose provides continuity when technological possibilities expand faster than strategic clarity.
  • Assess cognitive dependency explicitly. Boards should periodically ask:
    If our AI systems failed tomorrow, could our organization still operate effectively?
    This question reveals whether AI is augmenting capability or quietly replacing it.

Governing AI is therefore not only about oversight of technology, but about safeguarding the conditions under which human judgment continues to develop inside the organization.

The Leadership Implication

System 3 changes how work is done.
But leadership increasingly determines how thinking is preserved.

Organizations that succeed will not be those that automate the fastest, but those that maintain the strongest human judgment while using AI to extend it.

Technology accelerates decisions.
Purpose and judgment determine direction.

References

[Anthropic] Appel, R., Massenkoff, M., et al. (2026). The Anthropic Economic Index report: Economic Primitives.

[Kets de Vries] Kets de Vries, M. F. R. (n.d.). Understanding Your Major Drivers: Do you know your Life Anchors?

[Lynxeye] Lynxeye. (2026). Lynxeye Purpose Index™ 2026: Purpose drives business success.

[Shaw & Nave] Shaw, S. D., & Nave, G. (2025). Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender.

[Steenkamp & Terblanche] Steenkamp, B., & Terblanche, N. (2026). Exploring artificial intelligence coaching’s role in translating business training into real-world applications. SA Journal of Human Resource Management.

[Financial Times] Brynjolfsson, E. (2026). The AI Productivity Take-off Is Finally Visible.

[Harvard Business Review] Duncan, D. S. (n.d.). How Do Workers Develop Good Judgment in the AI Era?

Other Relevant Blogposts

Leadership Beyond Control

Leading Yourself in an AI-driven World

About Digoshen

This blog post was originally shared on the blog of Digoshen, www.digoshen.com, and on the blog of the Digoshen founder, www.liselotteengstam.com.

At Digoshen, we work hard to increase #futureinsights and help remove #digitalblindspots and #sustainabilityblindspots. We believe that Companies, Boards, and Business Leadership Teams need to understand more about the future and the digital & sustainable world to fully leverage the potential when bringing their business into the digital & more sustainable age. If you are a board member, consider joining our international board network and master programs.

You are also welcome to explore the Digoshen Chatbot on AI Leadership for Boards, and Boards Impact Forum, where the Digoshen Founder is the Chair.

You can find a link to Digoshen Chair Liselotte Engstam's Google Scholar page, and learn how she has contributed to AI Value Creation.

You will find more insights via the Digoshen website, and you are welcome to follow us on LinkedIn: Digoshen @ LinkedIn.
