Executive Summary
As AI reshapes the nature of work and leadership, organizations must go beyond reskilling. This article explores how we must rethink talent and leadership to stay relevant and responsible in the age of applied intelligence. Drawing inspiration from leaders like Ethan Mollick and Joel Hellermark, we highlight the shift from static competencies to adaptive, human-centered capabilities.
Key themes include:
- Why technical fluency is now essential for strategic leadership
- How model intuition helps us navigate AI without blindly trusting it
- The evolving role of boards and leaders as catalysts for digital learning cultures
- The need to balance automation with trust, inclusion, and experimentation
This is not just about tools—it’s about mindsets, systems, and courage.
From Playbooks to Prompts: Rethinking Talent and Leadership in the Age of AI
What if the future of your company isn’t hidden in your strategic plans—but in the next prompt your team experiments with?
That’s one of the core provocations in Ethan Mollick’s compelling talk with Joel Hellermark, founder and CEO of Sana, on what it takes to build an AI-first organization (link below). Unlike most commentary focused on AI hype or technical deep dives, Mollick brings a refreshingly human lens—asking not just what AI can do, but how it is reshaping the very foundations of leadership, learning, and growth.
His insights are urgent. Not just for tech companies or startups, but for every organization that hopes to stay relevant in a world where intelligence is no longer limited to humans.
Let’s explore three transformative insights—and why they matter now more than ever.
Strategic Leadership Must Be Technically Fluent
In the traditional playbook, leaders set the direction, and technical teams translate that into execution. That model breaks down in the AI era.
Mollick makes it clear: strategic leaders must now be conversant in both AI capabilities and business outcomes. Not to code—but to understand enough to direct intelligent action.
What does this look like?
- Vision grounded in model reality
AI is advancing fast—weekly, not yearly. Leaders must keep up with what’s newly possible and guide their teams accordingly. For instance, the shift from static language models to dynamic agents changes what we can automate or simulate. Are your strategies updated for that shift?
- Big bets, not timid tweaks
Many organizations run small pilots or “proofs of concept” that ultimately stall. Mollick argues for “use-case maximalism”: encourage bold, ambitious AI experiments. Even failures become data points. Successes can leapfrog entire product or service lines.
- Empowering the Lab and the Crowd
Strategic leadership is about setting direction, then letting the ecosystem breathe. Some teams need space to experiment (the “lab”). Others need guidance to integrate (the “crowd”). Great leaders balance both—offering scaffolding without smothering creativity.
Leadership in this new paradigm is not top-down. It’s curious, participatory, and attuned to capability evolution. The best leaders aren’t just delegating AI—they’re learning it alongside their teams.
Skills Aren’t Enough. We Need Model Intuition.
The AI skill gap is real—but perhaps not in the way we think.
Mollick makes a crucial distinction: knowing how to use AI tools (skills) is different from understanding how AI tools reason (intuition).
We’ve trained people to use dashboards, spreadsheets, and templates. But prompting a generative AI system is more like collaborating with a brilliant but unpredictable colleague. It requires finesse, judgment, and iteration.
What is model intuition?
- Understanding AI’s strengths and blind spots
AI is great at pattern recognition, language synthesis, and task acceleration. But it can also be confidently wrong, hallucinate facts, or miss nuance. Knowing when and how to trust AI is a learnable, vital skill.
- Experimentation as education
Mollick encourages teams to learn by doing: try prompts, test edge cases, refine instructions. This kind of engaged play builds tacit knowledge that no training manual can replicate.
- Seeing workflows, not just outputs
Model intuition helps employees break their work into components—some for AI, some for human oversight. This decomposition is essential for successful AI-human collaboration.
Imagine a marketing team that learns how to generate 80% of a campaign with AI, but knows exactly where to insert emotional nuance and brand tone. That’s not automation—that’s augmentation with intelligence.
And it requires something deeper than skill. It requires judgment, confidence, and the ability to listen to the model’s logic—without over-trusting it.
AI is Reshaping Apprenticeship—But It’s Not the End
Here’s one of Mollick’s most thought-provoking observations:
“In most cases, AI is already better than your interns.”
That’s not an attack on young talent. It’s a wake-up call about how AI displaces the foundational tasks that used to serve as the entry point into professional development.
So where does this leave apprenticeship?
- It must evolve, not disappear
Traditional apprenticeship involved learning by doing—watching experienced professionals, trying simple tasks, growing in complexity. Now AI does many of those “simple tasks.” If we don’t reimagine the journey, we risk cutting off the pipeline of human talent.
- Learning through the model, not despite it
Mollick suggests turning AI into the apprenticeship environment. New hires can learn how AI reasons, explore where it fails, and build critical thinking by constantly comparing their judgment with model output.
- Rotating through “AI Labs”
Imagine an apprenticeship program where junior employees rotate through short-term projects using AI in different domains—strategy, sales, product design, customer research. This builds cross-functional fluency and creates a generation of model-literate generalists.
- Reverse mentoring and co-learning
Junior talent often adapts to new technologies faster than senior leaders. A powerful culture can emerge when learning flows in both directions: younger employees bring new prompting skills; seasoned leaders offer context and ethics.
This isn’t about reducing expectations for talent. It’s about retooling learning journeys for a reality where intelligence is both human and machine-sourced.
When These Three Insights Interact
Taken together, these three insights become more than a checklist; they become a philosophy.
- Leaders fluent in AI create space for exploration.
- Teams that build intuition use AI wisely, not blindly.
- Apprentices who learn through models grow faster and more responsibly.
When leadership, talent, and learning are all reoriented toward this reality, organizations become adaptive ecosystems—not rigid hierarchies.
They stop asking, “How do we control AI?” and start asking, “How do we grow with it?”
What This Means for Leaders
The leader of the future won’t be defined by title, tenure, or even decision-making speed. Instead, they’ll be distinguished by how well they navigate uncertainty, enable experimentation, and model continuous learning in an AI-shaped world.
Here’s how:
- Be a Learning Leader, Not a Knower
Leaders must show curiosity, not certainty. AI will constantly evolve—so leaders who model experimentation (even publicly) will set the tone for an adaptive culture.
- Make Prompting Strategic
Knowing how to ask the right questions—of both AI and humans—is a leadership superpower. Encourage teams to share their best prompts and prompt failures. Make it part of the leadership toolkit.
- Democratize Innovation
The best AI ideas may not come from the top. Encourage bottom-up innovation. Identify your “AI naturals” across the organization and give them space to experiment and influence.
- Build Model-Aware Teams
Train teams not just to use AI tools, but to understand how AI thinks, reasons, and errs. This is crucial to avoiding blind spots and developing truly responsible use.
- Create Psychological Safety for Experimentation
AI introduces a new pace of change. Leaders must make it safe to learn in public, share failed attempts, and iterate rapidly.
What This Means for Boards
For boards, AI isn’t just a tech trend—it’s a strategic, ethical, and governance challenge. Mollick’s insights carry major implications for how boards must evolve their oversight and foresight roles.
- AI Competence Is Now Board-Relevant
Boards don’t need to be technical experts—but they must understand the business implications of large language models, generative tools, and AI-enabled decision-making. Just as boards built financial literacy and cyber awareness, AI literacy is the next frontier.
- Reframe Risk and Resilience
AI changes the nature of operational, reputational, and strategic risk. Boards must ask: How is the company experimenting safely? Where are AI models deployed in ways that require oversight or auditability?
- Invest in Learning and Apprenticeship Pipelines
Boards should challenge executives: Are we still building talent pipelines, or has AI hollowed out our learning journeys? A reimagined apprenticeship strategy is critical to long-term value creation.
- Inquire About the AI Lab and Crowd Model
Is there space in the company for both breakthrough innovation and broad adoption? Boards should ask how the lab is supported and how the crowd is educated and empowered.
- Ask Deeper Questions, Not Just Faster Ones
Boards must role-model what Mollick implies: better prompting leads to better decisions. Curiosity, humility, and insight must be boardroom norms—not just speed and compliance.
Who Are Ethan Mollick and Joel Hellermark?
Ethan Mollick is an Associate Professor at The Wharton School of the University of Pennsylvania. He studies innovation, entrepreneurship, and more recently, the impact of AI on work, learning, and leadership. His Substack (One Useful Thing) and public talks have become go-to resources for leaders who want practical, credible, and up-to-date insights on how to integrate generative AI into real-world workflows.
What sets Mollick apart:
- He experiments with every tool he teaches.
- He promotes a balance between intelligent boldness and thoughtful constraint.
- He believes deeply in human-centered innovation, where AI doesn’t replace judgment—it helps build it.
Joel Hellermark is a Swedish entrepreneur and the founder of Sana, an AI-powered learning platform designed to personalize knowledge development at scale. While Mollick focuses on experimentation and intuition, Hellermark is pioneering the infrastructure that makes AI-augmented learning a daily habit.
Sana helps organizations:
- Provide contextual, personalized learning experiences based on role, needs, and behavior
- Use AI to accelerate learning curves in complex domains
- Ensure learning isn’t static or one-size-fits-all—but adaptive, intelligent, and evolving
Hellermark’s vision aligns beautifully with Mollick’s:
The future of work is not just AI-powered—it’s human learning augmented at every level.
Sana Labs is used by global companies to empower internal upskilling, learning-in-the-flow-of-work, and talent growth—exactly the kinds of strategies that support the AI-literate organizations Mollick describes.
A National Commitment to AI Literacy in Sweden
In a bold move to democratize access to AI knowledge, Sana recently announced that it is offering free access to its AI-powered learning platform to Swedish citizens.
The goal? To elevate AI literacy across Sweden—not just for tech workers or executives, but for every citizen willing to learn.
This initiative is:
- A strategic investment in Sweden’s future workforce and digital competitiveness
- A recognition that AI literacy must be a public good, not a privilege
- Aligned with broader national efforts to equip society with the tools to responsibly engage with AI
Hellermark’s vision is clear: in a world of accelerating change, the only sustainable advantage is the ability to learn faster than the world shifts. By opening up Sana Labs to the public, he’s making AI literacy a national movement.
Final Thought: The Age of Applied Intelligence
We are entering an era where the most valuable skill is not having all the answers—but knowing how to ask smarter questions.
Not just of each other, but of the intelligent systems that now help us think, create, and decide.
AI won’t replace leaders or talent.
But it will expose the limits of outdated leadership models, rigid learning systems, and static organizations.
Whether you follow Ethan Mollick’s path of hands-on experimentation or Joel Hellermark’s vision of scalable AI-powered learning, one truth is clear:
The future won’t be led by those who know the most—but by those who learn the fastest, adapt the deepest, and lead the most humanely.
This is not just the age of artificial intelligence.
It is the age of applied intelligence—where curiosity, trust, and human-centered growth are the real differentiators.
We don’t just need better AI strategies.
We need better human strategies—for prompting, for leading, for collaborating, and for evolving together.
The future belongs to those who don’t just use AI—but grow with it.
What You Can Do Next
Whether you’re a board director, executive, coach, or founder, here are a few immediate actions to align with Mollick’s call:
| Action | Why It Matters |
| --- | --- |
| Host a monthly AI-use case forum | Learn from your own teams. Encourage experimentation stories. |
| Launch an AI lab and rotate emerging talent through it | Builds both innovation muscle and future leaders. |
| Include “model intuition” as a learning objective | Move beyond “how-to” AI training. Promote critical AI judgment. |
| Pair senior leaders with junior AI-savvy employees | Foster reverse mentorship and fast-track leadership growth. |
| Celebrate AI-literate experiments—even failed ones | Shift the culture from perfection to discovery. |
See the interview of Professor Ethan Mollick by Joel Hellermark, founder and CEO of Sana Labs:
Every leader needs this AI strategy | Ethan Mollick explains
Most companies are using AI to cut costs. Ethan Mollick argues that the biggest mistake companies make is thinking too small. In the first episode of Strange Loop, Wharton professor and leading AI researcher Ethan Mollick joins Sana founder and CEO Joel Hellermark for a candid and wide-ranging conversation about the rapidly changing world of AI at work.
More actions:
- Explore dedicated workshops for your board, or coaching programs for Chairs and NEDs
- Explore Open Board Training Programs by Boards Impact Forum (Boards Oversight of Responsible AI for Value Creation), Scandinavian Executive Institute (Executive Board Program INSEAD), and Nordic IN-Board for INSEAD Alumni (IN-Board Nordic Academy 2025)
- Check out the Sana Learning Platform
- Listen to the podcast interviews on being digitally savvy with Stephanie Woerner, Academic Director of MIT CISR, and Charlotta Nilsson, COO and NED
Learn More
At Digoshen, we remain committed to supporting NEDs in this evolving landscape, ensuring AI, sustainability and governance go hand in hand.
More Webinars, Peer Exchanges and Events
Check our upcoming events, all with NED Guest Speakers and Peer Exchange:
June 26, 8:00–9:30 CET: The Future Boardroom – How to Transform in Turbulent Times, with Helle Bank Jorgensen, Founder and Director of Board Intelligence; Håkan Broman, Vice Chair of the Swedish Corporate Governance Code, CEO of the Swedish Academy of Board Directors, and professional NED; and Monica Lagercrantz, Founder and Board Director at Board Clic and international boardroom advisor.
Board Programs
Sign up for our programs, starting again in September and October 2025:
About Digoshen
This blog post was originally shared on the Digoshen blog www.digoshen.com, the Boards Impact Forum blog www.boardsimpactforum.com, and the blog of the Digoshen founder www.liselotteengstam.com.
At Digoshen, we work hard to increase #futureinsights and help remove #digitalblindspots and #sustainabilityblindspots. We believe that Companies, Boards, and Business Leadership Teams need to understand more about the future and the digital & sustainable world to fully leverage the potential when bringing their business into the digital & more sustainable age. If you are a board member, consider joining our international board network and master programs.
You are also welcome to explore the Digoshen Chatbot on AI Leadership for Boards and Boards Impact Forum, where the Digoshen founder is the Chair.
Find a link to the Google Scholar page of Digoshen Chair Liselotte Engstam.
You will find more insights via the Digoshen website, and you are welcome to follow us on LinkedIn (Digoshen @ LinkedIn) and Twitter: @digoshen, and the founder at @liseeng.