Purpose, Power, and Humanity in an Age of Intelligent Systems
Something subtle is changing in how leadership is experienced.
Not in the language of quarterly results or transformation roadmaps, but in the quieter moments — when leaders pause mid-sentence, sensing that the old answers no longer quite fit the questions being asked.
Artificial intelligence is becoming more autonomous. Cooperation feels harder. Work is shifting faster than identities can form. Well-being, once peripheral, now sits at the centre of organisational strain. And beneath it all runs a deeper unease: what happens to human purpose when intelligence accelerates faster than meaning?
This is not only a story of disruption. It is a story of choice.
As leadership moves beyond control, responsibility cannot remain implicit or informal. It must be structurally owned. This is where people governance becomes essential — not as an HR concern, but as a core board responsibility. By dedicating explicit agenda space, roles, and guidance to people-related strategic decisions, boards translate ethical intent into accountable governance.
When Intelligence Grows
As intelligent systems move from assisting to acting — planning, deciding, and executing across digital and physical domains — leadership enters a new phase. The central challenge is no longer technical capability, but direction.
Machines can optimise and be designed to act in caring ways, but the definition of care — and responsibility for it — remains a human choice. They can act, but accountability for their actions cannot be delegated. Purpose does not emerge from code; it must be chosen.
In AI-enabled organisations, trust is shaped less by vision statements than by coherence — the alignment between what leaders say, what systems enable, and what people experience daily. People governance treats this coherence as a governance responsibility: something to be monitored, questioned, and reviewed over time, especially as technology reshapes work, roles, and power.
Leadership today is therefore less about staying ahead of technology and more about anchoring power in human intent. Without that anchoring, intelligence drifts. And drifting power — history reminds us — rarely leads somewhere we would consciously choose.
How Progress Really Happens
There is a persistent belief that smarter systems automatically produce better outcomes. In practice, technology amplifies whatever culture it touches.
Where questioning is welcomed, learning compounds. Where dissent is tolerated, explanations improve. Where errors can be examined without fear, progress accelerates. But where certainty hardens, where authority replaces curiosity, or where silence becomes a survival strategy, even the most advanced systems reproduce stagnation at scale.
Progress, then, is not about having the right answers in advance. It is about creating environments where better answers can emerge over time. AI does not change that responsibility — it intensifies it.
Culture, however, is not shaped by structures and incentives alone. It is shaped just as powerfully by everyday moments of recognition — or the lack of it.
As complexity rises, leaders often feel pressure to persuade: more evidence, clearer logic, tighter arguments. Yet resistance rarely stems from lack of information. It stems from lack of recognition.
People disengage when their experience is dismissed or when change feels imposed rather than understood. They protect themselves — quietly, efficiently, and often invisibly.
Real influence begins earlier, and more simply: when someone feels genuinely seen. When their perspective is acknowledged as coherent, even if it is not shared. This recognition lowers defensiveness and creates the conditions for learning — the same conditions progress depends on.
In periods of transformation, especially those shaped by AI, this becomes a leadership capability, not a personal style.
Power Is No Longer Individual
Complexity has outgrown individual brilliance.
The challenges shaping organisations today — from AI governance to geopolitical shifts, from workforce reinvention to planetary limits — cannot be solved from a single vantage point. Leadership increasingly becomes the work of orchestrating collective intelligence.
This is a shift from asking How do I solve this? to asking Who needs to be involved for this to be solved well?
Responsibility does not diminish in this transition — it deepens. Leaders become accountable for the quality of inquiry, the diversity of perspectives invited, and the conditions under which people can contribute meaningfully.
At the same time, leadership always operates through stories.
Every organisation runs on narratives. Some are explicit; others operate quietly, through what is normalised, minimised, or left unnamed.
Whose experience counts as evidence?
Which concerns are reframed as resistance?
Which harms are absorbed as “necessary trade-offs”?
These choices shape culture as much as formal decisions do. Silence, too, is a form of authorship.
As intelligent systems learn from historical patterns, unexamined narratives risk becoming embedded in code — scaled without reflection. Leadership therefore carries a narrative responsibility: to surface assumptions before they harden into systems, and to decide consciously which futures are being legitimised.
The Limits That Matter
For a long time, well-being was treated as an individual concern or a discretionary benefit. At the same time, digitalisation created the illusion that constraints — human, organisational, planetary — were dissolving.
Both assumptions are collapsing.
Burnout, disengagement, and erosion of trust are not personal shortcomings — they are signals of systemic imbalance. They tell us that organisations, like societies, cannot outrun human limits without consequence. Well-being is not ornamental. It is infrastructure.
Reality continues to push back.
Geography still shapes supply chains. Climate still sets limits. History still influences alliances. Planetary boundaries do not negotiate. Digital abstraction may obscure these forces, but it does not remove them.
Leadership untethered from either human or systemic reality drifts into abstraction — and abstraction is a form of risk.
Effective leadership holds these constraints together: human capacity and planetary boundaries, ambition and endurance, innovation and consequence. Well-being is not the opposite of growth. It is what allows growth to persist without consuming the people — and the conditions — meant to sustain it.
Hope as Capacity
Hope, in this context, is not a feeling that rises and falls with circumstances, nor optimism that things will somehow work out. It is a capacity that can be cultivated — individually, collectively, and institutionally.
It begins with the capacity to stay curious under pressure. When systems grow complex and futures become ambiguous, hope allows leaders to remain open rather than defensive, to resist the instinct to protect certainty, and to keep asking what might still be learned.
Hope also expresses itself as the courage to ask deeper questions rather than chase faster answers. In moments of urgency, speed can feel like competence. Yet leadership grounded in hope knows that premature answers often close off insight.
At a relational level, hope is the willingness to distribute intelligence instead of hoarding control. It treats others not as risks to be managed, but as contributors to be trusted. In complex environments, no single perspective is sufficient.
Hope is also the discipline to anchor power in purpose. As technological capability expands, the temptation is to equate what can be done with what should be done. Hope interrupts that reflex.
Finally, hope carries the humility to remember that while machines may expand our options, humans remain responsible for direction. Intelligent systems can generate possibilities, optimise pathways, and act at scale — but they cannot choose values.
Understood this way, hope is not passive or sentimental. It is enacted daily through choices: the questions leaders ask, the voices they include, the narratives they legitimise, and the boundaries they set.
An Invitation to Reflect
These questions are not meant to be answered quickly.
They are meant to be lived with — individually, in leadership conversations, and in boardrooms.
For the Individual
- Where do I reach for certainty or speed when curiosity would serve better?
- Which assumptions do I protect — and what might open if I questioned them?
- Where must I exercise human judgment rather than defer to optimisation?
- How do I anchor my power — formal or informal — in purpose rather than position?
- What does hope as capacity look like in my daily leadership?
For Leaders and Leadership Teams
- What aspects of our culture are being amplified by technology — and which should not be?
- Where might people be disengaging quietly because they feel unseen?
- How often do we solve problems ourselves instead of orchestrating collective intelligence?
- What deeper questions are we bypassing by moving too quickly to answers?
- How do we keep autonomy — human or AI-driven — aligned with shared purpose?
For Boards
- How clear is the human purpose guiding our use of AI and autonomous systems?
- What narratives are shaping strategy, risk, and trade-offs at board level?
- How do we treat organisational well-being as infrastructure for long-term value?
- Where do external constraints challenge our growth assumptions?
- How do our governance practices reinforce stewardship rather than control?
These questions do not offer comfort.
They offer orientation — toward what is worth sustaining and who we are becoming.
Where This Reflection Continues
This essay is part of a wider exploration of leadership, humanity, and hope in an age of intelligent systems.
The questions raised here also shape Grounded Becoming — a reflective space for slowing sense-making, reconnecting with what matters beneath action, and cultivating hope as a lived capacity.
They find visual expression in Radiant Constellations, the art exhibition in Paris, where agency, interconnection, and responsibility are explored through image, light, and relation rather than argument.
They are also anchored in research and practice through the book on AI leadership for corporate boards, and carried forward in the upcoming board programme on AI leadership, where reflection turns into governance practice.
Across these forms, the underlying inquiry remains the same:
How do we remain human and accountable as intelligence accelerates?
How do we lead in ways that honour both possibility and consequence?
And how do we design systems worthy of the futures they help create?
This is not a linear journey.
It is a constellation — inviting continued reflection, dialogue, and stewardship.
References and further reading
Snabe, J.H. (2026). The Next Wave of Intelligence: How Human Purpose Must Guide the Future of AI. World Economic Forum, 9 Jan 2026. Argues that human purpose and leadership are essential to guide autonomous AI systems responsibly.
Markovitz, G. (2025). Five Defining Questions for 2026 That Leaders Will Address at Davos 2026. World Economic Forum, 31 Dec 2025 (updated 8 Jan 2026). A roundup of the key global questions framing discussions for the year ahead.
Pollenne, D., Snellman, K. & Li, E. (2026). Rethinking Organisational Well-Being. INSEAD Knowledge, 6 Jan 2026. Reframes well-being as foundational infrastructure for organisational resilience and performance.
Conversano, B. & Satopaa, V. (2025). AI Transformation Is Not About Tech. INSEAD Knowledge, 13 Oct 2025. Argues that successful AI transformation depends on culture and value creation, not technology alone.
Olbert, S. & Lehman, R. (2025). The Power of Deep Leadership Inquiry. INSEAD Knowledge, 29 Sept 2025. Explores why deep inquiry and the right questions are foundational to meaningful leadership.
The People Governance Institute (2025). The People Governance Charter: Strengthening Strategic Corporate Governance. Brussels: H-User Institute, in partnership with ecoDa (European Confederation of Directors’ Associations).
Torre, F., Engstam, L., Teigland, R. & Shekshnia, S. (2025). AI Leadership for Corporate Boards: Responsible AI Governance in Practice. Explores how boards can guide, govern, and oversee AI responsibly by shaping strategy, culture, and accountability for long-term value creation.
Deutsch, D. (2011). The Beginning of Infinity. Progress is driven by curiosity, explanation, and the freedom to question assumptions.
Fleck, N. (2025). Validation: The New Psychology of Influence and Recognition. Recognition and psychological validation are essential to trust, learning, and openness to change.
Sullivan, D. & Hardy, B. (2020). Who Not How. Leadership accelerates by mobilising collective intelligence rather than solving everything individually.
Marshall, T. (2023). The Power of Geography. Geography and physical constraints continue to shape global and organisational realities despite digitalisation.