In January, I sat down with David Green, host of the Digital HR Leaders podcast, to talk about what happens when work is no longer just human to human, but human and agent. We covered a lot of ground: how work and organizational models are changing, how to focus on distinctively human capabilities, and what agentic AI means for leadership, culture, and strategic workforce planning.
I opened with three propositions: technology democratizes; technology reduces friction; and we must hold onto what is human in a world with more technology embedded in our lives. We all want meaning, purpose, and belonging in our work, and that won't change. I also explored four tensions that I think leaders need to confront as agentic AI reshapes their organizations.
Agentic AI isn't just a tool upgrade; it’s a challenge to the operating assumptions that organizations are built on. At the same time, the technology remains uneven in practice—introducing new and amplified risks around reliability, oversight, and unintended consequences. Leaders see the potential and the first proof points of value capture. The problem is that they are trying to create agency in systems explicitly designed for human control.
The four tensions I described on that January podcast remain central, and now, just a few months later, the questions around them have intensified.
1. Your org chart assumes only humans can act
Most enterprise operating models assume that humans remain firmly at the center as they initiate, decide, and escalate. Technology plays a supporting role.
Agentic AI upends that premise. It can initiate actions, sequence work, coordinate across functions, and adapt in real time. It becomes a participant in execution, not just an assistant. But most organizations are embedding agentic systems into hierarchies designed for human gatekeeping—command-and-control structures where decisions wait for steering committees and approval chains, even when the right action is clear.
Corporations originally emerged to reduce transaction costs in the market. In many large organizations today, internal bureaucracy creates more friction than the market itself. Agentic systems brutally expose that: they remove handoffs, eliminate delays, and surface redundant controls. When AI can act without waiting for human permission, vague decision rights and implicit accountability become liabilities.
2. The AI is working—the workflows aren’t
Many organizations face a familiar paradox: AI adoption is high, yet bottom-line impact remains elusive (and in some cases, early deployments are being scaled back as expected gains fail to materialize). The issue is rarely model capability; it's work design. In most enterprises, work was built for sequential, human-driven execution with handoffs at every stage. Every transition creates delay and ambiguity.
When agentic systems orchestrate end-to-end workflows, they expose those inefficiencies. What organizations tolerated for years becomes visible in weeks. The response cannot be to layer AI onto legacy processes. Organizations must start with outcomes and work backward, redesigning how humans and AI collaborate. That means asking: What is the work to be done? Which activities are best automated—recognizing that agentic AI is not the hammer for every nail? Where does augmentation create leverage? And where is human judgment irreplaceable?
3. Don’t look to leaders to have all the answers
There are two competing visions of the future for managers. In one, fewer managers are needed because AI reduces coordination overhead. In the other, everyone becomes a manager because execution is increasingly handled by agents, and what remains is system oversight.
I lean toward the second. As agentic systems absorb task-heavy work, value shifts from execution to judgment, from doing to designing, and from control to stewardship. The question becomes less about the number of managers and more about what only they can contribute. Leading in this environment means designing the conditions under which good decisions emerge—whether made by humans, agents, or both. This requires clear boundary conditions, explicit goals, feedback loops, and a culture of experimentation. AI fluency is the foundation.
Leadership also becomes harder. Leaders must drive productivity while unleashing innovation, move fast while managing risk, and automate aggressively while keeping humans at the center of meaning and purpose. They are leading into uncertainty and designing for a world that hasn’t been shaped yet.
4. You’re evaluating the wrong half of the team
Most performance systems still evaluate individual human contributions. But in an agentic organization, outcomes are increasingly coproduced by human–AI systems. So who, exactly, are you evaluating—and on what basis?
When performance emerges from the interaction of human judgment and machine action, the focus must shift. The core capability becomes judgment. When do you trust the system? When do you intervene or redesign it? Some organizations are already incentivizing managers based on system performance or how effectively their hybrid teams improve over time. If your performance model rewards only visible human activity, you will underinvest in system learning.
What this means for organizations
I offer these tensions as points for reflection, not as prescriptions. None of us has the answers yet. But several implications are becoming hard to ignore.
Strategic workforce planning, as we knew it, is dead. Not because planning is irrelevant but because static, point-in-time planning assumes a stable future that no longer exists. “Workforce” now includes automated work, augmented roles, and hybrid systems. The planning mindset must shift from prediction to adaptability, from point solutions to scenarios, and from static models to continuous sensing. The most advanced organizations are asking how they would design work from scratch with the full technology tool kit available.
The real risk isn't moving too fast; it's moving with the wrong structure. Organizations that treat agentic AI solely as a technology deployment will leave value on the table. Those that treat it as an operating model transformation have a chance to reshape their competitive advantage. For every dollar invested in technology, disproportionate investment must go into employees' AI fluency, systems thinking, and complementary skills. Value capture is behavioral before it is technical.
The questions leaders are asking have changed. In January, the dominant question was: “How do we deploy AI?” Now I hear: “How do we redesign work, not just add tools to it?” and “How fast do we need to move on reskilling—and for whom?” The data suggests 75 percent of workers will need their roles reconfigured. That’s not a training program; it’s a structural transformation. I’m also asked, “What kind of organization are we actually building?” That’s the question I find most encouraging because it means leaders are starting to see that you can’t separate the technology decision from decisions about people and strategy.
On the podcast, I said I hoped 2026 would be the year we moved from productivity to innovation. I'd add a caveat now: I underestimated the transition costs for employees, including the cognitive load of constant change and the compounding of fear, not just from technology but from the world we live in. Those worries are entirely understandable when technology changes what your work looks like.
The organizations that navigate this well will build change into how they operate—as an organizational capability, not a bolt-on program. They’ll invest in resilience as seriously as they invest in technology. And they’ll create space for people to learn by doing, to experiment, and to fail safely.
