Accountability by design in the agentic organization

For decades, McKinsey has helped design organizational structures for value, with a focus on simplicity, speed, and effectiveness.

The AI era—including the potential for agentic workflows to take on complex tasks—represents a fundamental change. AI agents are not like enterprise resource planning (ERP) systems or robotic process automation (RPA) bots. They don’t just sit in the background executing code. They have the potential to perceive, reason, and act. They may interact with colleagues and customers. They may make decisions. In short, they often feel less like tools and more like team members.

These characteristics, combined with agentic autonomy, scalability, and a propensity to learn and evolve over time, mean that organizations risk growing confusion and “AI slop” without clear lines of accountability in place. Leaders may also be tempted to focus on value creation first and worry about accountability later. But without accountability, employees may not trust the agentic workflows they are asked to rely on, regulators may not approve, customers may not engage, and organizations could exacerbate their tech debt. Airline chatbot misinformation incidents, for example, underscore the risks of unsupervised AI output and the need for accountable human oversight.

By designing for value and accountability from the start, leaders can best capture the potential of human-agent collaboration in an AI-enabled future.

Design for value and accountability

The opportunity is enormous. Agentic organizations, freed from the constraints of functional hierarchies built to organize knowledge, may become flatter and more fluid. One manager with a small human team could orchestrate hundreds of agents acting autonomously in a workflow that cuts across silos, runs 24/7, scales on demand, and resolves dependencies independently through an agentic layer.

But here is the paradox: The more fluid work becomes, the more deliberate leaders need to be about accountability. Without clear, structural accountability, fluidity can turn into chaos. Designing for value and for risk are not separate efforts—they are parts of the same design challenge.

Three agentic archetypes

A key consideration as organizations roll out agentic workflows will be distinguishing between different agentic archetypes, which vary in autonomy and complexity. Broadly, these workflows can be categorized into three levels of autonomy:

  1. Human-led workflows, enabled by agents: Low-autonomy workflows where AI acts as a co-pilot or assistant, supporting human decision making.
    • Example: A beverage company used AI agents to assist in product development by gathering customer feedback and conducting sentiment analysis based on customers’ online posts.
      • Impact: AI agents reduced time spent on repetitive tasks like data collection and analysis, enabling product managers and development teams to focus on high-value activities such as strategic decision making and creative design. This led to a 60 percent faster time-to-market and the development of a portfolio of new products and experiences, contributing to growth and market share expansion.
      • Human involvement: Product managers used agent-generated insights to prioritize features and align them with business goals. Human teams designed the overall product vision, ensuring alignment with customer needs and brand identity. Teams also reviewed agent-generated reports, validated findings, and made final decisions on product changes or campaign strategies.
  2. Agent-led with humans in the loop: Agents take on more responsibility in the workflow but still rely on human judgment for critical decisions, such as legal review.
    • Example: A Fortune 500 homebuilder used agentic AI to enhance sales capacity during the home-buying process.
      • Impact: AI agents reduced administrative burdens on sales representatives, allowing them to focus on building relationships with customers and closing deals. This led to faster response times and improved customer satisfaction.
      • Human involvement: Sales representatives remained responsible for high-touch interactions, such as negotiating contracts and addressing complex customer needs. They also provided feedback to refine AI’s performance over time.
  3. Fully agentic workflows: At the highest level of autonomy, agents operate independently, often interacting directly with clients or systems.
    • Example: A grocery retailer deployed a multi-agent cart recommender system, increasing revenue by 5-10 percent.
      • Impact: AI autonomously recommended products to customers based on their preferences and purchase history, driving higher basket sizes and revenue growth.
      • Human involvement: While the system operated autonomously, human teams monitored its performance, adjusted algorithms to reflect seasonal trends or promotions, and ensured that recommendations aligned with business goals.
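The three archetypes above can be read as an escalating approval policy: who must sign off, and when, depends on the workflow's autonomy level. A minimal sketch of that gating logic (the names and structure here are illustrative, not taken from the source):

```python
from enum import Enum

class Autonomy(Enum):
    HUMAN_LED = 1      # agent assists; humans make the decisions
    HUMAN_IN_LOOP = 2  # agent acts; humans approve critical steps
    FULLY_AGENTIC = 3  # agent acts independently; humans monitor

def requires_human_approval(level: Autonomy, critical: bool) -> bool:
    """Return True when a workflow step must pause for human sign-off."""
    if level is Autonomy.HUMAN_LED:
        return True                      # every decision stays with a human
    if level is Autonomy.HUMAN_IN_LOOP:
        return critical                  # e.g., legal review, contract terms
    return False                         # fully agentic: monitored, not gated
```

Under this sketch, a contract negotiation step in an agent-led sales workflow (`critical=True`) would pause for a sales representative, while a routine product recommendation in a fully agentic workflow would not.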

As agentic workflows evolve over time, accountability structures will need to adapt to match their increasing autonomy and complexity. By clearly defining accountability for each workflow type and visualizing how workflows integrate into the organization, businesses can better manage risks, clarify responsibilities, and ensure seamless adoption.

Types of accountability

How should leaders approach accountability across these three archetypes? A useful lens distinguishes accountability for deploying an agentic workflow from accountability for configuring, tuning, and training it.

For human-led workflows (archetype one), we expect two distinct lines of accountability; in agent-led (archetype two) and fully agentic (archetype three) workflows, these lines are likely to be combined.

  • End-user accountability lies with the person or team deploying agents in a human-led workflow. The user owns the outputs and is accountable for oversight, alignment with goals, and compliance.
    • Using the example of a mid-sized e-commerce company that deploys a campaign management platform to run personalized email campaigns, marketing managers may be accountable for defining campaign objectives, approving AI-generated content, and ensuring alignment with the company’s brand identity and messaging.
  • Platform (build/train/tune/orchestrate) accountability sits with those who create, train, tune, or orchestrate the deployment of the agentic workflow. They are accountable for ensuring that accuracy, quality, ethics, and guardrails are built in. In every domain, at least one human role should oversee the deployment of agentic workflows, ensuring alignment with the domain’s specific needs and goals. This role may change over time, reflecting the potentially rapid evolution of agentic workflows. This trainer/tuner will also assess when an agent is ready to progress from “intern” to “supervisor,” adapting their oversight as agents become more capable.
    • In the e-commerce example, a marketing operations specialist may be accountable for tailoring and optimizing AI agents to meet campaign goals, such as adjusting targeted algorithms for seasonal trends or refining recommendations based on engagement metrics, ensuring the system stays effective and aligned with business objectives.
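One way to make both lines of accountability explicit is to record them per workflow, so that no agentic workflow can be deployed without naming both an end-user owner and a platform owner. A minimal sketch, assuming a simple registry pattern (the dataclass fields and role names are hypothetical, drawn from the e-commerce example above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkflowAccountability:
    workflow: str
    end_user_owner: str   # owns outputs, oversight, and goal alignment
    platform_owner: str   # owns build/train/tune/orchestrate and guardrails

def validate(record: WorkflowAccountability) -> None:
    """Fail fast if either line of accountability is missing."""
    if not record.end_user_owner or not record.platform_owner:
        raise ValueError(f"{record.workflow}: both owners must be named")

# E-commerce example from the text, with illustrative role names
email_campaigns = WorkflowAccountability(
    workflow="personalized email campaigns",
    end_user_owner="marketing manager",
    platform_owner="marketing operations specialist",
)
validate(email_campaigns)  # passes: both lines of accountability are explicit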

Both forms of accountability must be explicit. Without them, when things go wrong (and they will), it is unclear who should ensure that learnings are built back into the agentic workflow going forward. Visualizing both lines of accountability will be crucial for clarity.

Example visualization: platform-owner and end-user workflow accountability in OrgLab

There is a potential third type of accountability where an individual (typically from an IT central function) manages the building of in-house agents or curation of agents from external vendors, ensuring that they adhere to established standards, possess complementary features, and avoid duplication. This is less likely to require visualization in an org chart, as it is more akin to classic technology curation and build.

OrgLab

Design and implement a winning organizational structure

Wider impacts on organization design

As agentic workflows take over repetitive tasks, human roles will shift toward strategic oversight, creativity, and empathy. If humans are reduced to merely performing quality assurance on AI outputs, organizations risk losing their top talent very quickly.

Classical elements of organization design—like managerial spans and even org charts—are likely to evolve as organizations become more fluid, flatter, and outcome oriented (i.e., organized around outcomes and not knowledge hierarchies). Early in the journey, spans of control may narrow as leaders take on more direct roles, focusing on coaching their teams, managing a greater volume of work, and building critical relationships—reflecting the growing importance of “human” leadership. Over time, as organizations mature and adopt advanced tools (e.g., advanced synthesis and automated performance tracking), spans at senior levels may expand, enabling broader oversight. Progress on this journey is likely to be highly company specific and tied to transformation maturity, with change management remaining a critical unlock to realizing value.

In the AI era, accountability isn’t a compliance exercise—it is a strategic design choice. Leaders who embed accountability into structures will scale agentic workflows with trust, transparency, and confidence, ensuring teams are equipped to adapt and thrive alongside rapidly evolving agentic systems.

Agentic organization

Capture enterprise-wide value from AI and agentic technologies

This blog post is part of a People and Organization Blog series that explores how organizations will be transformed by agentic AI. Follow us on LinkedIn and keep an eye on the blog for our latest insights and how these technologies will shape organizations today and tomorrow.

Learn more about our People & Organizational Performance Practice