State of AI trust in 2026: Shifting to the agentic era

AI adoption is accelerating rapidly, with organizations moving beyond experimentation toward scaled deployment of gen AI and, increasingly, agentic AI across core business functions. But as AI systems take on greater autonomy—making recommendations, triggering actions, and interacting with other systems—the consequences of failure grow materially. In this environment, AI trust and the responsible AI (RAI) practices that enable trust are no longer a tangential concern but a foundational requirement for realizing the full potential of the technology.

Trust underpins two critical outcomes. First, it enables organizations to realize value from AI investments by supporting sustained adoption and integration into core workflows. Second, it is essential if organizations are to successfully manage an expanding and evolving risk landscape. In the age of agentic AI, organizations can no longer concern themselves only with AI systems saying the wrong thing; they must also contend with systems doing the wrong thing, such as taking unintended actions, misusing tools, or operating beyond appropriate guardrails.

To understand how organizations are responding to this shift, McKinsey conducted the 2026 AI Trust Maturity Survey. Fielded between December 2025 and January 2026, the survey gathered responses from approximately 500 organizations across industries and regions; respondents hold direct responsibility for, or expertise in, AI governance, risk management, or AI investment decisions. Their responses were assessed using the McKinsey AI Trust Maturity Model, a framework based on five dimensions of RAI: strategy, risk management, data and technology, governance, and, new this year, agentic AI governance and controls, reflecting the growing importance of governing increasingly autonomous AI systems (Exhibit 1). We assess RAI maturity across four levels, from the development of foundational RAI practices to the implementation of a comprehensive and proactive program.

The AI Trust Maturity Model is a responsible AI framework that encompasses five dimensions.
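
To make the scoring concrete, here is a minimal sketch of how a composite maturity score could be rolled up from the five dimension scores. It assumes a simple unweighted average over the 1–4 scale; the equal weighting, function names, and example scores are our hypothetical illustration, not the survey's published scoring methodology.

```python
# Illustrative sketch only: the equal-weight averaging below is a
# hypothetical assumption, not the survey's published scoring method.
DIMENSIONS = [
    "strategy",
    "risk_management",
    "data_and_technology",
    "governance",
    "agentic_ai_governance_and_controls",
]

def composite_maturity(scores: dict[str, float]) -> float:
    """Average the five dimension scores, each on the 1-4 maturity scale."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Missing dimension scores: {missing}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical organization: solid data foundations, weak agentic controls
example = {
    "strategy": 2,
    "risk_management": 3,
    "data_and_technology": 3,
    "governance": 2,
    "agentic_ai_governance_and_controls": 1,
}
print(composite_maturity(example))  # 2.2
```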

At a glance: Key insights from the 2026 AI Trust Maturity Survey

The survey uncovers ten key insights about AI trust, which fall into three themes: the current state of AI trust, emerging risks and challenges, and how organizations are responding to close gaps and enable scale.

State of AI trust today

Maturity has improved on the whole, but significant variance still exists, and companies are increasingly recognizing the importance of RAI investment:

  1. RAI maturity continues to improve, yet strategy, governance, and agentic AI controls lag behind, with only about 30 percent of organizations reaching a maturity level of three or higher in these dimensions.
  2. RAI maturity varies by industry and region: Asia–Pacific leads globally, and technology, media, and telecommunications and financial services outperform other sectors.
  3. Investment in RAI is strongly associated with higher RAI maturity and realized value.

Emerging risks and challenges

Increased adoption, or even consideration of adoption, is leading to more risks and challenges, as well as declining confidence in handling these risks:

  4. Security and risk concerns are the top barrier to scaling agentic AI.
  5. Inaccuracy and cybersecurity remain the most frequently cited AI risks as adoption expands.
  6. Active mitigation lags behind risk awareness across nearly every AI risk category.
  7. AI incident frequency remains stable, but confidence in organizational response has declined.

How enterprises and organizations are responding

As organizations implement RAI, they are facing barriers and differences in maturity, even as RAI increasingly becomes a business enabler:

  8. Knowledge and training gaps are the leading barrier to RAI implementation.
  9. Organizations with explicit accountability for RAI achieve higher maturity scores than those without clear accountability.
  10. AI trust is increasingly viewed as a business enabler rather than a compliance exercise.

The state of AI trust today: Scrambling to keep up with the pace of change

The AI Trust Maturity Survey provides a detailed view into how organizations are approaching issues of trust and responsibility. Below, we examine these ten insights in detail, looking at overall maturity levels, how progress differs across industries and regions, and how investment in RAI efforts relates to both trust capabilities and realized AI value.

Insight 1: RAI maturity continues to improve, yet strategy, governance, and agentic AI controls lag behind

The average RAI maturity score increased to 2.3 in 2026, up from 2.0 in 2025. However, only about one-third of organizations report maturity levels of three or higher in strategy, governance, and agentic AI governance (Exhibit 2). This imbalance suggests that while technical and risk management capabilities are advancing, organizational alignment and oversight structures are struggling to keep pace with the rapid expansion of AI use.

Responsible AI maturity is improving, but strategy, governance, and agentic AI governance and controls are lagging behind.

Insight 2: RAI maturity varies by industry and region

Technology, media, and telecommunications and financial services continue to lead in RAI maturity, driven by stronger risk management and data foundations (Exhibit 3). Regionally, Asia–Pacific leads overall maturity, while governance and agentic AI controls lag behind data and technology across all regions, indicating a globally consistent governance gap (Exhibit 4).

Responsible AI maturity varies by industry, with technology, media, and telecommunications and financial services leading.
Responsible AI maturity varies by region, with Asia–Pacific leading overall.

Insight 3: Investment in RAI is strongly associated with higher RAI maturity and realized value

Organizations investing $25 million or more in RAI initiatives report significantly higher maturity scores and are far more likely to realize material AI benefits, including EBIT impact above 5 percent (Exhibit 5). This relationship reinforces that RAI investment is not a tax on innovation but a key enabler of sustained value creation.

Investment in responsible AI is associated with greater realized AI value and higher responsible AI maturity.

Emerging risks: The key challenges organizations face today

The findings also illuminate the evolving risk landscape, including barriers to scaling agentic AI, the risks that organizations prioritize, and the disconnect among risk awareness, mitigation efforts, and incident preparedness.

Insight 4: Security and risk concerns are the top barrier to scaling agentic AI

Nearly two‑thirds of respondents cite security and risk concerns as the top barrier to fully scaling agentic AI, well ahead of regulatory uncertainty or technical limitations (Exhibit 6). This suggests that organizations are constrained less by their capacity to experiment and more by a lack of confidence in their ability to deploy autonomous systems safely at scale.

Security and risk concerns are the most frequently cited obstacle to fully scaling agentic AI.

Insight 5: Inaccuracy and cybersecurity remain the most frequently cited AI risks as adoption expands

As AI adoption grows, 74 percent of respondents identify inaccuracy and 72 percent cite cybersecurity as highly relevant risks (Exhibit 7). These risks remain foundational concerns even as newer agentic risks emerge, highlighting that organizations must manage both traditional model risks and the expanded threat surface introduced by autonomy.

As AI adoption expands, inaccuracy and cybersecurity stand out as top-of-mind risks.

Insight 6: Active mitigation lags behind risk awareness across nearly every AI risk category

Across almost all risk types, respondents report a meaningful gap between the risks they consider relevant and those they are actively mitigating (Exhibit 8). This gap is especially pronounced for intellectual property infringement and personal privacy, suggesting that risk awareness is outpacing the implementation of controls, processes, and tooling needed to manage it effectively.

Active mitigation lags behind perceived relevance across nearly every risk category.

Insight 7: AI incident frequency remains stable, but confidence in organizational response has declined

The share of organizations reporting AI-related incidents has remained steady at roughly 8 percent, but perceptions of incident response quality have deteriorated. Almost 60 percent of respondents who experienced incidents rate their organization’s response as merely satisfactory or worse, indicating that while incidents may not be increasing, preparedness and response capabilities are failing to keep pace with growing system complexity (Exhibit 9).

The number of AI incidents has remained constant since 2025, but confidence in the response to such episodes has declined.

How enterprises and organizations are responding to emerging risks and challenges

In response to emerging risks and challenges, organizations are working to strengthen the foundations of AI trust—closing capability gaps, clarifying accountability, and building the RAI capabilities needed to ensure that trust accelerates innovation rather than constrains it.

Insight 8: Knowledge and training gaps are the leading barrier to RAI implementation

Nearly 60 percent of respondents cite knowledge and training gaps as the primary barrier to implementing RAI practices, up from about 50 percent last year (Exhibit 10). While executive support has improved, the data suggests that organizations continue to struggle with building the skills, awareness, and operational muscle required to embed RAI consistently across teams.

Knowledge and training gaps and resource and budget constraints are the largest barriers to implementing responsible AI measures.

Insight 9: Organizations with explicit ownership for RAI have higher maturity than those without clear accountability

Organizations that assign clear ownership for RAI—particularly through AI‑specific governance roles or internal audit and ethics teams—exhibit the highest maturity, with an average score of 2.6. In contrast, organizations without a clearly accountable function lag materially behind (scoring an average of just 1.8), reinforcing the importance of explicit ownership and decision rights (Exhibit 11).

Explicit accountability for responsible AI is associated with greater responsible AI maturity.

Insight 10: AI trust is increasingly viewed as a business enabler rather than a compliance exercise

Respondents report improvements in business outcomes, operational efficiency, and customer trust more frequently than negative outcomes (Exhibit 12). At the same time, the perceived influence of some regulatory frameworks has declined, suggesting a shift from compliance‑led motivation toward value‑ and performance‑driven adoption of AI trust.

AI trust is increasingly viewed as a business enabler.

Looking ahead: Trust as the enabler of scale

As AI systems become more autonomous and embedded in critical workflows, gaps in governance and risk management will become increasingly costly. Organizations that fail to establish clear accountability, robust controls, and effective monitoring mechanisms risk slower adoption, higher incident impact, and diminished stakeholder trust.

Conversely, organizations that treat AI trust as a core business capability, rather than as a compliance requirement, are better positioned to scale AI adoption to its full potential. These capabilities cannot simply be bought and installed; they require a concerted combination of policies, processes, people, and technology to build the foundations needed for agents, robots, and people to work together in new ways. Organizations that build a trustworthy innovation engine early will be the ones that capture long-term value from AI in the agentic era.

Gabriel Morgan Asaftei is a partner in McKinsey’s New York office, where Abby Sticha is a consultant; Roger Roberts is a partner in the Bay Area office; and Cécile Prinsen is an associate partner in the London office.