Ushering in a new era of trusted AI

Each week seems to bring another announcement about new AI tools with the potential to reshape how organizations function. This progress has been accompanied by growing recognition of the importance of trusted AI, defined by the OECD as AI solutions that are reliable, transparent, fair, resilient, and accountable.1 The concept of trusted AI brings together technical robustness and security with ethical and social responsibility.

The rapid advancement and widespread adoption of AI have made data privacy and AI regulation critical topics on the global regulatory agenda. Yet the implementation of these guidelines and the regulatory landscape itself have become increasingly complex over the past several years.

Despite this higher regulatory burden, many organizations continue to depend on manual processes to implement regulatory and AI-related controls, adding layers of effort and cost. As McKinsey’s 2025 analysis of regulatory technology (RegTech) shows, financial institutions relying on manual compliance systems often fulfill only a fraction of their obligations, leaving them at higher risk of penalties and operational inefficiencies.

Striking the right balance in compliance is difficult. For example, a US-based bank’s legacy system met just 75 percent of requirements; the adoption of an automated RegTech solution that streamlined data mapping raised compliance to above 95 percent.

Given these competing challenges, companies should consider viewing compliance not as a cost but as an enabler of AI scale. This means adopting a more streamlined, scalable, and digital approach to regulatory adherence. Once organizations overcome some common obstacles in the evolving regulatory environment, six levers drawn from best practices can help them elevate their AI compliance efforts. Companies that get it right can not only unlock faster AI deployment but also reduce their risk.

Understanding the challenges

The widespread adoption of AI chatbots and voice bots that engage customers in natural, humanlike conversation, the rise of personalized, tailored marketing, and rapid advancements in generative AI embedded in new products and services have prompted regulators to establish clear guardrails, with the aim of giving customers and employees greater control over their data and preventing fraud. Regulations such as the General Data Protection Regulation (GDPR) and the European Union AI Act, as well as cybersecurity frameworks such as the Digital Operational Resilience Act (DORA), have emerged as key pillars. Similar initiatives have been introduced globally, including in Brazil, Thailand, and Türkiye, as well as by individual US states, such as the California Consumer Privacy Act (CCPA).

Despite the efforts of policymakers to provide clear guidance on compliance, the relationship between various regulations remains confusing. The result is that no universal standard exists to guide companies in simultaneously managing data privacy, verifying trusted AI, and promoting resilience. Conflicting requirements across jurisdictions further complicate compliance; even within regions such as Europe, interpretations of regulations such as the GDPR vary significantly between countries.

Organizations have tended to respond to these regulatory demands in an ad hoc manner. Often, impending compliance deadlines spur companies to reactively implement transparency measures and controls. For example, some organizations have deployed surveys and interviews to compile process repositories. The resulting fragmented and inconsistent documentation undermines the organization’s ability to maintain accurate, up-to-date compliance records and controls, as well as alignment across business units. Similarly, controls—such as those governing the use of personal data or adherence to the principles of the EU AI Act2—are frequently implemented as manual checks. Finally, companies often fail to assign clear responsibility for compliance, which ends up distributed across multiple departments, such as legal, IT, and compliance.

Adopting sustainable solutions for digital and AI compliance

To consistently achieve digital and AI compliance with emerging regulations, organizations should consider six levers.

1. Centralize compliance within existing teams

Compliance efforts should be incorporated into established processes and overseen by teams with a holistic view of the organization. For instance, data privacy teams could also take responsibility for compliance with the EU AI Act, or teams in charge of complying with ISO 9001 (a standard for quality management systems) could oversee the creation of process repositories. Centralizing oversight can eliminate duplicate and inconsistent documentation—for example, separate process representations for ISO 9001 and GDPR—thereby improving efficiency and reducing complexity. Such approaches are increasingly visible across industries, especially in banking, where we have seen several major institutions introduce centers of excellence that orchestrate data and AI responsibility among IT, business, and risk functions.

While execution necessarily spans legal, IT, data, and business functions, organizations should consider anchoring AI compliance accountability within the risk or compliance function, given its mandate for enterprise-wide control, regulatory interpretation, and independent oversight.

2. Digitalize controls through the use of legal tech tools

Many organizations are in the early stages of adopting legal tech tools, with factors such as cultural resistance, data security concerns, and skills gaps impeding integration. Still, these tools hold significant potential. As one example, generative AI tools can automate policy adherence checks, flagging exceptions for further review. Tools that identify personally identifiable information (PII) can reduce the time and errors associated with manual processes. A comprehensive review of existing controls, with a focus on automating repetitive tasks such as data retention management, can not only streamline compliance efforts but also enhance accuracy and consistency.
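To make the idea concrete, the minimal sketch below shows how a rule-based PII scan might flag records for human review rather than resolving them automatically. It is an illustration only: the patterns, record names, and PII categories are assumptions for demonstration, and production tools typically combine such rules with machine-learning-based entity recognition and far broader coverage.

```python
# Illustrative sketch: a minimal, regex-based PII scanner that flags records
# for human review. Patterns and sample records are assumptions for demonstration.
import re
from dataclasses import dataclass

# Hypothetical patterns for a few common PII types; the categories and rules
# would in practice be defined by the organization's data privacy team.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\(?\d{2,4}\)?[ -]?)?\d{3,4}[ -]?\d{4}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

@dataclass
class Finding:
    record_id: str
    pii_type: str
    excerpt: str

def scan_records(records: dict[str, str]) -> list[Finding]:
    """Scan free-text records and return findings that need human review."""
    findings = []
    for record_id, text in records.items():
        for pii_type, pattern in PII_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append(Finding(record_id, pii_type, match.group()))
    return findings

if __name__ == "__main__":
    sample = {
        "ticket-001": "Customer asked to delete her data; contact jane.doe@example.com.",
        "ticket-002": "Payment failed for IBAN DE44500105175407324931.",
    }
    for f in scan_records(sample):
        print(f"{f.record_id}: possible {f.pii_type} -> {f.excerpt}")
```

The value of such a check lies less in the detection logic itself than in routing exceptions into a consistent review workflow instead of relying on ad hoc manual spot checks.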

3. Simplify overarching AI governance structures

Many organizations have overly complex governance setups, which hinder compliance efforts. This is especially true for multinational corporations with numerous subsidiaries, joint ventures, and regional entities. Simplifying policy frameworks across legal entities and consolidating oversight under a single chief compliance officer can dramatically reduce complexity and facilitate data transfer and compliance checks. Organizations should consider moving toward a federated model in which the risk or compliance function provides enterprise-level governance and challenge, while legal, IT, data, and business teams own implementation across the AI life cycle.

Regulators themselves are increasingly acknowledging the need for simplification and encouraging companies to streamline their approaches to compliance. For instance, the European Commission’s Digital Omnibus initiative explicitly aims to reduce the administrative burden and overlapping requirements across digital, AI, and data laws, creating a more coherent and predictable regulatory landscape. Similarly, international policy bodies such as the G7 and OECD have emphasized the importance of harmonizing AI and data governance frameworks globally to minimize fragmentation and ease cross-border compliance.

4. Design change management processes for all employees

To facilitate the integration of compliance into daily operations, organizations need to prioritize change management for their workforce. Targeted training and awareness programs for employees can reinforce the importance of data privacy, ethical AI, and regulatory compliance in their daily roles. Tailored, role-specific guidance for functions such as IT, legal, and operations supports alignment with compliance requirements across the organization. For instance, we have seen a bank achieve a successful technological transformation by dedicating 34 percent of its investment to structured change management efforts.

In addition, fostering a culture of accountability and ethical decision-making, supported by a clear commitment and communication from leadership, helps weave these principles into the organizational fabric. Empowering employees to understand and embrace these responsibilities creates a workforce that actively supports compliance and resilience efforts.

5. Build resilient and ethical AI systems across the value chain

To ensure both resilience and ethical behavior, organizations must go beyond compliance controls to focus on the technical foundations of their AI systems. This lever includes designing technology architectures that are robust, scalable, and capable of withstanding disruptions such as cyberattacks, system failures, or regulatory changes.

Equally important is the development of ethical AI systems, which requires organizations to implement frameworks that promote transparency and fairness while mitigating bias. Such frameworks include the OECD AI Principles and the IEEE P7001 transparency standard. Critical components of this effort are regular audits of AI models, explainability mechanisms, and adherence to ethical guidelines throughout the AI life cycle. By addressing these technical aspects, organizations can build AI systems that both meet regulatory requirements and earn the trust of customers, employees, and other stakeholders.
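As an illustration of what a recurring model audit might include, the sketch below computes a simple demographic parity gap, that is, the difference in approval rates between groups, over a model’s decisions. The group labels, sample data, and the review threshold are assumptions for demonstration; the actual metrics and escalation criteria would be defined by the organization’s ethical AI framework.

```python
# Illustrative sketch: a basic fairness check computing the demographic parity
# gap (difference in approval rates between groups) on model decisions.
# Group names, sample data, and the 10-point threshold are assumptions.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """decisions: (group_label, outcome) pairs, where outcome 1 = approved."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """Largest gap in approval rates across groups (0 = perfectly even)."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical audit sample of model decisions.
    sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
              ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    gap = demographic_parity_gap(sample)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # assumed review threshold
        print("Gap exceeds threshold: escalate model for bias review.")
```

Demographic parity is only one of several possible fairness measures; the point is that audits run on a defined cadence with clear escalation thresholds, rather than as one-off assessments.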

6. Undertake continuous monitoring and improvement

Compliance is not a static goal but an ongoing process that requires continuous monitoring and improvement. Organizations can use AI tools to proactively identify and address potential risks in real time. Establishing feedback loops with employees, customers, and other stakeholders provides valuable insights into the effectiveness of compliance practices and opportunities for refinement. Periodic reviews of policies, controls, and governance structures help organizations keep pace with evolving regulations and business needs and maintain operational resilience over time.
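A simple way to picture continuous monitoring is a periodic check that compares recent control results against target thresholds and flags deteriorating controls for follow-up, as in the sketch below. The metric names and thresholds are assumptions; in practice, such checks would feed dashboards or ticketing workflows rather than print statements.

```python
# Illustrative sketch: compare recent control pass rates against targets and
# flag controls that need follow-up. Metrics and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class ControlMetric:
    control: str
    pass_rate: float   # share of checks passed in the last period (0-1)
    threshold: float   # minimum acceptable pass rate

def review_metrics(metrics: list[ControlMetric]) -> list[str]:
    """Return the controls whose pass rate fell below their threshold."""
    return [m.control for m in metrics if m.pass_rate < m.threshold]

if __name__ == "__main__":
    latest = [
        ControlMetric("PII access logging", 0.99, 0.98),
        ControlMetric("Model documentation completeness", 0.91, 0.95),
    ]
    for name in review_metrics(latest):
        print(f"Escalate: {name} fell below its target pass rate.")
```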

Next steps for implementation

To fully harness these levers, organizations could take the following steps:

  • Assess the current state. Conduct a thorough benchmarking exercise (such as McKinsey’s AI Trust Maturity Model) to compare the organization’s current setup against industry best practices. A full evaluation of the organizational structure, policy frameworks, and control mechanisms can highlight gaps between an organization’s AI impact aspirations and its AI trust and governance readiness.
  • Identify automation opportunities. Collect and review all relevant controls, along with their current implementation methods, to flag opportunities for automation. Organizations should prioritize controls that are time-intensive, error-prone, or repetitive. Strong candidates for automation are processes that occur on a defined schedule (such as monthly reviews, quarterly audits, or continuous monitoring) rather than one-off checks; a simple prioritization sketch follows this list.
  • Develop a unified program. Translate insights from the benchmarking and automation assessments into a comprehensive program, with a central oversight team in the lead. This program should focus on simplifying governance structures, digitalizing controls, and standardizing compliance efforts across the organization.
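To make the second step more concrete, the following sketch ranks a hypothetical set of controls as automation candidates by combining annual manual effort with error rates. The scoring heuristic, example controls, and figures are assumptions for demonstration; any real prioritization would draw on the organization’s own control inventory and effort data.

```python
# Illustrative sketch: a simple scoring heuristic to rank compliance controls
# as automation candidates based on the criteria named above (time-intensive,
# error-prone, repetitive). Example controls and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    runs_per_year: int     # how often the control is executed
    hours_per_run: float   # manual effort per execution
    error_rate: float      # share of executions with findings or rework (0-1)

def automation_score(c: Control) -> float:
    """Higher score = stronger automation candidate."""
    annual_effort = c.runs_per_year * c.hours_per_run
    return annual_effort * (1 + c.error_rate)

controls = [
    Control("Quarterly access review", 4, 40.0, 0.15),
    Control("Monthly data retention check", 12, 8.0, 0.30),
    Control("Annual policy attestation", 1, 20.0, 0.05),
]

for c in sorted(controls, key=automation_score, reverse=True):
    print(f"{c.name}: score {automation_score(c):.0f}")
```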

The era of trusted AI demands a fundamental shift in how organizations approach compliance. The six levers offer a path for organizations to transition from reactive, manual compliance processes to a proactive, digital-first approach that reflects the demands of the new regulatory landscape. These changes can also reduce the burden of compliance and position organizations to thrive in an increasingly regulated digital environment.

Henning Soller is a partner in McKinsey’s Frankfurt office, Anselm Ohme is a consultant in the Berlin office, Fares Darwazeh is an alumnus of the Riyadh office, and Thao Dürschlag is an associate partner in the Munich office.

1 “AI principles,” OECD, accessed February 20, 2026.
2 The act sets requirements for all companies that use, deploy, or integrate AI systems. The obligations center on governance, accountability, and oversight: ensuring the system’s performance, addressing bias and unfairness, maintaining human oversight, and being able to demonstrate compliance.