Organizations are investing billions in AI, and employees are increasingly using the technology. Yet only a small minority of companies are reporting meaningful or measurable gains from its use. It’s the gen AI paradox: The technology can be found nearly everywhere—except on the bottom line.
This is not an AI capability problem. We’ve created systems that can reason, create, and even act. Instead, it’s an experience problem: We’re stuck using search bars and chat boxes bolted onto interaction paradigms designed for a pre-AI era. If organizations are to realize AI’s potential, they must learn to create new kinds of AI experiences that employees and customers will enthusiastically embrace.
Doing so will require leaders to rethink a host of long-standing assumptions. For decades, software operated on a basic model: users specified structured inputs, and the system responded with structured outputs. Generative and agentic AI fundamentally break this model. Systems now interpret intent, generate novel outputs, and invite users to interact with and refine those outputs. This is a massive interaction shift: the interface is no longer a fixed set of “command and execute” controls; it is a “collaborate and iterate” model.
Yet most organizations are still designing for yesterday’s workflows—layering AI onto legacy systems rather than reimagining those systems. The result: promising tools remain disconnected and fail to deliver on transformation. The latest McKinsey Global Survey on the state of AI finds that most companies using AI remain in the pilot or experimentation phase, with two-thirds saying they have not yet begun scaling AI across the organization. The problem isn’t the models themselves—it’s that these tools exist outside the flow of work, forcing users into unfamiliar interaction patterns while offering little visibility into how decisions are made.
Fixing this will require more than training people to prompt better. It will mean designing systems that embed human judgment directly into the interaction model. In many of today’s AI tools, users tend to oscillate between accepting outputs uncritically and abandoning the tools when results disappoint. AI-native experiences must make collaboration, review, correction, and intervention feel like a natural part of the workflow.
What follows is a framework for designing the kinds of intelligent experiences that unlock AI’s full potential.
A new technology with new design challenges
Gen AI and agentic AI tools tend to move fast, often responding to queries in seconds with a voice that brims with confidence. But speed is not the same as comprehension, and confident language can mask shallow reasoning. This quickly becomes apparent when organizations attempt to build AI experiences that are transformative, whether for the enterprise or for consumers. In general, leaders encounter four key breakdowns that must be surmounted:
- Intent ambiguity: Failure to understand what users want. Even for the most skilled communicators, language can be messy, contextual, and often underspecified. Large language models can approximate meaning, but they cannot always infer the full intent behind a prompt, resulting in misunderstandings and inaccurate outputs. In addition, while some AI systems incorporate follow-up questions, many experiences still lack effective clarification loops. As a result, ambiguity is often left unresolved, and misinterpretations go uncorrected before the task is executed.
- Context gaps: Failure to know what information is required. Systems are not designed to identify, request, or retrieve the information required to perform a task thoroughly and accurately. While users trust the system to “know what it needs,” the AI often proceeds with only a partial understanding of the context. This shifts the burden to users to anticipate problems, requiring them to supply exhaustive details through lengthy prompts—which creates friction, inefficiency, and inconsistent results.
- Generic outputs: Failure to apply standards with specificity. Systems are not designed to learn and apply organizational standards. Users expect relevant, in-depth, and high-quality answers, but because the AI does not understand business-specific patterns and requirements, it delivers generic, disappointing results that require heavy editing.
- Noncollaborative iteration: Failure to evolve the work with the user. Systems aren’t designed to invite two-way collaboration into the process. AI is biased toward delivering outputs rather than thinking alongside—and genuinely collaborating with—its human users.
Without visibility into how decisions are made, why actions are taken, or when human input is required to generate optimal results, user trust never really develops. As a result, AI tools fail to scale, and meaningful, organization-wide impact remains elusive. This misalignment is not technical—it’s experiential. With AI tools, the interface is the collaboration layer between human judgment and machine intelligence—the zone in which intent is expressed, intelligence responds, and trust is built. But we’ve barely begun designing for it.
Designing intelligent experiences that scale
Generative and agentic AI introduce open-ended behaviors, ambiguity, variability, and probabilistic reasoning that traditional user experience patterns were never built for. For these systems to deliver, they will require a new vocabulary of AI-native design patterns. This shift builds on what the McKinsey report, The business value of design, made clear nearly a decade ago: Design is a strategic capability, not an aesthetic layer. But in the AI era, those principles must evolve. We must create with clarity to ensure that AI systems evolve with users, bring depth to workflows so outputs reflect real expertise, and orchestrate cocreation so people and AI agents shape the work together. Designing the right experience becomes the connective tissue between human judgment and machine intelligence—a place where work, meaning, and confidence converge.
Four design principles to drive effective AI-native experiences
Across our AI work with leading global organizations in operations, marketing and sales, and customer experience—in sectors such as banking, life sciences, and insurance—we have developed four design principles to guide this evolution (table). These principles address the everyday breakdowns that prevent AI from becoming a trusted partner and enable systems that are intuitive, collaborative, and truly impactful. When workflows are reimagined with these principles in mind, adoption accelerates and the value of AI is unlocked. Below, we explore these four principles, illustrating what they look like in practice through the story of how we helped a marketing organization redefine the way it creates campaigns.
| AI-era design principle | Description |
| --- | --- |
| Lead with clarity | Design systems that make their logic, assumptions, and outputs clear, enabling users to confidently understand the outputs |
| Design for continuity | Sustain context and memory across interactions to create coherent, personalized, and seamless experiences over time |
| Build for depth | Enable rich, multistep, domain-specific workflows that go beyond single interactions to support meaningful end-to-end outcomes |
| Orchestrate cocreation | Create environments where human expertise and AI agents collaborate fluidly—both in real time and across disciplines—to amplify impact |
1. Lead with clarity: Make intelligence explain itself
AI cannot earn trust if its logic and processes remain hidden. Systems must reveal how conclusions are reached, where uncertainty exists, and what trade-offs shaped the result. When reasoning becomes legible, people can engage with it, question it, and decide with confidence.
Example: A marketer asks an AI tool to suggest design and copy tweaks for a campaign. Instead of providing an immediate answer (such as specific design or copy suggestions), the AI asks clarifying questions, gathers detailed requirements, restates its understanding, and only then collaboratively works with the marketer to unpack the request.
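In code, a clarification loop of this kind can be sketched very simply: before generating anything, the system checks the request against the context it needs and surfaces follow-up questions for whatever is missing. The field names and campaign domain below are illustrative assumptions, not a real product API; in practice each step would wrap a model call.

```python
# Minimal sketch of a clarification loop (field names are hypothetical).
REQUIRED_CONTEXT = ["audience", "channel", "tone", "goal"]

def clarify(request: dict) -> list[str]:
    """Return one follow-up question per missing context field."""
    return [f"What is the intended {field}?"
            for field in REQUIRED_CONTEXT if field not in request]

def handle(request: dict) -> str:
    questions = clarify(request)
    if questions:
        # Surface the gaps instead of guessing: ambiguity is resolved
        # with the user before any output is generated.
        return "Before I suggest changes: " + " ".join(questions)
    return (f"Generating copy tweaks for a {request['tone']} "
            f"{request['channel']} campaign aimed at {request['audience']}.")

print(handle({"audience": "new subscribers", "goal": "retention"}))
```

The point of the sketch is the ordering: generation is gated behind an explicit check for understanding, rather than left to the model’s best guess.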
2. Design for continuity: Carry context forward
Work rarely happens in isolation, yet many AI systems behave as if every request is a fresh start. AI should recognize progress across users and steps, remembering what came before so it can anticipate what comes next. Continuity turns disconnected outputs into momentum.
Example: A marketing campaign AI tool supports analysts in testing concepts across multiple survey rounds. When Round 2 results arrive, the AI not only summarizes the new data but also automatically connects insights from Rounds 1 and 2. This ensures the next iteration builds on prior context—highlighting what is working, what is not, and noting what should change—rather than simply reacting to the latest results in isolation. The AI then delivers holistic recommendations grounded in cumulative learning, not single-point inputs.
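A toy sketch makes the continuity pattern concrete: a session object carries insights forward across rounds, so each summary is cumulative rather than a reaction to the latest round in isolation. The data shapes and round insights below are invented for illustration.

```python
# Toy sketch of cross-round continuity (data shapes are hypothetical).
class CampaignSession:
    def __init__(self):
        self.history: list[dict] = []  # one entry per completed round

    def add_round(self, round_insights: dict) -> None:
        self.history.append(round_insights)

    def summarize(self) -> dict:
        # Merge learnings across all rounds instead of reacting only
        # to the most recent one.
        working, failing = set(), set()
        for round_data in self.history:
            working |= set(round_data.get("working", []))
            failing |= set(round_data.get("failing", []))
        # A concept that later failed drops out of the "keep" list.
        return {"keep": sorted(working - failing), "change": sorted(failing)}

session = CampaignSession()
session.add_round({"working": ["short headline"], "failing": ["stock imagery"]})
session.add_round({"working": ["video teaser"], "failing": ["short headline"]})
print(session.summarize())
```

Note how the headline that looked promising in Round 1 is reclassified once Round 2 contradicts it; that revision is only possible because prior context is retained.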
3. Build for depth: Automate entire workflows rather than just provide answers
The real opportunity is AI’s potential to connect multistep processes that human workers follow instinctively—such as gathering data, applying logic, testing alternatives, and refining outputs. Depth transforms AI from a rapid respondent to a capable partner.
Example: A marketer initiates a research plan, and the system automatically assembles a team of specialized AI agents to act as a critique committee. Each agent analyzes the draft of the plan through its own lens—data, audience insights, competitive context, and creative quality—and provides reasoning, recommendations, and refinements in the form of a deeply reasoned, high-quality research plan.
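The critique-committee pattern can be sketched as a draft routed through several specialized reviewers, each applying its own lens and returning a recommendation. The lenses and keyword rules below are invented placeholders; in a real system each reviewer would wrap an LLM agent with its own instructions and context.

```python
# Minimal sketch of a multi-agent critique committee (rules are hypothetical).
def data_lens(plan: str) -> str:
    return "Data: specify sample sizes." if "sample" not in plan else "Data: OK."

def audience_lens(plan: str) -> str:
    return "Audience: define target segments." if "segment" not in plan else "Audience: OK."

def creative_lens(plan: str) -> str:
    return "Creative: add concept variants to test." if "variant" not in plan else "Creative: OK."

COMMITTEE = [data_lens, audience_lens, creative_lens]

def critique(plan: str) -> list[str]:
    """Collect one recommendation per reviewer lens."""
    return [reviewer(plan) for reviewer in COMMITTEE]

for note in critique("Survey two segments with three ad variants."):
    print(note)
```

The structural idea is that depth comes from composing several narrow, well-defined review steps into one end-to-end workflow, rather than asking a single model for a single answer.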
4. Orchestrate cocreation: Blend human judgment with machine intelligence
The future of work will depend on how effectively people and AI systems share responsibility. This goes beyond the notion of including a human in the loop. The goal is not for people to correct the system after the fact, but to design human–AI interactions that simplify, reimagine, and refine the work itself, in a way that improves with every interaction to drive real outcomes. AI systems must invite users to steer, revise, and debate, allowing solutions to emerge from collaboration rather than one-way generation.
Example: Rather than positioning AI as the primary author and the marketer as a downstream reviewer, this model reframes creation as a collaborative process. AI and human marketers generate in tandem, bringing distinct strengths—structural clarity and strategic framing from AI and contextual judgment and creative nuance from humans. The system then makes these strengths explicit, compares alternatives, and empowers the marketer to determine what works best. The final output blends both perspectives, resulting in higher-quality thinking, stronger outcomes, and a more constructive human–AI partnership.
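A minimal sketch of this cocreation model: rather than one-way generation, AI and human drafts are presented side by side with their distinct strengths labeled, and the human decides which perspectives survive into the final output. All names, labels, and draft text below are illustrative.

```python
# Toy sketch of side-by-side cocreation (labels are hypothetical).
def compare_drafts(ai_draft: str, human_draft: str) -> dict:
    """Present both drafts with their strengths made explicit."""
    return {
        "ai": {"text": ai_draft, "strength": "structural clarity"},
        "human": {"text": human_draft, "strength": "contextual judgment"},
    }

def blend(options: dict, keep: list[str]) -> str:
    # The marketer, not the system, decides which perspectives
    # make it into the final copy, and in what order.
    return " ".join(options[source]["text"] for source in keep)

options = compare_drafts("Save 20% this week.", "Your usual order, on us this Friday.")
print(blend(options, keep=["human", "ai"]))
```

The design choice worth noticing is that selection authority stays with the person: the system’s job is to make the alternatives and their trade-offs explicit, not to pick a winner.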
Building AI tools according to these four design principles enabled the organization to deliver higher-quality outcomes more efficiently. For example, when the clarity principle was applied to help regional store managers retrieve insights, allowing the AI tool to ask clarifying follow-up questions led nearly 75 percent of pilot users to express enthusiasm for the tool, resulting in greater adoption and an incremental market sales uplift of more than 2 percent. In another case, when designing an experience to prepare sales reps with better talking points, integrating a new tool with their existing systems without breaking context was rated the top desired AI feature by more than 90 percent of users, preserving both continuity and workflow. And when designing intelligent experiences for hotel managers, nearly all 180 pilot users reported higher trust and greater willingness to use the tool in their day-to-day work once the design began exposing the agentic AI’s reasoning flows. Across cases, experience design that follows the right AI design principles proved critical for driving adoption.
A new era for AI experiences
For decades, AI has operated quietly in the background, while users were trained to interact within narrow input–output constraints. Now those boundaries are shifting. We are beginning to understand what the new landscape demands: designing the experience architecture between people and intelligent systems will require a new mindset across the organization.
For leaders, it’s essential to set a clear vision for how AI will reshape the way your organization creates value. This is not about adding more tools but rather aligning technology, design, data, and operations around shared workflows. Leaders must create the conditions for cross-functional orchestration, because collaboration will determine whether AI will be a strategic asset or another pilot that never scales.
For designers, the scope is shifting from shaping interfaces to designing how people and systems work together. The work is no longer to make screens intuitive but to understand the flow of judgment, correction, and coordination across humans and AI agents. Designers must devise new interaction patterns that let teams share context, negotiate intent, and build confidence as work unfolds. The user is no longer just a person; it’s a network of people, tools, and intelligent agents.
For product managers, generative AI and agentic AI fundamentally shift the logic of product definition. Requirements become outcomes, not features, and interaction models are more adaptive and less deterministic. Leading a team will be a balance of navigating ambiguity while helping users acclimate to new forms of interaction. Measures of success will change from feature delivery to systems that learn, improve, and create value across the workflow. More so than ever before, product managers must understand business outcomes and drivers so they can cocreate a reimagined experience.
For technologists, the work ahead is not just algorithmic or about curating the right set of tools and platforms. Engineers, data scientists, and platform engineers must design for legibility, auditability, and alignment with human decision-making. This requires even deeper partnership with product managers, designers, and domain experts within the business. The task is no longer to build isolated models but to create intelligent systems that integrate and adapt yet remain governable.
Organizations that break through will not be the ones that chase better models. They will be those that fundamentally rethink the way work happens. Their advantage will come from the ability to design experiences that people trust, rely on, and choose to use. That is, the next frontier of AI will be about designing the architecture of collaboration—the systems that make intelligence understandable, governable, and usable at scale. It’s a natural extension of what leading organizations have understood for years: Design is not a layer of polish; it’s a key driver of performance.