AI tools are now widely used in software development, but questions around trust, scale, and real productivity impact are still not fully resolved. McKinsey Senior Partner Martin Harrysson and Partner Prakhar Dixit spoke with Prashanth Chandrasekar, CEO of Stack Overflow, about what he is observing across the global developer community and among enterprise users. Chandrasekar shared his perspective on why adoption has outpaced trust, how knowledge quality and management are becoming central to scaling AI effectively, and how these dynamics are shaping developer sentiment and evolving talent models.
This interview has been edited for clarity and length.
Prakhar Dixit: To start, could you share a bit about your journey at Stack Overflow and how the landscape has changed since you joined?
Prashanth Chandrasekar: I joined Stack Overflow as CEO in 2019, 11 years into the company’s history. We host one of the world’s largest software developer communities, serving roughly 100 million users globally, with about 83 million questions and answers across every technology topic.
Over my tenure, the focus has been on transforming Stack Overflow into more of an enterprise company. We launched Stack Internal, the enterprise version of Stack Overflow, which is now used by around 20,000 companies to manage internal knowledge and help teams operate with accurate information. That has become even more important with AI’s rise, because the value of AI tools depends on the quality of the information they consume and act on.
We’ve also continued to evolve our public platform by incorporating AI and forming strategic partnerships.
Martin Harrysson: From the perspectives of the two user bases you have—the public developer community and enterprise customers—where is AI being used in software development today, and where is it heading?
Prashanth Chandrasekar: Across both perspectives, AI in software development is already widely used, but it is not yet widely trusted. Adoption has moved faster than confidence, and that gap is shaping how quickly AI can scale.
On the public side, our annual Stack Overflow Developer Survey shows that roughly 80 to 85 percent of developers are using or considering AI tools,1 yet only about 30 percent trust AI-generated outputs—down from about 40 percent the year before.2 Developers are experimenting because the promise of AI is high, but their confidence in AI’s reliability has not kept pace.
On the enterprise side, the pattern is similar. Early pilot teams adopted AI enthusiastically over the past 12 to 18 months. As organizations attempt to scale usage more broadly, they are confronting the practical realities of deploying AI at scale in a trustworthy way.
Prakhar Dixit: Productivity has long been debated in software engineering. How has that conversation evolved as AI has been introduced across the product development life cycle?
Prashanth Chandrasekar: The fundamentals haven’t really changed. Core engineering productivity metrics still apply, but it has always been possible to game them in ways that don’t reflect real productivity. What matters most is which set of metrics organizations choose to optimize for.
The main shift has been the level of scrutiny. AI has put a microscope on engineering productivity, and organizations are now under more pressure to measure and report outcomes to prove the value of investments in AI tools. While metrics like DORA [named for Google’s DevOps Research and Assessment team] and others have been defined for years, only a minority of organizations have historically measured productivity well and consistently.
Martin Harrysson: Many organizations are seeing only marginal productivity gains so far. How do you go from individual wins to scaling impact across the organization?
Prashanth Chandrasekar: Overall, it is still early days, and the impact varies significantly by use case. There are areas where AI usage is more mature today—for example, in prototyping and early validation. Product designers and product managers are using AI tools to prototype new features, which allows them to visualize their ideas and get feedback earlier. That creates excitement, but the productivity gains tend to be marginal rather than transformative.
A big constraint is company operating models. AI has the potential to reshape the entire software and product development life cycle, but realizing that potential requires a foundational change to workflows, roles, and ownership. Steps are collapsing. For instance, product managers and designers can now get much closer to an MVP [minimum viable product] before engineering becomes involved, and organizations are still working through what those changes mean in practice.
Prakhar Dixit: You’ve highlighted trust as a central issue. How do you define trust in the context of AI, and what does it take to build it?
Prashanth Chandrasekar: Trust is multidimensional. One of the most important dimensions is knowledge quality. If the underlying data or context that AI tools rely on is not curated or accurate, the outputs will be unreliable. It’s a classic case of “garbage in, garbage out.”
That’s why having high-quality, expert-verified knowledge inside organizations has become critical, not a nice-to-have. Many companies have rushed to implement enterprise search, but if an AI assistant is pulling from an unverified or outdated search index, the results will be inconsistent at best. Organizations are increasingly recognizing this and investing more seriously in robust internal knowledge management, because it materially improves trust in AI outputs.
Beyond knowledge management, trust also depends on security and privacy, as well as how developers themselves perceive these tools, including concerns about reliability, control, and, for some, job security.
Prakhar Dixit: Given all of this, what are you seeing in terms of developer sentiment, and in what direction is it trending?
Prashanth Chandrasekar: Sentiment varies by experience level. Junior developers are generally very enthusiastic, viewing AI as an opportunity to upskill quickly and be productive early in their careers.
In contrast, senior developers are naturally more skeptical. AI tools are probabilistic rather than deterministic, and senior developers are used to writing precise, predictable code. That’s not how AI works. It’s a very different way of writing software. That said, many very senior developers have embraced generative AI because of the promise of productivity gains over time, even if those gains have not yet been fully realized.
Martin Harrysson: Many companies are also thinking about how to move beyond chat-based tools and adopt more autonomous AI agents. What advice would you give organizations as they make that shift?
Prashanth Chandrasekar: As AI becomes more agentic, trust becomes even more critical. Agents need the right context in order to take the right actions, which makes knowledge infrastructure foundational.
Companies, including us, have been innovating at multiple levels to support this. For example, we’ve built an MCP [Model Context Protocol] server that allows knowledge from our platform to be surfaced directly into the environments where developers already work. The goal is to ensure that agents understand where to go inside a company to find the right information.
In parallel, knowledge graphs are becoming increasingly important. They help organizations understand how systems and information connect, what is accurate, and where information resides. As companies deploy more agents, they need those agents to be trained on the right sources and to understand the structure of knowledge inside the organization.
Without a system of record for knowledge, it becomes very difficult to give agents the autonomy organizations are beginning to expect.


